Mathematical modeling is helping companies across the globe forecast more accurately, optimize supply chains, assess risk, and keep customers from churning to competitors. However, recent market conditions (i.e., the credit crisis) have shown that while models can provide an "air of certainty," relying on them alone for complex decisions can be very costly. Under what circumstances can mathematical models be trusted?


Data, by itself, is of little value. The real value lies in capturing, cleansing, managing, and analyzing data, thereby making it more useful for decision making.

One of the ways companies analyze data is to build or "program" models to simulate, test, learn and predict outcomes. Indeed, models built upon statistical techniques are helping companies identify fraud, predict next best offers, determine customer churn, and assess credit risk among other valuable applications.

But modeling isn't a panacea, and a recent New York Times article, "How Wall Street Lied to Its Computers", shows us how companies can get it wrong.

Writer Saul Hansell notes that Wall Street traders have long had very sophisticated models of market behavior, devised by quantitative analysts, that were supposed to help them hedge their positions and manage their risks (or enable them to take bigger ones).

However, a key challenge emerged: "The people who ran the financial firms chose to program their risk management systems with overly optimistic assumptions and feed them oversimplified data."
Even worse, many of the products (read: derivatives and derivatives of derivatives) weren't fully understood even by their creators, so it was nearly impossible to accurately assess their risk with a mathematical model.

Modeling isn't just for risk management; it can be a very valuable tool for companies assessing future scenarios, determining cause and effect, and allocating scarce resources. However, the New York Times article highlights a great case study of the pitfalls and key challenges of attempting to model a system, phenomenon, or behavior.

First, understand that a model will only be as good as its assumptions. For example, many risk management systems are designed to assume rational decision makers, a stable and relatively low-volatility marketplace, and outliers that have only a limited effect on overall results. Anyone who's invested in a 401(k) and tracked their stock portfolio recently knows the futility of these assumptions (Nassim Nicholas Taleb explains why here).
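To make that concrete, here's a rough back-of-the-envelope sketch in Python (my illustration, not something from Hansell's article): it compares the "worst 1% of days" loss you'd estimate if you assume returns are nicely bell-shaped against the loss actually sitting in fat-tailed, outlier-heavy data. The distributions and numbers are entirely hypothetical.

```python
# Minimal illustrative sketch (hypothetical data): assuming returns are
# normally distributed can understate tail risk when real data has outliers.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily returns drawn from a fat-tailed Student's t distribution,
# standing in for "real" market behavior with occasional extreme moves.
returns = rng.standard_t(df=3, size=100_000) * 0.01

# Risk estimate if we assume the returns are normal with the same mean/std dev.
mu, sigma = returns.mean(), returns.std()
var_99_normal = mu - 2.326 * sigma           # 99% value-at-risk under normality

# Risk estimate taken straight from the data, outliers included.
var_99_empirical = np.quantile(returns, 0.01)

print(f"99% VaR assuming normality: {var_99_normal:.4f}")
print(f"99% VaR from the raw data:  {var_99_empirical:.4f}")
# The loss threshold in the data is noticeably worse than the "normal" one:
# the optimistic assumption quietly hides the tail.
```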

Second, a mathematical model is only as good as its data. Hansell's article points out that it was in the traders' interest that the models not warn them of impending danger, so they made efforts to smooth the data and limit the amount of historical data their risk management systems could analyze, all in order to take more aggressive trading positions.
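Here's a similarly hypothetical Python sketch of that data problem: the same simple risk measure, computed once over a full (simulated) five-year history and once over only the most recent calm year. The data, window sizes, and numbers are invented purely for illustration.

```python
# Minimal illustrative sketch (hypothetical data): limiting how much history
# a risk model sees can make a turbulent market look calm.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical return history: four turbulent years followed by one calm year
# (about 250 trading days per year). All numbers are made up.
turbulent = rng.normal(0, 0.03, size=4 * 250)
calm = rng.normal(0, 0.01, size=250)
history = np.concatenate([turbulent, calm])

def var_99(returns):
    """Empirical 99% value-at-risk: the loss exceeded on the worst 1% of days."""
    return np.quantile(returns, 0.01)

print(f"99% VaR over the full five-year history: {var_99(history):.4f}")
print(f"99% VaR over only the last 250 days:     {var_99(history[-250:]):.4f}")
# Feed the model only the recent calm stretch and the measured risk shrinks,
# which makes much more aggressive positions look "safe."
```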

Third, modeling is only as good as the design and designer of the model. "There was a willful designing of the systems to measure the risks in a certain way that would not necessarily pick up all the right risks," says Gregg Berman of the risk software company RiskMetrics. A model's design should be checked for accuracy: not only the accuracy of the statistical concepts used, but also whether the model has been "tweaked" to produce desired results.

Last point: a model might be based on fair and accurate assumptions, fed clean and legitimate data, and designed properly, yet it is of little use if politics stands in the way of recognizing and acting upon its output. All the analytical systems in the world count for little if corporate politics dictate an outcome different from what the model prescribes.

No mathematical model is perfect; a model is just that, a model, not a silver bullet. Such models are support tools that should be combined with good judgment, experience, and the input of others to effectively drive decisions.

That said, the time, energy, and investment dollars spent on mathematical modeling are close to worthless when poor assumptions, faulty or dirty data, bad design, and corporate politics get in the way of good decision making.

Questions:
* Mathematical models are used by companies for customer segmentation, risk management, propensity to buy, loyalty management, and more. Are you using models to help you make better decisions? If so, how?
* Do you think we often try to model things that are too complex–things that can't be modeled effectively? What might be the ramifications when we get it wrong?
* In the business world, do you think the use of mathematics sometimes overrides "common sense"?
* Mathematical modeling–done right–can be a powerful business tool. How can we teach future generations of business leaders to use these tools ethically?

ABOUT THE AUTHOR

Paul Barsch directs services marketing programs for Teradata, the world's largest data warehousing and analytics company. Previously, Paul was marketing director for HP Enterprise Services' $1.3 billion healthcare industry business and a senior marketing manager at global consultancy BearingPoint. Paul is a senior contributor to MarketingProfs, a frequent columnist for the MarketingProfs DailyFix, and has published over fifteen articles in marketing, management, technology, and healthcare publications. Paul earned his Bachelor of Science in Business Administration from California Polytechnic State University, San Luis Obispo. He and his family reside in San Diego, CA.