Today’s post is about something completely different. We talk a lot about emerging markets, technologies and business models, trying to assess the ramp-up and adoption that may lead to new growth engines. To do so, we quote market research, forecasts and specialists. However, as we approach the task of forecasting (or base our assumptions on other people’s forecasts), it is worth looking into the forecasting process itself, to identify helpful success patterns and, more importantly, its limitations and hindering factors.
As we each monitor our respective markets, we occasionally encounter a surprise. Such surprises come in one of two flavours: a tactical surprise, whose origins we understand and whose probability we had accounted for within our business reasoning; and a strategic surprise, an occurrence that leaves us dumbfounded, coming from a completely new direction, beyond our metaphorical boxes and radars. The former can be settled with a few dry tactical moves. The latter requires a total systemic shift, the kind that makes us rethink and redefine the basics of our strategy and operations.
Being caught by surprise on your own playing field is never good. This is especially true for strategic surprises, which can have catastrophic consequences. For this reason, we need to be aware of what shapes the process of forethought and the paradigms we derive from it.
The line between predictable and unpredictable events is not always easy to draw; moreover, we often perceive events as predictable when in fact they are not. One thing is certain, though: the element of surprise keeps hitting us again and again. Despite heavy investment in financial, technological and political research and analysis, events such as the credit crisis still manage to slip in right under the specialists’ noses.
“Foxes” vs. “Hedgehogs”
So how do we consistently miss the hints of life-altering events? One explanation comes from a study by psychologist Philip Tetlock, who compared a long line of expert predictions with their subsequent materialization (or lack thereof).
Tetlock found that forecast specialists suffer from over-confidence: they consistently overrate the probability of their predictions and calibrate poorly. Their forecasts do better than your garden-variety layman’s opinion, but not as well as basic statistical forecasting models (based on the extrapolation of past data).
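To make the idea of calibration concrete, here is a minimal sketch (not Tetlock’s actual methodology) using the Brier score, a standard way to grade probabilistic forecasts: it penalizes the squared gap between the stated probability and the 0/1 outcome, so lower is better.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and 0/1 outcomes.

    A perfectly calibrated and perfectly resolved forecaster scores 0.0;
    an over-confident forecaster is punished for the large gaps on misses.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# An over-confident forecaster says "90%" for events that happen half the time...
overconfident = brier_score([0.9, 0.9, 0.9, 0.9], [1, 0, 1, 0])
# ...while an honest "50%" on the same events scores better.
calibrated = brier_score([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 0])
print(overconfident, calibrated)  # 0.41 vs 0.25
```

The over-confident forecaster is not penalized for being wrong half the time per se, but for attaching 90% certainty to coin-flip events, which is exactly the failure mode Tetlock measured.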
Among forecast specialists, Tetlock distinguishes between “foxes” and “hedgehogs”. Hedgehogs know “one big thing”, and aggressively apply this “big knowledge” to a wide variety of other arenas (some more relevant than others). They display impatience with disagreement, and high confidence in their forecasting capabilities.
Foxes, on the other hand, know “many small things”, are skeptical of all-inclusive patterns and forecast models, and consider each forecast a separate challenge. Foxes display lower levels of confidence in their predictions, regardless of their success in the past.
Tetlock found that in most cases, foxes did better than hedgehogs in the calibration and materialization of their predictions. Interestingly, however, when it came to extreme turning points, it was the hedgehogs who did better at predicting them (even though they generated many false alarms along the way). We therefore find a trade-off between a large number of smaller, relatively accurate evaluations, and the ability to spot game-changing critical moments, at lower levels of accuracy.
The Uncertainty Effect – Observation Changes Reality
Yoram Bauman, a PhD economist and stand-up comedian, once described macroeconomists as “the guys who predicted 9 out of the past 5 recessions”. While said with humour, this quote holds an interesting truth: more often than not, the mere act of prediction has an impact on the predicted event itself. For example, warnings of a possible real-estate bubble led the Governor of the Bank of Israel to take monetary and regulatory action, changing the game so that we will never know whether the prediction was correct. Much as in quantum physics, where observation disturbs the observed system, this uncertainty effect makes it hard to learn from predictive experience, for better or for worse, even in hindsight.
The Black Swan
Beyond the complexity of prediction, evaluation and calibration, extrapolation from past observations has its own problems when it comes to catching unusual events. If we see a man take the bus every morning for 60 years, we would predict he will do so tomorrow too, even though we also know he will not take the bus forever, and one day will simply not show up. The famous problem of induction posed by David Hume asks: if all the swans we have seen are white, are all swans therefore white? (Fittingly, black swans do exist; they were discovered in Australia.)
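Hume’s puzzle can be put in miniature numerical form. A raw frequency estimate built from past observations assigns exactly zero probability to anything never yet seen, which is why pure extrapolation can never anticipate a “black swan”. One classical (and admittedly simplistic) remedy, Laplace’s rule of succession, keeps a small but nonzero probability alive for the unseen:

```python
def naive_estimate(black_seen, total_seen):
    # Raw frequency: the never-observed gets probability exactly zero.
    return black_seen / total_seen

def laplace_estimate(black_seen, total_seen):
    # Laplace's rule of succession: (k + 1) / (n + 2).
    # Equivalent to pretending we saw one extra swan of each colour.
    return (black_seen + 1) / (total_seen + 2)

n = 1000  # a thousand swans observed, all of them white
print(naive_estimate(0, n))    # 0.0 -- the surprise is literally "impossible"
print(laplace_estimate(0, n))  # ~0.000998 -- rare, but not ruled out
```

Neither estimator tells us *when* the black swan will appear, but the contrast shows how an extrapolation method can quietly encode the assumption that the future will only contain things the past has already shown us.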
Furthermore, when we add dynamics to the reality we are extrapolating from, the ability to forecast accurately becomes even worse. As elements are constantly added to and subtracted from the base we extrapolate from, it is often difficult to point out the actual boundaries of the data we are collecting and “duplicating”.
One more factor that can distort a prediction is the need for theory confirmation, better known as confirmation bias: a theory is conjured up based on partial information, and any additional information is then made to conform to the original theory, contributing to its “correctness”. This is especially true for controversial paradigms, where the same objective event can be claimed as support by all “sides” and their contradicting theories.
The Cognitive Effect – Common Sense Tops Basic Probability
Cognitive heuristics cause a biased evaluation of probability, based on instincts that may disregard the very rules of normative probability theory. In the famous “three door” problem (televised in the “Let’s Make a Deal” game show), players tended to stick with the door they originally picked after the host opened one of the losing doors, even though the odds favoured switching to the other remaining door. In another case, players evaluated the probability of an event as higher the more real and present the event felt to them. The attitude towards low probabilities is also an interesting one. On the one hand, people tend to assign far-fetched events higher probabilities than they really have (the sole justification for overpriced insurance policies and lottery tickets). On the other hand, people tend to ignore higher-probability events (such as car accidents) whenever they please.
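The “three door” result is counter-intuitive enough that simulating it is often more convincing than arguing about it. A quick Monte Carlo sketch: sticking wins only when the first pick was right (1 in 3), while switching wins whenever it was wrong (2 in 3).

```python
import random

def play(switch, rng):
    """One round of the three-door game; returns True if the player wins."""
    prize = rng.randrange(3)
    pick = rng.randrange(3)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)  # fixed seed for reproducibility
trials = 100_000
stick = sum(play(False, rng) for _ in range(trials)) / trials
swap = sum(play(True, rng) for _ in range(trials)) / trials
print(f"stick: {stick:.3f}, switch: {swap:.3f}")  # roughly 0.333 vs 0.667
```

The simulation reproduces exactly the gap the players’ instincts refused to see: switching doubles the odds of winning.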
So What Do We Learn from All of This?
The practical lesson from all of this is to be aware of what goes into market forecasts and estimations, acknowledging that forecasts may be skewed by the boundaries of existing paradigms. When acting on a forecast, one needs constant validation from clients and sales personnel, as well as from the competition. At the end of the day, we need to find a healthy balance between questioning our every move and moving forward in one direction, with the risk that the world may turn in another.
In more general, systematic terms, the need to constantly validate and calibrate our estimations emphasizes the importance of planning: since we operate in a dynamic environment, we need to constantly return to the planning process and update the plan to suit the changes we encounter. More importantly, we need to be flexible enough to act, or react, quickly and efficiently, so that no event catches us by (total) surprise.
This article was based on an essay by Netanel Oded.
image credit: creativity4us.com
Danny Lev is the Corporate Strategy and Business Development Manager at a NASDAQ-listed High-Tech company. Passionate about the mix of technology with creativity and innovation with business, Danny specializes in corporate and start-up development within the High-Tech and Clean-Tech arenas.