3 Ways Our Bias to Oversimplify the Future Hurts Innovation

by Pete Foley

Estimates that attempt to quantify failure rates for innovative products and services typically sit somewhere between 70% and 95%. As an innovator, I find this an uncomfortably large number, especially given how much time and money we invest in innovation processes, consumer research and market modeling. But as disappointing as these numbers are, the reality may be even worse, as they take no account of missed opportunities. The predictive tools and models that we use to green-light so many failed innovations are the same ones we use to prune innovation pipelines. This raises the question: how many potentially successful ideas have we abandoned because the flawed models that routinely fail to predict success also incorrectly predict failure? We’ll likely never know, but it’s a sobering thought for any innovator or creative!

Of course, it’s easy to simply knock our existing processes and models, and quite a different challenge to come up with something better! The unfortunate reality is that predicting the future is extremely hard, especially when that future involves completely new-to-the-world products and services being launched into the complex, emergent systems that we call markets – in other words, innovation.

Predicting Innovation Success is Really, Really Hard: Personally, I think it’s a bit unrealistic to expect us to accurately predict the market performance of innovation, any more than we can predict weather or earthquakes. We can work to get better than we are today. But innovation is not physics, and there are simply too many fuzzy and interdependent variables, and far too few constants, to expect high levels of accuracy, at least in the near future. And as the pace of innovation increases, this is likely to get harder rather than easier.

Models generally attempt to use past experience to predict future outcomes. This relies to some degree on similarities between past and future, and one quandary we face is that the more innovative something is, and the more it changes human behavior, the harder it is to look backwards to model and predict future performance. In addition, we are in an arms race between the rate of innovation in products and services, and innovation in how we manage and predict innovation success. The rate of innovation is accelerating, and somehow our predictive capability needs to keep up. Hopefully we’ll get some support from emerging technologies such as AI, and eventually quantum computing, which will help with computational power and statistical modeling. But even infinite computational power is only useful if we truly understand the causal relationships we are modeling.

So a key part of this arms race also lies in growing our understanding of the behavioral component. A specific part of that human element is what I’d like to discuss today – how some of our cognitive biases work against us accurately anticipating the future, and how we can at least partly address that challenge. It’s only part of the story, of course, but we need to understand and bring together the parts if we want to improve the whole, and this is hopefully a small contribution to that.

Nobel Laureate and father of behavioral economics Daniel Kahneman once said that we don’t know our future selves very well. This is insightful and true, but I think it is also only part of the challenge we face. There are also numerous biases that push us to (unconsciously) oversimplify the future, and these lead to at least three distinct, additional challenges with respect to predicting innovation success:

  1. We have a bias for making linear projections in a non-linear world. We only have to look at the stock market, employment, average salaries, GDPs, market sizes, etc., to quickly realize that hardly anything in the economic or behavioral world is linear, or even smooth. But humans automatically simplify trends, and see them as smoother, more linear, and directionally more consistent than they really are. Even if we intellectually acknowledge complexity, our behavior commonly still follows our gut and simplifies the world, even for important decisions. For example, if the stock market has been going up for the last few years, or even the last few weeks, people often invest as if it will continue to go up for eternity, even if intellectually they know that it won’t.

But it’s not just individuals who behave in this way; so do corporations. Companies project and try to achieve consistent year-on-year growth, even though the external socioeconomic factors that impact growth are far more fluid and variable. During periods of economic growth we create innovation pipelines that largely assume almost eternal economic expansion. It’s rare for a company to create a pipeline with contingencies for recessions or downturns until they actually occur. For technologies with long lead times, that is often too late, and it leads to boom-and-bust R&D strategies that can lag years behind actual market conditions.

  2. The fixed context illusion. Most markets that support innovation are effectively arms races, and whether we like it or not, they constantly change, so we are always innovating against a moving target. It would be very convenient if the world froze while we moved an innovation from idea to market, but that just doesn’t happen. Macroeconomics change, competitors innovate, and regulatory and global sociopolitical conditions constantly evolve. Depending upon how fast our innovation cycle is, and how dynamic a category is, the market we start designing for often looks quite different to the one we launch into. And to make it worse, if we are working on truly disruptive innovation, news of our launch may catalyze additional defensive changes in the market prior to, or during, our launch. Just as no battle plan survives contact with the enemy, no launch plan survives contact with the market unscathed. Of course, we know this intellectually, but it still surprises me how few innovation models fully account for inevitable advances in competitive products, or likely competitive responses, and so paint an unreasonably optimistic picture of the market we are launching into.
  3. We don’t know our future selves very well. To Kahneman’s point, we anticipate our future behaviors based on the context and emotional state we are in at the present moment, which does not necessarily match the context and emotional state we will be in when we actually make a decision. This presents a huge challenge for innovation research. For example, if we are relaxed when we take part in research, we predict the future behavior of a relaxed version of ourselves, but that may or may not represent future reality. And context plays an important role here too, as where we are has an impact on how we feel, as well as triggering different preferences, memories and comparisons, all of which influence choices and decisions. This plays out in many ways, but as an example, a web-based consumer survey may give a pretty good reading of the decisions we’d make if we were shopping from home using a computer, but it may not be predictive of our behaviors in a retail store, where context, cognitive load, emotional state, and memory triggers may be quite different. This is a very real challenge, as it is often much cheaper and easier to run research in artificial settings, but the further the context we test in is removed from the context in which people make actual decisions, the greater the risk that we’ll generate a misleading answer.
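To make the linear-projection bias in point 1 concrete, here is a minimal sketch in Python. The growth rates and the timing of the downturn are invented purely for illustration; the point is only that a straight-line extrapolation from recent history drifts away from a series that compounds and occasionally dips:

```python
# Synthetic example: a market that grows 8% per year, with one bad year.
actual = [100.0]
for year in range(1, 6):
    growth = -0.10 if year == 4 else 0.08  # downturn in year 4
    actual.append(actual[-1] * (1 + growth))

# The naive "gut" projection: extend the first year's change in a straight line.
step = actual[1] - actual[0]
projected = [actual[0] + step * year for year in range(6)]

for year, (a, p) in enumerate(zip(actual, projected)):
    print(f"year {year}: actual {a:6.1f}  linear projection {p:6.1f}")
```

By year 5 the linear projection overshoots the actual series by a wide margin, even though the two agree perfectly at the start – a toy version of the pipeline plans that assume eternal expansion.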

Why do we oversimplify the future? Unfortunately, the cognitive biases that helped our ancestors live long enough to become our ancestors often get in the way of our ability to predict how innovations will succeed or fail in real-world, complex marketplaces. Human cognition has evolved to automatically and unconsciously simplify the future, and to favor short-term gains over longer-term ones. This makes evolutionary sense, as our survival was often favored by action over contemplation, and prioritizing long-term success over short-term survival would often have been a way to make long-term success irrelevant. Key to this is a decision process that behavioral economics calls satisficing. To survive and thrive, the actions of our ancestors didn’t have to be perfect, just fast and smart enough to pass our genes on to the next generation. As a result, we have evolved to automatically simplify the future to the point where we can act quickly, and where our predictions are good enough to allow us to survive, and live to fight, and more importantly breed, another day. We trade accuracy for speed, because accurately predicting our complex, multivariable, non-linear future would have slowed our thinking down to a point of near paralysis. It’s a well-worn insight, but if we were being chased by a hungry predator, we didn’t have to run the perfect escape route, just one that was better than the slowest member of the tribe. Our behaviors are optimized for survival tasks such as avoiding predators, gathering berries, and hunting mammoths, but that creates a few weak spots when it comes to our ability to predict innovation success!

How do we Avoid Future (Over) Simplification? I’m not going to pretend to have answers for all of this, but there are some things I believe we can do to tip the scales a bit more in our favor. Key to this is recognizing the biases that we have, and reining them in: knowing when to slow down, being very thoughtful about our references and anchors, and getting more comfortable with ambiguity, and with learning in market, rather than wanting to button everything up before we launch. Of course, the effects of our biases are not all bad, and there is still clearly a case for speed in innovation, for being first to market, and for learning as we go. But those benefits are more nuanced than when facing a hungry predator, and there are also counter-benefits to taking time to understand and optimize before investing heavily in a new product or service. The good news is that we already know a lot of what we need to do. The trick mostly involves keeping that knowledge front and center, and staying vigilant that our instincts, biases, and defaults don’t take over and drag us down a path driven by unconscious oversimplification.

  1. Future-proof references and anchors. Depending upon the time horizon of our innovation, one option is to future-proof our references. Our default is to innovate against today’s market. But markets constantly evolve, and so will almost certainly have changed by the time our innovation comes to market. This of course varies enormously by category and innovation cycle time, but as an example, in a market where performance increases by 20% every year, a 2-year innovation program needs a reference product that is 44% better than today on that vector (20% per year compounded over 2 years: 1.2² = 1.44). This sounds obvious, but it is surprisingly common for innovation initiatives to be benchmarked and developed against the competitive product that existed when they started, rather than what will likely exist when they launch. Of course, future-proofing involves some estimating, and so some inaccuracy, but that’s better than getting to the end of the innovation path only to find that the goalposts have moved. And as a bonus, it also helps smooth turbulence during the innovation process as the market inevitably evolves, as we already have some market evolution built into our program, and so are (hopefully) buffeted less by competitive initiatives that occur during our innovation process.
  2. Don’t Slice the Salami! Never test solely against previous prototypes. Always test against an independent reference. It is all too easy to create a genuinely breakthrough product or service, and then cost-save or optimize it through multiple iterations that erode the original benefits to the point of mediocrity, but do so in small enough steps that the iterations are never perceived as different from one another. Simply keeping an external reference as an anchor in our process is a great cure for salami slicing.
  3. Don’t Force Unnatural Comparisons. This is a common temptation, and it most often occurs in a couple of ways. The first is where we fall in love with a technical difference between two products that isn’t noticeable under real-world conditions. But if we have to create an unnatural situation for a difference to become noticeable, then the chances are it won’t be noticed by, or meaningful to, consumers. The second is where the variety of options available to a consumer or shopper makes it hard to generate statistically meaningful quantitative research data. There is a case for eliminating noise from a test, and in reality we do this all of the time. But we need to be very careful about oversimplifying the choices we offer consumers in research. The very act of simplifying a test reduces cognitive load, and so potentially changes decision outcomes, as cognitive load is an important part of context (see below). And in a multi-stage decision matrix, such as retail, forcing a shopper to take an unnatural path runs the risk of invalidating the data completely. Forcing a Mercedes driver to choose between a Ford and a Chevy may tighten our statistics, but does it really tell us anything useful about consumer behavior? There are a couple of exceptions. If consumers will actually directly compare products in the real world, as occurs for example with packages in retail, then test away; this is real context. And forced comparisons can be useful for claim support. But it’s important not to believe our own publicity! If we have to force an unnatural comparison to show a difference, chances are consumers won’t notice it, at least unless we point it out to them.
  4. Context really, really matters. As alluded to above, we should always run consumer research in as natural a context as possible. Of course, it’s almost impossible to avoid some differences between a research environment and the real world. Access to prototypes, real-world contexts, budgets, recruitment, and the simple fact that panelists know they are being observed are among the many challenges we face. But we can often do a lot to make context more relevant. For example, explore disguised choice tests where panelists don’t know exactly what is being tested, and avoid techniques that place consumers in highly irrelevant situations (for example, focus groups, invasive neurological machines, or, as above, tests where we force panelists to make choices they usually wouldn’t make). Wherever possible, strive for situations where panelists are in realistic and relevant settings, with lots of distractions that mirror real-world cognitive load; where they are authentically time-pressured; where they have time to (almost) forget they are being tested; where they have free will; and where the research process is as unobtrusive as possible.
  5. Accuracy is more important than precision. We have a bias toward simple comparisons, especially numerical ones with clearly differentiating statistics. Unfortunately, the search for clean numbers and simple relative comparisons often drives us to diverge so far from reality that we generate precise but unpredictive data. Of course, we can validate unrealistic methodologies to provide some reassurance that they have predictive value in the real world. But this is fraught with danger unless validation has been carried out using very similar products and services, in very similar contexts, to what we want to test. Validation is a valid approach for incremental innovation, but somewhat oxymoronic when applied to real breakthrough or disruptive innovation that fundamentally changes consumer behavior. It’s very risky to use past behaviors to predict radically different new ones. This is a hard problem, but I’d always advise running some tests in as realistic conditions as possible, even if they have to be qualitative, and if the behaviors we observe do not match the more constrained quantitative data, proceed with extreme caution.
  6. Our First-Generation Innovation is Almost Certainly Flawed. We often think of our first-generation product as an end point. But the reality is that we rarely, if ever, get it right the first time. Whether we are thinking of phones, restaurant menus, or consumer products like Febreze, the first product we take to market is almost always simply a stepping stone that provides a gateway into the most valuable research we can ever access: real-world markets. The financial constraints of innovation can be a strong driver to lock ourselves into first-generation designs, remove future agility, and build in ‘ghosts of evolution’ that can be hard to escape. But try to think of an innovation that truly got it right the first time, and that didn’t benefit from some degree of optimization based on real market experience. They are few and far between. If we take a long view, we are better off building in agility, and, like nature, being ready to grow fast when conditions are favorable, adapting to an ever-changing environment, and using periods of plenty to build in resilience for inevitable downswings. If we lock in our initial design, whether financially, or through supply chains, systems, or manufacturing investment, we run a very real risk of backing ourselves into an innovation corner that can be very hard to escape.
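The compounding arithmetic behind point 1 above is easy to get wrong in one’s head (it is the difference between 40% and 44% in the example given), so here is a minimal Python sketch. The function name and the numbers are mine, purely for illustration:

```python
def future_benchmark(today: float, annual_gain: float, years: int) -> float:
    """Performance level the market reference will likely reach by launch,
    assuming the category keeps improving at its historical compound rate."""
    return today * (1 + annual_gain) ** years

# A 2-year program in a category improving 20% per year: the launch-day
# benchmark is 44% above today's product (1.2 ** 2 = 1.44), not 40%.
launch_target = future_benchmark(100.0, 0.20, 2)
print(round(launch_target, 1))  # -> 144.0
```

The gap between simple and compound growth widens quickly with longer innovation cycles, which is exactly when future-proofing the reference matters most.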

When we add all of this together, it’s not really that surprising that we struggle to accurately predict the success or failure of innovations, or that so many innovations fail despite consumer research and favorable business-model predictions. Innovation will never be physics, and we’ll likely never become particularly accurate predictors, even though we’ll always feel pressure to at least claim predictive accuracy. But if we want even to maintain existing predictive numbers, let alone improve them, we are in an arms race with innovation itself, which, by catalyzing behavioral change at ever-faster rates, makes the job of predicting its own impact increasingly hard. There is no simple answer to this quandary, but I believe part of the solution lies in leveraging cutting-edge insight from behavioral science, combining it with the increased computational and modeling power of AI, and, perhaps most important of all, becoming more comfortable with uncertainty, and building agility into innovations so that we can learn and adapt as we enter the most informative laboratory of all: the real world.



A twenty-five year Procter & Gamble veteran, Pete Foley has spent the last 8+ years applying insights from psychology and behavioral science to innovation, product design, and brand communication. He spent 17 years as a serial innovator, creating novel products, perfume delivery systems, cleaning technologies, devices and many other consumer-centric innovations, resulting in well over 100 granted or published patents. Follow him @foley_pete
