
Why is perfect the enemy of better?

By Michael Raynor - January 18, 2011

In a few weeks, at the annual World Economic Forum meeting in Davos, the concept of disruption will crop up in a number of contexts. It is a notion I have spent more than ten years working with.

In my forthcoming book, The Innovator’s Manifesto, and in presentations I made in New York on the topic early this year, I offered evidence that it is possible to improve our ability to predict which innovations will succeed and which will fail through the application of Disruption theory. Disruption does not enable perfect predictions—if it did, I’d keep the knowledge to myself and simply reap the rewards of its application! But it can make us better than we are now.

This might seem a shocking claim. “Predictability” and “innovation” are often portrayed as the oil and water of management practice. For many, it seems that the goal of accurately predicting what will work is at best quixotic and for some, almost offensive. Most people seem to believe that the defining elements of successful innovations are fundamentally idiosyncratic. Every circumstance is unique, how one deals with a given circumstance is ultimately bespoke, and success lies in the inscrutable and largely intuitive choices of each decision-maker. Since innovation means making a bet about the future, the data are never dispositive, and so well-informed, highly experienced executives can look at exactly the same opportunity and come to completely different conclusions.

This phlegmatic acceptance of diverse points of view is something that has become increasingly intolerable in other fields, such as medicine. Who would accept a course of treatment based on the untested prejudices of a single clinician over one implied by carefully compiled evidence of what works best? The good news is that we need not make that choice based on personal preferences. With the birth of the “evidence-based medicine” (EBM) paradigm over the last 20 years, generally accepted methods have emerged for determining what constitutes the best evidence in support of specific tests, diagnoses, and treatments, along with processes for translating those findings into clinical practice. The results have been nothing short of miraculous: dramatically improved outcomes at dramatically lower costs.

A key element of EBM’s success that has driven these results, and has much to teach the practice of management, is the ability to act on the best available evidence even when that evidence is imperfect. In other words, even though the state of the art will often not permit a definitive diagnosis and course of treatment, using the evidence that is available to make our decisions is better than not using it. By acting based on the best available evidence, ineffective treatments are weeded out and incremental improvements become possible. These incremental gains cumulate rapidly to transform clinical outcomes.

This might seem obvious, but as I have had the chance to share my findings on Disruption’s predictive power with executives charged with deciding which innovations to back, their reactions have often puzzled me. “Sure,” they observe, “Disruption increases predictive accuracy. But it’s not perfect. You can’t tell me for sure whether this project will succeed or fail. And until your theory can do that, I’m going to stick with what I know.” Since a fully formed theory of innovation with total predictive accuracy is unlikely to emerge suddenly, practitioners change their approaches, if they change them at all, in ways subject to all the usual biases (salience, confirmation, bandwagon effects, and so on), and so improvement becomes impossible.

Why, I wonder, do some (not you, I’m sure, but perhaps people you might know) resist an approach that is demonstrably an improvement over current practice simply because it is imperfect? Why is “better” not good enough? After all, as the saying goes, “in the land of the blind, the one-eyed man is king.” And if the evidence supporting an allegedly better alternative seems lacking in some way, ask yourself this: what is the nature of the evidence supporting the view you currently hold, and if you had to “zero-base” your current decision-making heuristics, would you end up with the same set of principles that guides your current choices?

In his client projects, research, and books, Michael E. Raynor, Director, Deloitte Consulting LLP, explores the challenges of corporate strategy, innovation, and growth. He is the bestselling author of The Strategy Paradox, co-author of The Innovator’s Solution, and author of the forthcoming The Innovator’s Manifesto.



Your defense of cautiously optimistic, pragmatic risk-taking is extremely lucid, Michael. I think, in this context, of a Japanese tradition of intriguing metaphoric relevance. If a mistake occurs during the tea ceremony, and a tea vessel falls and breaks, the pieces are carefully collected, and the vessel is carefully repaired and reglazed. The resulting product is considered more beautiful for its deeply layered human and earthly history. I would extend your thesis by suggesting that wholly human and unpredictable product development and marketing missteps, while challenging and to be avoided when possible, nevertheless may contribute to richer and more rewarding outcomes--IF proactive managers create ways to encourage transparent brainstorming around "mistakes" by team members at all levels.
