by Leo Frishberg.
When Jim walked into the meeting he knew it was going to be a tough 60 minutes. The product definition team was seated around the table waiting to hear the latest results. What Jim had to tell them was going to be a tough pill to swallow: the results didn't look good, and their go/no-go decision was rapidly approaching.
That scene, or scenes very much like it, play out daily in industry. Project teams, tasked with defining, managing and often defending the course of a project, rely on data to guide their decision-making process. Data, in this case, serves to improve the predictability of an as-yet-unknown outcome.
Roger Martin describes it succinctly in his book The Design of Business: organizations seeking to improve the predictability of an outcome work hard to improve data reliability. Organizations are placing big bets on their R&D investments – they want to have high confidence they’re on the right path. As a result, these organizations strive to reduce errors around their data capture and data analysis to increase their confidence in their predictions.
Martin contrasts that world-view with validity – meaning actual, true outcomes. Valid predictions are ones that came true. The problem, as Martin describes so eloquently, is when organizations presume validity from data used for reliability. That is, organizations that spend time reducing errors in their data to improve their predictive reliability also assume they are improving the prediction’s validity, which is clearly not the case. They can only know the prediction’s validity after it has come true.
Jim’s problem is that the tests he’s run have indicated greater error than the team had predicted based on prior data. His organization is falling into the trap of what Martin calls inducing validity from reliability: just because the data showed a trend in the past doesn’t mean the next set of data will conform to that trend.
Jim’s organization views error as a negative outcome, a pessimistic point of view. Because he and his cohort rely on data to make predictions, they consider error a variable to be reduced or eliminated. The pessimistic view of error stems from several million years of real-world experience and has a long-standing philosophical foundation. In brief, for the pessimist who is trying to predict the future, error is bad.
Presumptive Design relies on validity rather than reliability. That is, the results of Presumptive Design engagements are absolutely true – they really happened and there is no question about the outcomes. Because of this, Presumptive Design results are inherently optimistic: errors are not targeted for elimination but rather serve as a source of inspiration.
We’ll go into the optimistic view of errors in the next installment.