by Leo Frishberg.
In the first part of this topic, we outlined the pessimistic view of error: if you are trying to predict the future, you want to eliminate as much error as possible from the data you rely on. We referenced Roger Martin’s The Design of Business, in which he distinguishes between reliability (an inherently analytic approach to data, used for prediction) and validity (true and actual outcomes).
Since there is a pessimistic view of error, there must be an optimistic view. In the optimistic view, error isn’t a variable to eliminate; rather, it is a source of inspiration. To connect the dots: the pessimistic view of error operates in the context of Martin’s reliability, and the optimistic view operates in the context of his validity.
Consider this thought experiment: instead of capturing a lot of historical data and mining it for reliability, use that data merely as the basis for a hypothesis about the future. Putting it in terms of Presumptive Design: presume the past is prologue and operate as if current trends will in fact hold. The only way to make such presumptions valid (in Martin’s terms) is to actually test them and discover whether the outcomes came about as predicted.
In the “Rabbit Hole” example, the hypothesis expects an individual to achieve a specific goal. In actuality, the individual takes a very different path, leading to unpredicted destinations.
Play that thought experiment out a little further: if the results of the test confirm the hypothesis, the team has a valid answer. If the results contradict the hypothesis, they are just as valid (they did, in fact, occur), but they call the hypothesis into question. In the second case, the error provides inspiration for further discussion. In fact, the second outcome is in many ways more informative than the first: the test has revealed a new set of possibilities.
Presumptive Design goes much further than simply taking past trend data, pretending it is true, and then testing those hypotheses. The method expects teams to actually make up a future state based on nothing more than their best guesses. Remember, validity is based on the actual outcome of testing a hypothesis; it doesn’t matter whether the hypothesis makes sense or not.
That last part is a little counterintuitive, so I’ll state it another way: if you take a wild guess at something, test it, and your guess comes out true, that’s a pretty amazing outcome. If it comes out not true, you haven’t just learned that your guess was wrong; you’ve learned in what ways it was wrong, an equally valid, and likely more informative, outcome. Based on the results, you form a more accurate hypothesis, and the process repeats, with each iteration approaching a more acceptable margin of error.