In the last blog article, we learned how important it is to run experiments in a world full of uncertainty. The more uncertain, and therefore complex, the world is, the more experimentation helps us find the best solutions. As described in that article, leadership support is needed to provide the right structures and fundamentals. But those structures alone are not enough. That is why, in our next Business Agility Online ThinkTank, we will introduce ways to run those experiments successfully, look at what makes an experiment good or bad, and show how a falsifiable hypothesis works.
A falsifiable hypothesis is an assumption that can be proven right or wrong. Such a hypothesis therefore lets us evaluate whether an experiment was successful or not. This is one of the foundations of a psychologically safe environment. But what does a falsifiable hypothesis look like?
With a quick summary of all of this information, organizations and teams get an easy overview of which experiments they ran and what outcome each of them had.
As you can see in the screenshot above, the falsifiable hypothesis follows a simple structure:
As with User Stories, writing good hypotheses is an art:
Technically, the above-mentioned hypothesis follows the format of a falsifiable hypothesis, but it does not help us achieve better results by running experiments. It is comparable to a user story such as: “As a user, I want to log in, so I am logged in.”
Let’s dissect why the above hypothesis falls short:
As you can see, our first hypothesis is neither falsifiable nor tells us how we can actually learn from the experiment. Because it is not specific enough, there is a good chance the same experiment will be run again within 12 months of the first run.
A better falsifiable hypothesis would therefore address the above-mentioned room for interpretation:
There may still be ways to improve this falsifiable hypothesis further (and we would be happy to hear your suggestions in the comments), but as you can see, it invites far less ambiguity than the initial hypothesis. It can be proven right or wrong by 1 June, which makes it falsifiable by definition.
Of course, the falsifiable hypothesis only describes the experiment itself; it does not tell you how the experiment should be run. In the Experiment Canvas, this would be described in the Experiment setup.
Our example above would be perfect to run as an A/B test, in which 50% of registered users get the discount while the rest do not. Join our Business Agility Online ThinkTank on 28 April to learn how to set up and run it.
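To make the A/B test idea concrete, here is a minimal sketch in Python of how such an experiment could be set up and evaluated. All function names and the example numbers are our own illustrative assumptions, not part of the original article: users are deterministically split 50/50 into a control group and a discount group, and at the end of the experiment a simple two-proportion z-test checks whether the difference in conversion rates is statistically meaningful.

```python
import hashlib
from math import erf, sqrt

def assign_variant(user_id: str, salt: str = "discount-experiment") -> str:
    """Deterministically assign a user to 'A' (no discount) or 'B' (discount).

    Hashing the user id with a salt gives a stable ~50/50 split:
    the same user always lands in the same group.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return "B" if int(digest, 16) % 2 == 0 else "A"

def two_proportion_z(conversions_a: int, n_a: int,
                     conversions_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test: did group B convert differently from group A?

    Returns the z-score and a two-sided p-value (normal approximation).
    """
    p_a = conversions_a / n_a
    p_b = conversions_b / n_b
    p_pooled = (conversions_a + conversions_b) / (n_a + n_b)
    standard_error = sqrt(p_pooled * (1 - p_pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / standard_error
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative (made-up) numbers: 90/1000 conversions without the
# discount vs. 120/1000 with it.
z, p = two_proportion_z(90, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

If the p-value falls below your pre-agreed threshold (commonly 0.05) by the deadline in the hypothesis, the hypothesis is confirmed; otherwise it is refuted. Either way, the experiment produces a clear, falsifiable answer.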