A/B Testing
In a Harvard Business Review piece called “The Surprising Power of Online Experiments,” Ron Kohavi and Stefan Thomke share insights on how accurate and rigorous A/B testing can yield substantial payoffs. In an A/B test the experimenter sets up two options: “A” is the control (the way things are already done), and “B” is the test modification, the “challenger.” The authors report that by “experimenting with everything,” online marketers can increase revenue per search by 25%.
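The HBR piece stays at the level of strategy rather than mechanics, but the statistics behind a basic A/B comparison are straightforward to sketch. The short Python example below is illustrative only and not drawn from the article: it compares a control and a challenger using a two-proportion z-test, and the visitor and conversion counts are hypothetical.

```python
import math
from statistics import NormalDist

def ab_test_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test comparing control (A) against challenger (B).

    Returns the z statistic and a two-sided p-value.
    """
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical example: 10,000 visitors per arm
z, p = ab_test_z(conversions_a=520, visitors_a=10_000,
                 conversions_b=610, visitors_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value suggests B's lift is unlikely to be noise
```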
The authors posit that “too many organizations, including some major digital enterprises, are haphazard in their experimentation approach, don’t know how to run rigorous scientific tests, or conduct way too few of them”.
As they rightly point out, the online world is often viewed as turbulent and full of peril, but controlled experiments can help us navigate it.
I began my career as a marketer in the magazine publishing world. We were constantly conducting A/B direct mail tests, often with surprising outcomes. There was frequent debate about whether the test results were indicative of what we could expect upon a larger rollout. But in the end, these tests were reliable and enabled us to receive more, or more profitable, orders per campaign…and sometimes both!