What is A/B testing (or split testing)
An A/B test is nothing more than comparing multiple versions of something to see which one performs better. The most basic test has an A version and a B version. By defining beforehand what counts as a success, you can determine which variation performs better. You can test almost anything, from calls to action (CTAs) to titles, from images to copy. Although A/B implies there are only two versions, you can also test more versions at the same time. A/B tests can be done on websites, and most direct mail tools provide this functionality as well.
Statistical significance plays a big role in A/B testing. The size of your total audience versus the number of people participating in a test heavily influences the outcome. As a general rule of thumb, you need at least 1000 actions to get any meaningful insights. Also give your audience enough time: in general, a test should run for at least a week.
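To make "statistical significance" concrete, here is a minimal sketch of a two-proportion z-test, a common way to check whether the difference between two conversion rates is likely real. The numbers are made up for illustration; in practice a testing tool does this calculation for you.

```python
from math import sqrt

def z_test_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: returns the z statistic for the
    difference in conversion rates between variants A and B."""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference
    p = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 1000 visitors per variant: A converted 50 times, B converted 80 times
z = z_test_two_proportions(50, 1000, 80, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```

With 1000 visitors per group the difference above is significant; with only 100 visitors per group the same conversion rates would not be, which is why small tests are so easy to misread.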
Sometimes testing one element on a page isn't enough. When testing multiple changes at once we no longer call it an A/B test, but a multivariate test. This type of testing requires many more pageviews to reach significance, because your audience is spread over every combination of changes.
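The reason multivariate tests need so much more traffic is that variations multiply. A quick sketch (with made-up page elements) shows how fast the number of combinations grows:

```python
from itertools import product

# Hypothetical elements you might vary on one landing page
headlines = ["Free trial", "Start today"]
images = ["hero_a.jpg", "hero_b.jpg", "hero_c.jpg"]
buttons = ["Sign up", "Get started"]

# In a multivariate test, every combination is its own variation
variations = list(product(headlines, images, buttons))
print(len(variations))  # 2 * 3 * 2 = 12 variations
```

Twelve variations means each one sees only a twelfth of your traffic, so reaching 1000 actions per variation takes roughly twelve times as long as a plain A/B test.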
How to do A/B or multivariate tests
Before actually running a test, it is vital to formulate a clear hypothesis that defines what success will look like. Do you want to compare two new versions, or compare a new version against the one currently running? Are you going after more sales? Do you want more leads? Or is click-through what you're after? By defining everything up-front you avoid gut-feeling choices afterwards.
The challenge when doing A/B tests is dividing your target audience into two (or, if you have multiple variations, more) equally sized, randomly assigned groups. Each group gets to see and use one version of your change. Luckily, we rarely have to run these tests by hand: services like Optimizely and Visual Website Optimizer can do it for us, and even Google Analytics has basic support for A/B tests. These services automatically show a variation to each group and give you the results afterwards.
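Under the hood, tools like these typically assign visitors to groups deterministically, so the same visitor always sees the same variation. A minimal sketch of that idea (the function name and experiment label are our own, not any tool's API):

```python
import hashlib

def assign_variation(user_id, variations, experiment):
    """Deterministically assign a user to a variation by hashing the
    user id together with an experiment name. The same user always
    lands in the same group, and groups end up roughly equal in size."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]

print(assign_variation("visitor-42", ["A", "B"], "cta-test"))
```

Hashing instead of flipping a coin on every page view is what keeps the experience consistent for a returning visitor while still splitting the audience roughly 50/50.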
More often than not there will be no clear winner. Keep this in mind, adjust your hypothesis, and test again. Testing is an iterative process that fits perfectly in the Build-Measure-Learn loop at the core of the Lean Startup methodology we support at Bravoure. If a winner is picked only by a small margin, test again to refine what works best for your audience. Keep track of all the hypotheses and changes you make so you can look back and see where you came from. And maybe the most important thing to remember: do not make other changes to your page while a test is running.
When testing with direct mail the same rules apply: define your hypothesis up-front, and you can do both A/B and multivariate tests. Your audience is as big as your mailing list. Many direct mail tools provide A/B testing functionality out-of-the-box, and a common test type in these tools is the multi-armed bandit test.
Multi-armed bandit testing
In this type of testing the two 50% groups no longer apply. Instead, the test is split into two phases:
The first phase, exploration, generally targets about 10% of your audience (the actual size is usually a matter of choice) just like a regular A/B test. After this phase a winner is picked. In the second phase, exploitation, the remaining 90% of your audience is served the winning variation.
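The two phases can be sketched as follows. This is a simplified illustration, not any mail tool's actual algorithm; the send_and_measure helper is a hypothetical stand-in for sending a mail and recording whether it converts.

```python
import random

def send_and_measure(user, variation):
    """Hypothetical stand-in for sending the mail and recording a
    conversion; here variation B converts slightly better."""
    rates = {"A": 0.05, "B": 0.08}
    return 1 if random.random() < rates[variation] else 0

def run_bandit(audience, variations, explore_share=0.10):
    """Two-phase bandit: explore on a small slice, exploit the winner."""
    random.shuffle(audience)
    cut = int(len(audience) * explore_share)
    explore, exploit = audience[:cut], audience[cut:]

    # Exploration phase: split the 10% slice evenly over the variations
    conversions = {v: 0 for v in variations}
    for i, user in enumerate(explore):
        variation = variations[i % len(variations)]
        conversions[variation] += send_and_measure(user, variation)

    # Pick whichever variation converted best during exploration
    winner = max(conversions, key=conversions.get)

    # Exploitation phase: the remaining 90% all get the winner
    for user in exploit:
        send_and_measure(user, winner)
    return winner
```

Note that with a 2000-person list, the exploration phase gives each variation only 100 recipients, which is exactly the small-sample risk discussed below.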
Because of statistical significance, the multi-armed bandit test can be risky to rely on. After all, is 10% of your audience enough to draw conclusions from?
What to test?
Check out our other blog post about 10 A/B tests to consider.
Where to go from here?
Contact us for a free consult at 020 - 262 99 09