Remember at school when we grew runner beans in jam jars with a piece of rolled-up blotting paper? We placed one in a dark cupboard, one we didn’t water, one we put somewhere cold, and another we placed in the sun and watered daily. The exercise was to determine how each element affected the plant’s growth. After a couple of weeks, the conclusion was that plants need light, warmth, and water to grow!

A/B testing works on the same principle, though it experiments with just two variants at a time and tests small differences.

Data-backed decisions

Because it isolates one variable at a time, A/B testing supports data-backed decisions and can be used across all kinds of marketing communications. In 2000, Google famously used A/B testing to ascertain the optimum number of search results on its listings pages – the rest is history!

Also known as ‘split testing’, A/B testing brings a scientific methodology to marketing and removes the guesswork.

Although it has been around for some time, A/B testing is less commonly used than other marketing tools. It would have been expensive to run tests for magazine adverts or billboards in the past, but in the digital age, the costs are low. If done well, A/B testing provides insight into visitor behaviour that can significantly increase conversion rates.

How is A/B testing used?

A/B testing considers how small differences in a marketing campaign might influence customer behaviour. This might be the subject line of a newsletter or email, the picture in a banner advert, the text on a call-to-action button, or the layout of a web page.

The idea is to run two variations of a campaign with matched groups of customers to see which version is the more successful. You can repeat the tests numerous times to fine-tune your content and improve the effectiveness of your marketing communications.

How does A/B testing work?

When setting up a test, you first need to consider your business metrics and how you define the success of your marketing campaigns. This might be the number of sales, click-throughs, sign-ups, downloads, etc. You then set up your marketing campaign in two variants (version A and version B). Don’t be tempted to vary more than one thing at a time, or you will never know which change made the difference.

To measure which is better, run both versions simultaneously under identical circumstances and select the more successful one.

If you are testing a website page, use the existing version as the control, set up a second for the test, and split your traffic equally between the two. If you don’t have the technical knowledge to do this yourself, several free tools are available that will help.

Google Analytics Content Experiments is one example. There are also plenty of organisations specialising in conversion rate optimisation (CRO) that will run your A/B testing for you and make recommendations for your marketing.
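If you do have a little technical capability, the core of a traffic split is straightforward. Below is a minimal sketch in Python of one common approach, deterministic bucketing: hash a stable visitor ID (a cookie value, say) so each visitor always sees the same version and a large audience divides roughly 50/50. The visitor ID and page names are hypothetical.

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Assign a visitor to version A or B deterministically.

    Hashing a stable visitor ID (e.g. a cookie value) means the same
    visitor always sees the same version, while a large audience
    splits roughly 50/50 between the two.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Route the visitor to the control (A) or test (B) page.
# The file names here are placeholders.
if assign_variant("visitor-12345") == "A":
    page = "signup_blue.html"   # existing design, the control
else:
    page = "signup_red.html"    # new design under test
```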

Use test groups

If you are sending out an email or newsletter, you must put some effort into preparing your test groups beforehand. The two groups need to be identical – or as similar as possible. Firstly, you will need an equal number of contacts; ideally, you will want equal numbers of men and women. If you have the data available, consider age ranges, geographic locations and any other contributing factors. Run the test on a small percentage of your database, perhaps 10%, and make sure you send both versions simultaneously to minimise any variance.
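As an illustration, here is a minimal Python sketch of drawing a 10% sample from a contact database and splitting it into two equal groups. The contact list is invented, and in practice you might stratify by gender, age band or location before sampling.

```python
import random

def make_test_groups(contacts, sample_fraction=0.10, seed=42):
    """Draw a random sample of the database and split it into two
    equal test groups.  A fixed seed keeps the draw reproducible."""
    rng = random.Random(seed)
    sample_size = int(len(contacts) * sample_fraction)
    sample = rng.sample(contacts, sample_size)
    half = sample_size // 2
    return sample[:half], sample[half:half * 2]

# Hypothetical contact list of 5,000 addresses.
contacts = [f"user{i}@example.com" for i in range(5000)]
group_a, group_b = make_test_groups(contacts)
print(len(group_a), len(group_b))  # 250 250
```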

You will also need to determine upfront how long you will run the test and how many responses you need to quantify the results. Use past data as a guide, but be careful not to cut the test off too soon or leave it running too long, as other factors may then affect the result. If you are testing low volumes, work out how long you can realistically wait and whether the result will be reliable.
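To get a feel for the numbers involved, the sketch below applies the standard two-proportion sample-size formula. The conversion rates, significance level and power used here are illustrative defaults, not recommendations.

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect a change in
    conversion rate from p1 to p2 (two-sided two-proportion test)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # significance threshold
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / (p1 - p2) ** 2) + 1

# Detecting a lift from a 5% to a 6% sign-up rate takes roughly
# 8,000 visitors per version, far more than a few days of low traffic.
print(sample_size_per_group(0.05, 0.06))
```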

Examples of using A/B testing

Web page

If your company is a SaaS provider, your metric may be the number of sign-ups you receive. Testing different versions of your web sign-up page will help you optimise the page and increase sign-ups.

For example, you may have an idea that changing the colour of your call-to-action button from blue to red would make it stand out better and increase sign-ups. In this case, you would use the existing blue design as your control, version A, and the new design with the red button as version B. Equally divide your website visitors between the two designs for a given period.

At the end of the test, you’ll see which one works best, and you can then adopt it. You may then test the red design against another colour to refine it further or to validate your results.

Remember, when running such tests, you’ll need to make sure the sample size is statistically significant. For example, if you normally get just two or three sign-ups daily, ten click-throughs won’t produce a meaningful result.

The larger the sample size, the more reliable your test results. The result also depends on the size of the difference in performance: if you normally expect a 5% sign-up rate from your blue button, you will need to decide how big a change in that rate counts as meaningful.

If you can test in the thousands, a rise to 5.6% may mean a significant increase in business, but if you are only testing in the tens, the result will not be reliable. Testing with low traffic will never achieve statistically significant results, but it will still provide a level of insight; you will just need to repeat the test frequently to ensure you are getting the best conversion rate.
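To make that concrete, here is a minimal sketch of a pooled two-proportion z-test in Python. The visitor counts are invented to match the 5% versus 5.6% example above.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using the pooled two-proportion z-test."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    p_pool = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 5.0% vs 5.6% on 10,000 visitors per version: p is about 0.06,
# borderline evidence of a real difference.
print(two_proportion_p_value(500, 10_000, 560, 10_000))

# Similar rates on only 50 visitors per version are indistinguishable
# from noise (p is about 0.65).
print(two_proportion_p_value(2, 50, 3, 50))
```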

Newsletters and emails

Your newsletters and email marketing campaigns face stiff competition in a crowded inbox. Making your message stand out could be the deciding factor in whether your email is opened or not. Testing which subject line gets the best response before mailing the rest of your database could make a significant difference to your campaign’s success.

Google Ads

Google Ads is the ultimate tool for A/B testing – it was made for it! You can create any number of advert variations and measure their success with Google Analytics. You might also test different landing pages on your website with the same advert to see how that affects your results. By linking Google Ads with Google Analytics and defining the page a customer must reach for a transaction to count as a conversion, you can accurately record your click-throughs, sign-ups and sales.

The campaign should run over a set period, say 7 days, with these advert variations delivered in equal numbers at the same time. You might also use a test to see which day of the week or time of day works best for your target audience.

With accurate results, Google Analytics lets the savvy marketer schedule the advert with the most powerful title at the right time of day on the right day of the week. Advertising spend can then be targeted where it is most effective, improving ROI.

Market insight

A/B testing not only provides quantitative data that is hard to argue with; it also provides insight into customer behaviour that you can use across other areas of your marketing. If you know a red call-to-action button is more effective than a blue one, you might use red on other web pages. If one newsletter title gets a better response than another, use that insight to shape other text in your promotional materials.

Segmentation

Different versions of your web pages or campaigns are likely to appeal to different customer segments, such as those based on gender, age, geographic location, or industry. If so, use this intelligence to further target your marketing and match customers to particular products or services.
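One simple way to look for segment effects is to break your results down by segment and variant. The sketch below does this in plain Python; the records are invented for illustration.

```python
from collections import defaultdict

# Hypothetical results: one (segment, variant, converted) record per visitor.
results = [
    ("UK", "A", True), ("UK", "B", False), ("US", "A", False),
    ("US", "B", True), ("UK", "A", False), ("US", "B", True),
]

# Tally conversions and visitors per (segment, variant) pair.
tally = defaultdict(lambda: [0, 0])
for segment, variant, converted in results:
    tally[(segment, variant)][1] += 1
    if converted:
        tally[(segment, variant)][0] += 1

for (segment, variant), (conv, n) in sorted(tally.items()):
    print(f"{segment} / version {variant}: {conv}/{n} = {conv / n:.1%}")
```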

A/B testing summary

A/B testing should not be considered a one-off activity. People and trends change, so run fresh tests for each new campaign or product. Keep testing regularly, and remember that not every test will produce a clear winner; be prepared to start again if you don’t get a decisive response.

Make sure you plan a period for your test and a minimum number of responses needed to make it meaningful. Ending the test too soon could leave it inconclusive, while letting it drag on risks other factors skewing the result and leading you to pick a poor version.

And finally, don’t be tempted to let your instinct overrule a test result – sometimes the outcome can be surprising!