
Pick and choose

Using A/B testing to refine campaign messages

By Monica Kay and Alina Ojha

 

Deciding which message works for a given audience can feel like an extreme sport, given all the context and nuance you have to sift through to make that choice. What if, instead of mental gymnastics, the choice came down to a simple "this or that"? That is exactly what A/B testing offers. You can learn more about A/B testing and other ways to make sure your messaging campaigns are effective on our newly launched low-cost message testing website, which also includes a free guidebook.

A/B testing is a research method that compares two or more versions of a message or product to identify which one performs better, or which one the target audience prefers. It is important for civil society organizations (CSOs) to develop different versions of a message in order to identify which one best encourages behavior change. Consider this hypothetical communication campaign to encourage informal sector workers in Kenya, better known as jua kali artisans, to register for social security. The first message developed for a poster read, "Every day more jua kali workers use the Mbao Pension Plan to secure their future." During A/B testing, however, the CSO established that this message was not effective because it did not explain how the product works or what value it offers the customer. The CSO therefore revised the message to, "Jua kali workers like yourself save Ksh. 20 daily towards their retirement using the Mbao Pension Plan. Dial *000# for more information on how to secure your future." The second message performed better because it explained what the product was, how to use it, and why it is needed, and it ended with a call to action. Even if the product in this scenario is not adopted, recipients of the message would at the very least learn more about retirement savings.


To implement A/B testing, two or more variations of a message or product are needed. Unlike other research methods that can work with small numbers of participants, A/B tests require a sample of roughly 500 to 1,000 respondents if meaningful conclusions are to be drawn. Sample size also depends on the number of message variations being tested: the more variations there are, the larger the sample required.
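To see where figures in that range come from, you can use the standard sample-size formula for comparing two proportions. Below is a minimal Python sketch; the 10% and 15% sign-up rates are illustrative assumptions, not figures from the campaign, and a CSO would substitute its own baseline rate and the smallest lift it cares about detecting.

```python
from math import ceil, sqrt
from statistics import NormalDist

def required_sample_size(p1, p2, alpha=0.05, power=0.8):
    """Per-group sample size needed to detect a change in a sign-up
    rate from p1 to p2 with a two-sided two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2                          # pooled rate under H0
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Detecting a lift from a 10% to a 15% sign-up rate needs roughly
# 686 respondents per message; a bigger lift needs far fewer people.
print(required_sample_size(0.10, 0.15))
```

Note how quickly the required sample shrinks as the expected difference grows: this is why a test designed to detect only large, decision-relevant differences can stay low-cost.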

Once the sample size has been decided and participants recruited, respondents are randomly assigned to control and treatment groups, depending on how many messages or products are being tested. In our example above, participants would be randomly assigned to message A or message B. Randomization is essential because it is the best way to create equivalent comparison groups, so that common biases are avoided and allocation to a particular group is fair. Where resources are constrained, effective randomization can be achieved with a simple coin toss (heads or tails), by rolling a die (odd or even), or by selecting every nth person from a list (for example, with a list of 35 respondents, you could select every 3rd person from the top of the list). Randomization can also be done in Microsoft Excel using the RAND function.
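If a spreadsheet is not handy, the same random assignment can be scripted in a few lines. This is a minimal sketch (the function name and participant labels are made up for illustration): shuffle the list once, then deal participants round-robin into the groups so the group sizes stay balanced.

```python
import random

def randomize(participants, groups=("A", "B"), seed=None):
    """Shuffle participants, then deal them round-robin into groups
    so each group ends up (nearly) the same size."""
    rng = random.Random(seed)  # fixed seed makes the assignment reproducible
    pool = list(participants)
    rng.shuffle(pool)
    assignment = {g: [] for g in groups}
    for i, person in enumerate(pool):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

# Six hypothetical respondents split between message A and message B.
print(randomize(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42))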

Data has been gathered; now what?

Analysis of the resulting data is guided not only by the research questions but also by the data analysis plan, which the CSO must develop prior to data collection. This plan outlines elements such as the software to use, the characteristics (such as gender, age, etc.) and outcomes (such as number of sign-ups, views, etc.) to focus on, and the actions to take if the data is corrupted in any way. Depending on the technical expertise and budget available, advanced software such as R or Stata is best for analysis; however, Microsoft Excel works fairly well if resources are constrained. Check out the handy budget matrix in the LCMT Guidebook to help figure out the level of resources needed to implement the A/B testing approach.
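For an outcome like number of sign-ups, the core analysis is a comparison of two rates. A common way to do this is a pooled two-proportion z-test; here is a minimal Python sketch using only the standard library (the sign-up counts are invented for illustration):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Compare sign-up rates for message A vs message B using a
    pooled two-proportion z-test; returns (z statistic, p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)       # rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))          # two-sided test
    return z, p_value

# Hypothetical result: 40 of 500 respondents signed up after message A,
# 65 of 500 after message B.
z, p = two_proportion_z_test(40, 500, 65, 500)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A small p-value (conventionally below 0.05) suggests the difference between the two messages is unlikely to be due to chance alone; with the invented counts above, message B's higher sign-up rate would clear that bar.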

The basis of A/B testing is finding the best variation of a message, thereby improving the overall impact of the campaign. This could mean a rewrite to simplify the message, or adapting the entire campaign to a specific location or demographic. Incorporating a local language or using less text, for instance, may make your content more relatable.

In case you missed it, we just launched a brand new website with lots of resources on message testing, including our freely available Low-Cost Messaging and Testing (LCMT) Guidebook! Join our friend Jamaa as he walks you through how to leverage budget-friendly testing methods for big impact results.

About the Low Cost Messaging (LCM) project

The LCM project aims to support civil society organizations (CSOs) in identifying low-cost methods of testing their communication campaigns in order to achieve behavior change. The project's premise is that CSOs often work with limited funds, time, and resources, which restricts their ability to test the efficacy of their messaging campaigns. Many CSOs also perceive testing to be expensive, and the project seeks to dispel that misperception. By leveraging the LCMT Guidebook, CSOs can run impactful campaigns while maintaining a low budget.

Follow us on social media to get updates every time we upload new content: Twitter, Facebook, Instagram, LinkedIn, and YouTube.
