<img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=400669150353674&amp;ev=PageView&amp;noscript=1">
A/B Testing in Marketing - Both Print & Digital

A/B Testing in Marketing - Both Print & Digital

Do you remember doing experiments in your high school science classes? It’s one way that many of us learned the importance of following a structured process. Whether you were in chemistry or biology, each experiment involved choosing an independent variable, forming a hypothesis, and changing that variable to see whether the results proved or disproved your hypothesis.
 
A lot of us who went into communications, marketing, business, and the like probably thought we were leaving everything Bunsen-burner-related in our past. It turns out, however, that conducting an A/B test is very similar to conducting a science experiment. A/B testing is a user experience research method used in marketing to compare two slightly different versions of the same piece of content, A and B, to see which version performs better. Without this method, it’s hard to be certain that any given campaign is running as well as it could be, which is why A/B testing is a crucial part of the process.
 
This method is most often used in digital marketing to optimize web pages, create engaging emails, and design social media advertisements. Digital A/B tests are easier to run because small, simple changes can be made quickly and the results come in just as fast, but it’s important to test your print collateral as well; doing so can make a huge difference in the success of your direct mail campaigns.
 
Much of the time, marketers simply rely on their pre-existing knowledge to make decisions about the color of a call-to-action button, the time of day to send an email, or the image used in an ad. However, there’s no real way of knowing which option is best without running a test. A/B testing provides statistics you can act on with confidence, whereas following a gut feeling is basically just taking an educated guess.
 
The first and most important thing to note when A/B testing any piece of content is that, just like when you were sitting in your high school lab, only one variable of the “A” version should differ from the “B” version. Every other variable should be held constant; otherwise you can’t be sure which change made the content perform better. Once you’ve chosen a variable you’d like to test, tie it into your overall hypothesis.
 
In case you need a reminder, a hypothesis is essentially a prediction of what will happen as a result of your test. You can create one by identifying your goal, then asking yourself how changing your chosen variable will help you achieve it. Your hypothesis will usually include metrics that most marketers are used to examining in some fashion. Examples include...
 
● Changing the title of this blog will result in increased traffic to the webpage.
● Changing the color of this call-to-action button will result in a higher click-through rate.
● Changing the offer on this mailer will result in more phone calls.
● Changing the featured image on my home page will result in a lower bounce rate.
● Changing the subject line of this email will result in a higher open rate.
 
Once you’ve picked an independent variable, stated your hypothesis, and created both the “A” and “B” versions of your content, it’s time to show them to your audiences. It’s important to show both versions to similarly sized groups made up of randomly selected individuals. Say, for example, you’re testing a piece of direct mail. If you send version “A” to more people than version “B,” it will be harder to compare the two sets of statistics. And if you send “A” to a list of graphic designers and “B” to a list of people who work in higher education, you’re essentially changing a second variable: how are you supposed to know whether the winning version performed better because you changed the image on the mailer, or because the mailer appeals specifically to graphic designers but doesn’t hit home with those in higher ed?
 
Another important thing to remember about your audience is not to use up all your contacts on the test. It’s better to form two smaller groups than to split your list 50/50. That way, once you figure out whether version “A” or version “B” performs better, you can send the winner to the majority of your contacts for better results.
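
If your contact list lives in an export from your CRM, a few lines of code are enough to make this kind of split. Below is a minimal Python sketch, assuming a simple list of email addresses; the list itself and the test-group size of 1,000 are invented for illustration.

```python
import random

# A toy contact list; in practice this would come from your CRM export.
contacts = [f"contact_{i}@example.com" for i in range(20_000)]
TEST_GROUP_SIZE = 1_000  # arbitrary illustration, not a recommendation

shuffled = contacts[:]    # copy so the original list order is untouched
random.shuffle(shuffled)  # randomize so neither group skews toward one segment

group_a = shuffled[:TEST_GROUP_SIZE]                     # sees version "A"
group_b = shuffled[TEST_GROUP_SIZE:2 * TEST_GROUP_SIZE]  # sees version "B"
holdout = shuffled[2 * TEST_GROUP_SIZE:]                 # gets the winner later

print(len(group_a), len(group_b), len(holdout))  # 1000 1000 18000
```

Shuffling before slicing is what keeps the groups random; slicing an alphabetized list would quietly reintroduce a second variable.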
 
With content that doesn’t have a finite audience, such as a website, you’ll probably have to run the test for a while to collect enough views, and enough statistics, to truly know which version is performing better. This means it might take some time to figure out what will work for you: depending on your traffic, effectively running an A/B test can take anywhere from a few hours to several weeks.
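
If you’d like a rough estimate of how long “a while” is for your site, a standard two-proportion sample-size calculation can tell you how many views each version needs before a given lift becomes detectable. The Python sketch below uses the textbook formula; the 4% baseline click rate, the hoped-for 5%, and the significance and power defaults are all assumptions for illustration.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p_a, p_b, alpha=0.05, power=0.8):
    """Textbook two-proportion sample-size approximation: views needed
    per version to detect a lift from p_a to p_b at the given
    significance level (alpha) and statistical power."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)
    p_bar = (p_a + p_b) / 2
    top = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * sqrt(p_a * (1 - p_a) + p_b * (1 - p_b))) ** 2
    return ceil(top / (p_b - p_a) ** 2)

# Invented numbers: a 4% baseline click rate and a hoped-for lift to 5%.
print(sample_size_per_variant(0.04, 0.05))  # 6746 views per version
```

Divide the result by your typical daily traffic to turn a sample size into a test duration.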
 
In fact, time itself is a variable, which is why it’s important to keep it controlled by sending both versions of your content at the same time, or by making them available to viewers for the same duration. Unless, of course, timing is what you’re testing: many marketers send the same email at different times to figure out which days and hours of the day produce the best open and click rates.
 
Before pulling the trigger on your next A/B test, you should also decide what results you’ll need to see to be convinced that your hypothesis holds. For example, how much higher does the open rate on email “B” need to be than that of email “A” for you to consider the difference statistically significant? Also, be sure not to get distracted by metrics that aren’t related to your goal. If you set out to find the subject line that earns the highest open rate, don’t worry about your click rate just yet. Once you’re happy with the subject line, you can pick another variable to test with the goal of increasing your click rate.
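
One common way to make that call is a pooled two-proportion z-test, which turns the two open rates into a p-value you can compare against a threshold chosen before the test. The sketch below is one way to run it in Python; the send and open counts are made up for illustration.

```python
from math import sqrt
from scipy.stats import norm

def open_rate_significant(opens_a, sent_a, opens_b, sent_b, alpha=0.05):
    """Pooled two-proportion z-test (one-sided): is version B's open
    rate convincingly higher than version A's?"""
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)  # combined open rate
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = norm.sf(z)  # chance of a gap this large from luck alone
    return p_value < alpha, p_value

# Invented results: 180 of 1,000 opens for "A" vs. 225 of 1,000 for "B".
significant, p = open_rate_significant(180, 1000, 225, 1000)
print(significant, round(p, 4))  # True 0.0061
```

A p-value below the threshold says the lift is unlikely to be random noise; picking that threshold before you look at the data is what keeps the test honest.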
 
Finally, run the test, examine your results, and take action based on what you’ve learned. Then think about what to test next! Everything from design to wording to layout can affect how a piece of content performs. That’s why it’s important to run A/B tests on all of your content, and even to try multiple tests on the same piece: simply pick a new independent variable and see if changing it also results in improvements. You can even test the same variable again if the results come back inconclusive.
 
Running one test often reveals other pieces of content that might be creating friction in your conversion path. For example, maybe you’re happy with a mailer you’ve tested, but the website it directs people to could use some work. So run another A/B test on the parts of your site you’re questioning next. One of the great things about this method is that it often costs very little while still being very effective, so don’t be afraid to run one test after another!
 
The important thing A/B testing shows us is that we can get concrete statistics about creative content. You don’t have to have amazing marketing intuition or a crystal ball to ensure results. Stop crossing your fingers, unsure whether your success is purposeful or coincidental. Put on your lab coat and goggles and go after the valuable information that A/B testing will provide!
 