What Is A/B Testing and How Can It Increase Revenue?
Visitor behavior is everything in marketing, and the best marketing strategies use hard data to support every decision. This quantitative data is key to many brands’ continued success, but where does it come from?
Well, for starters, it doesn’t usually come from a generalized study! The most meaningful results actually come from within your own business. After all, how much traffic a competitor gets doesn’t really impact yours!
That’s why today’s blog post is dedicated to the art of A/B testing, one of the most reliable ways to study visitor behavior before you make a big change.
Understanding A/B Testing
At its core, A/B testing is a formulaic, evidence-driven way to study the unpredictable human side of marketing. Alternative names for the process include bucket testing and split testing.
In its simplest form, A/B testing requires a theory, a variant, and two groups.
You can run tests on anything in marketing, from an email to an entire site, but you must have a goal in mind. Remember: It’s only science if you write it down!
Common subjects of A/B testing include the following concepts and metrics:
- The call-to-action
- Click-through rate
- Conversion rates (and conversion rate optimization)
- Design and layout
- Search engine optimization
- Subject lines
- User behavior
- Web page organization
How to Conduct Your Own Email A/B Testing
Now, let’s put that in the context of email marketing, shall we?
A basic A/B test will have two versions of a single email. These email campaigns may be nearly identical or wildly different, but the difference must be deliberate and measurable. For example, a marketing campaign may change its subject line to see how it impacts the open rate. Or, perhaps, you tweak the wording of your call to action to improve your conversion rate.
The goal is to achieve statistically significant results and, hopefully, support your theory.
Step 1: Choose Your Variable
The first step to running tests is theoretical.
No, literally! It’s all about theory!
Every worthwhile A/B test begins with a theory.
“I believe that x will cause y.”
So, let’s come up with one! For our example, I propose the following: If The Email Marketers changed the color of their CTA to something dull — say, light gray — our engagement would go down. This is an entirely hypothetical idea, but it’s far from unbelievable; brands constantly change their email designs.
With this idea, we now have the basis of our testing method. Our variable is the color of our CTA, and the click-through rate is our key metric.
Step 2: Create Your Email Campaigns
Once you have the theory, you need to design the test.
Continuing with our example, we would make two different versions of the same email. One will have a brightly colored CTA; the other will be a stern gray.
Note that this means you’ll have multiple versions of the same campaign. The same can be said for any split testing. If you’re testing something else, such as your website traffic, you’d have two versions of the same web page.
Your goal is to design different versions of the same content. They should be distinct enough to produce a measurable difference but similar enough to rule out other elements. You would not, for example, test this idea by changing both the CTA color and the subject line.
Think back to your elementary and high school labs.
You’re essentially running a marketing science experiment! You need one version — the control — as a “baseline.” The other email, your “variable,” is used to collect data. And to collect reliable data, you need to isolate each element.
Step 3: Round ‘Em Up
We have a theory and a plan. Now, we need the test subjects.
You’ll get the best results by dividing groups randomly.
One group will receive the control version; the other gets the variation.
Ideally, both groups are drawn from your target audience. You absolutely want to pick recipients at random, but you also want a reliable pool to draw from. In other words, you’ll probably want to randomize within your most active subscribers.
However, some situations require a different approach. If, for example, you’re testing a win-back campaign, your target audience would come from that specific segment. Likewise, if you’re testing elements within your welcome email, you’ll be focusing on new subscribers.
Nonetheless, we can cast a wide net for our little CTA example! This is not an audience-specific test; we can expand our sample size accordingly.
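To make that random split concrete, here’s a minimal Python sketch using only the standard library. The subscriber list, group sizes, and seed are all hypothetical, and a real email platform would normally handle this step for you:

```python
import random

def split_audience(subscribers, seed=42):
    """Randomly split a subscriber list into control and variant groups."""
    pool = list(subscribers)
    random.Random(seed).shuffle(pool)  # seeded so the split is reproducible
    midpoint = len(pool) // 2
    return pool[:midpoint], pool[midpoint:]  # (control, variant)

# Hypothetical subscriber list
subscribers = [f"user{i}@example.com" for i in range(1000)]
control, variant = split_audience(subscribers)
```

Shuffling before splitting is what keeps the assignment random; every subscriber lands in exactly one group, with no overlap.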
Step 4: Wait for the Results
Once everything is ready, the only thing left is the data!
Your test results may be surprising!
In our example, we’d be investigating the click-through rates and comparing them to our performance baseline. In all likelihood, the results will show that the colorful link garners more traffic than its stern gray counterpart.
Keep in mind that you need a baseline to compare your data to. Compare the results of your testing to any existing data you might have; this will give you a more comprehensive understanding of your results.
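As a concrete illustration of that comparison, here’s a minimal Python sketch for our CTA-color example. The click counts and the baseline rate are entirely hypothetical:

```python
def click_through_rate(clicks, delivered):
    """Click-through rate as a fraction of delivered emails."""
    return clicks / delivered if delivered else 0.0

# Hypothetical results from the CTA-color test
control_ctr = click_through_rate(120, 2500)  # bright CTA (control)
variant_ctr = click_through_rate(80, 2500)   # gray CTA (variant)
baseline_ctr = 0.045  # historical average from past campaigns

# Relative lift of the bright CTA over the gray one
lift = (control_ctr - variant_ctr) / variant_ctr
beats_baseline = control_ctr > baseline_ctr
```

Comparing both versions to the historical baseline, not just to each other, is what tells you whether the winner actually improved on your usual performance.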
The Art of Multivariate Testing
In addition to A/B testing, there’s something known as multivariate testing. This is a more complex approach, but it’s a worthwhile option for businesses looking at significant changes to websites or emails.
The process for this approach is similar. A theory is posed, different versions are designed, and the results are tested. However, unlike split testing, a multivariate test changes multiple elements. For example, you may test both the CTA color and the subject line.
Now, here’s the catch: Multivariate tests are inherently more complex. Discerning which change altered your results may be difficult, and finding definitive positive results may be impossible! That’s not to say you should avoid multivariate testing altogether; a skilled team can tackle such a problem with ease — and note the emphasis on the “skilled” qualifier.
How to Manage Your Split Testing
With the definitions out of the way, let’s discuss the practical aspects of A/B testing.
As you may have guessed, handling such a task can be daunting. You can track your own results, carefully documenting each test result in an old-fashioned spreadsheet. There’s nothing wrong with this! Yes, it takes more time, but it’s a reliable way to handle simple tests.
However, modern technology has plenty of answers to your A/B testing woes.
Most email marketing software comes with baked-in A/B testing tools, which track and analyze your data for you. Similarly, many website administration suites have tools to test your web pages.
What to Look for in an A/B Testing Tool
If you’re on the market for an automated testing tool, you’ll need something that can handle every step of the process. While most platforms come with simple solutions, your needs may be more complex. Alternatively, you may want to test something your provider doesn’t account for, such as your overall conversion rates.
So, let’s look back at our process.
You’ll need something that can handle data, randomize audiences, and track statistically significant changes. Multivariate testing requires even more tools, including the capability to handle multiple pages at once. More specifically, you want a tool that can do the following:
- Create and manage multiple versions of a campaign
- Handle and randomize a target audience
- Handle client-side data (or server-side for more complex internal changes)
- Integrate with existing analytics software (such as Google Analytics)
- Track test results
Now, at The Email Marketers, we focus on your emails. It’s all there, in the name! So, this list is optimized for email A/B testing tools. Nonetheless, you’ll want the same features in any toolkit.
The Perks of an A/B Testing Tool
As I said, you can track your A/B testing manually. This is a perfectly valid option.
However, a worthwhile testing tool is well worth the setup. Beyond basic features such as automated data tracking, many advanced tools include machine learning. That’s a fancy way of saying the software learns from past results and suggests changes for future tests based on those earlier experiments. Such observations are critical in the world of conversion rate optimization, and they may even increase your revenue!
Many modern solutions include ways to track and calculate statistical significance without your input, improving your statistical accuracy. And you’ll love these time-saving features if you’re running a multipage test or multivariate testing scenario.
Tips to Guarantee A/B Testing Success
At this point in the blog post, we’ve learned the basics and discovered the wonders of automated testing tools.
So, what’s left?
A lot, actually!
I can’t possibly end here! There’s still plenty to cover, and I still won’t be able to check everything off the agenda. A/B testing is a science. It’s one of those “easy to learn, difficult to master” situations. You can know the ground rules backward and forward, but putting it all together can become overwhelming.
Fortunately, there are ways to reduce the stress of A/B testing! Check out these tips to help make your first test a statistically significant success.
Always Start With a Solid Theory
“Haven’t you already said this?” you may be wondering.
And, yes! I have said this, but it’s important enough to repeat.
Every split test must start with a clear theory. Wishy-washy goals won’t work; you need a precise vision. Anything less is a waste of your time.
“But,” you may be saying, “what is a so-called ‘solid’ theory?”
Think of your basis for A/B testing as a scientific idea. You’re positing a cause-and-effect scenario in such a way that a single variable, X, will lead to a desired result, Y. Again, most A/B testing starts with a statement like, “I believe that changing X will lead to Y.”
Need some examples? The examples below are great A/B testing ideas. So, let’s get that list started! I believe that…
- … avoiding “spammy” words will improve our deliverability.
- … encouraging users to share emails will boost the subscription rate.
- … including user-generated content will improve future direct response campaigns.
- … increasing the size of the newsletter’s CTA will increase the conversion rate.
- … placing the discount at the beginning of the subject line will increase open rates.
- … targeting visitor pain points will increase engagement.
- … shortening our ad copy will lower the bounce rate.
- … using action-oriented subject lines will boost conversion rates.
Oh! And you don’t actually have to test any of those. You can use existing data for many of your email marketing strategies.
Bright colors attract attention and drive more traffic to websites. Engaging brand storytelling can increase your open rates. Personalization has a proven, positive impact on conversion rate optimization. There’s enough data out there to tell us these things are true; running your own tests on them would be a waste of time.
Instead, you want to base your A/B testing on the elements that directly impact your business.
Know What It Means to Be “Statistically Significant”
I keep throwing around that phrase, so let’s clear the air.
You don’t actually need a landslide victory to have statistically significant results. In fact, plenty of studies hinge on single-digit margins! It’s not the size of the difference that makes a result significant; it’s the method behind it.
A statistically significant result is one that is very unlikely to be the product of random chance. In practice, that means the probability of seeing your result by luck alone (the p-value) falls below a chosen threshold, conventionally 5%.
Allow me to explain.
Let’s say you tweak your ad copy and make it shorter. You find a tiny improvement in your overall conversion rate. Assuming you followed the correct procedures and reached a large enough audience, even this minuscule difference can be statistically significant!
To ensure such a result, you’ll want to avoid confounding factors — outside elements that may taint the results. Unfortunately, you can’t create a wholly stable market; you can, however, keep your A/B testing clean. The best results come from tests that check off the following requirements:
- Can be repeated with similar results
- Change only one variable (unless you’re running a multivariate test)
- Use an appropriate sample size
- Utilize a random audience
Now, you can plan everything right and still fail to achieve statistical significance. Unfortunately, many things are beyond our control. If, for example, a catastrophic event happens before you send your emails, you’ll probably want to discard the results. After all, you shouldn’t base future business decisions on an unforeseen change!
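For the curious, the significance check itself can be sketched in a few lines of Python. This is a standard two-proportion z-test built from the standard library; the click counts below are hypothetical, and dedicated A/B testing tools run an equivalent calculation for you:

```python
from math import sqrt, erf

def two_proportion_z_test(clicks_a, n_a, clicks_b, n_b):
    """Two-sided z-test for a difference between two click-through rates.

    Returns (z, p_value); if p_value < 0.05, the difference is
    conventionally called statistically significant.
    """
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical test: 120 clicks of 2,500 sends vs. 80 clicks of 2,500 sends
z, p = two_proportion_z_test(120, 2500, 80, 2500)
```

With these made-up numbers, the p-value lands well under 0.05, so the difference would count as significant; shrink the sample sizes and the very same rates would not.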
Test Your Scheduled Campaigns
For the final tip: Test your most frequently sent campaigns.
I’m not saying to neglect those one-off campaigns, but those seasonal emails are just that. They’re seasonal, and the results will take another year to confirm.
Fortunately, recurring campaigns such as newsletters go out once or twice per month! That means you can easily gather more data, and you don’t have to wait a year to do so. Similarly, such campaigns have a wider audience, making it easier to obtain an appropriate sample size.
Get Perfect Test Results Every Time
You’ve finally reached the end of the blog post!
Celebrate; that was a lot to learn!
Now, if you’re a small business owner, you may be feeling overwhelmed. How can you possibly find enough time to make these different variations and test them? Where will you find the free hours to guarantee a solid testing environment?
Well, I have great news for you.
You don’t have to do all that A/B testing on your own! My team of experts understands what it takes to calculate statistical significance. At The Email Marketers, we’re A/B testing pros, and we’re ready to put that experience to work for you. Call me to schedule a free consultation and strategy session! I’ll show you how we can improve every element of your email marketing campaign, from A/B testing to conversion rates and everything in between!
And don’t forget to check the rest of the blog for more email marketing tips and tricks.