Email Marketing 101: Split Test Optimization

Do A/B Split Tests End with the Open Rate?

Recently, a client of mine wanted to run a split test on an email. The test was a simple one: a subject line A/B test. Fairly straightforward. You know the drill. Set up the same email with two different subject lines, kick back, and see which one gets the higher open rate.

Of course, the biggest letdown of any split test is when the results are neck and neck, and I have spoken before about strategies for maximizing split test effectiveness. But there are a few things to remember before jumping to any conclusions. Firstly, the difference between a 19% and a 20% open rate is not really 1%. Since we’re talking about proportions, the relative improvement here is actually north of 5.2% (1/19 ≈ 5.26%).
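If you want to sanity-check that arithmetic, here is the quick back-of-the-envelope version (a tiny Python sketch, purely for illustration):

```python
# Going from a 19% to a 20% open rate: a small absolute gain, but a bigger relative lift.
open_rate_a, open_rate_b = 0.19, 0.20

absolute_gain = open_rate_b - open_rate_a    # 0.01, i.e. 1 percentage point
relative_lift = absolute_gain / open_rate_a  # ~0.0526, i.e. a ~5.26% relative improvement

print(f"Absolute gain:  {absolute_gain * 100:.1f} percentage points")
print(f"Relative lift:  {relative_lift:.2%}")
```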

So, is a 5.2% relative lift big enough to confirm a knockout punch?

Well, that all depends on your sample size. Even if you are sending 5,000 emails to version A and another 5,000 to version B, a 19% vs. 20% gap still falls short of statistical significance; with a standard two-proportion test, you need roughly 12,000 recipients per version before that difference clears the 95% confidence bar. And if you are only tossing this to a meager sample of 2,000 people in total and see the same open rates, your data simply is not big enough to declare a winner.
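To make that concrete, here is a minimal sketch of the kind of significance check I mean, using a standard two-proportion z-test in plain Python. The sample sizes are just illustrative, and the normal approximation is my choice of method, not something dictated by any particular email platform:

```python
from math import sqrt, erf

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value) under the normal approximation."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # two-tailed p-value
    return z, p_value

# The same 19% vs. 20% open rates at three different volumes per version
for n in (1_000, 5_000, 12_500):
    z, p = two_proportion_z_test(round(0.19 * n), n, round(0.20 * n), n)
    print(f"{n:>6,} per version: z = {z:.2f}, p = {p:.3f}, significant at 95%: {p < 0.05}")
```

At 1,000 or even 5,000 recipients per version the gap is still inside the noise; for these particular open rates it only clears the 95% bar somewhere above 12,000 per version.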

Case closed… or is it?

The Proof of the Pudding is in the Eating

I would like to put forward an idea: when you are split testing your subject lines, the split test does not end with the email. In fact, the open rate might not even be the deciding piece of evidence in your analysis. Sure, it’s important to analyze, but as the often-misquoted adage actually goes, the proof of the pudding is in the eating. And by that, I urge you to look at the ultimate goal of the email.

  • Was it an onboarding campaign where you were looking to get more new users to log in to your app?
  • Was it a sales email with the goal of driving revenue for your online store?
  • Was it a refer-a-friend campaign where you were looking for new leads?

These are key questions because the answers will give you some direction and point you to where you ought to search for definitive, bottom-line results for your A/B subject line split test. Before going any further, I’d like to challenge you to think about your most recent split test.

Got it? Great!

Now, in my case, I was driving traffic to a site, so the click-to-open rate (CTOR), i.e., unique clicks divided by unique opens, is where I need to look. Let’s go back to the second case I mentioned, where a split test was sent to 2,000 users. Half got subject line version A, and the rest got version B. The open rates were close, 19% and 20% respectively; however, at this volume there is no conclusive evidence… or is there?

The Email with the Lower Open Rate is the Winner!?

Looking into the click-to-open rate reveals the hidden gem. My scenario looks like this:

Version    Sent     Opens    Clicks    Open Rate    CTOR
A          1,000    190      75        19%          39%
B          1,000    200      60        20%          30%

Of course, I am not presenting the raw unique click rate (clicks divided by emails sent) here, because in email marketing you have to focus on engagement: of the people who actually opened, how many went on to click?

And if you run the numbers (a quick sketch follows below), you’ll see that the difference between versions A and B in clicks relative to opens is statistically significant at the 95% confidence level in favor of version A, despite its lower open rate. There you have it: the next time you run an A/B split test on subject lines, be sure to keep digging, because sometimes you’ll find shiny gold!
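For anyone who wants to check this themselves, here is the same style of two-proportion test applied to the CTOR figures in the table above. This is a rough sketch under a normal approximation, which is my assumption rather than the only way to test it, but the result squeaks in right at the 95% threshold:

```python
from math import sqrt, erf

# CTOR comparison: clicks out of opens, per version (figures from the table above)
opens_a, clicks_a = 190, 75   # version A: CTOR ~39.5%
opens_b, clicks_b = 200, 60   # version B: CTOR 30.0%

pooled = (clicks_a + clicks_b) / (opens_a + opens_b)
se = sqrt(pooled * (1 - pooled) * (1 / opens_a + 1 / opens_b))
z = (clicks_a / opens_a - clicks_b / opens_b) / se
p_value = 1 - erf(abs(z) / sqrt(2))  # two-tailed p-value under the normal approximation

print(f"CTOR A: {clicks_a / opens_a:.1%} vs. CTOR B: {clicks_b / opens_b:.1%}")
print(f"z = {z:.2f}, p = {p_value:.3f}, significant at 95%: {p_value < 0.05}")
```

It is a close call (p ≈ 0.049), which is exactly why it is worth actually running the numbers instead of eyeballing the percentages.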

Why this happens is another story. I answer that question, “Do Higher Open Rates Equal Better Email Campaigns?”, in another article. Check it out!