B2B marketers can develop an A/B testing model to suit their objectives without the sample sizes we see our B2C colleagues work with every day. Read on to discover how…
If you google the question “What is the purpose of A/B testing for emails?” and then take a look at the brands that appear on the first results page, what do you see? It’s the brands NOT there that go part way to explaining why I wrote this blog post.
It’s my experience in the pre-sales process that for prospects looking at marketing automation, A/B testing is often a critical requirement, probably reflective of their use of consumer-oriented email marketing platforms, e.g. MailChimp, Salesforce Marketing Cloud etc. You can do A/B testing with both Oracle Responsys and Oracle Eloqua. It’s my observation that Oracle Eloqua customers get on with their marketing and A/B testing often becomes a distant memory.
Or does it?
The fundamental difference I see between the marketers who have A/B testing as a critical part of their strategy and those who have little interest is the space or industry they work in. I prefer not to use the terms B2B and B2C, although that is one way to view the spaces I refer to.
In place of “B2B” and “B2C”, I believe a better way to define these two spaces is to look at them as high value and low value transactions. For example, airline tickets (remember those?) are generally low value. Looking to purchase a new home would be high value. Looking for shoes and handbags – low value, but it could depend on the brand.
High value & low value – try this as a case study…
Work with me here… You’ve just been given a campaign to design and build. You have to market both the Ford and the Mercedes; how would you use marketing automation to help you in your campaign? I’ve provided some pricing below to help you think about how you might market them. Price is probably not where you want to focus if you’re marketing the Mercedes, however price point may be a deciding factor for the Ford buyer.

Ford: US$189 per month for 36 months.

Mercedes: US$449 per month for 36 months.
My point is that the way you market the Ford vs the Mercedes is likely to be quite different. The buyer’s journey for each will likely be quite different. Over the years I’ve worked with a couple of automotive manufacturers, high-end manufacturers, and I can assure you the buyer’s journey is very different and the campaigns developed by these companies are far more intimate.
In the example above both vehicles would fall into the classic definition of B2C. If we simply apply the term “B2B” to Oracle Eloqua users and “B2C” to Oracle Responsys users, I think we miss the point.
My own experience as a Hyundai customer is a very generic one. We’re 9 years into owning what was a brand new car purchased from a dealer, and from an email point of view the contact we’ve had from Hyundai or the dealer has been essentially nothing. The price point for our Hyundai (AUD) sits in between the two examples listed above, but it’s in no way a “prestige” or “luxury” vehicle. (I’ve wanted a Mercedes-Benz since I had my first toy car as a kid; it will happen.)
Be clear on the value or perceived value of your goods or services & determine an A/B testing strategy from there.
I’d suggest that by focussing your attention on the “value” of your item, whether in real terms i.e. dollars or in perceived value, your go-to-market approach will differ. Take the below as an example.
High value
For the marketer delivering campaigns for higher value items or services, where the buying cycle can sometimes be measured in months or even years, my experience has shown less success with A/B testing.
What I have seen is marketers who have marketed low value items move into a high value organisation and plough ahead thinking they’re still selling shoes and handbags!
It’s generally a waste of everyone’s time. So, what does work?
For the Eloqua clients I’ve seen have success with A/B testing, it required a paradigm shift: a fresh set of eyes looking for the process that would best suit them and their objectives.
Low value
The marketer delivering campaigns for lower value items or services, where the buying cycle can sometimes be measured in minutes, can use A/B testing as a valuable tool to quickly gauge the effectiveness of email content, subject lines, calls-to-action, body copy and images.
In order for there to be any empirical value in the conclusions you draw from these A/B tests, you really need thousands of contacts, arguably tens or even hundreds of thousands of contacts.
Simple questions, like whether the primary call-to-action button should be green or red, can be tested quickly across an A/B group. The measurement might last 10–20 minutes and then the remaining contacts in your campaign are sent the winning email, A or B.
Suggested approaches for High Value/Eloqua marketers to gain value from A/B testing.
As with any testing, measurement or interrogation of a campaign and its performance, you need to have a specific set of objectives, metrics that you can measure against. I find it helps if you ask yourself “What does success look like?”.
Create a benchmark for yourself so that when you complete your A/B Testing, you at least know if you’re above or below your average.
Don’t get fixated with “industry averages” – I honestly think they’re a bit of a myth. What’s the value of a financial services firm measuring themselves against the unique open rates of a chain of gyms or universities? Sure, you can get industry breakdowns, but then you have to understand the various audience types and the campaign types.
Unless the “industry averages” you choose to measure yourself against are from a transparent source, I’d suggest you’re better off investing your energy into exploring your own data.
As Eloqua customers, you have massive amounts of data available at your fingertips using Oracle Insight (OBIEE).
How big a sample do I need?
I googled “what is a statistically significant sample size?” and most results came back as relating to surveys, with the folks at SurveyMonkey seeming to own most of the search results. One of those results suggested: “Generally, the rule of thumb is that the larger the sample size, the more statistically significant it is—meaning there’s less of a chance that your results happened by coincidence.”
Your A/B testing follows a similar line of thought. I’d argue that A/B testing on 150 contacts is a waste of time.
The good people at SurveyMonkey title their article “5 steps to make sure your sample accurately estimates your population”.
Try this example, notice they use a consumer product as an example…
“So, for example, if you need 100 women who use shampoo to fill out your survey and you think about 10% of these shampoo-using women that you send the survey to will actually fill it out, then you need to send it to 100/10% women – 1000!”
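If you want to put a number on “enough”, the standard two-proportion sample-size formula will tell you roughly how many contacts each variant needs. Below is a minimal Python sketch; the baseline open rate and the uplift you hope to detect are assumptions you’d swap for your own benchmark figures.

```python
from math import ceil, sqrt
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Minimum contacts per variant to detect a move in a rate
    from p1 to p2 with a two-sided two-proportion z-test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # 1.96 for 95% confidence
    z_beta = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
         + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    return ceil(n)

# Illustrative only: a benchmark unique open rate of 4.3% and a
# hoped-for lift to 6% needs roughly 2,650 contacts per variant.
print(sample_size_per_variant(0.043, 0.060))
```

At a 4.3% baseline, even detecting a sizeable jump to 6% needs thousands of contacts per variant, which is exactly why A/B testing on 150 contacts tells you nothing.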
Begin with your own benchmark so you have something to measure “success” against.
These two Oracle Insight reports are probably the best place to start. You can apply a date range e.g. the past 24 months or you may want to build a benchmark that looks at this data quarter on quarter or perhaps every six months. That way you can gauge movement in your average performance over time and see if you’re getting the changes you’re expecting.
Campaign Analysis Overview report
This report shows a comprehensive overview of campaign activity and performance. Since you can include multiple campaigns in the report, you can use this report to compare the performance of campaigns. You can also use this report to see how many campaign members there are, how many emails were sent as part of the campaign, and the overall performance of emails that are a part of the campaign.
- Folder location: Catalog/Shared Folders/Campaigns
- Subject area: Campaign Analysis
- Questions this report helps you answer: How have people engaged with my campaigns to date? What is the clickthrough rate? How much website traffic was generated?
Email Analysis Overview report
This report shows the performance of emails sent within the specified time frame. You can use this report to view how many times an email was sent, what the bounceback rate was, the number of times the email was opened and clicked, and more. Since you can include multiple emails in the report, you can use this report to compare the performance of emails.
- Folder location: Catalog/Shared Folders/Email
- Subject area: Email Analysis by Send Date
- Questions this report helps you answer: How has an email been performing? What is the click-through rate? What is the conversion rate? What is the bounce rate?
How to create a Benchmark based on the Oracle Eloqua Email Analysis Overview report.
This sample data can be extracted directly from the Oracle Eloqua Email Analysis Overview Insight report. Insight produces the totals for you. Using simple colour coding, e.g. red and green, hopefully you can figure out that in 2020 this data shows improvement in Unique Open Rate and Click-to-Open Rate, and that the Unsubscribed Rate has gone down from the previous year.
| Year | Unique Open Rate | Unique Clickthrough Rate | Click-to-Open Rate | Possible Forward Rate | Unsubscribed Rate |
|------|------------------|--------------------------|--------------------|-----------------------|-------------------|
| 2020 | 4.32% | 2.12% | 28.21% | 1.12% | 0.04% |
| 2019 | 4.19% | 2.19% | 26.75% | 1.36% | 0.06% |
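If you prefer to script the comparison rather than eyeball colour coding, a few lines of Python can replay the table above. The figures are the ones from the report; the “higher is better” flag on each metric is my reading of how you’d interpret it, not something Insight exports.

```python
# Year-on-year comparison of the Insight benchmark totals above.
benchmark = {
    # metric: (2019 value, 2020 value, higher_is_better)
    "Unique Open Rate":         (4.19, 4.32, True),
    "Unique Clickthrough Rate": (2.19, 2.12, True),
    "Click-to-Open Rate":       (26.75, 28.21, True),
    "Possible Forward Rate":    (1.36, 1.12, True),
    "Unsubscribed Rate":        (0.06, 0.04, False),
}

for metric, (y2019, y2020, higher_is_better) in benchmark.items():
    delta = y2020 - y2019
    improved = (delta > 0) == higher_is_better
    print(f"{metric}: {y2019}% -> {y2020}% "
          f"({delta:+.2f} pts, {'improved' if improved else 'declined'})")
```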
There are other data points from the Email Analysis report that could be of value to help round out your own benchmark, for example the total number of emails sent in a given period. You could create an average of emails sent month on month or quarter on quarter.
You can also get an average of hard and soft bouncebacks, which are often a reflection of the quality of your contact data. Hard bounces should be removed from Eloqua on a regular cycle and the CRM updated to show that the email address is invalid, helpful information for a smart salesperson.
TIP…
In the example above, the report simply focusses on ALL emails. This is helpful at a glance; however, it’s highly likely that different campaign/content types will perform very differently. For example, you may run awesome events that people are very keen to participate in, and the email performance of your event campaigns reflects that.
Those stats will differ significantly when compared to your newsletter campaigns and perhaps the lead nurturing campaigns. In combination, you’ll get a pretty flat view of things. Consider creating benchmarks based on Campaign Types or perhaps on Eloqua Email Groups. This will give you a more “apples for apples” comparison.
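As a rough sketch of how you might build those segmented benchmarks: assuming you can export send-level totals from Insight to CSV, pandas will roll up a per-Email-Group view in a few lines. The file and column names here are hypothetical placeholders, so rename them to match your actual export.

```python
import pandas as pd

# Hypothetical export: one row per email, with its Eloqua Email Group.
df = pd.read_csv("email_analysis_export.csv")

by_group = (
    df.groupby("email_group")  # e.g. Events, Newsletter, Nurture
      .agg(total_sends=("total_sends", "sum"),
           unique_opens=("unique_opens", "sum"),
           unique_clicks=("unique_clicks", "sum"))
)
by_group["unique_open_rate"] = by_group["unique_opens"] / by_group["total_sends"]
by_group["click_to_open_rate"] = by_group["unique_clicks"] / by_group["unique_opens"]
print(by_group.round(4))
```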
Before you start A/B testing – tips from the Oracle Eloqua Help Centre
- Identify the objective of the A/B test. Typically you want to identify one element of an email that you want to test. For example, you could focus on content elements like subject lines, headlines, or calls to action, or design elements like color choice, button design, and so on. This blog post can help identify the features you might want to test.
- Identify the metrics you’ll use to measure the effectiveness of your A/B test. When setting up the test, Oracle Eloqua lets you identify one metric that determines the winning email. The winning metric could be one of the following: total opens, unique opens, total clicks, unique clicks, total conversion, or click-through rate. Choose the metric that aligns with your goals for the test. If you’re looking for high engagement, you might want to focus on opens or clicks. If you’re focused on goal completion, the total conversion metric might be more appropriate. For more information on how these metrics are calculated, see Campaign reporting and metrics.
- Send the A/B test emails to enough contacts that you’re confident the results of the test are trustworthy. The size of your segment, the content of the email, and any time constraints should help you determine an appropriate test group size.
- Plan to capture the results of the A/B test and share them. You’ll want others to benefit from any new discoveries.
- You cannot change an A/B test after it has started. Have a look at this information on changing an active A/B test.
SOURCE: Oracle Eloqua Help Centre
Now that you have a real benchmark, based on your own data, you’re in a better position to measure success and A/B testing becomes a little more real.
Step 1. Setting your A/B testing objective
- What do you want to measure? Remember the point of A/B testing is to measure two variants against each other – hence A/B. It’s not A/B/C testing, and testing combinations of multiple elements at once is multivariate testing (more on that below).
- Once you have your subject matter to test (let’s use the colour of a call-to-action button, or CTA, in the email as an example), all aspects of your two emails must be identical except for the colour of the button. That’s it; nothing else should be different. A sketch of how to judge the result of such a test follows this list.
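When the test ends, you still have to decide whether the gap between A and B is signal or noise. A two-proportion z-test is a standard way to make that call; here is a minimal sketch with made-up click counts for the green vs red button example.

```python
from math import sqrt
from scipy.stats import norm

def ab_z_test(successes_a, sends_a, successes_b, sends_b):
    """Two-sided two-proportion z-test on click (or open) counts.
    Returns the z statistic and the p-value."""
    p_a, p_b = successes_a / sends_a, successes_b / sends_b
    p_pool = (successes_a + successes_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    return z, 2 * norm.sf(abs(z))

# Illustrative numbers only: green button vs red button.
z, p = ab_z_test(successes_a=120, sends_a=2500, successes_b=158, sends_b=2500)
print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests a real difference
```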
It’s at this point that high value marketers can hit a roadblock. Their segment sizes are rarely in the tens or hundreds of thousands; they’re usually far more targeted and hence smaller.
For the A/B test to have any empirical value, you need high numbers, large sample sizes. So, what do you do?
Step 2. Tailor an A/B strategy that reflects your audience
- Consider breaking down your testing based on audience or content types. For example, the performance of emails you send for events is likely to have a different response rate to your newsletters. Your thought leadership emails will be different again and it’s very likely that the lead nurturing campaign will yield very different performance statistics to all of the above.
- Take a step back and be clear again about what you want to measure; set your objective to suit either your audience type or your content.
What is multivariate testing in marketing?
“In internet marketing, multivariate testing is a process by which more than one component of a website may be tested in a live environment. It can be thought of in simple terms as numerous A/B tests performed on one page at the same time.
A/B tests are usually performed to determine the better of two content variations; multivariate testing uses multiple variables to find the ideal combination.
The only limits on the number of combinations and the number of variables in a multivariate test are the amount of time it will take to get a statistically valid sample of visitors and computational power.
Multivariate testing is usually employed in order to ascertain which content or creative variation produces the best improvement in the defined goals of a website, whether that be user registrations or successful completion of a checkout process (that is, conversion rate).
Dramatic increases can be seen through testing different copy text, form layouts and even landing page images and background colours.
However, not all elements produce the same increase in conversions, and by looking at the results from different tests, it is possible to identify those elements that consistently tend to produce the greatest increase in conversions.”
SOURCE: Wikipedia
Step 3. Make changes based on your observations
It’s this final step that often gets lost or is rarely achieved. Time is invested in capturing the data, but the final step in the process is delayed or not explored further, often with the excuse of “we’re busy” or “we don’t have time for this”. Plan the time before you begin the process. Consider a recurring quarterly review meeting; designate one or two team members to own the process, capture the data and report back to the team.
However you choose to review your A/B testing results, make sure the process is collaborative. Different team members will have varying views on individual campaigns; they may be more familiar with the audience and be in a better position to explain any spikes in engagement.
Try this “High-Value” example
Unlike our colleagues marketing airfares, shoes or handbags, we rarely have the number of contacts in our campaigns to derive any real empirical value from an A/B test, so a different approach, and more time, is needed. What our colleagues in a low value environment may achieve in minutes could take us weeks or even months.
I have a client that created an A/B testing strategy that took place over 12 months. I should point out that it was created on the back of a robust overall marketing plan of which Eloqua was just one part. This client doesn’t “shoot from the hip”, they know in March the campaigns they’ll be running in October – they’re organised.
One aspect of their A/B testing related to unique open rates and email subject lines.
It was essentially as follows:
- Select a range of subject lines for event emails. Some were as simple as “You’re invited [first.name]” while others were content/subject specific to the event. Segments were split 50/50 (or A/B) using Eloqua’s Multi-Step campaign canvas – not using Eloqua’s Simple Email Campaign.
- Once each event campaign had concluded, the client captured the performance of each email, making note of the unique open rate, given it was subject lines they were measuring.
- The key difference here for A/B testing experts is that the entire segment was split into two groups, A and B (a sketch of pooling such results across events follows below).
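No single event campaign in a program like that has the numbers to settle the question on its own; the value comes from repeating the same 50/50 split and pooling the results over the year. The sketch below shows that aggregation with entirely made-up figures, and it assumes each campaign genuinely ran the same split with the same two subject-line styles.

```python
from math import sqrt
from scipy.stats import norm

# Made-up results from a year of event campaigns, each split 50/50.
# Variant A: personalised subject line; variant B: event-specific subject line.
events = [
    # (opens_a, sends_a, opens_b, sends_b)
    (42, 400, 55, 400),
    (31, 350, 29, 350),
    (58, 500, 71, 500),
    (24, 300, 33, 300),
]

opens_a = sum(e[0] for e in events)
sends_a = sum(e[1] for e in events)
opens_b = sum(e[2] for e in events)
sends_b = sum(e[3] for e in events)

p_a, p_b = opens_a / sends_a, opens_b / sends_b
p_pool = (opens_a + opens_b) / (sends_a + sends_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
z = (p_a - p_b) / se
p_value = 2 * norm.sf(abs(z))

print(f"A: {p_a:.1%} unique opens, B: {p_b:.1%}, p = {p_value:.3f}")
```

Pooled this way, a year of small sends starts to behave like one large test, which is about as close as a high value marketer usually gets to the sample sizes the low value folks enjoy.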
My top 5 tips for High Value B2B marketers & their A/B testing
- Set an objective for your A/B testing: what do you want to measure?
- Be clear about your audience, are you marketing high or low value products or services?
- Develop your own email engagement benchmark from your own Oracle Eloqua Insight data.
- Determine an approach to your A/B testing that is most appropriate for your team and audience size.
- Lock in dates for regular reviews of the data captured and determine possible changes in your email design or copy development to improve your overall campaign performance.