April 2, 2020
Email Marketing: Testing Your Emails

We’re going to focus on how to run tests on your marketing emails to get results you can measure. Every email test you run should have a strong purpose behind it. Each time you decide to test an email, ask yourself: “Why am I running this test, and what am I hoping to get out of it?” By testing your emails, you’re grounding your email marketing efforts in data-driven analysis, which gives you the next steps for improving your next send. Let’s explore how to test your emails to identify the right next steps and continue to send emails that provide value to your contacts.

Before diving into the steps, let’s first talk about A/B testing. What is it, and how can you use it to test your marketing emails? A/B testing is the inbound answer to a controlled experiment. It’s defined as a method of comparing two versions of an asset, such as a web page, an app, or an email, to determine which one performs better. In this case, we’re comparing two versions of an email. You can use an A/B test to pinpoint specific variations of your email and focus on how to improve that asset, which lets you rely on data-driven analysis instead of a guessing game.

Most email marketing tools have a specific feature that allows you to A/B test assets, but you can also run an A/B test on your own, without these tools. An A/B test allows you to test variations of your email alongside one another. Then you can review the results to see which version performed better, giving you the data to back up future decisions on your email sends.
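If you’re running the split yourself, without a built-in tool, the mechanics can be as simple as randomly dividing your list in half. Here’s a minimal Python sketch (the function and contacts are illustrative, not from any specific tool):

```python
import random

def split_ab(contacts, seed=42):
    """Randomly split a contact list into two equal-sized groups.
    Group A receives the control email; Group B receives the variation."""
    shuffled = list(contacts)
    random.Random(seed).shuffle(shuffled)  # seeded so the split is reproducible
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

group_a, group_b = split_ab(["ann@example.com", "ben@example.com",
                             "cam@example.com", "dee@example.com"])
```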
Now that you have an idea of what A/B testing is, let’s move on to the steps you’ll take to run tests on your marketing emails. First, define the goal and purpose of your test. Second, evaluate the segment of recipients you’re sending to. Third, design your test. And lastly, review and start your test. These steps will get you started on developing tests for your marketing emails. You will analyze and report on the results as well, but first we need to focus on creating the tests.

When you’re running a test on an email, focus on just one element at a time: the subject line, the body content, or the CTA you’re using. Think of the tests you run as experiments in which you want a control and a variable. With this in mind, you can take your first step in developing the test for your marketing emails.

The first step in any inbound
strategy is defining the purpose for
doing something. If you’re testing just to test,
you won’t discover results that give you actionable steps to
help you improve. While testing your marketing emails consistently will help you improve over time, keep in mind that testing just for the sake of testing won’t yield valuable results or provide value to the people receiving your emails. Take a look at how your emails
are performing and decide what you want to improve. Maybe a specific type of email
you’re sending is not yielding the results you want. Or maybe you’re going through a
rebranding and want to test different colors or logos. Whatever it is, make sure you
have a purpose before setting out to run a test. When setting this goal for your
email test, you’re also preparing to design your email
test later on. Take, for example, the email elements you can test. Which elements can affect open rates? It could be a few things, such as the number of emails you send to a list, the subject line, and the preview text. And which elements can affect clickthrough rates? The email body copy, the body design/layout, the body images, the CTA, and the email signature. These elements can give you a
starting point for focusing your goal and purpose. From here, you can see what’s
working well and what’s not to draft a hypothesis of what you
want to test and thus improve. Now that you have a goal and
purpose for your test, you’ll need to evaluate the segment of
recipients you’re sending to. You can’t run an A/B test on
your email unless it goes to someone, and when you’re testing an email, you need a minimum number of recipients to make the test conclusive. This is where statistical
significance comes in. Testing for significance involves doing some math to determine the number of people you need on your email list in order to run a test. Say you send an email to five people to test a new subject line. You might send three of those five people the updated subject line, and while they might all love it, you won’t be able to confidently say that the rest of your contacts will. You need more people for the results to be statistically significant. So how do you know how many
people to run a test with? HubSpot’s A/B testing tool, for example, requires you to have at least 1,000 contacts on your list to run a test. This is the total number of contacts you wish to send a specific email to. To run your test, you will need to determine a percentage, or sample size, of those 1,000 contacts to receive your variations, or versions, of the email. You will have Version A, which is your control (the typical email you would send), and Version B, the one with a variation made to it, whether that’s a change to your subject line, body text, or another element. If you are testing with fewer than 1,000 contacts, you can run a 50/50 test for your email send, where 50% get Version A and the other 50% get Version B. Let’s say, though, that you do have 1,000 or more contacts you want to send to. You will now need to determine
the sample size that will yield conclusive
results. If you are using a tool like HubSpot, the tool can make this calculation for you: you select the percentage you wish to send each variation to, and the number is set. But you can also determine the sample size yourself using a significance test calculator, which will give you the number of people who should receive each version of the email: the control and the variation. Let’s walk through an example
together. On a sample size calculator, there are a few options you will need to fill out: the confidence level, the confidence interval, and the population. It will then produce a sample size. Let’s begin with the population. The population is the total number of contacts you want to send your email to; for example, 1,000 contacts. You can estimate this number by looking at the last four or five emails you sent and how many people you sent them to. Once you have your population, you will have to set a confidence interval. You might have heard this called the “margin of error”; lots of surveys use it. This is the range of results you can expect once the test has run with the full population. And lastly, you need to look at the confidence level. This tells you how sure you can be that your sample results lie within the confidence interval. The lower the percentage, the less sure you can be about the results. The higher the percentage, the more people you’ll need in your sample size. For example, HubSpot’s A/B testing tool uses an 85% confidence level to determine a winner. In a calculator like this one, you can use 95% as a base. Now let’s apply these values to
see what we get. We have our list of 1,000
contacts and we want to be 95% confident our winning email
version falls within a 5-point interval of our population
metrics. Here’s what we’d put in the tool:

Population: 1,000
Confidence level: 95%
Confidence interval: 5

This would produce a sample size of 278, meaning 278 people get Version A and another 278 get Version B. Each segment would receive one of these versions. Then you can see which version performed better, for example Version B with your variation, and send that winning version to the rest of the contacts on your original list who did not receive it during the test. An A/B testing tool can help you do this automatically, but you can also implement your A/B test by creating different segments once you know the sample size you’ll need.
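If you want to make the calculation yourself, here’s a minimal Python sketch of the math such calculators typically use (assuming the standard normal-approximation formula with a finite population correction and a worst-case 50% response proportion):

```python
import math

def sample_size(population: int, confidence: float = 0.95, interval: float = 0.05) -> int:
    """Sample size per email version for a given population,
    confidence level, and confidence interval (margin of error)."""
    z_scores = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}  # common confidence levels
    z = z_scores[confidence]
    p = 0.5  # worst-case proportion maximizes the required sample size
    n0 = (z ** 2) * p * (1 - p) / interval ** 2  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)         # finite population correction
    return math.ceil(n)

print(sample_size(1000, confidence=0.95, interval=0.05))  # -> 278
```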
Now that you know the purpose and goal of your email test, and you know the number of recipients you need for your test to produce conclusive results, you can move on to designing the actual test. The design will relate heavily
to your purpose or goal. Like other aspects of your
inbound strategy, your goal is tied directly to the content,
purpose, or outcome you’re producing. When you set your goal, you
identified areas in your email that need improving. Now it’s time to take that a
step further and figure out ways to improve them. An important aspect of testing
is to make sure what you’re proposing is feasible. If you don’t want anyone to unsubscribe from your emails, don’t send ANY emails! Great experiment, right? Not so much. When you’re hypothesizing, be
creative but also keep your ideas within the boundaries of
reality. You want to explore tests that
will provide long-term results for your business. Let’s look at an example of a
hypothesis and what type of test you might design. In this example, when setting
the purpose of your test, let’s say you identified that your
newsletter emails are not getting the open rates you’d
like, and you want to find a solution by running a test to see how you can improve them. Your goal is to improve email
newsletter open rates from 11% to 15% during a business
quarter. Your hypothesis is that the
subject line contains characters and words that are triggering
the recipients’ spam filters. To test this hypothesis, you can
design a test to adjust the subject lines to avoid
exclamation marks and percentage signs and remove sales-y words
like “free” and “discount.” You want to aim to closely align
the subject line with what the email contains. And you’ll test if applying
these best practices improves your open rate.
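You could even script a quick check like this. Here’s a minimal Python sketch (the character and word lists are just examples from the hypothesis above, not an actual spam-filter ruleset):

```python
SALESY_WORDS = ("free", "discount")   # example words only
SPAM_CHARS = ("!", "%")               # exclamation marks and percentage signs

def flags_for_subject(subject: str) -> list[str]:
    """Return reasons a subject line might trip spam filters."""
    reasons = [f"contains '{ch}'" for ch in SPAM_CHARS if ch in subject]
    words = [w.strip("!?%.,") for w in subject.lower().split()]
    reasons += [f"sales-y word '{w}'" for w in SALESY_WORDS if w in words]
    return reasons

print(flags_for_subject("Get a FREE discount today!"))
# -> ["contains '!'", "sales-y word 'free'", "sales-y word 'discount'"]
```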
Another hypothesis for your low open rates is that you send too many emails, so your contacts are less compelled to open them. In that case, you can design a test that reduces your email frequency for at least one month and observe whether open rates improve. This is how you can tie your
goal to the design of your test to start to measure and improve
your email sends. Lastly, you’ll review and start
the test. This is an important step
because it means deciding how long you want to run your
test for. There is no magic number, no
perfect time of the week or even day of the month to run your
tests, but you want to run your test long enough to make sure
enough of your contacts have had time to interact with the
content. Some email A/B testing tools
will have you set a timeframe for the test, and at the end of
that time period, the tool will choose a winning email to send
to the rest of the contacts. This is why timing can be so
important. Your A/B test might not be
significant after an hour or even after 24 hours. To decide on this timeframe, you
can take a look at your past performance (remember, you want
to focus on data-driven analysis, not guesswork). One of the most common mistakes
people make is ending a test too soon. And this doesn’t just apply to a single A/B test. Make sure you’re testing many
emails to start to see how things are trending before
making an overall change to the way you send email. Maybe you choose to test a few
different elements over multiple email sends and multiple months. Analyzing these metrics will
help you decide on what you want to adjust for the time being. But for a single email send, the
time is still important. Take a look at past email opens and clicks and see where things started to drop off. For example, what percentage of total clicks did your email get during its first day? If you found that it got 70% of clicks in the first 24 hours, and then 5% each day after that, it’d make sense to cap your email A/B testing timing window to 24 hours, because it wouldn’t be worth delaying your results just to gather a little bit of extra data.
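As a rough sketch of that analysis (the daily click counts here are hypothetical, matching the 70%-then-5%-per-day pattern above):

```python
# Hypothetical clicks per day since a past send: 70% on day 1, 5% per day after
clicks_by_day = [700, 50, 50, 50, 50, 50, 50]

total = sum(clicks_by_day)
cumulative = 0
for day, clicks in enumerate(clicks_by_day, start=1):
    cumulative += clicks
    print(f"Day {day}: {cumulative / total:.0%} of total clicks")
# If day 1 already covers the large majority of clicks,
# a 24-hour test window is reasonable.
```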
If you use an email platform that has an A/B testing tool, it will determine a statistically significant winner for you. If not, you can determine the winner yourself by calculating the conversion rates of the two versions of the email.
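Here’s a minimal sketch of that manual check, assuming a standard two-proportion z-test (one common way to judge significance; the result counts are made up):

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up results: 278 recipients per version
z = two_proportion_z(conv_a=25, n_a=278, conv_b=42, n_b=278)
print(f"Version A: {25/278:.1%}, Version B: {42/278:.1%}, z = {z:.2f}")
# |z| > 1.96 corresponds to significance at the 95% confidence level
```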
But what happens if your test fails? What if neither version performs better than the other, or it’s too close to determine significance? If neither variation produced
statistically significant results, your test was
inconclusive. That is okay! This is why testing is
important. Not every test will produce
results for you to take action on immediately. This might mean adjusting your
goal or looking at the numbers you want to move. But most importantly, don’t be
afraid to test and test again. After all, repeated efforts can
only help you improve. This where you can start to see
how these tests are performing. You might decide to run the test
multiple times to determine what you want to change. These are the steps for outlining the tests you want to run on your marketing emails: define the goal and purpose of your test, evaluate the segment of recipients you’re sending to, design your test, and review and start your test. Testing is a great way to see how
your contacts are engaging or not engaging with your marketing
emails, and by following these steps, you’ll continue to prove
your ability to do data-driven analysis for your business.
