Up Your Email Marketing Game Through Testing and Optimization

The EEC recently brought together a panel of experts from the brand and vendor sides of the business for a webinar on how to level up your email marketing performance by elevating your testing best practices, tactics, and optimization. In this follow-up blog post, our panel continues the conversation by offering the following key takeaways:

1) What are the top one or two most important things to keep in mind when testing to improve email marketing response?

Karen: 

1) Know your goals and the KPIs for measuring them. In other words, know your WHAT and WHY. What do you hope to achieve (e.g., higher click-through rates), and what's your rationale for wanting it (e.g., to increase site traffic, drive up conversions, or increase engagement)? Identifying your what and why then allows you to define the measures that determine success. Ad hoc testing without a plan is wasted effort; your time and money are better spent elsewhere. Approach your tests with strategy and discipline, and you'll quickly and appropriately focus on optimizing positive impacts to your business's bottom line and justifying your use of resources.

2) Make sure your results are statistically significant and projectable to your entire list/audience. This means practicing sound math and testing with segments large enough to yield valid results. If the margin of error in your test design is high, or your test segments are too small to yield measurable results, differing outcomes between test groups might be due to random outside influences rather than your tested variables. If in doubt about a winning segment or treatment, retest to validate the results. The last thing you want is to invest significant time, money, and effort into testing, only to end up extending low-confidence results into standard practice.
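To make Karen's second point concrete, here is a minimal sketch of a two-proportion z-test in Python. The counts are illustrative placeholders, not real campaign data, and in practice you would also size your segments up front with a power calculation.

```python
# Minimal two-proportion z-test for an email A/B test.
# All counts below are illustrative placeholders, not real campaign data.
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Return the two-sided p-value for the difference between two rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))      # normal-CDF tail x 2

# Example: version A drew 420 clicks from 10,000 sends; version B drew 480.
p = two_proportion_z_test(420, 10_000, 480, 10_000)
print(f"p-value: {p:.4f}")  # ~0.04 here, so the lift clears a 0.05 threshold
```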

Scott:

Look at direct and indirect conversion – openers and non-openers. Your emails may not drive an open or a click, but email has incredible influence in an indirect (read: non-trackable) way. If your main KPIs are orders and revenue, look beyond the traditional email engagement metrics when evaluating test results. For example:

In a previous life, I ran a report that looked at every customer who had received an email and then converted in any channel within the next 48 hours. 40% of revenue from those customers came from non-openers.
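For readers who want to reproduce that kind of influence report, here is a rough sketch in Python/pandas. The tables and columns (sends, opens, orders, customer_id, revenue) are assumptions for illustration, not a real schema; a production version would pull from your ESP and order systems.

```python
import pandas as pd

# Tiny illustrative data; the schema here is hypothetical.
sends = pd.DataFrame({"customer_id": [1, 2, 3],
                      "send_ts": pd.to_datetime(["2024-03-01 09:00"] * 3)})
opens = pd.DataFrame({"customer_id": [1]})  # only customer 1 opened
orders = pd.DataFrame({"customer_id": [1, 2, 3],
                       "order_ts": pd.to_datetime(["2024-03-01 12:00",
                                                   "2024-03-02 20:00",
                                                   "2024-03-05 10:00"]),
                       "revenue": [50.0, 80.0, 40.0]})

# Join each send to same-customer orders, then keep conversions within 48 hours.
attributed = sends.merge(orders, on="customer_id")
lag = attributed["order_ts"] - attributed["send_ts"]
attributed = attributed[(lag >= pd.Timedelta(0)) &
                        (lag <= pd.Timedelta(hours=48))].copy()

# Split the attributed revenue by opener vs. non-opener.
attributed["opened"] = attributed["customer_id"].isin(opens["customer_id"])
share = attributed.groupby("opened")["revenue"].sum()
print(share / share.sum())  # non-openers' share is the invisible influence Scott describes
```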

Paul:

Know what you are testing for, and make sure there is enough time to accommodate that outcome. If you are testing for conversions, a two-hour A/B test won't predict that outcome. We have seen the opposite with Send Time Optimization (STO): to understand the impact of STO, you need to look at no more than the first two hours, because after that the timing of the delivery is no longer a factor.
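As a rough illustration of that two-hour window, here is a sketch that scores an STO test on opens within the first two hours of delivery only; the (delivered, opened) timestamp pairs are hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical (delivered_at, opened_at) pairs; None means never opened.
events = [
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 9, 40)),
    (datetime(2024, 3, 1, 9, 0), datetime(2024, 3, 1, 15, 0)),  # too late to credit to send time
    (datetime(2024, 3, 1, 9, 0), None),
]

window = timedelta(hours=2)  # per Paul: after two hours, delivery timing is no longer the factor
early_opens = sum(1 for sent, opened in events
                  if opened is not None and opened - sent <= window)
print(f"2-hour open rate: {early_opens / len(events):.0%}")
```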

2) What’s the biggest factor that can tank test results or render a test useless?

Karen:

A failure to focus testing on what impacts bottom-line business results the most. Anyone can easily test subject lines to see which improve email open rates, but do better open rates matter enough to translate into meaningful business gains downstream? If not, focus your testing on areas that DO affect your most important channel, revenue, or company goals – like segmentation, offer, creative, or landing page design – and continue to optimize from there.

Scott:

A failure to understand proper test duration. Can you really trust a time-of-day test that ran only on a random Wednesday? No. With a single-shot, single-day test, too many anomalies can skew the results. The same is true if you run a test for far too long: you allow too much time for anomalies to play a role.

Run similar tests across various times to account for seasonality and short-term factors, like big promotions. If the test results hold true across those times, then you have something.

Paul:

Bias. Bias in list selection, offer, etc. just creates extra noise that distracts from the signal. Eliminate bias when selecting the audience you are testing with: many lists are ordered by when the email address was added to the file, so pulling test groups from the top of the file simply compares your oldest subscribers to your newest. A randomized list with enough subscribers to reach statistical significance is critical.
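As a simple illustration of Paul's point, here is a minimal sketch of randomizing a signup-ordered list before splitting it into test cells. The subscriber IDs and cell size are hypothetical; size the cells using a significance calculation like the one shown earlier.

```python
import random

# Stand-in for a real file: subscriber IDs in signup order (oldest first).
subscribers = [f"sub_{i}" for i in range(100_000)]

random.seed(42)            # fixed seed so the draw is reproducible
shuffled = subscribers[:]  # copy, so the source list is left untouched
random.shuffle(shuffled)   # break the signup-date ordering

cell_size = 10_000         # hypothetical; set via your significance/power math
group_a = shuffled[:cell_size]
group_b = shuffled[cell_size:2 * cell_size]
```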

3) Can you share the best success you’ve seen as a result of testing? The biggest win?

Scott:

My biggest successes have come from audience tests: testing hypotheses on audience segmentation with targeted messaging, then proving the ROI behind them to win support for additional creative resources. See where you can get quick, easy wins, and build momentum.

The EEC is proud to support informative events and offerings to help elevate email marketing best practices and be the source of knowledge-sharing and connections in our community. We hope you’ll join us at future EEC webinars. Thank you to our presenters and contributors to this blog:

Karen Talavera
Founder & Principal | Synchronicity Marketing

Scott Cohen
Senior Email Marketing Manager | Purple

Michele Grant
Founder & CEO | Block + Tackle

Paul Shriner
Chief Evangelist, Co-Founder | AudiencePoint