We email marketers seem to have more and more tools to help us understand how emails are performing. However, some of these methods may not be measuring what you think they measure. Before you act on the information, and before you compare and contrast it to make sense of your successes or develop strategies toward your goals, it is extremely important to make sure that the data being returned is valid and accurate.
Usually, the overarching conversion goal is measured in revenue, or something similar such as leads generated or signups. Keep an eye on your progress toward that overarching goal, and make sure the metrics you are receiving align with the final results.
A basic example is the open rate of an email campaign. One way to inflate open rates is with exciting subject lines, but if the action items in the email don't align with the subject line, you will not convert any more customers, and you may even lose a few subscribers with that email, despite the higher open rates. Open rates by themselves are only a good secondary measurement of success. They should only be compared against past performance of the same list, for the same products, sent through the same email service provider. Keep in mind that open rates are easy to manipulate by mailing more or less active portions of your list, and that there is no industry standard for calculating them. Most ESPs therefore calculate open rates slightly differently from each other; some emphasize total opens while others focus on unique opens, which I feel is the better measurement.
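To make the total-versus-unique distinction concrete, here is a minimal sketch in Python with made-up open-tracking events (the subscriber names and counts are purely illustrative, not real ESP data):

```python
# Hypothetical open-tracking log: one entry per open-pixel fire.
# A single subscriber may open the same email several times.
open_events = ["ann", "ben", "ann", "cara", "ann", "ben"]
delivered = 10  # emails accepted for delivery in this campaign

total_opens = len(open_events)        # counts every open event
unique_opens = len(set(open_events))  # counts each subscriber once

total_open_rate = total_opens / delivered   # 6 / 10
unique_open_rate = unique_opens / delivered # 3 / 10

print(f"total open rate:  {total_open_rate:.0%}")   # 60%
print(f"unique open rate: {unique_open_rate:.0%}")  # 30%
```

The same campaign reports a 60% or a 30% open rate depending on which convention the ESP uses, which is why comparing open rates across providers is so misleading.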
One of our clients showed us recent reports from a third-party provider that sampled open rates. In this case, the client's open rates had no correlation to the success of the email; even when inbox delivery was limited, the reported open rate remained high.
Delivery analytics are another category that can be quite confusing. We often use jargon like delivery, deliverability and the more recent inbox delivery, all of which are sometimes interpreted differently. To make sure these are not mixed up, it's important to understand how they are derived. For the most part, delivery just means that the ESP handed this particular percentage of the emails to the various internet service providers without an immediate rejection. It's like getting a shipping confirmation from an online retailer: a lot of things can still go wrong, and in the case of email, they frequently do. Deliverability is often used to cover both basic delivery to ISPs and delivery into the actual subscriber inbox, while inbox delivery should always mean exactly that: like a FedEx delivery notification, the package has arrived at the house, or in this case, the inbox.
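A rough back-of-the-envelope sketch in Python shows why the terms describe different ratios over the same campaign (all numbers here are invented for illustration):

```python
sent = 100_000      # emails handed off by the ESP
accepted = 97_000   # accepted by ISPs (not rejected at the gateway)
inboxed = 80_000    # estimated to land in the inbox; the rest went to spam

# "Delivery" usually reports the ISP-acceptance ratio...
delivery_rate = accepted / sent
# ...while "inbox delivery" is about placement of the accepted mail.
inbox_placement = inboxed / accepted

print(f"delivery rate:   {delivery_rate:.1%}")    # 97.0%
print(f"inbox placement: {inbox_placement:.1%}")  # 82.5%
```

A campaign can look excellent on the first number and still have nearly a fifth of its mail land in spam folders, which is exactly the confusion the jargon creates.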
After any email is delivered to an ISP, the ISP can slow down (throttle) delivery to the actual recipients, or even decide not to deliver at all. And the messages it does deliver can land in the inbox or in the spam folder, the latter of course being pretty useless in almost all cases.
To complicate all of this, these measurements are sometimes difficult to obtain, and with so many point measurements, the same email campaign can produce conflicting analytics, making the signals hard to read correctly.
So how can you figure out what to pay attention to and what to ignore?
Ultimately, if your conversions are normal, there are probably no major issues going on. If your delivery drops, the ISPs are not accepting mail to addresses on your list. Your ESP, as well as many of the deliverability services, can tell you immediately what is going on there; usually the cause is a serious problem with the list being used. Most commonly, when an ISP blocks you, it takes up to 24 hours before it allows you to send messages again.
Delivery to the ISP is of course crucial, but your email isn't going to be read unless it ends up in the inbox. There are many ways to measure inbox delivery, none of which is perfect in all situations, and some are less accurate than others. I have staked my career on monitoring email performance based on the real subscribers of the companies we serve. In my mind, no metric is stronger than one derived directly from a representative panel of the actual subscribers who signed up for your email. However, these metrics don't always serve small senders or B2B senders well: campaigns have to reach a certain size before they show up in a statistically meaningful way in any consumer panel.
The traditional approach to monitoring inbox delivery is seed based. Although this methodology is slowly declining in popularity, it's still the most common measurement of inbox delivery. Its strength is that it is fairly representative of inbox delivery at smaller ISPs, because they evaluate email at the campaign level: they don't distinguish between inboxes based on behavior, so seed addresses get the same treatment as real email subscribers. Most cable networks, corporate servers and smaller free services fall into this category. The weakness lies in the fact that extremely few seeds are sent to each mailbox provider, usually never more than 15, often anywhere between 1 and 10 depending on the size of the provider. So 10 to 15 may go to Gmail, but only 1 to 5 go to Optimum Online, Frontier or Cox. This of course does not give a very representative sample across the vast number of internet service providers, but it commonly does indicate that an issue is present.
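The sampling problem is easy to see with a little arithmetic. With n seeds at a provider, the smallest step the reported inbox-placement rate can move in is 1/n; the seed counts below are hypothetical, chosen only to match the ranges described above:

```python
# Hypothetical seeds per mailbox provider (illustrative numbers only).
seed_counts = {"Gmail": 15, "Optimum Online": 3, "Cox": 2}

for provider, n in seed_counts.items():
    granularity = 1 / n  # smallest observable change in placement rate
    print(f"{provider}: {n} seeds -> results move in {granularity:.0%} steps")
```

With only two seeds at a provider, a single seed landing in spam swings the reported placement rate by 50 points, even if real subscribers saw almost no change.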
Another issue is that the global ISPs (Gmail, Yahoo, Hotmail/Outlook.com and AOL) all assign different placement to individual emails within a single campaign, based on each inbox user's behavior. With seeds, no actual subscriber interacts with the email, so there is no behavior the ISP can attribute to the user, and the seed address may therefore end up in the spam folder more often than real subscribers do, triggering false alerts of a spam issue. Equally bad, these inboxes never unsubscribe or mark messages as spam, creating the reverse problem: the ISP puts a significant percentage of a particular campaign into subscribers' spam folders, but not into the seed addresses' folders, and the ESP is never alerted. All of these situations make seed based monitoring of the majority of commercial email to consumers very problematic.
But again, in some cases it's the best available method. To improve the accuracy of problem detection, most ESPs and inbox monitoring providers also look at secondary data: data returned from the ISPs to the sender, blacklist hits, modeled likelihood of spam trap hits, and pixel or beacon technology that gives a better understanding of an email's effect. All of these are helpful when used together, with the understanding that none of them gives a full view of the journey of the individual emails in a campaign.
This is why it remains essential to always monitor your KPIs, revenues and conversions to make sure all systems are operating as expected. Take a look at what your current providers offer, what new options are available, and whether those would serve you better than the monitoring you have in place. It's important to be critical when evaluating email analytics providers, and equally important to keep in mind that industry standards in this sector are few and far between.