There are a few things in the small world of email marketing that I believe can be simply attributed to peer pressure. Just like back in our school days, most of our impressionable brains feel the need to keep up with the “cool” email marketers. The fact that you are reading this article tells me that you are at least interested on some level in learning more about and improving your own email program.
We all read trade magazines and blogs, attend webinars, and watch Twitter feeds looking for those nuggets that could make all the difference in our ROI. All of the “experts” seem to talk about the same things over and over again across these different mediums. Why do the topics seem recycled? The reason is that these really are the keys to success, and they do work.
I wanted to talk about one of those “we hear this all the time” topics and put a bit of a different spin on it. Let’s talk about A/B testing. Yes, testing again. Testing seems to be the staple of many best practices discussions. All of us senders know we should test our email. The problem is that most of us have no idea how to pull that off. I break it down like this: 10% test correctly, 30% attempt testing, 40% plan on testing, and the other 20% couldn’t care less. I think this breakdown mirrors most things in our lives. We have the overachievers, those among us who make the attempt, those who continually plan to start tomorrow, and those who don’t even want to discuss it.
Why can’t most of us actually get good results from our testing? The answer lies in the peer pressure we talked about earlier. All the cool kids are doing A/B testing, so we feel like we have to do the same thing. There is a big difference between doing real testing with a purpose in mind and simply sending two different email campaigns. Testing is all about the results, not the actual tests. If you are not in a position to capture data or understand why the results differed, testing is a waste of your time. It’s time to give up your seat at the popular table.
So you’re ready to test…
Step one before beginning a testing program is to determine what element you want to test. It is very important not to change multiple elements in a single test; that makes it impossible to discern what drives your results. Let’s say you decide to test subject lines. The rest of the email needs to be identical to reveal the true difference between the two versions. I would also highly recommend you predict the results before testing. You won’t always be right (and it’s sometimes exciting to be wrong), and the exercise will help you decide what to do with the results.
Test quantity is something we often see handled in a less than optimal way. If you have a campaign going to 100,000 recipients, the way to test is not to send 50,000 to one group and 50,000 to the other. The proper way is to send 5,000 to each test group, analyze the results, and then send the highest-performing copy to the remaining 90,000. The value in testing is optimizing each and every campaign right now. Too often I see people testing a campaign 50/50 and then doing nothing with the results.
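The 5,000 / 5,000 / 90,000 split above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original article: the open counts are hypothetical, and the two-proportion z-test with a 1.96 cutoff (95% confidence) is one common way to decide whether the winner really won, not something the article prescribes.

```python
import math

def split_for_test(total_recipients, test_fraction=0.05):
    """Give each of two variants a small slice; the rest wait for the winner."""
    test_size = int(total_recipients * test_fraction)
    holdout = total_recipients - 2 * test_size
    return test_size, test_size, holdout

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    """Two-proportion z-test: is the gap in open rates signal or noise?"""
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    return (opens_a / sent_a - opens_b / sent_b) / se

a_size, b_size, holdout = split_for_test(100_000)
print(a_size, b_size, holdout)  # 5000 5000 90000

# Hypothetical results: variant A gets 1,100 opens, variant B gets 950.
z = two_proportion_z(1_100, a_size, 950, b_size)
if abs(z) > 1.96:  # significant at the 95% level
    winner = "A" if z > 0 else "B"
    print(f"Send variant {winner} to the remaining {holdout:,} recipients")
```

The point of the small slices is exactly what the paragraph says: 90% of the list still gets the better email on this campaign, instead of half the list getting the loser.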
The last piece of advice I’d like to leave you with today is to think historically. Proper testing can give you the future playbook for your email programs. Historical testing results can help you develop new campaigns, understand what works for different segments, and generally sharpen your program. Don’t miss the opportunity to get a letter sweater, a date to the prom, a convertible, and to just generally be one cool email marketer. Testing is where it’s at, Daddy-O!!!
– Kevin Senne, Premiere Global Services