May 20, 2009

An Open Letter To E-Mail Marketers: Shopping Cart Abandonment E-Mail Campaigns

There appears to be some criticism of my view of shopping cart abandonment e-mail marketing programs.

So, my fellow e-mail marketers, the vast majority of whom act honestly and run opt-in campaigns with integrity, let's consider the following:

Let's pretend that 100 customers abandon a shopping cart on Monday. On Tuesday, you send a targeted e-mail campaign, and you observe the following statistics:
  • 30 customers click through the e-mail campaign, and 50% of those individuals buy something, meaning that 15 of the 100 customers (15%) purchased because of the campaign.
So these are good numbers, right?! I mean, who in their right mind would ever complain about an e-mail campaign that delivers a 15% response rate?
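If you like seeing the math spelled out, here's a quick Python sketch of that arithmetic; all of the numbers are the illustrative ones from the scenario above, not real campaign data.

```python
# Illustrative numbers from the scenario above -- not real campaign data.
abandoners = 100        # customers who abandoned a cart on Monday
clickers = 30           # customers who clicked through Tuesday's e-mail
conversion_rate = 0.50  # share of clickers who purchased

buyers = clickers * conversion_rate   # 15 customers
response_rate = buyers / abandoners   # 0.15, i.e. 15%
print(f"{buyers:.0f} buyers, {response_rate:.0%} response rate")
```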

One of the challenges of e-mail marketing is that e-mail marketers like you and me are used to measuring "positives". We are driven to measure positive outcomes. Our metrics are calibrated to highlight anything we do that is good.

But what about the 85% who did not purchase? What if we angered 25 of those 85 customers, and they never come back and buy from us again because of our marketing program? Are we measuring this important KPI? Probably not ... because it is truly hard to measure negatives, isn't it?

There are three things we can do to prove that shopping cart abandonment e-mail campaigns are good for us, and good for the customer.
  1. Execute e-mail campaigns with mail/holdout groups. If 15 of 100 customers purchase in the shopping cart abandonment e-mail group, and 11 of 100 customers purchase in the holdout group, then the campaign generated an incremental 4 purchasers (see the sketch after this list). 4 is still better than 0, right? But we do need to measure the incrementality of our marketing activities, don't we? We cannot take credit for orders that would have happened anyway.
  2. Follow the mail/holdout groups for a year. See if, at the end of twelve months (or even three months), the group that received this type of marketing campaign spent any additional money. If so, good ... it means that, as a whole, the campaigns are working. But what if the groups perform equally over the long term? If that happens, then we are simply shifting demand, not actually creating demand.
  3. Quickly identify customers who do not interact with these campaigns, and create a field in the database so that we don't necessarily send these campaigns to that audience.
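Here's a minimal Python sketch of the mail/holdout math from items 1 and 2. Every count and dollar figure below is hypothetical, chosen to match the example numbers above.

```python
# Item 1: a hypothetical mail/holdout test -- illustrative counts only.
mail_group_size = 100
mail_buyers = 15         # purchasers in the mailed group
holdout_group_size = 100
holdout_buyers = 11      # purchasers in the holdout (no e-mail) group

mail_rate = mail_buyers / mail_group_size            # 15%
holdout_rate = holdout_buyers / holdout_group_size   # 11%

# Incremental purchasers: buyers the campaign created, above and beyond
# those who would have purchased anyway.
incremental = (mail_rate - holdout_rate) * mail_group_size
print(f"Incremental purchasers attributable to the campaign: {incremental:.0f}")

# Item 2: follow both groups for twelve months. If long-term spend is
# equal, the campaign merely pulled demand forward instead of creating it.
mail_annual_spend = 24_000     # hypothetical 12-month demand, mailed group
holdout_annual_spend = 23_200  # hypothetical 12-month demand, holdout group
lift = mail_annual_spend - holdout_annual_spend
print(f"Twelve-month incremental demand: ${lift:,}")
```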
If any marketing campaign works, e-mail or otherwise, then we'll observe an improvement in at least one of the following metrics/KPIs:
  • An increase in the annual customer retention rate, from, say, 44% to 47%.
  • An increase in the annual customer reactivation rate, from, say, 13% to 15%.
  • An increase in orders per retained/reactivated customer, from, say, 2.25 to 2.35, measured annually.
  • An increase in average order value, from, say, $125 to $132, measured annually.
  • An increase in new customers, measured on an annual basis.
  • An increase in customer profitability, measured on an annual basis.
As an e-mail marketing community, we need to demonstrate to others that shopping cart abandonment e-mail marketing programs improve one or more of the six metrics I just listed, without angering other customers. Given the tools listed in this blog post, that's not hard to do, is it?
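For those who want to see how a few of these KPIs fall out of order data, here's a small Python sketch. The customer IDs, orders, and dollar values are all made up for illustration; in practice these would come from your customer database.

```python
# A sketch of computing three of the KPIs above from two years of order
# history. Customer IDs, orders, and dollar values are all hypothetical.
last_year_buyers = {"C1", "C2", "C3", "C4"}
this_year_orders = [          # (customer_id, order_value)
    ("C1", 120.00), ("C1", 135.00),
    ("C3", 140.00),
    ("C9", 110.00),           # a new customer
]

this_year_buyers = {cid for cid, _ in this_year_orders}
retained = last_year_buyers & this_year_buyers
retention_rate = len(retained) / len(last_year_buyers)   # e.g. 2/4 = 50%

retained_orders = [v for cid, v in this_year_orders if cid in retained]
orders_per_retained = len(retained_orders) / len(retained)
avg_order_value = sum(v for _, v in this_year_orders) / len(this_year_orders)

print(f"Retention rate: {retention_rate:.0%}")
print(f"Orders per retained customer: {orders_per_retained:.2f}")
print(f"Average order value: ${avg_order_value:.2f}")
```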

And guess what? Long-term testing is just as likely to show that the value of this marketing program is greater than what traditional metrics illustrate as it is to show that the value is less. When you convert a customer to a purchase, that customer's future value increases significantly ... so the testing may show that this style of marketing is essential.

Let's have a balanced perspective ... marketing works positively for some customers and negatively for others. The sum of the two can be measured via testing. This is what I'm advocating in the article: summing the positive, negative, and incremental outcomes. Focusing on only half of the metric set is misleading.
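Here's one way that summing might look in code. Every input below is a hypothetical assumption, not a measured value ... your own test results would supply the real numbers.

```python
# Summing positive and negative outcomes -- every figure is hypothetical.
incremental_buyers = 4          # from the mail/holdout test
profit_per_incremental = 30.00  # assumed profit per incremental buyer

angered_customers = 25          # customers the campaign may have alienated
defection_rate = 0.20           # assumed share who never buy again
lost_future_value = 55.00       # assumed future profit per lost customer

positive = incremental_buyers * profit_per_incremental
negative = angered_customers * defection_rate * lost_future_value
net = positive - negative
print(f"Net campaign impact: {net:+,.2f}")  # negative means value destroyed
```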

We can do this kind of testing!

