Posted by Dave McCue on May 20th, 2013
In a previous post we discussed the importance of compliance when it comes to the opt-in processes of your mobile marketing program. So, once the appropriate opt-in processes are in place, how do you start building your mobile database?
Once you’ve determined the type of mobile marketing communications you want customers to sign up for, the larger challenge is getting them to sign up. If you’re already communicating with customers through email campaigns, direct mail, social media, etc., use those channels to promote your mobile program and clearly state why it is different and beneficial (in other words, explain why it makes sense for an email subscriber to also become a mobile subscriber). Drive traffic from those channels to a mobile-friendly web form where customers can sign up. This can be done through URLs, QR codes, and mobile keywords (for example, “text JOIN to [mobile short code]” would trigger an auto-responder SMS containing a link to the opt-in form).
You can also include the URL, QR code, or keyword on printed materials and signage inside any brick-and-mortar locations, adding an offline acquisition component to your database-building efforts. And, of course, any website properties your business operates (including your main website, blogs, social profiles, etc.) should include an appropriate call-to-action to drive new mobile sign-ups.
If all of the above sounds like too much of a focus on your mobile program (at the expense of other channels), keep in mind it does not need to be exclusive to the mobile channel. The sign-up form you are driving traffic to could be used for multiple channels, where visitors submit their contact information in addition to selecting the channel(s)—email, mobile, etc—through which they would like to receive communications.
Most important is understanding that mobile should be treated as a highly strategic channel. If your goal in building a database is simply to acquire as many phone numbers as possible, you may be falling back into the “broadcast” mindset that is not typically effective in mobile marketing.
Posted by Rob Ropars on May 17th, 2013
In previous blog posts, I’ve discussed the status and ongoing updates to the Canadian Anti-Spam Legislation (aka “CASL”). Industry groups across North America and Email Service Providers (ESPs) have expressed serious concerns about the law’s wording, reach and requirements. To that end, the primary government agency for administering and enforcing CASL, the Canadian Radio-television and Telecommunications Commission (CRTC), met with various groups recently to discuss CASL.
In a meeting follow-up, the CRTC issued a report summarizing the topics and initial thoughts. It’s clear to many of us who have been following this topic that the impact on businesses, marketers and ESPs will be profound if the CRTC proceeds on its current course.
Compared with the CAN-SPAM Act in the US and other anti-spam measures around the world, CASL is by far the most restrictive and punitive. It places strict rules around what constitutes consent, the retroactive requirement of consent, definitions of Commercial Electronic Messages (CEMs), and the scope of covered marketing efforts and technology beyond email marketing. The punishment for noncompliance is extreme.
The topic of consent and the retroactive requirement, for example, is of paramount importance to marketers. Many will have trouble proving 100% confirmed opt-ins, and attempting to acquire consent now risks serious list attrition. Going forward from the start of the law’s enforcement, few would dispute the need to comply with new opt-ins. However, trying to reconfirm existing marketing lists could be impossible.
So what’s next? At this point, the CRTC has held initial meetings in February 2013 and has since issued a report in early April. It is currently reviewing feedback and comments on the report and will publish further compliance and communication materials before CASL comes into force. In the meantime, those of us in the industry will continue to push for clarity and reasonable, judicious application of the law.
In the end, any law that makes it more difficult for businesses to communicate with customers is something that should be avoided. Legitimate marketers are in agreement that opt-in practices should be in place, but not ones with onerous restrictions that cripple one of the most efficient and cost-effective communication vehicles available. The business of business can’t suffer in the wake of overly broad laws and regulations.
From everything I’ve read and heard, actual enforcement may not begin until 2014. In addition, there is a ramp-up period, so there will be time to finalize your compliance efforts. But take heed if you’re in Canada, marketing to consumers in Canada, or acting as the marketing engine for Canadian clients. If things continue with the CRTC as they have thus far, we may all be challenged with obtaining opt-ins from people we’ve been mailing for years. List attrition is a very real possibility, one which will impact overall marketing and communication efforts for years to come. Hopefully the CRTC will take our concerns into consideration and provide a more reasonable process for compliance.
Report on the Informal Consultation of 25 February 2013 among Industry and Consumer Groups and CRTC Staff on Canada’s Anti-Spam Legislation
Guidelines on the interpretation of the Electronic Commerce Protection Regulations (CRTC)
Canada’s Anti-Spam Legislation
Canadian Radio-television and Telecommunications Commission
Posted by David McMurray on May 13th, 2013
I recently completed a survey analysis for a financial institution that asked respondents to share three words or phrases that best described the institution. The task seemed easy enough until I realized that there were nearly 11,000 words and phrases provided by respondents, and coding the comments into themes would take forever. Still, the information was important to analyze so that the client could draw the appropriate conclusions.
This project was easily reduced to a manageable size by using sampling. To understand the 11,000 comments, it wasn’t required to code every comment. In my February Sampling Strategies blog post, I wrote that a random sample of 400 is adequate for any general population larger than about 5,000. “Sampling” makes most people think first about randomly selecting survey participants, or analyzing survey data sets. Could the principles of sampling also apply to my coding project? Statistically it should, so I now had an opportunity to put these sampling principles to the test.
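The “400 is adequate” rule of thumb follows from the standard margin-of-error formula for a proportion. A minimal sketch (the function names are my own; only the 11,000-comment pool comes from the post):

```python
import math

def sample_size(margin=0.05, z=1.96, p=0.5):
    """Sample size for estimating a proportion (95% confidence, worst-case p=0.5)."""
    return math.ceil(z**2 * p * (1 - p) / margin**2)

def finite_correction(n0, population):
    """Adjust the base sample size for a finite population."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = sample_size()                       # about 385 for +/-5% at 95% confidence
n_pool = finite_correction(n0, 11000)    # slightly smaller against 11,000 comments
```

Both values land under 400, which is why a sample of that size is comfortable for any population above roughly 5,000.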
Here was my process for coding a sample of the comments.
- First, I randomly sorted all the comments. I used the Excel Random Number generator (=RAND()) and then sorted the comments by that number. This ensured that the comments were adequately mixed up, and not biased by any content or other parameter.
- Just for good measure, I selected 500 words and phrases (instead of the required 400) and coded them. This resulted in approximately 40 codes, which were each assigned percentages based on the number of comments in that code.
- To further test the principles of sampling, I selected an additional 200 randomly selected comments (total of 700). I then coded these additional 200 comments and combined the findings with those from the original 500. If the principles of randomness played out properly, there should be very little difference between the analysis of the 500 comments and the 700 comments.
- The analysis of the most common themes is shown below. Of course, there are differences between coding the 500 and the 700, but the differences are virtually insignificant. Most importantly, the conclusions I drew about the comments were unchanged between the two tests. Statistically, I could code all 11,000 comments, and the analysis would not yield significantly different results.
[Table: most common themes, showing quantity and percent for the 500-comment sample vs. the 700-comment sample]
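The shuffle-sample-compare process above can be sketched in a few lines. This is an illustrative stand-in, not the actual project data: the comments, seed, and five-theme bucketing are all hypothetical, with an arithmetic hash playing the role of the human coder:

```python
import random

random.seed(7)  # hypothetical seed, for a reproducible shuffle

# Hypothetical stand-in for the ~11,000 survey comments
comments = [f"comment {i}" for i in range(11000)]

# Step 1: shuffle randomly (the Python equivalent of Excel's =RAND() sort)
shuffled = comments[:]
random.shuffle(shuffled)

# Steps 2-3: code the first 500, then extend to 700 and compare
sample_500 = shuffled[:500]
sample_700 = shuffled[:700]

def theme_shares(sample):
    """Toy 'coding' pass: bucket each comment into one of five themes."""
    counts = {}
    for c in sample:
        theme = sum(map(ord, c)) % 5  # stand-in for a human-assigned code
        counts[theme] = counts.get(theme, 0) + 1
    return {t: n / len(sample) for t, n in counts.items()}

shares_500 = theme_shares(sample_500)
shares_700 = theme_shares(sample_700)
# If sampling behaves, per-theme shares differ by only a point or two
```

Comparing `shares_500` against `shares_700` mirrors the 500-vs-700 check in the post: the per-theme percentages should be nearly identical.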
This is actually the second time I have conducted this coding experiment, and the results have been the same both times. Sampling saved a lot of time and money, and delivered the same results in the analysis. It’s nice to know that you can trust the principles of sampling when it comes to coding comments – and even better to have a second opportunity to prove it.
Posted by Dave McCue on May 10th, 2013
Banks and credit unions that incorporate email into their client communications are seeing high engagement rates and low unsubscribe rates.
These and other findings from Harland Clarke Digital, compiled through an analysis of email campaigns sent by over 100 banks and credit unions during 2012, indicate the increasing acceptance of digital communication between financial institutions and their customers. While many banks and credit unions have avoided digital channels, this research indicates positive trends for those who have utilized the email channel. This research is available now from Harland Clarke Digital — click here to download your copy.
Among the findings:
- The wealth of information available about account holders and members is ideally suited to the segmentation and targeting capabilities of email, but collecting email addresses continues to be a challenge for many financial institutions (below 30% for banks)
- Unsubscribe rates for financial email campaigns averaged less than 0.3% in 2012
- Notifications, surveys and new account onboarding messages saw the highest engagement rates among email campaign types during 2012.
The study also includes research on campaign frequency, deployment size, and more…
Download the full research study here
Posted by Jay Mooney on May 7th, 2013
In our last post we covered some tried-and-true kickstarters for a testing roadmap. Let’s jump into the next level and ask the tough questions about your testing plan.
Is that the right offer?
I’m not asking whether you are setting up the offer correctly in the subject line; that’s a basic Email 101 test. But have you analyzed your offer to see whether it’s really getting customers moving? You know what I am going to say…Test it!
- Free Shipping vs. % Discount
- Free Shipping vs. Free Bonus Product
- 20% Discount vs. $10 off your purchase this week
Develop a grid of offers for your company, then test each of them. See which one drives the results you need to maximize the response, the revenue, and the profit you are looking for.
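Once you have results for two cells of that offer grid, a two-proportion z-test is one common way to check whether the winner is real or just noise. A minimal sketch; the conversion counts below are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing the conversion rates of two offers."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical test: free shipping vs. 20% discount, 5,000 recipients each
z = two_proportion_z(conv_a=230, n_a=5000, conv_b=180, n_b=5000)
# |z| > 1.96 means the offers differ at the 95% confidence level
```

Run each offer pair through a check like this before declaring a winner and rolling it out.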
Earlier this year, John Joseph wrote about the benefits of targeted messaging and increasing relevancy through segmenting customer data. These tactics are vital to improving the performance of an email marketing program. However, what additional information can you learn from observing your audience and their interaction with email?
Here is one idea:
Let’s start with the time of day.
Example: You have identified that you’ve optimized your opens and click-throughs by sending at 11:35 a.m. (let’s assume 65% of opens occur between noon and 5:00 p.m.).
Great! You already know that a larger part of your audience is opening their email during the day, most likely while they are at work. Everyone else is checking email at home after they walk the dog or eat dinner, and now you hope they scroll through to the bottom of their inbox to find all the buried messages. So, would you get an even higher overall result if you split your list into an a.m. and p.m. deployment? Test it.
Take a standard send that you do routinely, and capture your open rate and click-through rate. Then pull your open data and split your audience into an early send and a late send. Your a.m. email surfers will find your email close to the top before they go to lunch, and your p.m. audience will find your email at the top when they get home or when they’re opening up an iPad on the commuter train.
Now combine the open and click-through data for both sends of this single message to determine whether you really do have two distinct audiences, compared to the single send. You can add this as an attribute to your email list so that it is easier to split your list and schedule sends based on this behavior.
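Tagging subscribers with that a.m./p.m. attribute might look like the following sketch. The addresses, open hours, and 3:00 p.m. cutoff are all hypothetical; in practice you would pull open timestamps from your ESP’s reporting:

```python
from statistics import median

# Hypothetical open-time log: subscriber -> hour of day for recent opens
open_hours = {
    "a@example.com": [9, 10, 11],
    "b@example.com": [19, 20, 21],
    "c@example.com": [8, 13, 9],
    "d@example.com": [18, 22, 20],
}

def assign_cohort(hours, cutoff=15):
    """Tag a subscriber 'am' or 'pm' by the median hour of their opens."""
    return "am" if median(hours) < cutoff else "pm"

cohorts = {email: assign_cohort(hrs) for email, hrs in open_hours.items()}
am_list = [e for e, c in cohorts.items() if c == "am"]
pm_list = [e for e, c in cohorts.items() if c == "pm"]
```

The two lists then feed the early and late deployments, and the cohort tag is the list attribute described above.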