Controlled content experiments and A/B split testing are a missing part of most SMB digital marketing. Here are some tips to help introduce testing into your marketing and give you an idea of what to expect.
In 2008, an employee at Microsoft made a seemingly minor suggestion. He thought it would be better if the link to Hotmail opened in a new browser tab instead of opening in the same tab.
This was somewhat controversial at the time, since few sites opened links in new tabs. But Microsoft decided to test it in the UK with about 900,000 users. The results were encouraging: engagement among users who opened Hotmail increased by 8.9%.
In 2010, they rolled this change out to 2.7 million users in the US with similar results. They then experimented with having every search initiated on MSN open in a new tab, and clicks per user increased by 5%.
This turned out to be one of the best ways to increase user engagement that Microsoft ever introduced, just by changing a few lines of code. Today, most major websites, including Facebook and Twitter, use this same technique.
Small changes can have a big payoff. Major brands like Amazon know this, such as when they tested moving credit card offers from the home page to the shopping cart page – and boosted profits by tens of millions every year.
Have you tried variations of the content on your homepage, landing page, or product pages? Have you tested different types of offers on social media? Do you know how your audience responds to variations in your offer, different types of media, or new customer service policies?
If you’re like the typical small business owner, the answer is no. Chances are, you set up your website and promote your online content in one way, based almost entirely on your intuition.
Online content experiments often reveal surprising results, typically from what seem like minor changes. If you’re not running online tests and modifying your campaigns based on the data, you’re missing one of the best ways to gain a competitive advantage.
The A/B Split Test
When it comes to this type of testing, you’ve probably heard the term A/B split test, particularly in reference to Google AdWords. Testing ad copy in AdWords is one of the easiest ways to run an A/B test because the system automates the rotation of the ads.
In more general terms, an A/B test runs two pieces of content against each other (an A/B/C test would compare three treatments). “A” is the current presentation of the content – the “champion”. “B” contains a modification intended as an improvement. This is the “challenger”.
You then expose the content equally to your sample audience and collect the data.
Consider a classic example from a sign-up form test. The challenger treatment added a statement assuring users that their information would never be used to spam them.
Based on intuition, the testers were confident that adding this statement would increase conversions.
It didn’t. It reduced conversions by 18.7%.
They hypothesized that using the word “spam” – even while making assurances against it – was having a negative impact.
So they removed the word and ran a different challenger treatment. The result: a 19.47% increase in sign-ups over the original control – all from changing one word.
When you split test like this, the key is to focus on a single change that will produce statistical data you can trust. You can test images, word choice, product features, value propositions, descriptions, calls to action, or policies like “free shipping”.
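To make “statistical data you can trust” concrete, here is a minimal sketch of how a winner is typically judged: a two-proportion z-test comparing the champion’s and challenger’s conversion rates. The traffic and conversion numbers below are hypothetical.

```python
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does challenger B beat champion A?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the "no difference" assumption
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical test: 5,000 visitors per variant
p_a, p_b, z, p = ab_test(conv_a=200, n_a=5000, conv_b=240, n_b=5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

A p-value below 0.05 is the conventional threshold for declaring a winner; in the made-up numbers above, the lift sits right at the edge of significance, which is exactly why sample size matters.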
Audience and Timing
Two big factors that influence how SMBs approach content testing are the sample audience size and time requirements needed to run the tests.
At the enterprise level, the rule of thumb is that businesses with at least a few thousand website visits per day can run tests effectively. However, many small businesses may only have that many visitors per month, particularly when they’re starting out and need this data the most. At that traffic level, experiments can take impractically long to reach a reliable conclusion.
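To see why traffic volume matters so much, here is a rough estimate using the standard two-proportion sample-size formula (two-sided α = 0.05, 80% power). The baseline conversion rate and target lift are hypothetical.

```python
import math

def sample_size_per_variant(base_rate, min_lift, z_alpha=1.96, z_beta=0.84):
    """Rough visitors needed per variant to detect a relative lift
    (two-sided alpha = 0.05, power = 0.80)."""
    p1 = base_rate
    p2 = base_rate * (1 + min_lift)
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return math.ceil(n)

# Hypothetical: 3% baseline conversion rate, hoping to detect a 20% relative lift
n = sample_size_per_variant(0.03, 0.20)
print(n)
```

Roughly fourteen thousand visitors per variant to reliably detect a 20% relative lift on a 3% baseline – a few days of traffic for an enterprise site, but potentially months for a small business.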
That’s why we mentioned Google AdWords. Today, you can also look at Facebook Ads.
For most smaller businesses, you’ll initially need to run a paid ad campaign with controlled traffic flows and audience targeting to get actionable data. Then, when you get data that clearly shows a winner between champion and challenger, you can modify your content more broadly.
For instance, we often run tests with eCommerce sites to test variations in shipping policy. One treatment will have free shipping only after a certain amount is reached in the cart, the other will advertise free shipping on all orders.
These tests are worth running. Depending on the products and cross-selling opportunities, some sites do better with free shipping above a designated cart minimum. Other sites fare better offering free shipping upfront and rolling the cost into the product price.
Within a few months, we usually have enough data to decide on the winning content. We then make those modifications campaign wide.
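For a shipping-policy test like this, conversion rate alone can mislead; the deciding metric is profit per visitor. A minimal sketch, with entirely hypothetical visitor, order, and margin numbers:

```python
def profit_per_visitor(visitors, orders, avg_margin, shipping_absorbed=0.0):
    """Profit per visitor: margin per order, minus any shipping cost
    the store absorbs, spread across all visitors."""
    return orders * (avg_margin - shipping_absorbed) / visitors

# Hypothetical: A = free shipping over a cart minimum,
#               B = free shipping on all orders ($6 cost absorbed per order)
a = profit_per_visitor(10_000, 280, avg_margin=32.0)
b = profit_per_visitor(10_000, 340, avg_margin=30.0, shipping_absorbed=6.0)
print(f"A: ${a:.2f}/visitor  B: ${b:.2f}/visitor")
# prints "A: $0.90/visitor  B: $0.82/visitor"
```

In this made-up scenario, the blanket free-shipping treatment converts better but earns less per visitor – the kind of result that only shows up when you track the test through to profit.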
We are also finding fertile testing ground on social media, particularly Facebook. With custom audiences and lookalike audiences, we test brand messaging using engagement metrics. Then we get data on advancement through the sales funnel by testing retargeting content with engagement audiences. For some businesses, we can also test direct-response call to action content on Facebook Lead Ads.
These are all manageable campaigns from which smaller businesses can derive useful data.
Learn to Fail Fast
The other main reason SMBs don’t test their content is that they are afraid to fail.
In order for there to be a champion in a split test, there also has to be a loser. For this process to work, you have to run one campaign that will later be ruled sub-optimal.
A mantra at Marketing 360® is that marketing is all about doing more of what works and less of what doesn’t. This implies that you understand what doesn’t work, which means you run a challenger campaign that may temporarily hurt your bottom line.
The key here is to fail fast. You run your experiment and gather the data needed to determine what content is superior. Then you implement improvements and punt the loser.
Optimal marketing is not guesswork. Even serendipity needs proof. There are always variations you can test, and what your audience responds to best is often a surprise.
The only way to find what works best with your ads and content is to experiment, test, and analyze the data. Then make informed modifications and continue to analyze.
Data analysis requires a trained eye, which is a strong reason to work with a marketing consultant.
If all this seems like too much, consider this: among businesses that experiment and test, virtually none retain the exact form of their original idea. Statistically, the chance that you’ll create all your campaigns and content and land on optimal material on the first try is essentially zero. Virtually all tests result in discoveries that improve long-term performance.
This is one of the biggest differences between successful businesses and also-rans. The successes don’t just think they have the best campaigns and content, they prove it.