
What is Statistically Significant Marketing Data?

Data analysis is foundational in digital marketing.  Campaigns are data-driven, built around the fundamental metric of website conversions.

Given that vital role, you’d think data would be treated with the utmost rigor. But too often, it’s not. Marketing reports are frequently built on hunches and spurious analysis, viewed through rose-colored lenses.

In the SMB digital marketing sphere, the biggest problem is failing to establish whether the data is statistically significant.

Statistically significant data is data in which the observed changes are driven by something other than random chance. When data is statistically significant, you can conclude that the collateral used in the campaign had a measurable, ongoing impact.

Ascertaining the statistical significance of data can be complex, but we’re going to keep it high-level by looking at three issues that often obscure data’s real meaning.  These are also where novice mistakes are most common.


Time Frames

Recently, a marketing executive new to our company came to me excited about a positive result she’d seen with a client.

It was remarkable.  After doing some manual optimization of products in a client’s shopping feed, she reported this data:

  • 139% increase in conversions
  • 79% decrease in cost per conversion
  • 52% decrease in spend

Impressive indeed.  I asked her to provide more details on the optimization work she’d done.

By the time she did, the story changed. She’d had one to two weeks of strong data, but over the following two weeks it reversed.

The mistake she made is the most common of all when it comes to analyzing data: her time frame was too short.

For almost every type of digital marketing campaign, a time frame of just a week or two is insufficient.

Short time frames don’t meet the standard of statistical significance. Random chance can cause data to spike or dip for no discernible reason, and outside trends, seasonality, and changes in the market cause swings unrelated to the tactic being tested.

There is no set rule for how long a campaign must run to produce statistically significant data. The results should be consistent and the process repeatable, and your testing process must account for both false positives and false negatives.

The mistake novices make is to discover a positive data point, then jump on it as an opportunity to report a success.

Statistically significant data comes from rigorous analysis.  For most website marketing, that’s a process that takes months, not days or weeks.
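To see just how misleading a short window can be, here’s a minimal simulation sketch. All numbers are hypothetical: a steady 2% conversion rate, 200 visitors a day, and nothing about the campaign changing at all.

```python
import random

random.seed(7)

BASELINE_RATE = 0.02    # hypothetical steady 2% conversion rate
VISITORS_PER_DAY = 200  # hypothetical daily traffic

def weekly_conversions(days=7):
    """Simulate one week of conversions at a constant, unchanging rate."""
    visitors = days * VISITORS_PER_DAY
    return sum(1 for _ in range(visitors) if random.random() < BASELINE_RATE)

# Compare pairs of back-to-back weeks. Nothing about the campaign
# changes, yet the week-over-week "lift" swings wildly.
for trial in range(5):
    week1, week2 = weekly_conversions(), weekly_conversions()
    lift = (week2 - week1) / week1 * 100
    print(f"trial {trial + 1}: week 1 = {week1}, week 2 = {week2}, lift = {lift:+.0f}%")
```

Even though the underlying conversion rate never moves, week-over-week “lifts” of 20% or more show up routinely. That’s exactly why a one- or two-week snapshot proves nothing on its own.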


Sample Size

There’s a close relationship between campaign time frames and audience sample size.

Sample size is the number of people you need to engage with your campaign to establish statistical significance.
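Statisticians have standard formulas for estimating the required sample size before a test even starts. As a rough sketch (the 2% baseline rate and 3% target rate below are hypothetical), here’s the classic two-proportion sample size formula in Python:

```python
from scipy.stats import norm

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed per variant to detect a shift in conversion
    rate from p1 to p2 with a two-sided two-proportion test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # guards against false positives
    z_beta = norm.ppf(power)           # guards against false negatives
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical example: detect a lift from a 2% to a 3% conversion rate.
print(round(sample_size_per_variant(0.02, 0.03)))  # about 3,800 visitors per variant
```

Roughly 3,800 visitors per variant are needed just to reliably detect a 2%-to-3% lift. At a hypothetical 200 visitors a day split across two pages, that’s over a month of traffic before the data means anything.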

I recently came across research reports from a well-known marketing firm and discovered that they based their conclusions on the responses of two people.  Not what you’d call rigorous analysis.

The marketing executive who saw her results shift didn’t have the sample size to back up her conclusion. With just a handful of conversions over the span of a week, changes in the data could be (and turned out to be) random.

As you let campaigns run and gather data, you want to look for patterns and repeatable results.

For example, say you’re split testing two landing pages, and one page’s conversion rate is 2% higher over two weeks with a sample size of 100.

That means that in two weeks, one version got just 2 more conversions than the other. To reach statistical significance, you’d want to run this campaign for at least three months.

Say, on the other hand, you split test the two pages, and in two weeks one page has 80 conversions and the other 20. Now your sample size is showing a large variation you can start to track. With another two weeks of this data, you could conclude that the content of one landing page was outperforming the other, allowing you to optimize your campaign.
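A significance test makes the gap between those two scenarios explicit. Here’s a sketch using a standard two-proportion z-test. The counts for the first scenario (e.g. 4 vs. 2 conversions, reading the example above as 100 visitors per page) follow from the example; the 1,000-visitors-per-page figure in the second scenario is an assumption for illustration, since the original only gives conversion counts.

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference between two conversion rates."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / se
    return 2 * norm.sf(abs(z))

# Scenario 1: e.g. 4 vs. 2 conversions out of 100 visitors per page.
print(two_proportion_p_value(4, 100, 2, 100))      # ~0.41: easily chance

# Scenario 2: 80 vs. 20 conversions, visitor counts assumed at 1,000 per page.
print(two_proportion_p_value(80, 1000, 20, 1000))  # far below 0.05: a real difference
```

The first scenario’s p-value of roughly 0.41 means a gap that large would appear by pure chance about 40% of the time. The second scenario’s p-value is vanishingly small, which is why you can act on it after only a few weeks.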


Seeing Roses

Inadequate time frames and sample sizes explain how marketing veers away from statistical significance. But they don’t explain why this is so common.

The why comes not from the data, but from the people interpreting it.  Two things cause these problems.

The first is the problem of viewing data through rose-colored lenses.  Marketers and business owners are strongly motivated to see positive results in their efforts.  It reflects well on their actions, and may even be necessary to justify their paychecks.

This was common with SEO work.  An SEO would optimize a series of terms, get them to rank on page one, then use that data to prove the validity of their work.

The problem was that these terms often didn’t drive enough traffic to create a relevant sample size. The data point was being evaluated in isolation, apart from the actual goals of the campaign.

The marketing executive who came forward with data after only a week was looking for recognition.  While her work showed promise and was worth noting, it didn’t have the statistical significance needed to justify making modifications to the campaign.

Beware of rose-colored lenses when analyzing and interpreting data. The temptation to mine for positive results is especially strong in marketing.

Marketers want to prove their worth.  Business owners are desperate to see results and avoid sunk costs.

Confirmation bias compounds the problem: you interpret the data in a way that confirms your assumptions while ignoring data that refutes them.

For data to help your business, you must establish statistical significance and interpret it objectively.

Be patient, diligent, and clear about cause and effect.

When you find an important data point that leads to profitable optimization, the payoff can be huge – and well worth the wait.