When measuring the sales lift of a marketing campaign, 45% of marketers use some form of basic pre-post marketing analysis, according to the Lenskold Group and MarketingProfs' 2007 Marketing ROI & Measurements Study.
A pre-post analysis compares the average sales levels for a period prior to the marketing campaign with the sales levels during and possibly following the campaign. Although that sales lift calculation is fairly easy and the data is similarly easy to access, the question remains: Is it accurate and reliable?
I've met marketing professionals from Fortune 500 firms who admit that every time a pre-post analysis shows a positive lift they attribute the lift to marketing, and when it shows a decline in sales they attribute the decline to non-marketing factors.
The reality is that sales fluctuations are driven by more than just a single marketing initiative, and executives are savvy enough to know that you can't take credit for the upside and no responsibility for the downside.
So the typical pre-post measurement is not accurate enough to support major marketing decisions; moreover, if it is not managed correctly and improved, Marketing can take a significant hit to its credibility.
But don't give up on pre-post measurement altogether. This article will explain what you need to know to improve its accuracy and how it should fit into your mix of measurement methodologies.
Pre-Post Analysis Limitations and Potential
How do you calculate the lift in sales from a marketing initiative? For the most part, all measurements must work to identify the sales "baseline" (i.e., the sales that would have happened in the absence of marketing), which is then compared with the actual sales.
A pre-post analysis assumes that the average sales levels prior to marketing would continue during the marketing period and are therefore the baseline sales level.
For example, a four-week average sales level of 500 units may be compared with a one-week sales level of 550 units during a campaign period, and the lift is assumed to be 50 units.
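The arithmetic behind that example is simple enough to sketch in a few lines. This is a minimal illustration only; the weekly figures are hypothetical numbers chosen to match the 500-unit average in the example above.

```python
# Minimal pre-post lift calculation (illustrative numbers only).
pre_period_sales = [480, 510, 495, 515]   # four pre-campaign weeks, units sold
campaign_sales = 550                      # one campaign week, units sold

# The pre-period average is assumed to be the baseline.
baseline = sum(pre_period_sales) / len(pre_period_sales)
lift = campaign_sales - baseline

print(f"Baseline: {baseline:.0f} units; assumed lift: {lift:.0f} units")
```

The simplicity is exactly the problem the rest of this article addresses: nothing in that calculation isolates the campaign's contribution from everything else driving sales that week.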
The problem is that the methodology does not isolate the impact of just the marketing initiative being measured. Sales levels are influenced by many factors, including competitive marketing, economic conditions, other marketing and sales contacts within your firm, product life cycles, and even the weather.
So the uplift in sales may not be your uplift. And it's even possible that although your marketing did have a positive impact, the pre-post analysis shows a decline in sales during the post-marketing period.
Other measurement methodologies can isolate the impact of specific marketing initiatives and have advantages over a pre-post analysis. Market testing establishes control groups to determine the baseline sales. Modeling uses many detailed data points in extensive analyses to strip out the influence of possible external factors and attribute the lift above the baseline to specific marketing initiatives.
Pre-post analysis does have a role in marketing measurements. Modeling requires significant budget, data, and staff resources that are not always available. Market testing requires the right conditions and provides a certain number of measurement opportunities within a given period. Add the low cost and low data requirements for pre-post analysis, and it makes sense to use it as an ongoing measurement to provide directional feedback on marketing performance.
There are three critical success factors for using pre-post analysis measures:
- Use this methodology for the appropriate type of objectives.
- Improve your pre-post analysis techniques for better reliability and lower margin of error.
- Present the results as directional in order to set expectations and maintain credibility.
Aligning Measurement Methodologies to Objectives
Given the limitations of pre-post analysis, it is best not to choose it for measurements that will guide multimillion-dollar campaigns or set your strategic direction.
It's a sufficient measure for monitoring the effectiveness of smaller, tactical initiatives in which the decisions are made primarily to compare the relative value of different initiatives or to assess whether certain types of marketing are effective enough to continue.
Remember that even for those less-critical decisions, improving the reliability of pre-post analyses is necessary. When assessing effectiveness using pre-post measures, look for recurring patterns rather than for one incident that had a positive or negative impact. Otherwise, you can easily eliminate an effective campaign based on a single downward sales fluctuation. Marketers are also eager to act on highly positive results and so may find that the next run of the campaign is not nearly as successful as the first.
When to use pre-post analysis in your measurement plan:
- For general monitoring of performance
- For low-cost, low-risk marketing initiatives
- When directional information is all the budget allows
- As a way to spot emerging trends
Pre-post measurements can detect when expected outcomes do not occur, which may indicate that external factors are influencing marketing effectiveness.
For example, if prior marketing initiatives generally led to a 3-5% lift in sales but that does not occur for a specific initiative, further analysis, market testing, or modeling may be necessary to assess how external factors are influencing performance.
Improving Pre-Post Measurement Reliability
Marketers like the pre-post analysis because it is very easy to calculate. In fact, many make the inaccuracies worse in an attempt to make the math simple.
Frequently I'll see a measure for a four-week campaign compared with a pre-marketing period of four weeks, or a 13-week campaign compared with a 13-week pre-marketing period (and so on).
If your objective is to predict the sales level during the campaign impact period, why is the best predictor four weeks sometimes but 13 weeks other times? Clearly, that is a shortcut, and no analysis was done to determine how to use sales-trend data to predict the baseline.
The pre-post methodology can become much more effective with analyses designed for the specific purpose of improving predictive accuracy of sales-trend data and minimizing the variance that hurts predictability. Establish one standard pre-post analysis structure that will provide the best possible estimate of baseline sales. Even though that approach cannot eliminate the influence of external factors, it can minimize the margin of error within the sales data.
Here are some key steps to improving pre-post reliability:
- Minimize the impact of seasonality with a comparison to prior-year sales (matching products and sales distribution to the degree possible). If you ran promotions during the same period in the prior year, that seasonality adjustment will hurt and not improve measurement accuracy.
- Determine the amount of time during which the average is most predictive of the baseline sales, and use that time period consistently. If the period is too short, it will not eliminate short-term fluctuations. If it is too long, it may not reflect current market conditions. There is a mathematical solution to that problem, and it can be validated by projecting the baseline sales during non-promotional periods.
- Remove high-variance data, which may include select markets, products, customer segments, or sales channels that are unsteady and so can distort the pre-post sales comparisons.
- Track a broader product set beyond those promoted to (1) understand the halo effect and possible cannibalization from the lift in promoted products and services and (2) watch for sales fluctuations that may indicate the influence of non-marketing factors.
- Account for trends over time so that sales increases or decreases due to other factors (product life cycle, economy, competition, etc.) do not influence the pre-post comparison.
- Run analysis of variance (ANOVA) tests on your pre-post campaign data to determine whether your measured sales lift exceeds your margin of error. If the lift, positive or negative, falls outside that margin, you have at least ruled out normal sales fluctuations as the explanation, though not the impact of external factors.
- Identify repetition in the findings to help draw conclusions. If certain campaigns consistently show a sales lift over the pre-marketing period, it is less likely that the results are from an unrelated driver.
- Use outputs from more-sophisticated marketing-mix modeling or market-trend analyses (if run within your company) to incorporate adjustments to the pre-post analyses for the influence of external factors, such as changes in competitive activity or weather conditions.
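The variance check described above can be approximated without a full ANOVA package. The sketch below uses a two-standard-deviation band around the pre-period average as a simplified stand-in for a formal margin-of-error test; all weekly sales figures are hypothetical, and a real analysis should use a properly validated pre-period window as discussed above.

```python
# Simplified lift-vs-noise check (a stand-in for a formal ANOVA test).
# All weekly sales figures are hypothetical.
import statistics

pre_weeks = [495, 520, 488, 512, 505, 480, 515, 498]  # weekly sales before campaign
campaign_weeks = [560, 548, 571, 555]                 # weekly sales during campaign

baseline = statistics.mean(pre_weeks)
noise = statistics.stdev(pre_weeks)        # normal week-to-week fluctuation
lift = statistics.mean(campaign_weeks) - baseline

# Treat roughly two standard deviations as the margin of error.
if abs(lift) > 2 * noise:
    print(f"Lift of {lift:.1f} units exceeds the margin of error ({2 * noise:.1f} units)")
else:
    print(f"Lift of {lift:.1f} units is within normal sales fluctuation")
```

A result outside the band does not prove the campaign caused the lift; it only rules out ordinary sales noise, which is why the external-factor adjustments in the steps above still matter.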
Running those analyses and establishing a standard methodology allows your organization to use pre-post measurement for directional insight with increased confidence.
Plus the discipline of analyzing and understanding data at a more-detailed level puts you one step closer to modeling.
Disclose Limitations to Maintain Credibility
You now know the limitations of pre-post analyses but also recognize that the methodology belongs in your overall measurement plan. Take steps to improve the accuracy and reliability of pre-post analyses so you can use the directional insight to support low-risk decisions. The final step is to properly present the results to executives and other stakeholders.
Too often, marketers present the pre-post analysis as a conclusive measurement to help build confidence in their marketing. Although it shows good discipline to have a measured result, it can backfire when the next pre-post measure shows a negative result that requires excuses for why that measurement approach is no longer valid.
Present the pre-post analysis as a directional measurement and an indicator that marketing may be working. Indicate that other factors that are not detectable with this methodology could influence the results. With full disclosure, you maintain credibility and do not have to make excuses when sales decline in future measures or when repeated campaigns don't deliver the same lift.
Communicate that other, more-advanced measurement methodologies are available if stakeholders need a more-conclusive measure. In fact, make the case for better measurements that go beyond just tracking lift and also support strategic testing and diagnostics that guide performance improvements.
It is impossible to measure everything, and no measurement is perfect. But both precise and directional measurements are valuable when used appropriately and integrated with diverse methodologies into a cohesive measurement plan.