Incrementality is the Best Way to Prove Your Advertising is Working. Here’s How to Measure It
The number one question advertisers are asking now is, “How can I measure incrementality?” In other words, “Did my ad campaign cause the consumer to convert?”
Many methods have been used to measure incrementality, including comparing conversions pre- and post-campaign or building a complicated model. But in recent years, marketers have come to understand that the only true way to measure causality is to use a scientific approach.
In public health, the gold standard for measuring the efficacy of any intervention is the randomized controlled trial (RCT). For example, if we take two equivalent populations with similar behavior, give one population a pill and the other a placebo, we can then measure the effectiveness of the pill we administered.
The same can be done in marketing. In incrementality testing, users are separated into two equivalent populations. One group (test) receives an ad and the other group (control) doesn't. Once we observe the conversion rates of both populations, we can measure the incremental conversions and, more importantly, accurately measure the cause and effect of our marketing efforts.
In order to create solid lift measurement tests, you need to follow an experimental design approach:
Setting up your test
Measuring incrementality is simple if you follow these steps:
1) Randomization
Randomization ensures that the test and control populations are statistically equivalent. This allows us to compare the behaviors of the two groups.
2) Hypothesis
The key to a strong lift measurement test is a clear hypothesis, which you can form by looking at previous observational data. Without a strong hypothesis, the results of your test will be meaningless.
3) Primary outcome
Decide which behavior you are trying to observe. For example, if we show an ad for pants, the desired behavior is that the customer then buys pants. In your test, that is the conversion event.
4) Reporting cycle
Select a start and end date for the reporting cycle to ensure that the reported results align with the hypothesis of the test (minimizing distortions due to the influence of time).
5) Expected use of outcome
The results of your study should contribute to the advancement of knowledge about your marketing campaign. You will use the results to make strategy, budgetary, and/or optimization decisions.
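The randomization step above (1) can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation; the function name and bucket count are assumptions. Hashing the user ID, rather than drawing a fresh random number, keeps each user in the same group for the whole test:

```python
import hashlib

def assign_group(user_id: str, test_share: float = 0.5) -> str:
    """Deterministically assign a user to the test or control group.

    Hashing the user ID (instead of calling random()) keeps the
    assignment stable across sessions, so a user never drifts
    between groups mid-test.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # roughly uniform in [0, 1)
    return "test" if bucket < test_share else "control"

# Users in "test" are served the ad; "control" users are held out.
groups = {uid: assign_group(uid) for uid in ["u1", "u2", "u3", "u4"]}
```

Because the hash is effectively uniform, the realized split converges to the requested test share as the audience grows, while any single user's assignment never changes.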
Understanding your results
1) Lift, incrementality, incremental conversions, confidence, confidence intervals: what do they all mean?
A common source of confusion with any test is the metrics you receive. It is essential that you define your metrics before you receive results.
a) Lift
Lift measures how much more likely a consumer is to convert after seeing your ad. For example, a lift of 30% means that a consumer who is shown your ad is 30% more likely to convert than one who isn't.
b) Incrementality
Incrementality is the percentage of conversions you received because of your ad. Let's say we run our test and measure 20% incrementality; this means 20% of our conversions happened because we showed an ad, or, put another way, we would have lost 20% of our conversions had we not shown it.
c) Incremental Conversions/Revenue
Generally, a portion of sales would have occurred despite the campaign. Your incremental conversions/revenue are conversions that occurred as a result of your campaign. These conversions wouldn’t have occurred if your ad was not shown.
d) Confidence %
Your confidence is the percentage of the distribution that sits above zero. For example, a 90% confidence means we are 90% sure that the lift is greater than 0. This is the positive signal we look for to validate expected results.
e) Confidence Intervals
This provides more detail into the health of a measurement. A confidence interval is a range of values that you can be reasonably certain contains the true lift of the population. If, for example, you use a 95% confidence interval, we are saying that we are 95% certain that the interval we calculated contains the true lift. The narrower the confidence interval, the better: it is a measure of how precise the lift/incrementality estimates in your test are.
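Under the definitions above, lift and incrementality follow directly from the two conversion rates, and a normal-approximation interval gives a confidence range of the kind described. A minimal sketch (the function name is illustrative, and the interval uses a common approximation rather than any specific vendor's method):

```python
import math

def lift_report(test_conv, test_n, ctrl_conv, ctrl_n, z=1.96):
    """Lift, incrementality, and an approximate 95% confidence interval on lift."""
    p_t = test_conv / test_n            # test conversion rate
    p_c = ctrl_conv / ctrl_n            # control conversion rate
    lift = (p_t - p_c) / p_c            # e.g. 0.30 -> "30% more likely to convert"
    incrementality = (p_t - p_c) / p_t  # share of conversions caused by the ad
    # Standard error of the difference in rates, propagated to relative lift.
    se = math.sqrt(p_t * (1 - p_t) / test_n + p_c * (1 - p_c) / ctrl_n)
    ci = ((p_t - p_c - z * se) / p_c, (p_t - p_c + z * se) / p_c)
    return lift, incrementality, ci

# 0.03% test vs 0.02% control conversion rates:
lift, inc, ci = lift_report(test_conv=300, test_n=1_000_000,
                            ctrl_conv=200, ctrl_n=1_000_000)
# lift ≈ 0.50 (50%), incrementality ≈ 0.33 (33%)
```

If the whole interval sits above zero, the result corresponds to a high confidence that the lift is genuinely positive; a wide interval signals that the test needs more volume before the point estimate should be trusted.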
1) Why do my lift/incrementality numbers vary so much?
a) Sensitivity – Lift and incrementality are calculated from very small conversion rates, so small absolute changes produce large swings. In a recent campaign, the measured conversion rate was 0.03% for the test group and 0.02% for the control group, which equates to a 33% incrementality. If the test group's conversion rate increased to 0.04%, the incrementality would jump to 50%.
b) Seasonality – The control group's response rate can vary drastically across buying seasons. For example, if we ran a test during Black Friday, the control group's response rate might be higher than at other times of the year, since users are more likely to purchase during this period. This would in turn dampen the measured influence of your ad campaign.
c) Brand Awareness – Some products are more familiar to consumers than others. If we ran one test for Snapple and another for Honest Tea, we should expect larger lift/incrementality for Honest Tea than for Snapple, because Snapple is more widely recognized. Snapple consumers are more likely to convert even without an ad campaign, so a campaign for Honest Tea would have a larger impact on consumer behavior.
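The sensitivity point in (a) comes straight from the arithmetic. A quick check using the conversion rates quoted above (the helper name is illustrative):

```python
def incrementality(p_test, p_ctrl):
    """Share of test-group conversions attributable to the ad."""
    return (p_test - p_ctrl) / p_test

# At very small conversion rates, a 0.01-point move in the test rate
# swings the measured incrementality dramatically:
incrementality(0.0003, 0.0002)  # ≈ 0.33 (33%)
incrementality(0.0004, 0.0002)  # ≈ 0.50 (50%)
```

A one-hundredth-of-a-percentage-point move in the test rate shifted the headline number by 17 points, which is why single reads at low conversion rates should be treated with caution.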
2) Can you tell me results at the strategy or tactical level?
This is an almost impossible question to answer unless your test is set up appropriately and you have a well-defined hypothesis. Take, for example, a prospecting campaign and a remarketing campaign run in tandem. In this scenario, there will be a natural flow of users from the prospecting campaign to the remarketing campaign, causing overlapping populations. As a result, some users may receive two ads and then convert.
It becomes almost impossible to attribute the conversion to the appropriate ad. Additionally, this type of behavior isn't seen on the control side, as control users are held out from all ads.
To understand tactic- or strategy-level results, it is best to test one strategy at a time, avoiding overlapping audiences and audience flow.
Incrementality testing can be an incredibly powerful tool if implemented correctly. Public health organizations continue to use randomized trials to measure causality for clinical interventions; testing in marketing is still nascent and needs more exploration.
In order to have an effective test you need to:
1. Determine what you would like to learn from the test before the campaign and incremental lift test(s) are set up.
2. Determine the primary goal you are testing (Visits, Checkouts, Signups).
3. Ensure the results of your study contribute to the advancement of knowledge of your marketing campaign.
4. Make sure the results will be used to make strategy, budgetary and/or optimization decisions.
Understand that your results will continue to change because the behaviors of your customers are also constantly changing. To keep your tests useful, create a test plan at the beginning of every year and continue to iterate on your hypotheses; running one or two tests won't provide the answers you need. Measurement for measurement's sake doesn't advance learning. Don't just measure to say you did it and then let the results linger in a PPT printout on your desk; measurement should be acted on. If you can't make a decision or take an action based on what you have learned, then that measurement isn't useful.