What Is Incrementality?

Incrementality measures the causal impact of a marketing activity. It answers the question: how many conversions would not have happened without this specific ad exposure? This is fundamentally different from attribution, which assigns credit to touchpoints without determining causation.

The distinction matters because many conversions attributed to ads would have occurred organically. A customer searching for your brand name and clicking a branded search ad was likely going to buy anyway. The ad gets credit, but the incremental value is close to zero. Incrementality testing quantifies this gap between attributed and incremental conversions.

The concept is borrowed from clinical trials in medicine. Just as a drug trial compares a treatment group to a placebo group, incrementality testing compares an exposed group (sees your ads) to a control group (does not see your ads). The difference in conversion rates between the two groups is the incremental lift.
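The treatment/control comparison reduces to simple arithmetic. A minimal Python sketch, with illustrative conversion numbers (not from any real study):

```python
def incremental_lift(test_conversions, test_users, control_conversions, control_users):
    """Relative lift in conversion rate of the exposed group over the control group."""
    cr_test = test_conversions / test_users
    cr_control = control_conversions / control_users
    return (cr_test - cr_control) / cr_control

# Illustrative numbers: 1.2% conversion rate in the exposed group
# vs. 1.0% in the holdout control group.
lift = incremental_lift(600, 50_000, 500, 50_000)
print(f"Incremental lift: {lift:.0%}")  # → Incremental lift: 20%
```

In this hypothetical, one fifth of the exposed group's conversions are incremental; the rest would have happened anyway.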

For e-commerce brands, incrementality testing is essential because ad platforms have financial incentives to take credit for as many conversions as possible. Platform-reported ROAS is almost always higher than the true incremental ROAS. Understanding this gap prevents you from over-investing in channels that look efficient but are not actually driving growth.

Types of Incrementality Tests

There are three primary approaches to measuring incrementality, each with different trade-offs between precision, cost, and complexity.

Conversion Lift Studies are the simplest to implement. Both Google and Meta offer built-in conversion lift tools that randomly split your audience into test (sees ads) and control (does not see ads) groups. The platform measures the difference in conversion rates between the two groups. These studies require significant budget (typically €5,000+ over 2-4 weeks) and sufficient conversion volume (100+ conversions per group) for statistical significance.

Geo-Lift Tests use geographic regions as test and control groups. You select matched pairs of regions with similar demographics and purchasing patterns, then run ads in the test regions while pausing in the control regions. By comparing sales trends between the two groups, you can isolate the impact of advertising. Geo-lift tests are particularly useful for measuring offline impact and for channels where user-level randomization is not possible.

Holdout Experiments involve temporarily pausing a specific campaign, channel, or tactic and measuring the impact on overall performance. This is the simplest approach but also the riskiest, as you are deliberately reducing spend with no guarantee of learning. Holdout experiments work best for testing whether specific campaigns (e.g., branded search, retargeting) are truly incremental.

Designing a Geo-Lift Test

Geo-lift testing is our preferred methodology for e-commerce incrementality measurement. Here is how to design one properly.

Step 1: Define the hypothesis. Be specific about what you are testing. "Are Meta prospecting campaigns incremental?" is better than "Does Meta advertising work?" The more focused the hypothesis, the cleaner the test.

Step 2: Select test and control regions. Match regions based on population size, average income, historical sales volume, and seasonality patterns. Use at least 2-3 test regions and 2-3 control regions to reduce the impact of regional anomalies. In Greece, you might use city-level splits (e.g., Athens vs. Thessaloniki) or postal code clusters.

Step 3: Determine test duration. The test must run long enough to capture sufficient conversion volume and to average out day-of-week effects and week-to-week variation. For e-commerce, we recommend a minimum of 3-4 weeks, ideally 6 weeks for channels with longer consideration cycles.

Step 4: Calculate required sample size. Use statistical power analysis to determine the minimum detectable effect (MDE) your test can identify. For a typical e-commerce geo-lift test, you need enough conversion volume in each group to detect a 10-20% lift at 90% confidence with adequate statistical power (80% is a common choice). If your expected lift is smaller, you need more regions or a longer test period.
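This power analysis can be approximated with the standard two-proportion sample-size formula. A sketch assuming a 1% base conversion rate; the function name and default parameters are illustrative:

```python
import math
from statistics import NormalDist

def users_per_group(base_rate, relative_lift, alpha=0.10, power=0.80):
    """Approximate users needed per geo group to detect a given relative
    lift in conversion rate (two-proportion test, normal approximation)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Illustrative: 1% base conversion rate, aiming to detect a 15% lift
# at 90% confidence with 80% power — roughly 58,000 users per group.
print(users_per_group(0.01, 0.15))
```

Note how quickly the requirement grows as the target lift shrinks: halving the detectable lift roughly quadruples the required sample.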

Step 5: Run the test. Apply the treatment (ads on/off) in the designated regions. Monitor data quality throughout the test — check for contamination (users in control regions seeing ads through VPNs or travel) and external confounds (competitor promotions, weather events, etc.).

Step 6: Analyze results. Compare conversion rates and revenue between test and control regions, controlling for pre-test differences. Use causal inference methods (difference-in-differences, synthetic control) rather than simple comparisons to account for baseline trends.
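The difference-in-differences estimate itself is a single subtraction: the test group's pre-to-post change minus the control group's change, which nets out the shared baseline trend. A minimal sketch with hypothetical revenue figures:

```python
def diff_in_diff(test_pre, test_post, control_pre, control_post):
    """Difference-in-differences estimate: the test group's change
    minus the control group's change over the same period."""
    return (test_post - test_pre) - (control_post - control_pre)

# Illustrative weekly revenue (€): both groups grew, but the test
# regions grew €8,000 more than the baseline trend explains.
lift_eur = diff_in_diff(test_pre=100_000, test_post=115_000,
                        control_pre=98_000, control_post=105_000)
print(lift_eur)  # → 8000
```

A naive test-vs-control comparison would have credited the ads with €10,000; subtracting the control group's organic growth corrects that overstatement.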

Interpreting Incrementality Results

Incrementality test results typically reveal uncomfortable truths. Here is what to expect and how to act on the findings.

Branded search is rarely incremental. We consistently find that 60-80% of branded search conversions are not incremental — customers would have found your site organically. This does not mean you should pause branded search entirely (competitors may bid on your terms), but it means the true value is much lower than platform-reported ROAS suggests.

Retargeting is partially incremental. Retargeting campaigns typically show 30-50% incrementality. Many retargeted users were already in the purchase funnel and would have converted without the reminder. The incremental value comes from accelerating purchase timing and reducing cart abandonment.

Prospecting is highly incremental. Upper-funnel prospecting campaigns often show 70-90% incrementality because they reach users who would not have discovered your brand otherwise. Despite lower attributed ROAS, prospecting campaigns frequently have the highest incremental ROAS.

These findings reshape budget allocation. The common practice of concentrating spend on high-ROAS retargeting and branded search may be less efficient than investing more in prospecting campaigns with lower attributed but higher incremental returns.

Building an Incrementality Testing Program

Incrementality testing should not be a one-time exercise. We recommend building a continuous testing program with a quarterly cadence.

Q1: Channel-level tests. Measure the incrementality of your largest channels (Google, Meta, TikTok) using platform conversion lift studies. This establishes baseline incrementality rates for budget planning.

Q2: Campaign-type tests. Within your top channels, test specific campaign types: branded vs. non-branded search, prospecting vs. retargeting, PMax vs. Standard Shopping. Use geo-lift tests for cross-channel comparisons.

Q3: Creative and audience tests. Test whether specific creative approaches or audience strategies drive higher incremental lift. This informs your creative strategy and targeting framework.

Q4: Budget optimization. Use the accumulated incrementality data to reallocate budgets based on incremental ROAS rather than attributed ROAS. Run holdout experiments to validate the optimized allocation.
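The Q4 reallocation step can be as simple as re-ranking channels by incremental rather than attributed ROAS. A sketch with hypothetical channel data; the figures sit within the incrementality ranges discussed above:

```python
# Hypothetical channel data: attributed ROAS from the platforms,
# incrementality rates from the year's accumulated tests.
channels = {
    "branded_search": {"attributed_roas": 8.0, "incrementality": 0.25},
    "retargeting":    {"attributed_roas": 6.0, "incrementality": 0.40},
    "prospecting":    {"attributed_roas": 3.0, "incrementality": 0.85},
}

# Rank channels by incremental ROAS instead of attributed ROAS.
ranked = sorted(channels.items(),
                key=lambda kv: kv[1]["attributed_roas"] * kv[1]["incrementality"],
                reverse=True)
for name, d in ranked:
    print(name, round(d["attributed_roas"] * d["incrementality"], 2))
```

Ranked by attributed ROAS, branded search leads; ranked by incremental ROAS, it finishes last, which is exactly the kind of reordering that should drive the reallocation.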

Document every test with clear methodology, results, and recommendations. Over time, you build an incrementality knowledge base that makes budget decisions increasingly data-driven rather than platform-dependent.