The DAC Blog

Stay informed with actionable insights, expert strategies, and innovative ideas to drive your marketing success

Explore featured blog posts

Incrementality in analytics: Breakthrough strategy or marketing jargon?
8 min

Media measurement has never been easy. In today’s fragmented landscape, consumers encounter multiple ads across various platforms, devices, and time frames. This creates a complex web of interactions, making it nearly impossible to isolate the true impact of each ad.

The rise of digital tracking provided new ways to measure behavior, but even with advanced analytics, crucial blind spots remain. For example, factors like price, product availability, and offline conversions often go unaccounted for. The industry’s growing reliance on third-party data has also been disrupted by the deprecation of cookies and increasing limitations within “walled gardens” like Meta and Google. These challenges make it harder than ever to measure the actual contribution of media investments.

As marketers remain under pressure to deliver short-term results, the need for accurate measurement has never been more critical. Many turn to ad tech solutions that promise AI-driven insights and performance optimization. However, these tools often fail to deliver measurable business impact. Incrementality has emerged as a potential answer, helping marketers determine the real value of their media spend. But is it truly a breakthrough strategy, or just another industry buzzword?

The pros and cons of incremental testing

While incrementality is often presented as a solution to modern media measurement challenges, its real-world application comes with significant limitations. Due to real-world constraints, such as budgets, channel mix, creative variations, and behavioral shifts, there is no feasible way to determine the true incremental impact of each ad run without continuous long-term testing for every scenario. And that simply isn’t realistic.

Despite these challenges, every marketer knows that relying on last-click attribution or any other non-data-driven modeling leads to either duplication, bias toward certain channels, or, in most cases, both.
Without a more robust measurement approach, marketing performance can be easily misrepresented. Only by using incremental measurement, and validating its results, can we begin to hold our media investments accountable.

Yet, even with incremental testing, skepticism remains. Marketers find it increasingly difficult to trust the results, especially when major platforms introduce their own measurement tools. Meta’s A/B testing tool, for example, has been criticized for its inability to provide unbiased results due to divergent delivery. Instead of distributing ads evenly across test groups, Meta optimizes delivery by prioritizing users more susceptible to the ad’s messaging. This means reported results reflect both targeting effects and actual ad impact, inflating the perceived lift.

Given these challenges, some marketers dismiss incrementality as just another industry buzzword: an appealing concept that fails under real-world conditions. But does this mean incrementality should be disregarded entirely? Not quite. While no method is perfect, when applied correctly, incrementality testing remains one of the best tools available for estimating the true impact of media investments.

Why incrementality matters more than ever

Studying the incremental effects of media provides valuable insights that go beyond simple return on investment (ROI). It helps marketers determine not just how much revenue their media is generating, but also where their next marketing dollar should be allocated for maximum impact.

One of the key benefits of incrementality testing is identifying saturation points: the level of investment at which a media channel’s effect plateaus. Simply put, this tells marketers when spending more won’t lead to additional conversions, allowing them to reallocate budgets more efficiently.
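The saturation-point idea can be sketched with a toy diminishing-returns curve. Everything below is an illustrative assumption (the Hill-style curve shape, the $50,000 half-saturation point, the 1,000-conversion ceiling), not real campaign data:

```python
# Hypothetical illustration: finding a channel's saturation point on a
# diminishing-returns curve. Curve shape and parameters are assumptions
# for this sketch, not measurements from any real campaign.

def response(spend, max_conversions=1000, half_saturation=50_000):
    """Hill-style response curve: total conversions as a function of spend."""
    return max_conversions * spend / (spend + half_saturation)

def marginal_return(spend, step=1_000):
    """Extra conversions bought by the next `step` dollars of spend."""
    return response(spend + step) - response(spend)

# Walk up the spend curve until an extra $1,000 buys fewer than 1 conversion.
spend = 0
while marginal_return(spend) >= 1:
    spend += 1_000

print(f"Approximate saturation point: ${spend:,}")
```

Past that point, each additional $1,000 yields less than one extra conversion, which is the signal to reallocate budget elsewhere.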
Incrementality also helps measure the true efficiency of media spend (MROI) by distinguishing which tactics are actually driving results versus those that appear effective due to flawed attribution models. Without incrementality testing, many marketers unknowingly allocate budgets toward channels that look good on reports but don’t contribute to true business growth.

However, applying incrementality testing isn’t always straightforward. While continuous testing provides the most accurate results, it comes at a cost. Running experiments across multiple scenarios can be expensive and time-consuming, making it difficult to apply in every situation. That said, the insights gained from even a few well-executed tests can significantly improve marketing efficiency, helping businesses make better-informed decisions about their media strategy and avoid common pitfalls like wasted spend and over-reliance on misleading attribution models.

Common pitfalls in attribution and analytics

When analyzing media performance, marketers often rely on ad servers and analytics platforms to measure success. However, these tools come with significant limitations, which can lead to misguided budget decisions and misinterpretation of marketing effectiveness.

A common challenge is conversion tracking through pixels. Platforms like Meta Ads and Google Ads use their own pixels to track conversions, but this often leads to overcounting when multiple platforms take credit for the same sale. Floodlights, which help track conversions across platforms, reduce duplication but fail to account for external influences such as seasonality, competitor activity, or broader market trends. Although GA4 has moved away from a last-touch approach toward a data-driven attribution model, credit is still primarily assigned based on click interactions.
As a result, paid and organic search are often overrepresented, while upper-funnel tactics, which rely more heavily on impressions and view-based engagement, remain undervalued. The result? Media budgets are optimized toward existing category demand, rather than efforts that drive true incremental growth.

When companies optimize media using last-touch attribution, they often fall into what we could refer to as a “doom loop”:

1. Budgets shift toward channels that appear to have the highest last-touch return on ad spend (ROAS).
2. Over time, brand awareness weakens as upper-funnel media investment decreases.
3. Demand declines, leading to lower overall revenue.
4. To compensate, marketing budgets are cut further, reinforcing the cycle.

This is why incrementality testing is essential. Without it, businesses risk over-investing in channels that look effective on paper but don’t actually contribute to long-term growth.

How to measure incrementality: The two main approaches

There are multiple ways to measure incrementality, but two methods stand out as the most reliable: incrementality testing and marketing mix modeling (MMM). Each serves a distinct purpose and can be used depending on the specific measurement needs of a business.

Incrementality testing: Best for new channels

Incrementality testing compares a control group (not exposed to a marketing channel) against an exposed group (those who see the ads) to determine the true impact of a given channel. The key is to select groups that behave similarly before the intervention; this ensures that any difference in performance afterward is due to the media and not external factors.

One of the best approaches within incrementality testing is geo-testing, where certain geographic regions are used as test and control groups. Geo-testing is considered the gold standard because it eliminates many of the biases that come with individual user-level tracking.
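The control-vs-exposed comparison at the heart of a geo-test can be sketched in a few lines. The region names and conversion counts below are invented for illustration, and the sketch assumes the regions were already matched on pre-period behavior:

```python
# Hypothetical sketch of a geo-test readout: comparing matched test and
# control regions to estimate incremental lift. Region names and conversion
# counts are made up for illustration; a real geo-test must first verify
# that test and control regions tracked each other before the campaign.

test_regions = {"Metro A": 1250, "Metro B": 980}     # exposed to the campaign
control_regions = {"Metro C": 1100, "Metro D": 890}  # held out

test_total = sum(test_regions.values())
control_total = sum(control_regions.values())

# Conversions above the control baseline are attributed to the media.
incremental_conversions = test_total - control_total
lift = incremental_conversions / control_total

print(f"Incremental conversions: {incremental_conversions}")
print(f"Lift vs. control: {lift:.1%}")
```

Only the conversions above the control baseline count as incremental; the rest would have happened without the campaign.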
Recent advancements in technology have made geo-testing faster and more efficient, giving marketers a more accurate way to measure the lift generated by media investments.

Marketing mix modeling (MMM): Best for evaluating past performance

MMM uses statistical regression models to estimate the relationship between media spend and business outcomes. Unlike incrementality testing, which focuses on direct comparisons, MMM looks at historical data to analyze trends over time.

One limitation of basic regression models is that correlation does not always imply causation. However, modern MMM techniques integrate causal inference frameworks and separately account for factors like seasonality, external influences, and market trends, significantly improving accuracy. This makes MMM an excellent tool for businesses looking to evaluate past media performance and optimize future budget allocation.

Both methods, when applied correctly, help marketers move beyond unreliable attribution models and toward a deeper understanding of their media’s true impact. The choice between them depends on the business goal: testing new channels? Use incrementality testing. Optimizing long-term budget allocation? Use MMM.

How marketers can take action

Understanding incrementality is one thing; applying it effectively is another. While no measurement approach is perfect, marketers can take specific steps to ensure they’re making more informed, data-driven decisions.

- Consult an analytics team to establish a measurement framework: Incrementality measurement requires expertise. Partnering with analysts can help define the right approach based on your business goals, data availability, and budget constraints.
- Use marketing mix modeling (MMM) to assess past performance: MMM can help you understand how different media investments have historically driven business outcomes. This allows for more strategic budget allocation moving forward.
- Experiment with new media channels and validate their incremental value: Before scaling investments in new channels, run incrementality tests to determine their true impact. Geo-testing and control/exposed experiments can provide clear, actionable insights.

By implementing these steps, businesses can move beyond outdated attribution models and ensure that marketing investments are driving real, measurable growth, not just shifting credit between platforms.

There is no perfect way to measure the impact of every marketing effort. The loss of third-party cookies, attribution gaps, and conflicting platform data make it harder than ever to track conversions accurately. However, incrementality testing and marketing mix modeling (MMM) provide better insights than outdated attribution models. While not flawless, they help marketers make more informed, data-driven decisions. By consulting analytics professionals and implementing rigorous testing, businesses can ensure their marketing investments drive real, measurable growth, not just shifts in attribution.
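To make the regression at the heart of MMM concrete, here is a minimal single-channel sketch. The weekly spend and revenue figures are synthetic numbers invented for this example; a production MMM would model many channels at once and add controls for seasonality, adstock (carryover), and saturation:

```python
# Illustrative-only sketch of the MMM idea: regress weekly revenue on media
# spend to estimate the channel's contribution per dollar. All numbers are
# synthetic; real MMMs use years of data, multiple channels, and controls
# for seasonality, adstock, and saturation.

# Synthetic weekly observations: (media spend, revenue)
weeks = [(10, 135), (12, 141), (8, 129), (15, 150), (11, 138), (9, 132)]

n = len(weeks)
mean_x = sum(x for x, _ in weeks) / n
mean_y = sum(y for _, y in weeks) / n

# Ordinary least squares for a single predictor:
# slope = cov(x, y) / var(x), intercept = mean_y - slope * mean_x
cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in weeks)
var_x = sum((x - mean_x) ** 2 for x, _ in weeks)
slope = cov_xy / var_x
intercept = mean_y - slope * mean_x

print(f"Estimated revenue per unit of spend: {slope:.2f}")
print(f"Baseline (zero-spend) revenue: {intercept:.2f}")
```

The intercept is the key output attribution models miss: it estimates the revenue that would arrive with zero spend, so only revenue above that baseline is credited to the media.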

Read More
Explore all DAC blog posts

Discover expert perspectives and actionable ideas to optimize your marketing strategy today.

Stay Forward

Subscribe to our monthly newsletter.