There's a specific kind of conversation I've had at least a dozen times in the last year. A founder or CFO asks me, mid-audit, why their Google Ads dashboard shows one number for conversions, their Shopify dashboard shows a different number, and their Meta dashboard shows a third. They expect this to be a data hygiene problem — something I can fix by reinstalling a pixel or checking a GTM tag.
It isn't. It's a structural reality of how paid media measurement works in 2026, and no amount of tag auditing will make those numbers match. What I can do — what any honest operator can do — is explain which of those numbers is closest to reality, why they're all wrong in their own specific ways, and what to do instead of hoping someone ships a pixel update that fixes it.
This post is the explanation I give to every client in the first week of an engagement. Bookmark it, send it to your CFO, or hand it to anyone who asks why your reporting has gotten so confusing.
A quick reality check on where we are
Let me establish the baseline, because a lot of people don't realize how broken the classic measurement stack has become.
Third-party cookies in Chrome are, as of early 2026, effectively deprecated. Not "deprecated soon" — gone. Chrome's Privacy Sandbox is the replacement, and it works through a completely different set of APIs (Attribution Reporting API, Protected Audience API, Topics API) that do not produce the kind of user-level reporting we had in 2019. Firefox and Safari have been on this trajectory for even longer. That era of measurement is over.
iOS has been broken for paid media measurement since iOS 14.5 in 2021 and has gotten progressively worse with each subsequent iOS release. Meta's Conversions API (CAPI) partially addresses the blind spot, but partially is the right word. Depending on the account, CAPI recovers 40-75% of the conversion data that would have been visible pre-iOS 14. Not 100%. Never 100%.
GA4's attribution is, charitably, directionally useful. Its "data-driven attribution" model has gotten meaningfully better since 2023 — it's no longer the black-box embarrassment it was at launch. But it's still modeling, not measuring, and the modeling is based on an increasingly thin set of observable signals.
This is the environment. Now, what works?
The three measurement systems I actually use
I run every client's measurement across three parallel systems, because no single system tells the truth. The three systems triangulate each other. When they agree, I trust the number. When they disagree, the disagreement itself is information.
System one: platform-reported conversions. This is what Google Ads and Meta Ads Manager show you. It's the data the platforms use to optimize their bids, so it's the data that actually matters for campaign management. It is also systematically inflated — every platform is graded on the conversions it can claim credit for, and their optimization algorithms reward views, post-engagement conversions, and other soft attributions that a rigorous accountant would not count. I use platform-reported conversions to optimize campaigns. I do not use them to report business performance to leadership.
System two: server-side conversion tracking to a single source of truth. For most clients this is Shopify or their CRM. I set up Conversions API from Meta, Enhanced Conversions from Google, and server-side GTM if the account has the volume to justify the complexity. The goal is to get first-party conversion data flowing to the platforms as cleanly as possible, but also — critically — to record that data in my own warehouse where I can audit it later. Platform-reported numbers drift. Source-of-truth numbers don't.
System three: incrementality testing. This is the part most advertisers skip and absolutely should not. At least once per quarter, I run a holdout test on a significant channel — typically Meta prospecting or YouTube Demand Gen, because those are the channels with the biggest gap between reported conversions and actual incremental value. You take 20-30% of your audience, hold them out from the campaign, compare conversion rates on the holdout group against the test group, and back into a rough incremental lift number. This number is almost always 30-60% lower than the platform-reported ROAS. That gap is your "attribution inflation" factor, and knowing it — roughly — is more useful than any dashboard.
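The back-of-envelope math above is simple enough to sketch. This is a minimal illustration, not a full experimental design: it assumes a clean random split between test and holdout groups, and all the numbers below are made up for demonstration.

```python
def incremental_lift(test_users, test_conversions, holdout_users, holdout_conversions):
    """Rough incremental lift from a holdout test.

    Assumes a clean random audience split; ignores significance testing,
    which a real readout would need.
    """
    test_cvr = test_conversions / test_users
    holdout_cvr = holdout_conversions / holdout_users
    # Conversions the campaign actually caused, per exposed user
    incremental_cvr = test_cvr - holdout_cvr
    # Lift relative to the organic baseline
    lift = incremental_cvr / holdout_cvr if holdout_cvr else float("inf")
    # Share of test-group conversions that were truly incremental --
    # the "attribution inflation" deflator
    incremental_share = incremental_cvr / test_cvr if test_cvr else 0.0
    return lift, incremental_share


# Illustrative numbers only: 70/30 split
lift, share = incremental_lift(70_000, 1_400, 30_000, 450)
```

In this made-up example only 25% of the test group's conversions are incremental, which is the kind of gap the paragraph above describes. Multiplying platform-reported ROAS by that share gives a rough deflated ROAS for strategic decisions.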
The measurement stack, specifically
Here's what I actually install on every client account. This is deliberately unglamorous because the interesting work is in how you use it, not what you use:
GA4 with data-driven attribution, Enhanced Measurement enabled, and custom events for anything that matters to the business (form starts, form completions, calendar bookings, pricing page views, demo requests, etc.). GA4 is not my favorite tool. It is the lowest common denominator, and I use it because every other tool in the stack assumes you have it.
Server-side Google Tag Manager for any account with $30K+/month in spend, in order to preserve as much conversion signal as possible across iOS and ad blocker environments. This is not optional if you're at meaningful scale. The setup cost is modest (~$1,500-3,000 one-time). The ongoing infrastructure cost is trivial. The measurement recovery is significant.
Conversions API (Meta) and Enhanced Conversions (Google) with proper first-party data hashing (email, phone, any deterministic user identifier the business has). This is where server-side GTM becomes critical, because the platforms both accept CAPI/EC data directly from your server, bypassing the browser entirely.
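The hashing step itself is mechanical: both Meta CAPI and Google Enhanced Conversions expect SHA-256 over a normalized identifier (lowercased, trimmed email; digits-only phone with country code). A minimal sketch follows; the payload keys and the US-country-code fallback are illustrative assumptions, not a complete spec for either API.

```python
import hashlib
import re


def normalize_email(email: str) -> str:
    # Both platforms expect lowercase, whitespace-trimmed email before hashing
    return email.strip().lower()


def normalize_phone(phone: str, default_country_code: str = "1") -> str:
    # Digits only, with country code. Assuming bare 10-digit numbers are US
    # is an illustrative shortcut, not a general rule.
    digits = re.sub(r"\D", "", phone)
    if len(digits) == 10:
        digits = default_country_code + digits
    return digits


def sha256_hash(value: str) -> str:
    return hashlib.sha256(value.encode("utf-8")).hexdigest()


# Hypothetical payload fragment; field names vary by platform
payload = {
    "em": sha256_hash(normalize_email("  Jane.Doe@Example.COM ")),
    "ph": sha256_hash(normalize_phone("(415) 555-0123")),
}
```

The important property is determinism: if the browser pixel and the server both normalize the same way, the platform can match the two events and deduplicate them.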
An independent data warehouse — BigQuery, Snowflake, or even Postgres for smaller clients — where I dump GA4 data, platform data, and CRM data into a single location that I can query without platform mediation. This is the thing that lets me triangulate "system one vs. system two" differences. Without it, you're trusting each platform's interpretation of its own contribution.
UTM discipline — boring, unsexy, and still one of the highest-leverage things you can do. Every paid campaign uses a standardized UTM schema. I maintain a Google Sheet (yes, a Google Sheet — sue me) that serves as the UTM registry so that two campaigns in two different platforms never produce colliding source/medium values.
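A registry only prevents collisions if something enforces it. A small sketch of that enforcement, assuming a hypothetical controlled vocabulary of source/medium pairs (in practice this would read from the spreadsheet rather than a hard-coded set):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical registry: the only source/medium pairs allowed on paid URLs
REGISTRY = {
    ("meta", "paid_social"),
    ("google", "cpc"),
    ("youtube", "paid_video"),
}
REQUIRED = {"utm_source", "utm_medium", "utm_campaign"}


def build_url(base: str, source: str, medium: str, campaign: str) -> str:
    # Refuse to mint a URL with an unregistered pair -- collisions die here
    if (source, medium) not in REGISTRY:
        raise ValueError(f"unregistered source/medium pair: {source}/{medium}")
    qs = urlencode({"utm_source": source, "utm_medium": medium,
                    "utm_campaign": campaign})
    return f"{base}?{qs}"


def validate_url(url: str) -> bool:
    # Audit check for URLs built by hand or by another team
    params = parse_qs(urlparse(url).query)
    if not REQUIRED.issubset(params):
        return False
    return (params["utm_source"][0], params["utm_medium"][0]) in REGISTRY


url = build_url("https://example.com/landing", "meta", "paid_social", "q1_prospecting")
```

Running `validate_url` over last month's landing URLs is a cheap weekly audit that catches schema drift before it pollutes GA4's source/medium reporting.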
What I've given up trying to measure
Worth naming explicitly, because I think most operators are still trying to measure things that are genuinely unmeasurable in 2026, and it's driving them crazy:
View-through conversions with any precision. You can establish, directionally, that they exist. You cannot measure them with the per-conversion accuracy of the pre-iOS-14 era. Stop trying.
Cross-device attribution for logged-out users. Gone. If a user sees your Meta ad on mobile, closes the app, opens Chrome on their laptop three hours later, and converts — that conversion is not getting attributed to Meta unless the user is logged in on both devices with the same identifier. In practice, this means a meaningful chunk of your conversions are appearing in "direct" or "(not set)" GA4 buckets.
Precise LTV attribution by acquisition channel. The math looks clean in a dashboard. The underlying data doesn't support the precision the dashboard implies. I use cohort-level LTV analysis (customers acquired in Q1 have an LTV of $X) rather than channel-level LTV ($Y LTV for Meta acquisitions), because the channel-level cohorting depends on attribution that isn't reliable.
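The cohort-level version is deliberately crude, which is the point: it asks nothing of attribution. A toy sketch with invented customer records:

```python
from collections import defaultdict
from statistics import mean

# Illustrative records: acquisition cohort plus lifetime revenue to date.
# Note there is no channel field -- cohort LTV doesn't need attribution.
customers = [
    {"cohort": "2026-Q1", "revenue": 180.0},
    {"cohort": "2026-Q1", "revenue": 420.0},
    {"cohort": "2026-Q2", "revenue": 90.0},
    {"cohort": "2026-Q2", "revenue": 210.0},
]

by_cohort = defaultdict(list)
for c in customers:
    by_cohort[c["cohort"]].append(c["revenue"])

# Average LTV per acquisition cohort
cohort_ltv = {cohort: mean(revs) for cohort, revs in by_cohort.items()}
```

Comparing cohort LTV against that quarter's blended acquisition cost still answers the strategic question ("are we acquiring good customers?") without leaning on channel attribution that can't bear the weight.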
How to present this to leadership
The hardest part of operating in this environment is managing the measurement narrative internally, not the measurement itself. Here's the framing I use with founders and CFOs:
"Your paid media dashboards are showing you a version of the truth. Not the whole truth, not a lie, but a version that is optimized for running the campaigns rather than reporting them to you. The number that represents actual business performance is between what Google Ads shows you, what GA4 shows you, and what your CRM shows you, weighted toward your CRM. We should make strategic decisions based on the CRM number. We should make campaign optimization decisions based on the platform number. We should never conflate the two, and we should never expect them to agree."
That framing has saved me countless hours of internal politics on client teams. It moves the conversation from "which dashboard is right?" (unanswerable) to "which dashboard is right for which decision?" (tractable).
If your measurement stack is still pretending we're in 2019 (last-click attribution, single-platform reporting, no server-side tracking, no incrementality testing), you have plenty of company: it's one of the most common gaps I find in audits. Start there. It's not glamorous work, but it's the foundation for everything else.