Identity-graph pricing looks opaque from the outside. A prospective buyer asks three vendors for an MAID-to-HEM match-rate quote against a seed file of 1M records and gets back three numbers that differ by 3x, accompanied by three different unit-price structures and three different minimum commitments. The instinct is to pick the lowest; the consequence is usually a match-rate disappointment that kills the program on renewal. The real story is that identity-graph economics are driven by a small number of observable inputs — seed cohort freshness, signal density per MAID, regulatory ring-fencing of sensitive categories, and the vendor's own panel composition over time — and once you understand those inputs, the pricing curve stops being arbitrary. For the conceptual foundation see identity graphs 101: MAID to HEM, CTV IDs, and household resolution; for the GSDSI catalog-side surface see MAID Feed and Core Email File.
Key Takeaways
Match-rate quotes only mean something relative to a named seed cohort — a 60% match on a 30-day-old email file means something very different from 60% on a 36-month-old file, and vendor quotes against undated seeds are not directly comparable.
MAID cohort decay runs roughly 3-7% per month under Apple App Tracking Transparency — a graph that was 250M unique MAIDs a year ago carries fewer observable-today identifiers than the catalog sheet implies, and pricing should reflect what's reachable this week, not what was once instrumented.
Signal density per MAID (app count, POI visits, CTV exposures) drives downstream usefulness more than the headline MAID count does — a 150M-MAID graph with dense signal history is commercially more valuable than a 250M-MAID graph of thin records.
The FTC's 2024 X-Mode/Outlogic order and InMarket order define the sensitive-category ring-fence every MAID vendor now prices around — health, reproductive, religious, and protest-site geofences add compliance cost that the final rate reflects.
State privacy acts tracked by IAPP carry opt-out and deletion SLA obligations that scale with MAID throughput — buyers procuring against state residents should expect per-resident compliance surcharges, not a single blanket rate.
What a Match-Rate Quote Actually Measures
A match-rate quote is a compound number. It is the product of (1) the vendor's graph coverage against the buyer's target population, (2) the freshness of the buyer's seed file relative to the vendor's refresh cadence, (3) the signal types both sides agree to match on (email-to-MAID vs email-to-HEM vs HEM-to-HEM with a confidence threshold), and (4) the vendor's opt-out and sensitive-category filtering, which strips records before the match is computed. Two vendors quoting against the same seed file can return wildly different match rates because they are filtering different records out pre-match. A buyer comparing quotes should ask each vendor for three numbers: the raw graph size, the match rate pre-filtering, and the match rate post-filtering — if only the post-filter number is shared, the buyer is comparing vendors' compliance postures rather than their actual graph coverage. For the deeper methodology framing, see identity graphs 101: MAID to HEM, CTV IDs, and household resolution.
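To make the three-number request concrete, here is a minimal sketch in Python. The GraphRecord fields and filter flags are hypothetical stand-ins for illustration, not any vendor's actual schema.

```python
from dataclasses import dataclass

@dataclass
class GraphRecord:
    hem: str            # hashed email (HEM) used as the match key
    maid: str           # mobile ad ID resolved to that HEM
    opted_out: bool     # stripped by the vendor's opt-out pipeline
    sensitive: bool     # stripped by the sensitive-category ring-fence

def match_rate_report(seed_hems: set[str], graph: list[GraphRecord]) -> dict:
    """The three numbers a buyer should request from every vendor."""
    raw_keys = {r.hem for r in graph}
    filtered_keys = {r.hem for r in graph if not (r.opted_out or r.sensitive)}
    n = len(seed_hems)
    return {
        "raw_graph_size": len(raw_keys),
        "pre_filter_match_rate": len(seed_hems & raw_keys) / n,
        "post_filter_match_rate": len(seed_hems & filtered_keys) / n,
    }
```

The gap between the second and third numbers is the vendor's compliance posture; the second number alone is its graph coverage. Comparing only third numbers across vendors conflates the two.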
MAID Cohort Decay Under ATT
The single largest hidden cost in MAID economics is cohort decay. Before Apple App Tracking Transparency rolled out in 2021, the iOS IDFA was available by default on instrumented app sessions (minus the minority of users who enabled Limited Ad Tracking); post-ATT, observable IDFA on iOS is roughly in the 20-30% range depending on app category, and on Android the analogous AAID trajectory is slower but directionally similar as Google Privacy Sandbox proposals mature. The consequence: an MAID graph sized at 250M unique identifiers a year ago does not contain 250M identifiers reachable this week. A reasonable reachable-today decay assumption is 3-7% per month against the historical cohort, with heavier decay for iOS-skewed app panels and lighter decay for Android-dominant or CTV-adjacent panels. Graph pricing that doesn't account for this — a flat rate against historical cohort size — silently drifts out of the money over the contract term. A vendor that ships a monthly observable-cohort diagnostic (this week's reachable count vs last week's) is pricing honestly; a vendor that only quotes historical size is selling a depreciating asset at a fixed price.
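The arithmetic is worth running once. A back-of-envelope sketch, assuming the 3-7% monthly band above compounds against a catalog sized at 250M twelve months ago:

```python
def reachable_today(historical_size: float, monthly_decay: float, months: int) -> float:
    """Compound the monthly decay rate against the historical cohort size."""
    return historical_size * (1 - monthly_decay) ** months

catalog = 250e6  # MAIDs as sized a year ago (illustrative)
for decay in (0.03, 0.05, 0.07):
    print(f"{decay:.0%}/mo -> {reachable_today(catalog, decay, 12)/1e6:,.0f}M reachable today")
# 3%/mo -> 173M; 5%/mo -> 135M; 7%/mo -> 105M
```

Even at the gentle end of the band, a third of the catalog-sheet number is gone within a year; at the heavy end, more than half. A flat per-MAID rate negotiated against 250M is really a rising effective rate against the shrinking reachable cohort.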
Signal Density Beats Headline Record Count
The second mispricing trap is optimizing for headline record count at the expense of signal density per record. A 250M-MAID graph where the median MAID has 3 apps and 2 POI visits observed is commercially less valuable than a 150M-MAID graph where the median has 12 apps and 35 POI visits observed — because identity resolution only earns its fee when downstream use-cases (audience targeting, cross-channel measurement, fraud detection, B2B account resolution) can actually do something with the resolved record. Signal density drives: (a) audience segments that hold stable week over week, (b) polygon-accurate foot-traffic attribution, (c) reliable CTV-to-MAID overlap for cross-device measurement, and (d) B2B device-to-domain joining for account-level signal. A graph that is thin on any of these dimensions will not reliably deliver, regardless of headline MAID count. Buyers should ask vendors for the median and p90 signal-count-per-record distribution, not just the top-line graph size — and should structure pricing so that the vendor is paid against useful signal delivered, not instrumented records claimed. For the downstream use-cases that actually consume this, see the Audience Targeting solution and Cross-Channel Measurement.
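A minimal sketch of the distribution request, assuming the vendor can supply a per-record signal count. The 8-signal usefulness floor below is an illustrative threshold, not an industry standard.

```python
import statistics

def density_profile(signals_per_record: list[int]) -> dict:
    """Median and p90 of the signal-count-per-record distribution."""
    ranked = sorted(signals_per_record)
    return {
        "median": statistics.median(ranked),
        "p90": ranked[int(0.9 * (len(ranked) - 1))],
    }

def useful_records(signals_per_record: list[int], floor: int = 8) -> int:
    """Count records carrying enough observed signal to segment or
    attribute against; the floor of 8 is an assumed cutoff for the sketch."""
    return sum(1 for s in signals_per_record if s >= floor)
```

Priced this way, the 150M dense graph can carry more useful records than the 250M thin one, which is the whole argument for paying against useful signal delivered rather than instrumented records claimed.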
The Sensitive-Category Ring-Fence Is Now Priced In
Every MAID vendor operating in the US now ring-fences sensitive categories — health facilities, reproductive-care sites, places of worship, protest and political-rally locations, domestic-violence shelters, substance-use recovery centers. The FTC's consent orders against X-Mode/Outlogic and InMarket are the template; every subsequent vendor procurement contract should carry matching sensitive-category exclusion reps. The operational cost: a vendor maintaining a 2-million-POI sensitive-category blocklist, running monthly geofence-exclusion audits against inbound panel data, maintaining an opt-out pipeline covering every state obligation in IAPP's state-privacy tracker, and carrying E&O insurance against downstream buyer misuse — none of this is free, and all of it is priced into the per-MAID unit rate. A buyer negotiating aggressively against a mainstream vendor's rate card may be offered an opt-out of some of these protections in exchange for a lower unit price; this is an uninsurable trade for any institutional program and should be declined. Privacy-safe audience targeting after third-party cookies walks through what the post-ATT, post-cookie identity layer actually looks like when it is correctly procured.
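For illustration, a naive geofence-exclusion pass of the kind those monthly audits imply. The 100 m exclusion radius and the ping/POI record shapes are assumptions for the sketch, not the FTC's prescribed parameters, and a production pipeline would scan a 2-million-POI blocklist through a spatial index rather than this brute-force loop.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def exclude_sensitive(pings, blocklist, radius_m=100.0):
    """Keep only location pings farther than radius_m from every
    blocklisted sensitive-category POI (O(pings x blocklist) as written)."""
    return [
        p for p in pings
        if all(haversine_m(p["lat"], p["lon"], poi["lat"], poi["lon"]) > radius_m
               for poi in blocklist)
    ]
```

The point of the sketch is the cost structure: every inbound panel record pays a geometry check against the full blocklist before it can enter the graph, and that compute plus the blocklist's curation is part of what the unit rate buys.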
MAID Graph Procurement Diagnostics
The working checklist for any buyer evaluating an MAID/HEM identity graph (a pass/fail scoring sketch follows the list):
What is the reachable-today graph size versus historical cohort size, and what is the month-over-month decay trend? A vendor who can't show this is selling a depreciating asset at a fixed price.
What is the median and p90 signal-count-per-record distribution? Thin-signal graphs do not deliver on downstream use-cases regardless of headline MAID count.
What is the match-rate pre-filter vs post-filter, against a named seed cohort? Comparing post-filter-only numbers across vendors compares compliance postures, not graph coverage.
What is the sensitive-category ring-fence definition, POI blocklist size, and audit cadence? Anything weaker than the X-Mode/InMarket consent-order template is uninsurable.
What is the state-privacy-act compliance architecture — per-state opt-out pipeline, deletion SLA, sensitive-category exclusion by state? Expect a surcharge for states with CCPA/CPRA analogs; a flat rate means the vendor is under-pricing compliance risk.
What are the contractual reps on MNPI, B2B-vs-consumer data segregation, and downstream-misuse indemnification? These define the vendor's insurance posture and by extension the buyer's.
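A hypothetical scorecard over the six diagnostics above; the key names mirror the checklist and are not drawn from any vendor's questionnaire.

```python
# One boolean per checklist item, in order.
DIAGNOSTICS = [
    "reachable_today_vs_historical_with_decay_trend",
    "median_and_p90_signal_density_disclosed",
    "pre_and_post_filter_match_rates_on_named_seed",
    "ring_fence_meets_xmode_inmarket_template",
    "per_state_opt_out_and_deletion_sla",
    "mnpi_segregation_and_misuse_indemnification",
]

def shippable(vendor_answers: dict[str, bool]) -> bool:
    """A vendor is shippable only if it scores clean on all six;
    a missing answer counts as a fail."""
    return all(vendor_answers.get(d, False) for d in DIAGNOSTICS)
```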
A vendor that scores clean on all six is shippable for an institutional identity program. A vendor that stumbles on cohort decay or sensitive-category ring-fencing is pricing against a past-tense graph and carrying forward regulatory exposure that will surface on audit. For the catalog-side surface see MAID Feed and Core Email File; for the combined activation see Audience Targeting. Per IAB Tech Lab's identity guidance, the center of gravity for addressable identity in 2026 is durable first-party email plus authenticated CTV IDs plus device-level MAIDs for the remaining reachable cohort — any vendor pricing only against one of those surfaces is leaving defensible coverage on the table.
Frequently Asked Questions
What drives MAID graph pricing?
Four inputs: (1) reachable-today graph coverage against the buyer's target population; (2) signal density per MAID (app count, POI visits, CTV exposures) which drives downstream usefulness; (3) sensitive-category ring-fencing operational cost, including POI blocklist maintenance, geofence-audit cadence, and E&O insurance coverage shaped by the FTC's X-Mode and InMarket consent orders; (4) state-privacy-act compliance cost with per-resident opt-out pipelines and deletion SLAs. Buyers who understand these inputs can price vendors predictably; buyers who only compare headline rates end up surprised.
Why are match-rate quotes from different vendors hard to compare?
A match-rate quote is a compound number: graph coverage × seed-file freshness × match-key agreement × vendor pre-match filtering. Two vendors can quote against the same seed file and return very different rates because they filter different records pre-match (sensitive-category exclusions, opt-outs, low-confidence matches). A buyer should ask each vendor for raw graph size, pre-filter match rate, and post-filter match rate — a vendor that shares only the post-filter number is asking to be compared on compliance posture rather than on graph coverage.
How much does MAID cohort decay actually matter?
It matters a lot, and it is under-disclosed. Post-Apple ATT cohort decay runs roughly 3-7% per month against the historical MAID cohort, with heavier decay on iOS-skewed app panels. A graph marketed as 250M MAIDs a year ago does not contain 250M reachable-today identifiers. A vendor that ships a monthly observable-cohort diagnostic is pricing honestly; one that quotes only historical size is selling a depreciating asset at a fixed price.
What's the right way to evaluate signal density inside an identity graph?
Ask for median and p90 signal-count-per-record distribution — not just the top-line MAID count. A 150M-MAID graph with dense signal history (12 apps, 35 POI visits per median record) is commercially more useful than a 250M-MAID graph of thin records, because downstream use-cases like Audience Targeting and Cross-Channel Measurement only earn their fee when the resolved record carries enough observable signal to segment or attribute against.