The question most foot-traffic buyers ask last — and should ask first — is "how many devices do I actually need to answer my question?" Panel size translates directly into confidence-interval width, and the width determines whether the read is analytically useful or just directional. Too-small panels produce noisy chain comparisons that flip ranking quarter-over-quarter; right-sized panels produce reads that hold up under audit. This post is the working math: how MAID panel depth maps to chain-level, store-level, and DMA-level analytical questions, and the rough rules of thumb buyers can use to pressure-test a vendor's panel against their research need. For vendor-side quality framing, see geospatial data quality framework and why POI data quality makes or breaks foot-traffic analytics.
Key Takeaways
Panel depth translates to confidence-interval width — too-small panels produce reads that flip quarter-over-quarter because noise exceeds signal.
Chain-level reads are the easiest ask; store-level reads need roughly 10× the panel depth of chain reads for comparable confidence.
A ~60–80M MAIDs/month US panel supports defensible chain-level reads for mid-size chains and DMA-level breakouts for top-50 markets.
The MRC measurement standards and Census BDS business-size distribution are the external reference points for calibrating your vendor's panel claims.
Why Panel Size Is the First Question
Foot-traffic analytics is a sampling problem. The vendor observes visits to POIs from devices in the panel; extrapolating to total visits requires knowing what fraction of the visiting population the panel captures. The smaller the panel relative to total visitors, the wider the confidence interval on any derived metric — visit counts, dwell distributions, O/D shares, year-over-year deltas. The common procurement failure mode is evaluating a panel on raw device count ("100M MAIDs!") without translating to confidence-interval width for the specific analytical question at hand. A 100M panel on a quiet chain in a mid-density DMA might still produce noisy reads; a 40M panel concentrated in the chain's geography might produce cleaner ones. Scale is a means, not the end.
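The sampling intuition can be sketched with a Poisson approximation: the relative width of the confidence interval on an extrapolated visit count is driven by how many panel visits you actually observed, not by the headline device count. This is an illustrative simplification, not any vendor's production model, and the function name is hypothetical:

```python
import math

def relative_ci_width(observed_panel_visits: int, z: float = 1.96) -> float:
    """Approximate two-sided relative CI half-width for an extrapolated
    visit count, treating observed panel visits as Poisson-distributed.
    Illustrative assumption only; real extrapolation models add layers."""
    return z / math.sqrt(observed_panel_visits)

# A location observed by 400 panel devices in a window vs. only 25:
print(f"{relative_ci_width(400):.1%}")  # 9.8% half-width
print(f"{relative_ci_width(25):.1%}")   # 39.2% half-width
```

The point of the sketch: a "100M MAID" panel that yields only 25 observed visits at your POI of interest is a noisy panel for that question, whatever the headline says.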
Chain-Level Reads: The Easy Ask
Chain-level reads ("how did Chipotle's national traffic move quarter-over-quarter?") are the easiest analytical ask because aggregation across hundreds or thousands of stores smooths store-level noise. Rough working rules:
A ~60–80M MAIDs/month US panel (the scale of GSDSI's Global Mobility & Location Data in the US) supports defensible chain-level reads for any chain with 100+ US stores.
Smaller panels (20–40M MAIDs/month) can still work for national chains but confidence widens — expect ~5–10% wider confidence intervals and more noise at the quarterly frequency.
Weekly-cadence chain reads need roughly 2–3× the panel depth of quarterly reads for the same confidence because you're slicing a smaller time window.
For sub-50-store chains, even a large panel will produce noisy reads at the chain level — aggregate up to category or co-op benchmark rather than relying on chain-level precision.
The Census Business Dynamics Statistics provides the baseline chain-size distribution you should calibrate against; most vendors are not transparent about how their panels handle the long tail of small chains.
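Why aggregation makes chain-level reads the easy ask can be shown with the same Poisson sketch: summing observed visits across stores shrinks relative CI width by roughly the square root of the total. The store counts and per-store visit figures below are hypothetical, and the model is a deliberate simplification:

```python
import math

def chain_ci_width(stores: int, panel_visits_per_store: float,
                   z: float = 1.96) -> float:
    """Relative CI half-width for a chain-level read: Poisson visits
    sum across stores, so width scales with 1/sqrt(total panel visits)."""
    total_visits = stores * panel_visits_per_store
    return z / math.sqrt(total_visits)

# Hypothetical 500-store chain at ~80 panel visits/store/quarter,
# vs. reading a single store from the same panel:
print(f"chain: {chain_ci_width(500, 80):.2%}")  # 0.98%
print(f"store: {chain_ci_width(1, 80):.2%}")    # ~21.9%
```

Under this toy model, the 500-store aggregate is over 20× tighter than any single store at the same panel depth, which is why sub-50-store chains stay noisy even on large panels.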
Store-Level Reads: Roughly 10× the Panel Depth
Store-level reads ("how did store #347 perform week-over-week?") need roughly 10× the panel depth of chain reads for comparable confidence, because you've given up the noise-averaging that aggregation across stores provides. Working rules:
For top-quintile-traffic stores in high-density DMAs, a ~60–80M MAIDs/month US panel supports defensible weekly store-level reads with ~10–15% confidence-interval width.
For median-traffic stores, expect 25–40% confidence-interval width on weekly reads — tolerable for directional research, not enough for investment decisions against that specific store.
For bottom-quintile-traffic stores, store-level weekly reads are not defensible from any commercially available panel; aggregate to monthly or pivot to chain-level.
Dense urban DMAs produce narrower confidence intervals than low-density rural DMAs for the same panel depth — a 100K-visit urban store is "higher signal" than a 50K-visit rural store even though both are above median.
The polygon-vs-radius mechanics covered in geofencing best practices interact with store-level panel sizing — bad polygons shrink effective panel depth because they contaminate visits with adjacent-business noise.
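The store-level bands above can be run in reverse: fix a target CI width and solve for the observed panel visits per week that width implies. Under the same Poisson sketch (an illustrative assumption, with a hypothetical function name):

```python
import math

def required_panel_visits(target_rel_width: float, z: float = 1.96) -> float:
    """Observed panel visits needed per window for the Poisson relative
    CI half-width to hit the target. Toy model, not a vendor spec."""
    return (z / target_rel_width) ** 2

# ~10% width (the top-quintile weekly band) vs ~30% (the median-store band):
print(math.ceil(required_panel_visits(0.10)))  # 385 panel visits/week
print(math.ceil(required_panel_visits(0.30)))  # 43 panel visits/week
```

Because the requirement scales with the inverse square of the target width, tightening a read from 30% to 10% needs ~9× the observed visits, which is the mechanic behind the rough 10× store-vs-chain rule.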
DMA-Level Reads for Origin/Destination Work
DMA-level O/D reads — where visitors to a store originate, or where residents of a DMA shop — are somewhere between chain and store in panel-depth demand. Working rules:
A ~60–80M MAIDs/month US panel supports defensible DMA-level O/D reads for top-50 DMAs.
For DMAs 51–100, expect roughly 2× the confidence-interval width; aggregate multiple months if the read matters.
Below top-100 DMA, O/D reads become noisy enough that many vendors flag them as exploratory; cross-reference against Census LEHD LODES for the baseline worker-flow structure.
Cross-border O/D reads (international visitors to US metros) have different mechanics — panel composition by country matters more than raw depth; the 700M+ international MAIDs across 150+ countries in the GSDSI identity graph are what buys coverage here.
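O/D shares are proportion estimates, so their uncertainty follows the binomial rather than the Poisson sketch: the half-width depends on the share itself and on how many distinct panel visitors you observed. A minimal normal-approximation sketch, with hypothetical inputs:

```python
import math

def od_share_ci(share: float, panel_visitors: int, z: float = 1.96) -> float:
    """Normal-approximation CI half-width for an origin-DMA share
    estimate (fraction of a store's visitors from one DMA).
    Illustrative only; small-n shares warrant Wilson intervals."""
    return z * math.sqrt(share * (1 - share) / panel_visitors)

# A DMA contributing 8% of visitors, read from 2,000 vs. 200 panel devices:
print(f"{od_share_ci(0.08, 2000):.2%}")  # ±1.19%
print(f"{od_share_ci(0.08, 200):.2%}")   # ±3.76%
```

At 200 observed visitors, the ±3.76% half-width is nearly half the 8% share being estimated, which is why sub-top-100-DMA O/D reads get flagged as exploratory.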
The Three Panel Diagnostics to Run Before Signing
Pressure-test any panel claim before licensing:
Ask for the panel's observed-to-expected visit ratio for a known-high-traffic reference location (big-box retailer in a top-10 DMA). If it's far from 1.0, panel bias is larger than headline numbers suggest.
Request the DMA-level panel composition — which DMAs are over- and under-represented vs. population share? Adjust confidence intervals accordingly.
Request the supplier-SDK composition — is the panel concentrated in one SDK or diversified? Single-SDK panels carry concentration risk (regulatory, business continuity).
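The first diagnostic above reduces to simple arithmetic once you have a ground-truth visit count for the reference location. All figures below are hypothetical, and the panel-share input is itself an estimate you should pressure-test:

```python
def observed_to_expected(panel_visits: int, known_visits: int,
                         panel_share_of_devices: float) -> float:
    """Diagnostic 1: extrapolate panel visits by the panel's claimed
    device share, then compare against a known visit count.
    A ratio far from 1.0 flags panel bias at that location."""
    implied_total = panel_visits / panel_share_of_devices
    return implied_total / known_visits

# Hypothetical: 18K panel visits at a big-box store, panel claims ~20%
# of US devices, retailer's own count is 100K visits:
ratio = observed_to_expected(18_000, 100_000, 0.20)
print(f"{ratio:.2f}")  # 0.90 — panel under-observes this location by ~10%
```

Run the ratio at several reference locations across DMAs; a ratio that drifts by geography is the second diagnostic (DMA composition skew) showing up in the first.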
For identity-layer considerations that extend beyond raw visit panels, see identity graphs 101. For the MRC mobility audit standards buyers should reference during diligence, the external framework is continually being updated — check the published version current at the time of your procurement.
Frequently Asked Questions
How many MAIDs/month do I need for a chain-level foot-traffic read?
For a national chain with 100+ US stores, a ~60–80M MAIDs/month US panel supports defensible quarterly chain-level reads. Smaller panels (20–40M) can work but widen confidence intervals by 5–10%. Weekly chain reads need roughly 2–3× the panel depth of quarterly reads. Sub-50-store chains are not reliably readable at the chain level on any commercial panel — aggregate to category benchmark instead.
What panel depth supports store-level weekly reads?
Top-quintile-traffic stores in high-density DMAs can be read weekly at ~10–15% confidence-interval width on a ~60–80M MAIDs/month panel. Median-traffic stores widen to 25–40% (directional only). Bottom-quintile-traffic stores are not reliably readable weekly — aggregate to monthly or pivot to chain-level. Polygon quality from POI & Geofencing interacts heavily with effective panel depth; bad polygons shrink it.
How does DMA mix affect panel confidence?
DMA composition matters as much as raw depth. A 60M-MAID panel concentrated in top-20 DMAs will produce narrower confidence intervals for those markets and wider ones for smaller DMAs. Request the DMA-level composition from your vendor before licensing and adjust your analytical expectations accordingly — calibrate against Census BDS business distribution and LEHD LODES worker-flow data.
What diagnostics should I run before signing a mobility-data contract?
Run three pressure-tests. First, observed-to-expected visit ratio against a known high-traffic reference location; a ratio far from 1.0 flags bias. Second, DMA-level panel composition vs. population share. Third, supplier-SDK composition (single-SDK panels carry concentration risk). The MRC mobility audit standards are the external reference; GSDSI publishes its Global Mobility & Location Data methodology openly for this kind of diligence.