TL;DR — Quick Summary
Field data (CrUX, RUM) captures real users and is what Google uses for ranking. Lab data (Lighthouse) provides reproducible testing for debugging. Both are essential — use lab data for diagnosis and iteration, field data for measuring real impact.
What is Field Data vs Lab Data?
Field data collects metrics from actual users in real-world conditions — diverse devices, networks, and locations. The primary field source is CrUX (Chrome User Experience Report). RUM tools provide additional granularity.
Lab data is collected in a controlled, simulated environment (Lighthouse, WebPageTest). Lab data is reproducible and provides detailed diagnostics; field data reflects reality but is aggregated.
Key differences:
- Field data = reality (but noisy, aggregated, delayed). Lab data = simulation (but reproducible, instant, detailed).
- Field data is evaluated at the 75th percentile (p75) over a 28-day window. Lab data is a single snapshot.
- Field data captures all interactions (INP). Lab data captures only the load (TBT serves as a proxy for INP).
- Field data reflects device and network diversity. Lab data simulates one device on one network.
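The p75 figure mentioned above can be computed with the nearest-rank method; a minimal sketch (the LCP sample values are illustrative, not real CrUX data):

```javascript
// Compute the value at the 75th percentile of observed metric samples.
// CrUX reports each Core Web Vital at p75: 75% of page views scored
// at or better than this value.
function p75(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: index of the 75th-percentile observation.
  const idx = Math.ceil(0.75 * sorted.length) - 1;
  return sorted[idx];
}

// Illustrative LCP samples in milliseconds (not real CrUX data).
const lcpSamples = [1200, 1800, 2100, 2600, 3400, 4800, 900, 2200];
console.log(p75(lcpSamples)); // → 2600
```

Note that p75 deliberately ignores the fastest three quarters of visits: a page only scores well when even its slower visits are fast.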
History & Evolution
Key milestones:
- 2005 — Early RUM tools emerge for measuring real-user performance.
- 2008 — WebPageTest is released, establishing synthetic lab testing as standard practice.
- 2016 — Lighthouse launches, bringing lab auditing into Chrome DevTools and CI pipelines.
- 2017 — CrUX launches, giving public access to Chrome field data.
- 2020 — Core Web Vitals announced. Google explicitly states field data (CrUX) is used for rankings.
- 2024 — INP replaces FID, highlighting a field-only metric (lab has no direct equivalent).
- 2025–2026 — The field-vs-lab distinction is well understood. Best practice: diagnose with lab data, measure with field data.
How Field Data vs Lab Data is Measured
Field data tools:
- CrUX (Google's public dataset; powers PageSpeed Insights and Search Console)
- web-vitals JS library (for building custom RUM)
- SpeedCurve, DebugBear, Datadog, New Relic (commercial RUM)
Lab data tools:
- Lighthouse (Chrome DevTools, CLI, CI/CD)
- WebPageTest (detailed waterfall analysis)
- Chrome DevTools Performance panel
Both:
- PageSpeed Insights (shows CrUX field + Lighthouse lab in one view)
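As a sketch of what a custom RUM pipeline built on a field library like web-vitals does with each reported value, the snippet below buckets measurements using the published Core Web Vitals thresholds; the function name and sample values are illustrative:

```javascript
// Published Core Web Vitals thresholds ("good" / "poor" boundaries).
const THRESHOLDS = {
  LCP: { good: 2500, poor: 4000 }, // milliseconds
  INP: { good: 200,  poor: 500  }, // milliseconds
  CLS: { good: 0.1,  poor: 0.25 }, // unitless layout-shift score
};

// Bucket a single field measurement the way CrUX rates it.
function rate(name, value) {
  const t = THRESHOLDS[name];
  if (value <= t.good) return 'good';
  if (value <= t.poor) return 'needs-improvement';
  return 'poor';
}

console.log(rate('LCP', 2100)); // → 'good'
console.log(rate('INP', 350));  // → 'needs-improvement'
```

In a real deployment this rating would be attached to each metric reported by the web-vitals callbacks and beaconed to your analytics endpoint.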
Key rule: Field data (CrUX) determines Google rankings. Lab data (Lighthouse, WebPageTest) is for debugging and iteration.
Common Causes of Field-vs-Lab Score Divergence
Common causes of field-lab divergence:
1. Device diversity — Lab tests one device; field includes budget phones with 2GB of RAM.
2. Network diversity — Lab simulates 4G; field includes 3G and congested networks.
3. Geographic diversity — Lab tests from one location; field includes users far from your servers.
4. Browser extensions — Lab uses a clean Chrome profile; real users run ad blockers and password managers.
5. User behavior — Lab tests cold loads; field includes back/forward navigations and cached visits.
6. Third-party scripts — Lab may not trigger all third-party loads; field captures their full impact.
Frequently Asked Questions
Which affects Google rankings: field data or lab data?
Field data (CrUX). Google explicitly uses CrUX field data for the Page Experience ranking signal. Lighthouse lab scores do not directly affect rankings.
Why do my Lighthouse and CrUX numbers disagree?
Lighthouse simulates one device on one network. CrUX aggregates millions of real users on diverse devices and networks. The p75 of real users is often worse than the lab simulation.
Can field data be worse than lab data?
Yes — very common. If many of your real users have slow devices or connections, field data will be worse than lab data. This means you need to optimize for the long tail of slower experiences.
How can I make lab tests reflect real-user conditions?
Test on budget devices (Moto G Power, Samsung A-series). Throttle to 3G in DevTools. Optimize for the slowest 25% of your user base, not just the median.
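If you run Lighthouse programmatically, the throttling profile can be tightened to approximate those slower users; a config sketch, assuming Lighthouse's documented `settings.throttling` shape (the specific numbers approximate a regular-3G profile and are assumptions, not an official preset):

```javascript
// Lighthouse config sketch: simulate a slower, 3G-class profile instead
// of the default throttling. Numbers are illustrative assumptions.
const config = {
  extends: 'lighthouse:default',
  settings: {
    throttlingMethod: 'simulate',
    throttling: {
      rttMs: 300,               // round-trip time
      throughputKbps: 700,      // downlink bandwidth
      cpuSlowdownMultiplier: 4, // emulate a budget phone's CPU
    },
  },
};
```

In use, this object would be passed as the config argument when invoking Lighthouse from Node, so repeated audits stress the page the way your slowest quartile of users does.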
Is WebPageTest field data or lab data?
WebPageTest is lab data — it runs controlled tests from specific locations and devices. It does not capture real-user data.
Are RUM tools field data, and how do they compare to CrUX?
RUM tools are field data — they capture metrics from real users. They provide more granular data than CrUX (per-page, per-device, per-region breakdowns) but aren't used by Google for ranking.
Can I skip Lighthouse since only field data affects rankings?
No — Lighthouse is essential for debugging. Use Lighthouse to identify problems and verify fixes quickly. Then wait for CrUX field data to confirm the improvement is reflected in real-user experience.
How long until improvements show in CrUX?
CrUX uses a 28-day rolling window. Improvements begin showing as new fast data enters the window, but full improvement takes 28 days as old slow data rolls out.
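The rolling-window arithmetic can be illustrated with a toy simulation. The daily values below are binary (illustrative, not real CrUX data), so the reported p75 appears to flip in one step once enough fast days accumulate; real-world distributions shift more gradually:

```javascript
// Toy model of CrUX's 28-day rolling window. Each day contributes one
// representative LCP value; the reported figure is the p75 of the most
// recent 28 days. Values are illustrative, not real CrUX data.
function windowedP75(dailyValues, day, windowSize = 28) {
  const windowVals = dailyValues
    .slice(Math.max(0, day - windowSize + 1), day + 1)
    .sort((a, b) => a - b);
  return windowVals[Math.ceil(0.75 * windowVals.length) - 1];
}

// 28 slow days (4000 ms LCP) followed by 28 fast days (2000 ms) after a fix.
const days = [...Array(28).fill(4000), ...Array(28).fill(2000)];

console.log(windowedP75(days, 27)); // → 4000 (day before the fix)
console.log(windowedP75(days, 41)); // → 4000 (p75 still dominated by pre-fix days)
console.log(windowedP75(days, 55)); // → 2000 (all pre-fix days rolled out)
```

The mid-window reading shows why patience matters: until most of the 28-day window is post-fix, the slow tail of old data keeps the p75 pinned at its pre-fix level.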
For step-by-step optimization, platform-specific fixes, code examples, and case studies, read our full guide:
The Ultimate Guide to Website Performance Measurement, Tools & Data: Lab, Field & Everything Between in 2026

Struggling with Field Data vs Lab Data?
Request a free speed audit and we'll identify exactly what's holding your scores back.