Official statement
Other statements from this video
- □ Is page speed overrated as a Google ranking factor?
- 4:54 Do you really need to respect Google's 500 KB per-page limit?
- 7:25 Why doesn't fixing a Lighthouse recommendation always speed up your page as much as promised?
- 11:21 Is AMP really useless for Google ranking?
- 14:02 Do you really need a Lighthouse score of 100 to rank better on Google?
Lighthouse measures performance under perfect lab conditions, not what your actual users experience. Martin Splitt emphasizes that these scores are produced from your machine over a stable connection, while your mobile visitors often browse on unstable networks. To evaluate the real experience, you need to combine CrUX and Google Analytics — two sources that capture real-world data.
What you need to understand
What’s the difference between lab data and real-world data?
Lighthouse simulates page loading in a controlled environment: stable connection, powerful CPU, optimal conditions. It's useful for diagnosing technical issues, but it doesn’t tell you what your real users go through. A site can score 95/100 on Lighthouse and perform poorly on a mobile device in 3G.
Real-world data comes from the Chrome User Experience Report (CrUX) and reflects Core Web Vitals metrics measured from real visitors in real conditions: fluctuating connections, multitasking, low battery, all of it. This dataset is what Google uses to evaluate user experience in its ranking algorithms.
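CrUX buckets each field measurement against Google's published thresholds (good / needs improvement / poor). A minimal sketch of that bucketing in Python, using the documented LCP, FID, and CLS cutoffs (the function and variable names are illustrative, not part of any API):

```python
# Google's documented Core Web Vitals thresholds:
# LCP: good <= 2.5 s, poor > 4 s; FID: good <= 100 ms, poor > 300 ms;
# CLS: good <= 0.10, poor > 0.25.
THRESHOLDS = {
    "lcp": (2.5, 4.0),    # seconds
    "fid": (100, 300),    # milliseconds
    "cls": (0.10, 0.25),  # unitless layout-shift score
}

def classify(metric, value):
    """Bucket a field measurement the way CrUX does: good / needs improvement / poor."""
    good, poor = THRESHOLDS[metric]
    if value <= good:
        return "good"
    if value <= poor:
        return "needs improvement"
    return "poor"

print(classify("lcp", 1.2))  # a fast load -> "good"
print(classify("lcp", 5.0))  # a slow mobile load -> "poor"
```

This is exactly why a 95/100 lab score can coexist with a "poor" field bucket: the classification runs on what real visitors measured, not on the simulated run.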
Why does Google emphasize this distinction so much?
Because too many SEOs optimize for Lighthouse instead of optimizing for real users. You can spend weeks chasing 5 extra points on PageSpeed Insights without moving the Core Web Vitals measured by CrUX at all. The result: zero impact on ranking.
Google Analytics complements CrUX by allowing you to segment your audiences. You might discover that your iOS visitors load the page in 1.2s while your Android users struggle at 5s. CrUX aggregates, Analytics disaggregates — and it's this granularity that allows you to take effective action.
How does CrUX collect this real-world data?
CrUX relies on Chrome users who have opted into sharing usage data. This represents millions of real visitors loading your pages in their daily context: train, office, couch, subway. Metrics are measured at the time of loading, not simulated afterwards.
The report is updated monthly and accessible via PageSpeed Insights, BigQuery, or the CrUX API. But beware: CrUX only reports data if your site receives a sufficient volume of Chrome traffic. Smaller sites may not appear in the public dataset.
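For programmatic access, the CrUX API exposes the same field data as PageSpeed Insights. A hedged sketch, assuming the public `records:queryRecord` endpoint and its documented response shape (the helper names are placeholders, and field names should be verified against the current API reference):

```python
import json
import urllib.error
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def fetch_crux_record(origin, api_key, form_factor="PHONE"):
    """Query the CrUX API for an origin. Returns the parsed JSON response,
    or None when the origin lacks sufficient Chrome traffic (HTTP 404)."""
    body = json.dumps({"origin": origin, "formFactor": form_factor}).encode()
    req = urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)
    except urllib.error.HTTPError as err:
        if err.code == 404:  # too little traffic: the site is absent from CrUX
            return None
        raise

def p75(record, metric="largest_contentful_paint"):
    """Pull the 75th-percentile field value CrUX reports for a metric."""
    return float(record["record"]["metrics"][metric]["percentiles"]["p75"])
```

A `None` return is itself a diagnostic: it is the programmatic version of "your site is not in the public dataset", the cue to fall back on your own RUM.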
- Lighthouse = controlled environment, ideal for diagnosing reproducible technical issues
- CrUX = real user experience, the only metric that matters for Core Web Vitals ranking
- Google Analytics = real-world segmentation, to identify which audience segments are suffering the most
- A good Lighthouse score does not guarantee a good CrUX score if your users navigate under degraded conditions
- CrUX requires a minimum volume of Chrome traffic to appear in the public dataset
SEO Expert opinion
Is this distinction truly respected in SEO practice?
Let’s be honest: most SEOs and developers still optimize for Lighthouse. It’s easier and quicker, and it produces a colorful score that reassures clients. The problem is that this score does not reflect real-world conditions, and Google knows it very well. Hence Splitt's insistence on the distinction.
I have seen sites with a Lighthouse score of 98/100 fail miserably on CrUX because their infrastructure couldn't handle peak load. Conversely, sites with a Lighthouse score of 75 but a solid infrastructure and good CDN passed the CrUX thresholds without an issue. The production environment matters more than the lab.
Is CrUX truly reliable for all sites?
No, and this is where it gets tricky. CrUX aggregates data from millions of Chrome users, but that creates two major blind spots. First: if your traffic comes primarily from Safari (iOS), CrUX only captures part of your real user experience. Second: sites with low Chrome traffic do not appear in the public CrUX data at all, leaving you flying blind.
Google Analytics and Real User Monitoring (RUM) then become essential. Tools like Sentry, Datadog, or even a simple custom script allow you to measure Core Web Vitals across 100% of your traffic, regardless of browser. [To verify]: Google has never explicitly confirmed whether non-Chrome data influences ranking, but everything indicates that CrUX remains the primary source.
So should we ignore Lighthouse altogether?
Certainly not. Lighthouse remains an indispensable technical diagnostic tool for identifying reproducible issues: JavaScript blocking rendering, unoptimized images, lack of caching. It’s a starting point, not an end in itself.
The right approach? Use Lighthouse to detect issues, then validate the real impact via CrUX and Analytics. If a change improves Lighthouse but degrades CrUX, you have a problem with infrastructure or real load. If Lighthouse stagnates but CrUX improves, you have optimized what truly matters — the final user experience.
Practical impact and recommendations
What concrete steps should you take to measure real performance?
First step: check your presence in CrUX. Go to PageSpeed Insights, enter your URL, and scroll down to the section "Discover what your real users are experiencing". If you see data, you are in the dataset. If not, install RUM.
Second step: cross-reference CrUX with Google Analytics. Create custom events to track LCP, FID, and CLS on your critical pages. Segment by device, connection, geography. You will find that your Android users in Southeast Asia are struggling while your desktop visitors in Europe are cruising.
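The segmentation step above can be sketched as a small aggregation over exported measurements, computing the per-segment 75th percentile, which is the statistic CrUX reports. The function name, input shape, and nearest-rank method are illustrative choices, not a Google Analytics API:

```python
import math
from collections import defaultdict

def p75_by_segment(samples):
    """samples: (segment, lcp_seconds) pairs from a RUM or Analytics export.
    Returns the 75th-percentile LCP per segment (nearest-rank method)."""
    buckets = defaultdict(list)
    for segment, lcp in samples:
        buckets[segment].append(lcp)
    result = {}
    for segment, values in buckets.items():
        values.sort()
        idx = math.ceil(0.75 * len(values)) - 1  # nearest-rank p75
        result[segment] = values[idx]
    return result

samples = [
    ("android", 4.8), ("android", 4.9), ("android", 5.0), ("android", 5.2),
    ("ios", 1.0), ("ios", 1.1), ("ios", 1.2), ("ios", 1.3),
]
print(p75_by_segment(samples))  # the iOS-vs-Android gap in one line
```

An aggregate p75 over all eight samples would hide the problem entirely; splitting by segment is what surfaces the struggling Android cohort.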
What errors should you avoid in data interpretation?
Never rely solely on Lighthouse to make ranking decisions. I’ve seen teams spend entire sprints optimizing micro-details that only affected the lab score, with zero impact in the real world. CrUX is the ground truth, Lighthouse is an indirect indicator.
Another pitfall: ignoring the distribution of metrics in CrUX. PageSpeed Insights shows you the percentage of URLs that pass thresholds (good/to improve/bad). If 60% of your URLs are "good" but 40% are "bad," you have a consistency problem — likely unoptimized heavy pages or infrastructure buckling under certain loads.
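Checking that distribution is a one-function job once you have a per-URL status export; the dict shape below is an assumed export format, not a PageSpeed Insights API response:

```python
def status_distribution(url_statuses):
    """url_statuses: dict mapping URL -> CrUX bucket
    ('good', 'needs improvement', 'poor').
    Returns the percentage share of URLs in each bucket."""
    total = len(url_statuses)
    counts = {"good": 0, "needs improvement": 0, "poor": 0}
    for status in url_statuses.values():
        counts[status] += 1
    return {bucket: round(100 * n / total, 1) for bucket, n in counts.items()}

statuses = {"/": "good", "/blog": "good", "/about": "good",
            "/shop": "poor", "/search": "poor"}
print(status_distribution(statuses))  # the 60/40 consistency problem
```

A 60/40 split like this one is the signal to diff the two groups: the "poor" URLs usually share a template, a heavy asset, or an infrastructure path.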
How can you maintain effective monitoring of these metrics?
Set up automated dashboards that aggregate CrUX (via BigQuery or the API), Google Analytics, and your RUM data. Implement alerts when metrics degrade on critical segments. A 10% drop in mobile LCP is a red flag.
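The 10% red-flag rule can be encoded as a trivial alert check to drop into such a dashboard; the function name and default threshold are illustrative:

```python
def lcp_regression_alert(previous_p75, current_p75, threshold=0.10):
    """Flag a degradation when p75 LCP worsens by more than `threshold`
    relative to the previous period (10% by default)."""
    if previous_p75 <= 0:
        return False
    return (current_p75 - previous_p75) / previous_p75 > threshold

# Mobile p75 LCP going from 2.0 s to 2.3 s is a 15% regression: alert fires.
print(lcp_regression_alert(2.0, 2.3))
# A 5% wobble (2.0 s -> 2.1 s) stays below the threshold: no alert.
print(lcp_regression_alert(2.0, 2.1))
```

Comparing relative change rather than absolute values keeps the same rule usable across segments with very different baselines.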
Integrate these metrics into your CI/CD pipeline. Tools like Lighthouse CI can run in pre-production to detect regressions before deployment. But remember: always validate the post-deployment impact on CrUX, not just on Lighthouse.
- Check your site's presence in the public CrUX dataset via PageSpeed Insights
- Install a Real User Monitoring (RUM) tool to capture 100% of traffic, all browsers
- Create Google Analytics events to track LCP, FID, CLS by audience segment
- Cross-reference CrUX and Analytics data to identify segments that suffer the most
- Never optimize solely for Lighthouse — always validate the impact on CrUX
- Set up automatic alerts for degradation in real-world Core Web Vitals
❓ Frequently Asked Questions
Can Lighthouse influence Google rankings even though it produces lab data?
My site doesn't appear in CrUX: how can I measure my real performance?
Does CrUX account for Safari and Firefox users?
How often is CrUX updated?
Can you improve CrUX without touching Lighthouse?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 14 min · published on 27/07/2020