
Official statement

Google does not rank pages based on a specific Lighthouse score. Pages are categorized into groups (slow, average, fast). Moving from a Lighthouse score of 90 to 95 does not make a difference for the ranking. The important thing is to get out of the 'slow' category and reach the 'fast' category.
🎥 Source video

Extracted from a Google Search Central video

⏱ 14:32 💬 EN 📅 27/07/2020 ✂ 6 statements
Watch on YouTube (14:02) →
Other statements from this video (5)
  1. Is page speed overrated as a Google ranking factor?
  2. 4:54 Should you really respect the 500 KB per page limit imposed by Google?
  3. 7:25 Why doesn't fixing a Lighthouse recommendation always speed up your page as much as promised?
  4. 8:47 Why doesn't Lighthouse reflect your site's real performance?
  5. 11:21 Is AMP really useless for Google ranking?
TL;DR

Google does not rank pages based on a specific Lighthouse score. Pages are grouped into three buckets: slow, average, fast. Moving from 90 to 95 on Lighthouse does not change the ranking. The SEO goal is to move out of the 'slow' bucket and into 'fast', not to optimize every millisecond.

What you need to understand

What is a bucket ranking system?

Google does not operate like an exam where every point matters. The engine uses a threshold-based categorization system to assess page speed. This ranking structure consists of three main buckets: slow, average, fast.

In practical terms? A page with a Lighthouse score of 75 and another at 95 can very well end up in the same 'fast' bucket. Their treatment by the algorithm will be identical. It is crossing the thresholds that matters, not linear score progression.
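The threshold logic above can be sketched in a few lines of Python. The cutoff values (50 and 70) are purely hypothetical placeholders, since Google does not publish its internal bucket boundaries:

```python
def speed_bucket(lighthouse_score: int) -> str:
    """Classify a Lighthouse score into a speed bucket.

    The thresholds (50 and 70) are illustrative only:
    Google does not disclose its internal bucket boundaries.
    """
    if lighthouse_score < 50:
        return "slow"
    if lighthouse_score < 70:
        return "average"
    return "fast"

# A page at 75 and a page at 95 land in the same bucket,
# so the algorithm treats them identically.
print(speed_bucket(75), speed_bucket(95))  # fast fast
```

The point of the sketch: inside a bucket, score differences are invisible to the ranking; only crossing a boundary changes anything.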

Why does Google adopt this approach instead of a continuous scoring method?

A bucket system simplifies the algorithmic processing of the billions of pages crawled daily. Comparing precise scores down to the hundredth of a point would slow down calculations without providing measurable user value.

This logic also reflects the reality of the actual user experience. A user does not perceive the difference between 2.4 and 2.5 seconds of loading. The thresholds correspond to tangible experience levels — where the user shifts from a 'fast' perception to 'acceptable' and then 'frustrating'.

How do these buckets relate to the Core Web Vitals?

The Core Web Vitals (LCP, INP, CLS) define the metrics that feed this bucket system. Each metric has its own thresholds: good, needs improvement, poor. Google aggregates this on-the-ground data from the Chrome User Experience Report.
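As a sketch, the documented 'good / needs improvement / poor' thresholds can be applied to a p75 field value like this. These are the public Core Web Vitals bars; nothing confirms the ranking algorithm uses exactly these values internally:

```python
# Public Core Web Vitals thresholds: (good_max, poor_min).
# LCP and INP in milliseconds, CLS unitless.
CWV_THRESHOLDS = {
    "LCP": (2500, 4000),
    "INP": (200, 500),
    "CLS": (0.1, 0.25),
}

def rate_metric(metric: str, value: float) -> str:
    """Rate a p75 field value against the public CWV thresholds."""
    good_max, poor_min = CWV_THRESHOLDS[metric]
    if value <= good_max:
        return "good"
    if value <= poor_min:
        return "needs improvement"
    return "poor"

print(rate_metric("LCP", 2300))  # good
print(rate_metric("INP", 450))   # needs improvement
print(rate_metric("CLS", 0.3))   # poor
```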

The Lighthouse score remains a diagnostic tool in a lab, useful for identifying technical barriers. But the final ranking relies on real user data, not on a DevTools simulation.

  • Google ranks pages into three speed buckets: slow, average, fast
  • Crossing the thresholds matters, not the linear progression of the Lighthouse score
  • The Core Web Vitals measure the actual user experience via CrUX
  • A Lighthouse score of 90+ does not automatically guarantee the 'fast' bucket
  • The practical goal: move out of the 'slow' bucket to reach 'fast', not aim for perfection

SEO Expert opinion

Is this statement consistent with real-world observations?

Yes, in principle. Tests conducted on dozens of e-commerce and media sites show that a site can improve its Lighthouse score from 65 to 85 without visible ranking gains. However, moving from a CrUX 'poor' to 'good' on LCP often triggers positive movements in the SERPs.

The critical nuance: Martin Splitt does not specify where the exact thresholds of these buckets lie. Google publicly communicates the thresholds for the Core Web Vitals (LCP < 2.5s, INP < 200ms, CLS < 0.1 for 'good'), but nothing confirms that the ranking algorithm uses exactly these values. [To be verified]

What apparent contradictions need to be clarified?

Splitt talks about Lighthouse, but Google's ranking relies on CrUX (real user data). A site can score 95 on Lighthouse and still fall into the 'slow' bucket if its real visitors, on mobile 3G in poorly covered areas, experience an LCP of 4 seconds.

Conversely, a mediocre Lighthouse score in DevTools does not prevent a good ranking if the CrUX ground data is solid. This confusion between lab metrics and field data is the source of most SEO misunderstandings about speed.

Warning: do not rely solely on Lighthouse to guide your speed optimizations. Consult the CrUX report in PageSpeed Insights or Search Console to see how your real users experience your site.

In what contexts does this bucket rule apply differently?

For ultra-competitive queries, where dozens of pages compete in the 'fast' bucket, other signals (content, links, authority) take over. Speed then becomes an elimination filter, not a discriminating factor.

In low-competition niches, even a page in the 'average' bucket can rank first if the content significantly surpasses the alternatives. The relative weight of speed varies depending on the overall quality gap between the top 10 candidates.

Practical impact and recommendations

What should you do concretely to change buckets?

Stop tracking Lighthouse points as an end goal. Focus on the three Core Web Vitals measured by CrUX: LCP, INP, CLS. Identify in Search Console or PageSpeed Insights the URLs that fail on these real metrics.

Prioritize strategic pages: e-commerce categories, high-traffic organic landing pages, key product pages. Optimizing a contact page or a legal notices page will yield zero ranking ROI. The bucket is calculated URL by URL, not at the domain level.

What mistakes should be avoided in this quest for speed?

Do not sacrifice the actual user experience for a lab score. Lazy-loading all your images might improve Lighthouse but degrade INP if the user scrolls quickly. Removing visuals to save weight can kill your conversion rate.

Avoid the obsession with the perfect bucket on 100% of pages. A site with 10,000 URLs does not need every product listing to be in 'fast'. Focus your resources on the 20% of pages that generate 80% of organic traffic.
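The 80/20 prioritization described above can be automated from an analytics export. The URLs and traffic figures below are invented for illustration:

```python
def priority_urls(traffic_by_url: dict, share: float = 0.8) -> list:
    """Return the smallest set of URLs covering `share` of organic traffic."""
    total = sum(traffic_by_url.values())
    selected, covered = [], 0
    for url, visits in sorted(traffic_by_url.items(), key=lambda kv: -kv[1]):
        if covered >= share * total:
            break
        selected.append(url)
        covered += visits
    return selected

# Hypothetical analytics export: URL -> monthly organic sessions.
traffic = {"/category/shoes": 8000, "/product/air-max": 5000,
           "/blog/guide": 1500, "/contact": 120, "/legal": 30}
print(priority_urls(traffic))  # ['/category/shoes', '/product/air-max']
```

Two URLs out of five carry over 80% of the traffic here; those are the ones worth moving between buckets first.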

How can I effectively track my progress between buckets?

Use the Core Web Vitals report in Search Console to monitor the volume of URLs in each category (good, needs improvement, poor). This is your strategic dashboard — far more reliable than the average Lighthouse score of your site.

Implement CrUX monitoring via BigQuery or tools like Treo or the CrUX Dashboard to track the weekly evolution of your real metrics. The public CrUX data has a 28-day latency, so real-time monitoring via RUM (Real User Monitoring) helps bridge this gap.
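For the CrUX monitoring mentioned above, Google's public CrUX API (`queryRecord` endpoint) returns per-metric p75 percentiles. Below is a minimal sketch of parsing a response of that shape; the payload is a hand-written stand-in, not real data:

```python
import json

# Simplified stand-in for a CrUX API queryRecord response
# (the real API is a POST to
#  https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=...).
sample_response = json.loads("""
{
  "record": {
    "key": {"origin": "https://example.com"},
    "metrics": {
      "largest_contentful_paint": {"percentiles": {"p75": 3100}},
      "interaction_to_next_paint": {"percentiles": {"p75": 180}},
      "cumulative_layout_shift": {"percentiles": {"p75": "0.08"}}
    }
  }
}
""")

def p75(response: dict, metric: str) -> float:
    """Extract a metric's 75th-percentile field value."""
    return float(response["record"]["metrics"][metric]["percentiles"]["p75"])

lcp = p75(sample_response, "largest_contentful_paint")
print(f"LCP p75: {lcp} ms -> {'good' if lcp <= 2500 else 'not good'}")
```

Tracked weekly, a p75 series like this shows whether your optimizations are actually moving real users across a threshold.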

These technical optimizations often require cross-disciplinary skills (front-end development, server infrastructure, CDN, waterfall analysis) that few internal teams fully master. For quick and sustainable gains, support from an SEO agency specialized in web performance can significantly accelerate your move to a higher bucket.

  • Audit the real Core Web Vitals (CrUX) via Search Console and PageSpeed Insights
  • Identify strategic URLs that are blocked in 'slow' or 'needs improvement'
  • Prioritize LCP (hero image, server TTFB, blocking CSS) and INP (third-party JS, event handlers)
  • Validate optimizations based on ground data, not just in the Lighthouse lab
  • Monitor weekly progress via CrUX Dashboard or RUM
  • Do not sacrifice UX or conversion for a cosmetic score
The goal is not to reach 100 on Lighthouse, but to cross the threshold that shifts your key pages from the 'slow' or 'average' bucket to 'fast'. Focus your efforts on the Core Web Vitals measured by CrUX, not on lab metrics. Prioritize pages with high SEO and business impact, track your progress in Search Console, and validate each optimization based on the actual user experience.

❓ Frequently Asked Questions

Does a Lighthouse score of 100 guarantee the best possible ranking?
No. Lighthouse measures performance in the lab, not real user experience. Google ranks according to CrUX field data and other ranking signals. A perfect score without good CrUX data is not enough.
What are the exact thresholds of the slow, average, and fast buckets?
Google does not disclose the internal thresholds of its ranking buckets. The public Core Web Vitals thresholds (LCP < 2.5s, INP < 200ms, CLS < 0.1) serve as a reference, but nothing proves the algorithm uses exactly these values.
Is the bucket calculated at the domain level or per URL?
Each URL is evaluated individually. One page can sit in the 'fast' bucket while another on the same domain remains 'slow'. That is why strategic pages should be prioritized.
Should you optimize every page of a large e-commerce site?
No, that is inefficient. Focus on the categories, flagship product pages, and organic landing pages that generate SEO traffic. Optimizing the 20% of strategic URLs is often enough to capture most of the gains.
Why is my Lighthouse score good while Search Console reports Core Web Vitals issues?
Lighthouse simulates an ideal lab load. CrUX measures your real users, on mobile 3G, with variable connections and heterogeneous hardware. The gap reflects field reality versus the lab.

