Official statement
Other statements from this video (36)
- 1:02 Should you ignore the Lighthouse score to optimize your SEO?
- 1:02 Is page speed really a Google ranking factor?
- 1:42 Are Lighthouse and PageSpeed Insights really useless for ranking?
- 2:38 Do Google's Web Vitals really model user experience?
- 3:40 Is page speed really as decisive a ranking factor as claimed?
- 7:07 Should you really inject the canonical tag via JavaScript?
- 7:27 Can you really inject the canonical tag via JavaScript without SEO risk?
- 8:28 Does Google Tag Manager really slow down your site, and should you drop it?
- 8:31 Does GTM really sabotage your load time?
- 9:35 Is serving a 404 to Googlebot and a 200 to visitors really cloaking?
- 10:06 Serving a 404 to Googlebot and a 200 to users: is it really cloaking?
- 16:16 Are 301, 302, and JavaScript redirects really equivalent for SEO?
- 16:58 Are JavaScript redirects really equivalent to 301s for Google?
- 17:18 Is server-side rendering really essential for Google rankings?
- 17:58 Should you really invest in server-side rendering for SEO?
- 19:22 Does serialized JSON in your JavaScript apps count as duplicate content?
- 20:02 Does JSON application state in the DOM create duplicate content?
- 20:24 Does Cloudflare Rocket Loader pass Googlebot's SEO test?
- 20:44 Should you test Cloudflare Rocket Loader and third-party tools before enabling them, for SEO's sake?
- 21:58 Should you ignore 'Other Error' messages in Search Console and the Mobile-Friendly Test?
- 23:18 Should you really worry about the 'Other Error' status in Google's testing tools?
- 27:58 Should you choose one JavaScript framework over another for SEO?
- 31:27 Does JavaScript really consume crawl budget?
- 31:32 Does JavaScript rendering consume crawl budget?
- 33:07 Should you abandon dynamic rendering for SEO?
- 33:17 Should you really abandon dynamic rendering for search rankings?
- 34:01 Should you really abandon client-side JavaScript to get product links indexed?
- 34:21 Does asynchronous post-load JavaScript really block Google indexing?
- 36:05 Should you really switch to a dedicated server to improve your SEO?
- 36:25 Shared or dedicated server: does Google really tell the difference?
- 40:06 Is client-side hydration really an SEO problem?
- 40:06 Is SSR + client hydration really safe for Google SEO?
- 42:12 Should you stop watching the overall Lighthouse score and focus on the Core Web Vitals metrics relevant to your site?
- 45:24 Will 5G really speed up your site, or is it an illusion?
- 49:09 Does Googlebot really ignore your WebP images served via Service Workers?
- 49:09 Why does Googlebot ignore your WebP images served by a Service Worker?
Martin Splitt asserts that the overall Lighthouse score is a rough indicator, not an absolute target. Obsessing over 100/100 means wasting time on optimizations with diminishing returns once you pass 95. What truly matters is identifying which individual metric (LCP, FID, CLS) is critical for your site type and prioritizing it, even at the expense of the overall score.
What you need to understand
Why does Google downplay the importance of the overall Lighthouse score?
Lighthouse generates a composite score that aggregates several performance metrics with predefined weights. The problem? These weights are generic and may not correspond to the actual priorities of your site.
An e-commerce site where users compare products requires smooth interactivity (excellent FID/INP). A content-driven media site must display its main article quickly (critical LCP). Blindly chasing 100/100 can push you to optimize aspects that are secondary to your business instead of the factors that truly affect user experience and ranking.
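To make the weighting concrete, here is a minimal TypeScript sketch of how such a composite score is assembled. The weights are those documented for Lighthouse v10 (FCP 10%, Speed Index 10%, LCP 25%, TBT 30%, CLS 25%); the per-metric scores, which the real tool derives from log-normal scoring curves, are hypothetical values hard-coded for illustration.

```ts
// Weights documented for the Lighthouse v10 performance category.
const WEIGHTS: Record<string, number> = {
  'first-contentful-paint': 0.10,
  'speed-index': 0.10,
  'largest-contentful-paint': 0.25,
  'total-blocking-time': 0.30,
  'cumulative-layout-shift': 0.25,
};

// Hypothetical per-metric scores (0-1) for a content site:
// excellent interactivity and stability, but a weak LCP.
const metricScores: Record<string, number> = {
  'first-contentful-paint': 0.98,
  'speed-index': 0.95,
  'largest-contentful-paint': 0.70, // the metric that actually matters here
  'total-blocking-time': 1.0,
  'cumulative-layout-shift': 1.0,
};

const composite = Object.entries(WEIGHTS)
  .reduce((sum, [metric, weight]) => sum + weight * metricScores[metric], 0);

console.log(Math.round(composite * 100)); // ≈ 92: a "green" score hiding a weak LCP
```

The point of the sketch: a content site whose only truly critical metric is mediocre can still display a reassuring composite score.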
What does a score of 95 really mean compared to 100?
Beyond 90-95, you enter a zone where each point gained requires an exponential technical investment for marginal impact. Splitt puts it bluntly: it's fine-tuning with diminishing returns.
A score of 5, or even 50? That is urgent: your site has structural issues massively degrading the experience. Between 95 and 100, you are likely optimizing micro-details that will not change user behavior or Google rankings.
How do I identify the metric that truly matters for my site?
Splitt proposes a pragmatic approach: analyze your individual metrics according to your site type. An interactive web application (SaaS, online tool) should prioritize FID (or its successor INP) because users constantly click, type, and interact with the interface.
Conversely, an editorial or e-commerce site lives or dies by its LCP: if the product image or article title takes 4 seconds to appear, the user leaves. CLS matters for everyone but becomes critical on mobile, where every layout shift triggers frustrating accidental taps.
- A high overall Lighthouse score does NOT guarantee that YOUR critical metrics are good
- Analyzing metrics individually according to site type (content, app, e-commerce) is more relevant than the composite score
- A score of 95+ indicates that the essentials are done — going beyond often leads to unproductive fixation
- A very low score (below 50) signals urgent structural issues that need to be addressed
- Lighthouse weights are generic and may not reflect your business priorities
SEO Expert opinion
Does this approach truly change the game for SEOs?
Let's be honest: many agencies and clients got trapped in a race for the perfect score after the rollout of Core Web Vitals as a ranking factor. Splitt's statement confirms a field reality that experienced practitioners already know.
What's new is that Google is saying it explicitly. This legitimizes an approach we were already advocating: optimize for the user and for business-critical metrics, not for a number displayed in a tool. The challenge remains convincing a client that a score of 93 is sufficient when they see a competitor at 98.
What nuances should we add to this statement?
Splitt does not specify at what exact threshold the returns truly become diminishing. He mentions 95, but it's an approximation — some sites can benefit from optimizations up to 97-98 depending on their competitive context. [To be verified]: does Google apply different thresholds depending on the vertical?
Another crucial point: this logic applies to the composite score, but Google has never stated that the individual Core Web Vitals thresholds (good/improve/bad) are negotiable. An LCP of 3.5s remains problematic even if your overall score is 92 thanks to excellent other metrics.
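To fix orders of magnitude, here is a small sketch applying the field-data thresholds Google publishes (LCP good ≤ 2.5 s, poor > 4 s; INP good ≤ 200 ms, poor > 500 ms; CLS good ≤ 0.1, poor > 0.25) to a p75 value:

```ts
type Rating = 'good' | 'needs improvement' | 'poor';

// Google's published Core Web Vitals thresholds, applied to the 75th
// percentile of real-user data: [good at or below, poor above].
const THRESHOLDS = {
  lcpMs: [2500, 4000],
  inpMs: [200, 500],
  cls: [0.1, 0.25],
} as const;

function rate(metric: keyof typeof THRESHOLDS, p75: number): Rating {
  const [good, poor] = THRESHOLDS[metric];
  if (p75 <= good) return 'good';
  return p75 > poor ? 'poor' : 'needs improvement';
}

// The scenario from the paragraph above: a 3.5 s LCP stays problematic
// regardless of how strong the composite score is.
console.log(rate('lcpMs', 3500)); // "needs improvement"
```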
In what cases does this rule not fully apply?
If you operate in an ultra-competitive niche where all major players already have scores of 95+, every point can constitute a micro-advantage. General news sites facing giants like Le Monde or Le Figaro cannot afford to neglect details.
Similarly, for Progressive Web Apps or sites with a heavy application dimension, aiming for excellence across all metrics remains relevant because the user experience is the product itself. A slow SaaS tool loses subscribers, not just ranking.
Practical impact and recommendations
What should you actually do with your Lighthouse score?
Use Lighthouse as an initial diagnostic, not as a daily dashboard. Run an audit, identify the critical red flags (unoptimized images, render-blocking JavaScript, missing caching), and fix them until you reach the 85-95 range.
Then, shift your focus to real user CrUX data accessible via PageSpeed Insights, Search Console, or the Chrome UX Report. These are the actual metrics, measured on your visitors, that influence ranking — not your score in a lab environment.
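As an illustration, the PageSpeed Insights v5 API returns both views in a single call: the lab result under `lighthouseResult` and the CrUX field data under `loadingExperience`. A minimal sketch (the target URL is a placeholder; the optional `key` parameter and error handling are omitted):

```ts
// Query PageSpeed Insights v5 for one URL; compare lab vs. field data.
const endpoint = 'https://www.googleapis.com/pagespeedonline/v5/runPagespeed';
const params = new URLSearchParams({
  url: 'https://example.com/', // placeholder: your page here
  strategy: 'mobile',
  category: 'performance',
});

const res = await fetch(`${endpoint}?${params}`);
const data = await res.json();

// The lab score, computed in a controlled environment...
console.log('Lab score:', data.lighthouseResult.categories.performance.score * 100);

// ...versus what real Chrome users experienced over the last 28 days.
const fieldLcp = data.loadingExperience?.metrics?.LARGEST_CONTENTFUL_PAINT_MS;
console.log('Field LCP p75 (ms):', fieldLcp?.percentile, fieldLcp?.category);
```

If the two disagree, trust the field data: that is what feeds the ranking signal.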
What mistakes should be avoided in Core Web Vitals optimization?
The classic mistake: blindly applying every Lighthouse recommendation without prioritizing by business impact. You could spend three days shaving 2 points off the score by optimizing third-party fonts while your LCP is dragged down by an unoptimized (or wrongly lazy-loaded) hero image.
Another trap: ignoring the variability of real-world conditions. A lab score of 98 on a MacBook Pro with fiber says nothing about performance on a mid-range Android phone over unstable 4G. Test on devices and connections representative of your audience; that is where it gets tricky.
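One way to get closer in the lab is to throttle Lighthouse itself. A sketch using the `lighthouse` and `chrome-launcher` npm packages (option names follow their documented Node APIs; the throttling values approximate a mid-range phone on slow 4G and are assumptions to adapt to your audience):

```ts
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });

const result = await lighthouse('https://example.com/', {
  port: chrome.port,
  onlyCategories: ['performance'],
  formFactor: 'mobile',
  screenEmulation: { mobile: true, width: 360, height: 640, deviceScaleFactor: 2, disabled: false },
  // Roughly a slow 4G link plus a 4x CPU slowdown: closer to a mid-range
  // Android phone than an unthrottled MacBook on fiber.
  throttling: { rttMs: 150, throughputKbps: 1638, cpuSlowdownMultiplier: 4 },
});

console.log('Mobile perf score:', (result?.lhr.categories.performance.score ?? 0) * 100);
await chrome.kill();
```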
How do I ensure my optimizations truly impact ranking?
Install the CrUX Dashboard for your domain and monitor the monthly trend of the 75th percentile (the percentile Google evaluates) for LCP, FID/INP, and CLS. If your lab optimizations do not translate into better p75 field data, you are probably optimizing the wrong levers.
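If you prefer raw data to the dashboard, the same p75 values can be pulled from the CrUX API directly. A sketch assuming a valid Google API key exposed as `CRUX_API_KEY` and `https://example.com` as a placeholder origin:

```ts
// Query the Chrome UX Report API for origin-level p75 values (mobile).
const key = process.env.CRUX_API_KEY; // assumption: your API key lives here
const res = await fetch(
  `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${key}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      origin: 'https://example.com', // placeholder origin
      formFactor: 'PHONE',
      metrics: [
        'largest_contentful_paint',
        'interaction_to_next_paint',
        'cumulative_layout_shift',
      ],
    }),
  },
);

const { record } = await res.json();
// Each metric exposes percentiles.p75: the number the "good" thresholds apply to.
for (const [name, data] of Object.entries<any>(record.metrics)) {
  console.log(name, 'p75:', data.percentiles.p75);
}
```

Log these monthly and you have the same trend line the CrUX Dashboard would show you.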
Then correlate with your Search Console data: average positions, impressions, CTR on your key pages. The impact of Core Web Vitals is documented but remains modest compared to content quality and authority; do not sacrifice those fundamentals to chase 3 extra score points.
- Run a Lighthouse audit to identify critical structural issues (score < 70)
- Prioritize fixing the critical metric for your site type (LCP for content, FID/INP for apps)
- Stop lab optimization around 90-95 and switch to monitoring CrUX real-world data
- Test on real devices and connections representative of your audience
- Track the monthly CrUX 75th percentile, not the daily Lighthouse score
- Correlate Core Web Vitals improvements with actual ranking changes in Search Console
❓ Frequently Asked Questions
Is a Lighthouse score of 95 enough to rank well on Google?
Which Lighthouse metric should an e-commerce site prioritize?
Why is my Lighthouse score excellent but my Core Web Vitals mediocre?
Should you ignore Lighthouse and focus solely on CrUX?
A competitor has a higher Lighthouse score but ranks worse: why?
🎥 From the same video
Other SEO insights extracted from this Google Search Central video · duration 51 min · published on 12/05/2020