Official statement
Other statements from this video (36)
- 1:02 Should you ignore the Lighthouse score to optimize your SEO?
- 1:02 Is page speed really a Google ranking factor?
- 1:42 Are Lighthouse and PageSpeed Insights really useless for ranking?
- 2:38 Do Google's Web Vitals really model user experience?
- 3:40 Is page speed really as decisive a ranking factor as claimed?
- 7:07 Should you really inject the canonical tag via JavaScript?
- 7:27 Can you really inject the canonical tag via JavaScript without SEO risk?
- 8:28 Does Google Tag Manager really slow down your site, and should you abandon it?
- 8:31 Does GTM really sabotage your load time?
- 9:35 Is serving a 404 to Googlebot and a 200 to visitors really cloaking?
- 10:06 Serving a 404 to Googlebot and a 200 to users: is it really cloaking?
- 16:16 Are 301, 302, and JavaScript redirects really equivalent for SEO?
- 16:58 Are JavaScript redirects really equivalent to 301s for Google?
- 17:18 Is server-side rendering really essential for Google rankings?
- 17:58 Should you really invest in server-side rendering for SEO?
- 19:22 Does serialized JSON in your JavaScript apps count as duplicate content?
- 20:02 Does application state stored as JSON in the DOM create duplicate content?
- 20:24 Does Cloudflare Rocket Loader pass Googlebot's SEO test?
- 20:44 Should you test Cloudflare Rocket Loader and other third-party tools before enabling them, for SEO's sake?
- 21:58 Should you ignore 'Other Error' statuses in Search Console and the Mobile-Friendly Test?
- 23:18 Should you really worry about the 'Other Error' status in Google's testing tools?
- 27:58 Should you choose one JavaScript framework over another for SEO?
- 31:27 Does JavaScript really consume crawl budget?
- 31:32 Does JavaScript rendering consume crawl budget?
- 33:07 Should you abandon dynamic rendering for SEO?
- 33:17 Should you really abandon dynamic rendering for SEO?
- 34:01 Should you really abandon client-side JavaScript to get product links indexed?
- 34:21 Does asynchronous post-load JavaScript really block Google indexing?
- 36:05 Should you really move to a dedicated server to improve your SEO?
- 36:25 Shared or dedicated server: does Google really tell the difference?
- 40:06 Does client-side hydration really pose an SEO problem?
- 40:06 Is SSR plus client-side hydration really safe for Google SEO?
- 42:47 Should you really aim for 100 on Lighthouse, or is it a waste of time?
- 45:24 Will 5G really speed up your site, or is that an illusion?
- 49:09 Does Googlebot really ignore your WebP images served via Service Workers?
- 49:09 Why does Googlebot ignore your WebP images served by a Service Worker?
Google states that the importance of each Core Web Vitals metric (FID, LCP, CLS) directly depends on the type of site. A content site should prioritize LCP, while an interactive application should focus on FID. The overall Lighthouse score remains helpful as a detector of critical issues (score < 5) or final validation (score > 95), but it should not dictate the optimization strategy.
What you need to understand
Why is Google questioning the importance of the overall Lighthouse score?
The overall Lighthouse score aggregates several metrics (FID, LCP, CLS, as well as Time to Interactive, Speed Index) into a single number. This simplification poses a problem: it gives the same weight to all metrics, regardless of the site's usage context.
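To make that aggregation concrete, here is a minimal sketch of how a single overall score can emerge from very different per-metric scores. The weights below approximate Lighthouse v8 and are illustrative only, not an official specification:

```python
# Sketch of how Lighthouse collapses per-metric scores (each 0-100) into
# one number. Weights approximate Lighthouse v8 and are illustrative only.
WEIGHTS = {
    "FCP": 0.10,  # First Contentful Paint
    "SI":  0.10,  # Speed Index
    "LCP": 0.25,  # Largest Contentful Paint
    "TBT": 0.30,  # Total Blocking Time
    "CLS": 0.15,  # Cumulative Layout Shift
    "TTI": 0.10,  # Time to Interactive
}

def overall_score(metric_scores: dict) -> float:
    """Weighted average of per-metric scores (each already on 0-100)."""
    return sum(WEIGHTS[m] * metric_scores[m] for m in WEIGHTS)

# A site can land in the mid-60s overall even when its context-critical
# metrics (here LCP and CLS) are excellent:
scores = {"FCP": 55, "SI": 50, "LCP": 95, "TBT": 40, "CLS": 90, "TTI": 50}
print(round(overall_score(scores)))
```

This is exactly the mid-range ambiguity discussed below: the single number hides which metrics are actually strong.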
Martin Splitt emphasizes that this score mainly functions as a “smoke test”: a rough indicator that detects extreme cases. A score of 5 indicates a technically broken site requiring urgent intervention. A score of 95 indicates that only marginal optimizations remain to be made.
Between these two extremes, the overall score loses its relevance. A site with a score of 60 can be perfectly optimized for its real use if the right metrics are in the green.
Which metrics should be prioritized according to the type of site?
Google provides two clear examples to illustrate the contextualization of metrics. For a chat application (Slack, Discord, etc.), First Input Delay (FID) becomes critical: the user wants to interact immediately, and any delay in responsiveness harms the experience.
Conversely, for a pure content site like Wikipedia, Largest Contentful Paint (LCP) takes precedence: the user wants access to the main text as quickly as possible. FID is less important since interaction is not the core of the usage.
The Cumulative Layout Shift (CLS), however, remains universally important but with varying tolerance thresholds. An e-commerce site with payment forms should aim for a CLS close to zero to avoid accidental clicks.
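The contextual logic above can be sketched as a simple lookup from the dominant user journey to the metric to fix first. The categories and the fallback are assumptions made for this sketch, not Google guidance:

```python
# Illustrative mapping from dominant user journey to the first metric to
# prioritize. Categories and the default are assumptions for this sketch.
PRIORITY_METRIC = {
    "content":       "LCP",  # reading: paint the main text fast
    "interactive":   "FID",  # chat/app: respond to input immediately
    "transactional": "CLS",  # checkout/forms: avoid accidental clicks
}

def metric_to_fix_first(site_type: str) -> str:
    # Fall back to LCP for unclassified pages (an assumption, not a rule).
    return PRIORITY_METRIC.get(site_type, "LCP")

print(metric_to_fix_first("interactive"))  # FID
```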
Does this approach change the Core Web Vitals optimization strategy?
Absolutely. Instead of attempting to artificially balance all metrics to inflate the Lighthouse score, one should first identify the dominant user journey. Then, prioritize the metric that directly impacts that journey.
This statement validates an approach that some technical SEOs are already practicing: template-specific audits. A homepage, product page, and blog article have different performance stakes and critical metrics.
Practically, this means that a site may have pages with Lighthouse scores of 75 yet have excellent real performance on the metrics that matter for their specific usage.
- The overall Lighthouse score is an indicator of general health, not an objective in itself
- Each type of page should be optimized for its dominant metric (LCP for content, FID for interactive, CLS for transactional)
- A score of 5 = technical urgency, a score of 95 = polishing, and between the two = contextual analysis necessary
- Lighthouse tests should be segmented by template to identify real priorities
- Never sacrifice real user experience to improve an artificial aggregated score
SEO Expert opinion
Is this statement consistent with observed practices in the field?
Yes, but it comes late. Technical SEOs have noted since 2021 that sites with average Lighthouse scores (65-80) could outperform sites with 95+ in SERPs, as long as their critical metrics were excellent.
The issue is that Google has long communicated ambiguously on this subject. Official tools (PageSpeed Insights, Search Console) highlight the overall score, leading many SEOs to blindly optimize it. Splitt's clarification puts the emphasis back in the right place.
There remains a gray area: how does Google actually measure these metrics for ranking? CrUX data (Chrome User Experience Report) aggregates all real users. If your site has 80% mobile visitors on 3G degrading the LCP, but your target audience primarily uses fiber desktop, are you unfairly penalized? [To be verified]
What nuances should be added to this statement?
First point: Splitt does not say that the non-priority metrics can be neglected entirely. A content site cannot afford an FID of 800 ms just because its LCP is good. There is a notion of an acceptable minimum threshold across all metrics.
Second nuance: the chat vs Wikipedia example is oversimplified. Most sites are hybrids: an e-commerce site has content (product pages), interaction (filters, comparison), and transactional elements (checkout). How should you prioritize? The answer requires an analysis of strategic pages and their business objectives.
Third point: Google provides no weighting figures. Saying “LCP matters more” quantifies nothing. Is it 60/40? 80/20? Without data, interpretation remains subjective. [To be verified]
In what cases can this approach be counterproductive?
When applied too rigidly. Some SEOs might completely neglect FID on a content site, resulting in unusable share buttons, newsletter forms, or comment sections. Secondary interactivity is still interactivity.
Another risk: justifying mediocrity. “Our Lighthouse score is 40 but that’s okay, we are a content site so only LCP matters.” No. A score of 40 generally indicates structural issues (blocking JavaScript, unoptimized CSS) that also impact the LCP.
Practical impact and recommendations
What should be done concretely to adapt Core Web Vitals monitoring?
First step: map out the strategic templates of your site. Homepage, categories, product pages, blog articles, checkout pages — each template has a different user objective. Document for each whether the primary usage is reading, interaction, or transaction.
Second action: set up segments in PageSpeed Insights or CrUX to track each type of page separately. Do not settle for the aggregated domain score. A site may have excellent overall LCP but a disastrous FID on product filter pages — and that's where abandonment occurs.
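As a sketch of that per-template tracking, the snippet below pulls the p75 LCP for a single URL from the CrUX API and tags it by template. The endpoint and response shape follow the public CrUX API; the template-classification rule and URL patterns are assumptions to adapt to your own site:

```python
# Sketch: fetch the 75th-percentile LCP for one URL from the CrUX API and
# classify it by template. URL patterns below are hypothetical examples.
import json
from urllib.request import Request, urlopen

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def classify_template(url: str) -> str:
    """Hypothetical URL conventions -- adapt to your site structure."""
    if "/blog/" in url:
        return "article"
    if "/product/" in url:
        return "product"
    return "other"

def p75_lcp(record: dict) -> float:
    """Extract the p75 LCP (ms) from a CrUX API response payload."""
    return float(record["record"]["metrics"]
                 ["largest_contentful_paint"]["percentiles"]["p75"])

def fetch_record(url: str, api_key: str, form_factor: str = "PHONE") -> dict:
    """POST a per-URL query to the CrUX API and return the parsed JSON."""
    body = json.dumps({"url": url, "formFactor": form_factor}).encode()
    req = Request(f"{CRUX_ENDPOINT}?key={api_key}", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)
```

Running `fetch_record` for each strategic URL and grouping results with `classify_template` gives you the per-template view the aggregated domain score hides.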
Third point: redefine your performance KPIs. If you are a media outlet, your OKR is not “achieve 95 on Lighthouse” but “LCP < 2s on 75% of articles and CLS < 0.1”. If you are a SaaS, it’s “FID < 100ms on the app page and LCP < 2.5s on the landing”.
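Those template-specific KPIs can be encoded as explicit budgets rather than one global score. The thresholds below mirror the examples in the previous paragraph and are illustrative, not recommendations:

```python
# Per-template performance budgets mirroring the KPI examples above.
# Units: LCP and FID in milliseconds, CLS unitless. Values illustrative.
BUDGETS = {
    "article": {"LCP": 2000, "CLS": 0.1},
    "app":     {"FID": 100},
    "landing": {"LCP": 2500},
}

def violations(template: str, measured: dict) -> list:
    """Return the metrics that exceed this template's budget.

    A missing measurement counts as a violation (treated as infinity).
    """
    budget = BUDGETS.get(template, {})
    return [m for m, limit in budget.items()
            if measured.get(m, float("inf")) > limit]

print(violations("article", {"LCP": 2300, "CLS": 0.05}))  # ['LCP']
```

A budget like this can run in CI: fail the build only when a template's own critical metrics regress, not when the overall Lighthouse score moves.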
What mistakes should be avoided in this re-evaluation of priorities?
Do not fall into cherry-picking: arbitrarily choosing the metric you are already strong on and ignoring the others. Identifying the priority metric should stem from behavioral analysis (heatmaps, session recordings, analytics), not from your current performance.
Also avoid over-optimizing a single metric at the expense of the overall experience. Sites have been seen sacrificing above-the-fold content to boost LCP (displaying just a title and an empty image), which degrades the real engagement rate. Google regularly adjusts its algorithms to counter this kind of artificial optimization.
Last common mistake: neglecting mobile vs desktop variability. A site may have excellent desktop LCP but a catastrophic mobile LCP. If 70% of your SEO traffic comes from mobile, guess which version Google prioritizes for ranking? Device segmentation is non-negotiable.
How to check if your site is optimized according to the right criteria?
Use the Search Console to identify the pages with the most impressions and clicks. Cross-reference with CrUX data to see which metrics are failing on these strategic pages. If your top 10 pages that generate 80% of the traffic have a poor FID but you’ve spent 3 months optimizing LCP on zombie pages, you’ve wasted time.
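The cross-referencing step above amounts to intersecting two datasets: traffic by URL and failing metrics by URL. A minimal sketch, assuming a Search Console export shaped as a url-to-clicks map and a per-URL map of failing Core Web Vitals (both input shapes are assumptions):

```python
# Sketch: fix pages where high traffic and failing metrics intersect.
# Input shapes are assumed: Search Console export (url -> clicks) and a
# per-url map of failing Core Web Vitals (e.g. built from CrUX data).
def prioritize(clicks_by_url: dict, failing_by_url: dict,
               top_n: int = 10) -> list:
    """Return (url, failing_metrics) for top-traffic pages with problems."""
    top_pages = sorted(clicks_by_url, key=clicks_by_url.get,
                       reverse=True)[:top_n]
    return [(url, failing_by_url[url]) for url in top_pages
            if failing_by_url.get(url)]

clicks = {"/a": 900, "/b": 500, "/zombie": 3}
failing = {"/a": ["FID"], "/zombie": ["LCP"]}
print(prioritize(clicks, failing, top_n=2))  # [('/a', ['FID'])]
```

The zombie page's LCP problem correctly drops out of the priority list: its traffic does not justify the effort.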
Establish a continuous monitoring process with alerts on priority metrics by template. Tools like SpeedCurve, Calibre, or Treo can track changes over time and correlate them with fluctuations in organic traffic.
Finally, test under real conditions. Lighthouse tests are conducted on a simulated network and an emulated device. Use WebPageTest with varied connection profiles (3G, 4G, cable) and real devices. An LCP of 1.8s in Lighthouse may explode to 5s on a real Xiaomi Redmi on Indian 3G — and that’s what your users experience.
- Map out strategic templates and identify the dominant metric for each (LCP for content, FID for interactive, CLS for transactional)
- Set up tracking segments in CrUX and PageSpeed Insights by page type, not just at the domain level
- Redefine performance KPIs based on real use, not the overall Lighthouse score
- Cross-reference Search Console data (strategic pages) with CrUX data (failing metrics) to prioritize tasks
- Test under real conditions (WebPageTest, real devices, real connections) to validate optimizations beyond Lighthouse simulations
- Implement continuous monitoring with alerts on critical metrics by template
❓ Frequently Asked Questions
Does the overall Lighthouse score no longer matter at all for SEO?
How do I identify which Core Web Vitals metric to prioritize for my site?
Should an e-commerce site optimize all metrics in the same way?
Does the CrUX data Google uses account for this differentiation by page type?
Can you completely ignore a Core Web Vitals metric if it is not a priority?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 51 min · published on 12/05/2020