Official statement
Google states that the importance of each Core Web Vitals metric (FID, LCP, CLS) directly depends on the type of site. A content site should prioritize LCP, while an interactive application should focus on FID. The overall Lighthouse score remains helpful as a detector of critical issues (score < 5) or final validation (score > 95), but it should not dictate the optimization strategy.
What you need to understand
Why is Google questioning the importance of the overall Lighthouse score?
The overall Lighthouse score aggregates several lab metrics (LCP, CLS, Total Blocking Time as a lab proxy for FID, plus Time to Interactive and Speed Index) into a single number. This simplification poses a problem: the weighting is fixed, regardless of the site's usage context.
Martin Splitt emphasizes that this score mainly functions as a “smoke test”: a rough indicator that detects extreme cases. A score of 5 indicates a technically broken site requiring urgent intervention. A score of 95 indicates that only marginal optimizations remain to be made.
Between these two extremes, the overall score loses its relevance. A site with a score of 60 can be perfectly optimized for its real use if the right metrics are in the green.
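To make the aggregation problem concrete, here is a minimal TypeScript sketch using approximate Lighthouse v6 category weights; the per-metric scores are invented for illustration. Two very different performance profiles can land on nearly the same overall number:

```ts
// Illustrative only: approximate Lighthouse v6 category weights.
// Real Lighthouse converts each raw metric to a 0-100 score via a
// log-normal curve before weighting; here the 0-100 metric scores
// are taken as given to keep the arithmetic visible.
const WEIGHTS: Record<string, number> = {
  FCP: 0.15, SI: 0.15, LCP: 0.25, TTI: 0.15, TBT: 0.25, CLS: 0.05,
};

function overallScore(metricScores: Record<string, number>): number {
  return Object.entries(WEIGHTS).reduce(
    (sum, [metric, weight]) => sum + weight * metricScores[metric], 0);
}

// A content site with an excellent LCP but weak interactivity...
const contentSite = { FCP: 90, SI: 85, LCP: 95, TTI: 40, TBT: 30, CLS: 95 };
// ...and a chat app with the opposite profile: similar overall scores.
const chatApp = { FCP: 70, SI: 60, LCP: 45, TTI: 95, TBT: 95, CLS: 80 };

console.log(overallScore(contentSite).toFixed(0)); // ~68
console.log(overallScore(chatApp).toFixed(0));     // ~73
```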
Which metrics should be prioritized according to the type of site?
Google provides two clear examples to illustrate the contextualization of metrics. For a chat application (Slack, Discord, etc.), First Input Delay (FID) becomes critical: the user wants to interact immediately, and any delay in responsiveness harms the experience.
Conversely, for a pure content site like Wikipedia, Largest Contentful Paint (LCP) takes precedence: the user wants access to the main text as quickly as possible. FID is less important since interaction is not the core use case.
The Cumulative Layout Shift (CLS), however, remains universally important but with varying tolerance thresholds. An e-commerce site with payment forms should aim for a CLS close to zero to avoid accidental clicks.
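To see where each metric actually stands for real users, a common approach is to collect field data per template. A minimal sketch, assuming the open-source web-vitals npm package (v3 API) and a placeholder /analytics collector endpoint:

```ts
// Field-measurement sketch using the `web-vitals` package (v3 API:
// onLCP/onFID/onCLS). The `/analytics` endpoint is a placeholder
// for your own collector.
import { onCLS, onFID, onLCP, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,        // 'LCP' | 'FID' | 'CLS'
    value: metric.value,
    page: location.pathname,  // lets you segment by template later
  });
  // sendBeacon survives page unload, which matters for CLS/LCP finality.
  navigator.sendBeacon('/analytics', body);
}

onLCP(report); // the priority on content templates
onFID(report); // the priority on interactive templates
onCLS(report); // watch closely on transactional templates
```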
Does this approach change the Core Web Vitals optimization strategy?
Absolutely. Instead of attempting to artificially balance all metrics to inflate the Lighthouse score, one should first identify the dominant user journey. Then, prioritize the metric that directly impacts that journey.
This statement validates an approach that some technical SEOs are already practicing: template-specific audits. A homepage, product page, and blog article have different performance stakes and critical metrics.
Practically, this means that a site may have pages with Lighthouse scores of 75 yet have excellent real performance on the metrics that matter for their specific usage.
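In CI, this priority-first approach can be encoded as template-specific budgets. A hedged sketch assuming Lighthouse CI and its assertMatrix option, expressed as a CommonJS config; the URL patterns and thresholds are illustrative, not recommendations:

```ts
// lighthouserc.js: per-template assertions with Lighthouse CI's
// assertMatrix. Assert the metrics that matter for each template,
// not the overall performance score.
module.exports = {
  ci: {
    collect: {
      url: [
        'https://example.com/blog/some-article', // content template
        'https://example.com/app',               // interactive template
      ],
    },
    assert: {
      assertMatrix: [
        {
          // Content pages: hold the line on LCP and CLS.
          matchingUrlPattern: '.*/blog/.*',
          assertions: {
            'largest-contentful-paint': ['error', { maxNumericValue: 2500 }],
            'cumulative-layout-shift': ['error', { maxNumericValue: 0.1 }],
          },
        },
        {
          // Interactive pages: lab interactivity proxies matter most here.
          matchingUrlPattern: '.*/app.*',
          assertions: {
            'total-blocking-time': ['error', { maxNumericValue: 300 }],
            interactive: ['warn', { maxNumericValue: 3800 }],
          },
        },
      ],
    },
  },
};
```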
- The overall Lighthouse score is an indicator of general health, not an objective in itself
- Each type of page should be optimized for its dominant metric (LCP for content, FID for interactive, CLS for transactional)
- A score of 5 = technical urgency, a score of 95 = polishing, in between = contextual analysis needed
- Lighthouse tests should be segmented by template to identify real priorities
- Never sacrifice real user experience to improve an artificial aggregated score
SEO Expert opinion
Is this statement consistent with observed practices in the field?
Yes, but it comes late. Technical SEOs have noted since 2021 that sites with average Lighthouse scores (65-80) could outperform sites with 95+ in SERPs, as long as their critical metrics were excellent.
The issue is that Google has long communicated ambiguously on this subject. Official tools (PageSpeed Insights, Search Console) highlight the overall score, leading many SEOs to blindly optimize it. Splitt's clarification puts the emphasis back in the right place.
There remains a gray area: how does Google actually measure these metrics for ranking? CrUX data (Chrome User Experience Report) aggregates all real users. If your site has 80% mobile visitors on 3G degrading the LCP, but your target audience primarily uses fiber desktop, are you unfairly penalized? [To be verified]
What nuances should be added to this statement?
First point: Splitt does not say that the non-priority metrics can be neglected. A content site cannot afford an FID of 800 ms just because its LCP is good: there is an acceptable minimum threshold for every metric.
Second nuance: the chat vs Wikipedia example is a caricature. Most sites are hybrids: an e-commerce site has content (product pages), interaction (filters, comparison), and transactional elements (checkout). How should you prioritize? The answer requires an analysis of the strategic pages and their business objectives.
Third point: Google provides no weighting figures. Saying “LCP matters more” quantifies nothing. Is it 60/40? 80/20? Without data, interpretation remains subjective. [To be verified]
In what cases can this approach be counterproductive?
If it is applied too rigidly. Some SEOs might completely neglect FID on a content site, resulting in unusable share buttons, newsletter forms, or comments. Secondary interactivity is still interactivity.
Another risk: justifying mediocrity. “Our Lighthouse score is 40 but that’s okay, we are a content site so only LCP matters.” No. A score of 40 generally indicates structural issues (blocking JavaScript, unoptimized CSS) that also impact the LCP.
Practical impact and recommendations
What should be done concretely to adapt Core Web Vitals monitoring?
First step: map out the strategic templates of your site. Homepage, categories, product pages, blog articles, checkout pages — each template has a different user objective. Document for each whether the primary usage is reading, interaction, or transaction.
Second action: set up segments in PageSpeed Insights or CrUX to track each type of page separately. Do not settle for the aggregated domain score. A site may have excellent overall LCP but a disastrous FID on product filter pages — and that's where abandonment occurs.
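A sketch of such page-level, device-segmented tracking through the public CrUX API (the endpoint and request shape are Google's documented API; the page URL and API-key handling are placeholders):

```ts
// Query the CrUX API for a single page, segmented by device.
const CRUX_ENDPOINT =
  'https://chromeuxreport.googleapis.com/v1/records:queryRecord';

async function cruxMetrics(url: string, formFactor: 'PHONE' | 'DESKTOP') {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${process.env.CRUX_API_KEY}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      url,          // page-level; use `origin` instead for site-wide data
      formFactor,   // segment mobile and desktop separately
      metrics: ['largest_contentful_paint', 'first_input_delay',
                'cumulative_layout_shift'],
    }),
  });
  if (!res.ok) throw new Error(`CrUX query failed: ${res.status}`);
  const { record } = await res.json();
  return record.metrics; // histograms + p75 per requested metric
}

// Compare the same template across devices before deciding priorities.
const mobile = await cruxMetrics('https://example.com/product/123', 'PHONE');
const desktop = await cruxMetrics('https://example.com/product/123', 'DESKTOP');
```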
Third point: redefine your performance KPIs. If you are a media outlet, your OKR is not “achieve 95 on Lighthouse” but “LCP < 2s on 75% of articles and CLS < 0.1”. If you are a SaaS, it’s “FID < 100ms on the app page and LCP < 2.5s on the landing”.
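Those KPIs can then live as data that a monitoring job checks p75 field values against. An illustrative sketch; the template names and the checkKpis helper are invented for the example:

```ts
// Encode the per-template KPIs from the paragraph above as data.
type Kpi = { metric: 'LCP' | 'FID' | 'CLS'; p75Max: number };

const KPIS_BY_TEMPLATE: Record<string, Kpi[]> = {
  article: [{ metric: 'LCP', p75Max: 2000 }, { metric: 'CLS', p75Max: 0.1 }],
  app:     [{ metric: 'FID', p75Max: 100 }],
  landing: [{ metric: 'LCP', p75Max: 2500 }],
};

function checkKpis(template: string, p75: Record<string, number>): string[] {
  return (KPIS_BY_TEMPLATE[template] ?? [])
    .filter(({ metric, p75Max }) => p75[metric] > p75Max)
    .map(({ metric, p75Max }) =>
      `${metric} over budget: ${p75[metric]} > ${p75Max}`);
}

// e.g. with CrUX p75 values for an article page:
console.log(checkKpis('article', { LCP: 2400, CLS: 0.05 }));
// -> ['LCP over budget: 2400 > 2000']
```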
What mistakes should be avoided in this re-evaluation of priorities?
Do not fall into cherry-picking: arbitrarily choosing the metric you are already good at and ignoring the others. The priority metric should be identified through behavioral analysis (heatmaps, session recordings, analytics), not through your current performance.
Also, avoid over-optimizing a single metric at the expense of the overall experience. We've seen sites strip above-the-fold content to boost LCP (displaying just a title and a placeholder image), which degrades the real engagement rate. Google eventually adjusts its algorithms against such artificial optimizations.
Last common mistake: neglecting mobile vs desktop variability. A site may have excellent desktop LCP but a catastrophic mobile LCP. If 70% of your SEO traffic comes from mobile, guess which version Google prioritizes for ranking? Device segmentation is non-negotiable.
How to check if your site is optimized according to the right criteria?
Use the Search Console to identify the pages with the most impressions and clicks. Cross-reference with CrUX data to see which metrics are failing on these strategic pages. If your top 10 pages that generate 80% of the traffic have a poor FID but you’ve spent 3 months optimizing LCP on zombie pages, you’ve wasted time.
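A sketch of the first half of that cross-reference, assuming the googleapis npm package and an already-authorized OAuth2 or service-account client; the date range and row limit are placeholders:

```ts
// Pull the top pages by clicks from the Search Console API, then feed
// them into the cruxMetrics() sketch above to find failing metrics.
import { google } from 'googleapis';

async function topPages(auth: any, siteUrl: string): Promise<string[]> {
  const sc = google.searchconsole({ version: 'v1', auth });
  const { data } = await sc.searchanalytics.query({
    siteUrl,
    requestBody: {
      startDate: '2020-04-01',
      endDate: '2020-04-30',
      dimensions: ['page'],
      rowLimit: 10, // the pages that concentrate your traffic
    },
  });
  return (data.rows ?? []).map((row) => row.keys![0]);
}
```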
Establish a continuous monitoring process with alerts on priority metrics by template. Tools like SpeedCurve, Calibre, or Treo can track changes over time and correlate them with fluctuations in organic traffic.
Finally, test under real conditions. Lighthouse tests are conducted on a simulated network and an emulated device. Use WebPageTest with varied connection profiles (3G, 4G, cable) and real devices. An LCP of 1.8s in Lighthouse may explode to 5s on a real Xiaomi Redmi on Indian 3G — and that’s what your users experience.
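A minimal sketch of launching such a run through WebPageTest's public runtest.php endpoint; the location/connectivity string follows the location:browser.connectivity convention and is illustrative, and an API key is required:

```ts
// Kick off a WebPageTest run on a real 3G profile.
const params = new URLSearchParams({
  url: 'https://example.com/article',
  k: process.env.WPT_API_KEY ?? '', // your WebPageTest API key
  f: 'json',
  location: 'Dulles:Chrome.3G', // location:browser.connectivity
});

const res = await fetch(`https://www.webpagetest.org/runtest.php?${params}`);
const { data } = await res.json();
console.log(`Results will appear at: ${data.userUrl}`);
```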
- Map out strategic templates and identify the dominant metric for each (LCP for content, FID for interactive, CLS for transactional)
- Set up tracking segments in CrUX and PageSpeed Insights by page type, not just at the domain level
- Redefine performance KPIs based on real use, not the overall Lighthouse score
- Cross-reference Search Console data (strategic pages) with CrUX data (failing metrics) to prioritize tasks
- Test under real conditions (WebPageTest, real devices, real connections) to validate optimizations beyond Lighthouse simulations
- Implement continuous monitoring with alerts on critical metrics by template
❓ Frequently Asked Questions
Does the overall Lighthouse score no longer matter at all for SEO?
How do you identify which Core Web Vitals metric to prioritize for your site?
Should an e-commerce site optimize all metrics in the same way?
Does the CrUX data used by Google account for this differentiation by page type?
Can you completely ignore a Core Web Vitals metric if it is not a priority?
Source: Google Search Central video · duration 51 min · published on 12/05/2020