Official statement
Google confirms that LCP updates during loading and that CLS continues to evolve with user interactions. The direct consequence: measuring these metrics solely at the initial load is no longer sufficient; you need to monitor their evolution throughout the entire session. This statement calls into question monitoring approaches that focus exclusively on the first render.
What you need to understand
Does LCP recalculate during loading?
Yes, and this is a point that many practitioners still overlook. The Largest Contentful Paint is not set in stone as soon as the first large element appears. If a larger element subsequently appears, such as a lazy-loaded image or a block injected dynamically via JavaScript, the browser reports it as the new LCP candidate. Candidates keep updating until the first user interaction (tap, scroll, or keypress), at which point the LCP is finalized.
In practical terms, your initial LCP may be excellent (1.2s for a title), then suddenly degrade when a 2MB hero image finally displays after 3 seconds. Measurement tools that capture only the first LCP event give you a partial, even misleading, view of the actual experience.
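To observe this in the field, the browser's native PerformanceObserver API exposes each successive LCP candidate. A minimal sketch (the console logging is illustrative; a RUM beacon would replace it in production):

```js
// Log every LCP candidate the browser reports, not just the first one.
// A new entry means a larger element has replaced the previous LCP.
const lcpObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(
      `LCP candidate at ${entry.startTime.toFixed(0)} ms:`,
      entry.element // the DOM node currently considered the LCP element
    );
  }
});
// buffered: true replays candidates that fired before this code ran
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```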
Why does CLS continue to evolve after loading?
Because users interact with the page. A click on a button can trigger the display of a banner, a modal, or change the layout. If these modifications cause unforeseen shifts, the CLS increases.
Google stresses this point: CLS is not an instantaneous metric but cumulative. Each shift adds up over the entire lifetime of the page in the tab. A page that appears stable at load can show a disastrous CLS after 30 seconds of interaction if your JavaScript injects content without reserving the necessary space.
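A minimal sketch of that accumulation using the native layout-shift entries. One caveat: Chrome has since refined the official CLS definition to score the worst "session window" of shifts rather than the raw lifetime sum (web-vitals.js handles that for you); the simple sum below illustrates the cumulative principle described here:

```js
// Accumulate layout shifts for the whole lifetime of the page.
// Shifts that immediately follow user input are excluded, per the CLS definition.
let cumulativeShift = 0;
const clsObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      cumulativeShift += entry.value;
      console.log(`Shift of ${entry.value.toFixed(4)}, running total: ${cumulativeShift.toFixed(4)}`);
    }
  }
});
clsObserver.observe({ type: 'layout-shift', buffered: true });
```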
What impact does this have on field monitoring?
If you exclusively use lab testing tools (Lighthouse, PageSpeed Insights in synthetic mode), you are measuring a snapshot in controlled conditions. These tools do not capture real behaviors: scrolling, clicks, dynamic content additions.
RUM (Real User Monitoring) data becomes essential here: it reflects what your visitors actually experience, such as how long the LCP really takes to stabilize and how the CLS evolves during browsing. Without RUM, you are optimizing in a vacuum.
- The LCP can recalculate multiple times if large content appears late
- The CLS accumulates throughout the session, not just at load
- Lab measurements reflect only a fraction of the real experience
- RUM monitoring becomes critical for capturing post-load variations
- User interactions (clicks, scrolls) can degrade scores after a good initial start
SEO expert opinion
Is this statement consistent with field observations?
Yes, absolutely. Practitioners analyzing Chrome UX Report (CrUX) data have long noted this: aggregated scores over 28 days reflect behaviors that are far more complex than what a simple Lighthouse test shows. Discrepancies between lab and field are sometimes spectacular — up to 40-50% on certain sites.
What’s missing from this statement is the weighting: Google does not specify whether a late-degrading LCP weighs as much as a poor LCP from the start. The same goes for CLS: are all shifts treated equally by the algorithm, or do those occurring after 10 seconds of interaction count for less? [To be verified]
What nuances should be added to this claim?
First, Google talks about updating, not immediate penalization. The fact that LCP or CLS evolves does not necessarily mean your ranking collapses in real time. Core Web Vitals are assessed on data aggregated over a 28-day rolling window via CrUX, so the impact is smoothed over time.
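That 28-day field data is publicly queryable through the CrUX API, which is a quick way to check what Google actually sees for a URL. A sketch, assuming an API key created in the Google Cloud console (CRUX_API_KEY is a placeholder):

```js
// Query the public CrUX API for the 28-day field data of a given URL.
// CRUX_API_KEY is a placeholder: generate your own key in the Google Cloud console.
async function fetchFieldData(url) {
  const response = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${CRUX_API_KEY}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        url,
        metrics: ['largest_contentful_paint', 'cumulative_layout_shift'],
      }),
    }
  );
  const { record } = await response.json();
  // p75 is the value Google uses to assess a page against the CWV thresholds
  console.log('LCP p75 (ms):', record.metrics.largest_contentful_paint.percentiles.p75);
  console.log('CLS p75:', record.metrics.cumulative_layout_shift.percentiles.p75);
}
```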
Next, be careful about edge cases: an LCP that recalculates due to an advertisement displaying 5 seconds after loading is not always under your control. If you rely on external ad networks, you are at the mercy of their scripts. Google does not provide any indication of how they handle these situations — and frankly, I doubt they really take them into account.
In what cases does this rule not really apply?
On purely static sites without user interaction or dynamic content, the issue barely arises. A basic HTML/CSS blog, without aggressive lazy-loading or heavy JavaScript, will see its metrics stabilize very quickly: LCP and CLS will hardly move after 2-3 seconds.
In contrast, on complex web applications (SaaS, e-commerce with dynamic filters, interactive dashboards), it's a nightmare. Every user action can change the layout, load new assets, and trigger fresh layout shifts. If your site falls into this category, you must monitor continuously, and accept that some variance is beyond your control.
Practical impact and recommendations
How to correctly measure these evolving metrics?
First step: abandon the idea that Lighthouse is sufficient. Lighthouse measures a single frozen moment, in a controlled environment, on an emulated device. That is useful for diagnostics, but it does not reflect what happens in the field. Set up a RUM system that captures Core Web Vitals throughout the entire user session.
Google Analytics 4 can collect them as custom events sent by the web-vitals library, but you'll get more granularity with specialized tools (SpeedCurve, Treo, or a custom implementation based on web-vitals.js). The key is to track every LCP update and to accumulate the CLS until the page is closed.
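A sketch of such a custom implementation with the web-vitals library; the /analytics endpoint is hypothetical and stands in for your own RUM backend. The reportAllChanges option surfaces every update instead of only the final value:

```js
import { onLCP, onCLS } from 'web-vitals';

// Hypothetical collection endpoint: replace with your own RUM backend.
function sendToAnalytics(metric) {
  const body = JSON.stringify({
    name: metric.name,   // 'LCP' or 'CLS'
    value: metric.value, // current value; may be reported several times
    id: metric.id,       // unique per page load, useful for deduplication
  });
  // sendBeacon survives page unload, unlike a regular fetch
  navigator.sendBeacon('/analytics', body);
}

// reportAllChanges: true reports every change, not just the final value
onLCP(sendToAnalytics, { reportAllChanges: true });
onCLS(sendToAnalytics, { reportAllChanges: true });
```

On the backend, keep the latest value per metric id: that gives you the final LCP and CLS per page view while preserving the intermediate updates for diagnosis.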
What mistakes to avoid in optimization?
Don't focus solely on the first render. I've seen teams celebrate a 1.5s LCP in Lighthouse while their real users were experiencing a 4s LCP because of images lazy-loaded too late. If you lazy-load, make sure the LCP element itself is excluded from it and loaded with priority (via fetchpriority="high" or a preload).
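For instance, assuming the hero image lives at /images/hero.jpg (path and dimensions illustrative):

```html
<!-- In <head>: preload the LCP image so it starts downloading immediately -->
<link rel="preload" as="image" href="/images/hero.jpg" fetchpriority="high">

<!-- In the body: never lazy-load the LCP element itself -->
<img src="/images/hero.jpg" fetchpriority="high" width="1200" height="600" alt="Hero">
```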
For CLS, the classic mistake: reserving space at initial load but forgetting dynamically added content. A cookie banner that appears without reserved space, an ad that pushes the content down, a carousel that changes size: all of this wrecks your score after the fact, even if your initial markup was clean.
What should you concretely do to improve these metrics?
For LCP: identify the element concerned (often a hero image or a block of text) and optimize its rendering path. That means image compression (WebP/AVIF), preloading the critical resource, and eliminating render-blocking scripts in the <head>. If your LCP recalculates late, track down the culprit with Chrome DevTools (Performance panel, LCP marker in the Timings track).
For CLS: reserve space for every element that can appear after loading. Set explicit dimensions (width/height) on images, iframes, and ad blocks. Use aspect-ratio in CSS for flexible containers. And above all, test with real user interactions, not just at load.
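A short sketch of that space reservation (class names and dimensions are illustrative):

```html
<!-- Explicit dimensions let the browser reserve the box before the image loads -->
<img src="/images/product.jpg" width="800" height="450" alt="Product photo">

<style>
  /* Flexible containers keep their ratio even before their content arrives */
  .video-embed { aspect-ratio: 16 / 9; }

  /* Fixed slot for ads or banners injected after load */
  .ad-slot { min-height: 250px; }
</style>
```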
- Set up a RUM monitoring system to capture actual Core Web Vitals
- Identify the real LCP element (not the one from Lighthouse) and optimize its priority loading
- Reserve CSS space for all dynamic content (ads, modals, banners)
- Test metrics after user interaction (scrolls, clicks, waiting 10-15s)
- Systematically compare lab vs field data to detect discrepancies
- Document CLS variations related to third-party scripts beyond your control
❓ Frequently Asked Questions
Can the LCP degrade after a fast initial render?
Does the CLS keep increasing even after several seconds of browsing?
Are lab tools (Lighthouse, PageSpeed Insights) enough to measure these evolutions?
How can I tell whether my LCP is recalculated after the initial load?
Can a CLS degraded by a third-party ad penalize me?
🎥 Source: Google Search Central video, duration 1h07, published 28/01/2021.