
Official statement

Core Web Vitals come from real user data (Chrome UX Report), not from bots. Therefore, dynamic rendering does not impact Core Web Vitals since users receive the client-side rendered version.
🎥 Source: Google Search Central video · 30:57 · EN · published 11/11/2020 · 26 statements extracted
Statement at 6:15 · Watch on YouTube →
Other statements from this video (25)
  1. 1:36 How do you effectively test JavaScript rendering before putting a site into production?
  2. 1:36 Why has testing JavaScript rendering before launch become essential for Google indexing?
  3. 1:38 Why does a site redesign cause rankings to drop even without changing the content?
  4. 1:38 Does migrating to JavaScript really impact SEO rankings?
  5. 3:40 Hreflang: why does Google still insist on this tag for multilingual content?
  6. 3:40 Does Googlebot really crawl every localized version of your pages?
  7. 3:40 Does hreflang really group your multilingual content in Google's eyes?
  8. 4:11 How do you make your hyper-local content URLs discoverable without losing traffic?
  9. 4:11 How do you structure your URLs to maximize the discoverability of hyper-local content?
  10. 5:14 Can user personalization trigger a cloaking penalty?
  11. 5:14 Can personalizing content for your users earn you a cloaking penalty?
  12. 6:15 Are Core Web Vitals really measured on users or on bots?
  13. 7:18 Why isn't schema markup enough to guarantee that rich snippets are displayed?
  14. 7:18 Why don't rich snippets appear despite valid Schema.org markup?
  15. 9:14 Is dynamic rendering really dead for SEO?
  16. 9:29 Should you abandon dynamic rendering in favor of SSR with hydration?
  17. 11:40 Why does the JavaScript main thread block your pages' interactivity in Google's eyes?
  18. 11:40 Why does the JavaScript main thread block the indexing of your pages?
  19. 12:33 Initial HTML vs rendered HTML: why can Google ignore your critical tags?
  20. 13:12 What happens when your initial HTML differs from the HTML rendered by JavaScript?
  21. 15:50 Does Googlebot click the buttons on your site?
  22. 15:50 Should you really worry if Googlebot doesn't click your buttons?
  23. 26:58 Should JavaScript performance for your real users take priority over optimizing for Googlebot?
  24. 28:20 Are web workers really compatible with Google's JavaScript rendering?
  25. 28:20 Should you really be wary of Web Workers for SEO?
📅 Official statement from 11/11/2020 (5 years ago)
TL;DR

Google confirms that Core Web Vitals come exclusively from the Chrome UX Report, meaning real user data, never from crawl bots. This technical distinction has a direct consequence: if you use dynamic rendering to serve a static version to bots and a JavaScript version to users, only the client version counts for your Core Web Vitals. Your Lighthouse scores in lab tests may therefore differ radically from your CrUX field data.

What you need to understand

Where exactly does the data for Core Web Vitals come from?

Core Web Vitals rely on the Chrome UX Report (CrUX), a public database that aggregates real performance metrics collected from Chrome users who have enabled syncing and usage statistics sharing. No Googlebot in the equation: we're talking about real browsers, real connections, real devices.

Specifically, every time a Chrome user loads your page, their browser measures LCP (Largest Contentful Paint), CLS (Cumulative Layout Shift), and INP (Interaction to Next Paint), which replaced FID (First Input Delay) in March 2024. This data is anonymized, aggregated by origin (domain) and by URL, then made available via CrUX over a rolling 28-day window.

It is this aggregated dataset that Google uses to assess the real user experience of your site as part of the Page Experience ranking signal. The bots do not measure any of this — they crawl, render HTML/JS if necessary, index, but generate no exploitable CWV metrics for ranking.

  • Core Web Vitals are field metrics, not lab: they reflect the actual user experience, not a controlled environment.
  • CrUX collects data only from Chrome desktop and mobile, with a globally representative but limited panel of users who have consented to share statistics.
  • Google bots (Googlebot, Googlebot Smartphone) never contribute at any time to CrUX: their requests are excluded from any CWV measurement.
  • The 28-day rolling collection window means your technical optimizations only show up fully in the scores about a month later.
  • CrUX data is public and accessible via BigQuery, the CrUX API, PageSpeed Insights, or Search Console (see the query sketch below).
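
To make that last point concrete, here is a minimal TypeScript sketch querying the CrUX API for an origin's p75 values. The origin and API key are placeholders, and a 404 response simply means CrUX has no data for that origin.

```typescript
// Minimal sketch: query the CrUX API for an origin's p75 Core Web Vitals.
// The origin and API key below are placeholders to replace with your own.
const CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord";

async function fetchFieldData(origin: string, apiKey: string): Promise<void> {
  const res = await fetch(`${CRUX_ENDPOINT}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      origin,                  // use { url: "..." } instead for URL-level data
      formFactor: "PHONE",     // the mobile dataset, usually the one that hurts
      metrics: [
        "largest_contentful_paint",
        "interaction_to_next_paint",
        "cumulative_layout_shift",
      ],
    }),
  });
  if (!res.ok) {
    // A 404 means CrUX simply has no data for this origin (traffic too low or site too new).
    throw new Error(`CrUX API error: ${res.status}`);
  }
  const { record } = await res.json();
  // p75 is the value Google uses to classify a metric as good / needs improvement / poor.
  for (const [name, data] of Object.entries<any>(record.metrics)) {
    console.log(name, "p75 =", data.percentiles?.p75);
  }
}

fetchFieldData("https://www.example.com", "YOUR_API_KEY").catch(console.error);
```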

SEO Expert opinion

Is this statement consistent with what we observe on the ground?

Yes, and it’s actually one of the few points on which Google has always been clear and consistent. Since the launch of Core Web Vitals in 2020, all signals have aligned: CrUX = field data only. Field tests confirm that a site crawled by Googlebot but lacking real Chrome traffic will never have CWV data in the Search Console. Conversely, a site with Chrome traffic but blocked from crawling may very well appear in CrUX.

This consistency is reassuring, but it masks a blind spot: Google never specifies the minimum traffic threshold required for a site to appear in CrUX. Sites with low volumes of Chrome visitors — typically niche B2B sites, partially open intranets, or sites whose audience mostly browses with Firefox or Safari — may simply have no exploitable CWV data. In that case, Google falls back on origin-level data (the entire domain) or, if nothing is available, likely ignores the CWV signal for those URLs.

Does dynamic rendering really skew Core Web Vitals?

No, it doesn't skew them — it decouples them from what bots see, which is different. If you serve a server-side rendered (SSR) or statically prerendered version to Googlebot via dynamic rendering, while your real users receive a heavy JavaScript SPA rendered entirely in the browser, it's the performance of that client-side version that counts for your CWV. Googlebot has no influence on these scores.

In practice, this can create a troubling gap: your bot version is fast, well indexed, but your real users suffer from poor LCP and chaotic CLS. The result? Perfect indexing, but ranking penalized by poor CWV. The opposite is rarer but possible: a slow bot version (for example, with unoptimized blocking JS) but a smooth client version due to aggressive lazy-loading — in this case, your CWV will be good even if the bot rendering is laborious. [To be verified]: Google has never published data correlating dynamic rendering and large-scale CWV impact, so all this remains anecdotal from real-world experience.
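
For reference, a dynamic rendering setup usually splits traffic by user agent: crawlers get a prerendered HTML snapshot, humans get the JavaScript app. Below is a minimal Express-style sketch of that routing, with hypothetical helpers (`getPrerenderedHtml`, `BOT_UA_PATTERN`), just to show why only the user branch ever feeds CrUX.

```typescript
import express, { NextFunction, Request, Response } from "express";

// Hypothetical bot detection: match common crawler user agents.
const BOT_UA_PATTERN = /googlebot|bingbot|yandexbot|duckduckbot/i;

const app = express();

app.use(async (req: Request, res: Response, next: NextFunction) => {
  const ua = req.headers["user-agent"] ?? "";
  if (BOT_UA_PATTERN.test(ua)) {
    // Crawlers get a prerendered static snapshot (e.g. from a Rendertron/Prerender cache).
    // This is the version that gets crawled and indexed, but it never feeds CrUX.
    res.send(await getPrerenderedHtml(req.originalUrl));
    return;
  }
  // Real users fall through to the client-side JavaScript app (SPA shell, static assets).
  // Their Chrome sessions are the only thing that determines your Core Web Vitals.
  next();
});

app.listen(3000);

// Hypothetical stub for illustration only; a real setup would hit a prerender cache.
async function getPrerenderedHtml(path: string): Promise<string> {
  return `<!doctype html><html><body>Prerendered snapshot of ${path}</body></html>`;
}
```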

What are the limitations of this field-only approach?

The main issue is that CrUX is not exhaustive. Only Chrome users contribute, which excludes Safari (dominant on iOS), Firefox, Edge (even though it is Chromium-based), and all niche browsers. If your audience is 60% Safari, your CWV reflect only the remaining share of Chrome visits — potentially a bias if the two populations behave differently (e.g., mobile vs desktop, 4G vs fiber).

Another limitation: the 28-day window makes reactive optimization difficult. Fix a critical CLS bug today and you won't see the full effect in Search Console for about a month. In the meantime, your ranking may have suffered. Finally, sites with very low Chrome traffic simply have no URL-level CWV data, depriving them of a differentiation lever against better-resourced competitors.

Attention: Never confuse Lighthouse scores (lab data, controlled synthetic environment) with CWV CrUX (field data, real users). Lighthouse can show 95/100 while your CrUX is in the red, and vice versa. Google ranking uses CrUX, not Lighthouse.

Practical impact and recommendations

Should we abandon dynamic rendering if we want good Core Web Vitals?

Not necessarily. Dynamic rendering remains a pragmatic solution for complex SPA architectures where migrating to SSR or isomorphic rendering would take months of development. But if you use it, accept that your CWV will reflect the client version, so make optimizing that version a priority: code-splitting, image lazy-loading, reducing blocking JavaScript, preloading critical resources, using Service Workers for caching.
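
As a small illustration of the code-splitting point, here is a sketch using a native dynamic import(); the `./charts` module and the element IDs are hypothetical.

```typescript
// Code-splitting with a native dynamic import(): the heavy module is only fetched
// when the user actually asks for it, so it never competes with the LCP resource.
const button = document.querySelector<HTMLButtonElement>("#show-charts");

button?.addEventListener("click", async () => {
  // Any modern bundler (webpack, Vite, Rollup) turns this into a separate chunk;
  // nothing from "./charts" is downloaded before this click.
  const { renderCharts } = await import("./charts"); // hypothetical heavy module
  renderCharts(document.querySelector("#charts-root")!);
});
```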

The classic mistake is to only focus on the version served to bots thinking it will suffice for ranking. False. Bots do not vote for your CWV. It’s your Chrome users, on 3G mobile in suburban Paris or on fiber optic desktop in Bordeaux, who determine whether your LCP is below 2.5 seconds or beyond.

How can I check that my Core Web Vitals are accurately measured from real users?

Check the Search Console (Core Web Vitals section): if data appears, it means you have sufficient Chrome traffic to feed CrUX. You can also query the CrUX API or PageSpeed Insights in field data mode. If no data appears, either your Chrome traffic is too low or your site is too new (CrUX requires a minimum 28-day history).
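
If you prefer to script that check, here is a sketch against the public PageSpeed Insights API (the URL and API key are placeholders), complementing the CrUX API sketch shown earlier; what matters here is simply whether the field-data blocks are present at all.

```typescript
// Minimal sketch: check whether PageSpeed Insights exposes CrUX field data for a URL.
// The URL and API key are placeholders.
const PSI_ENDPOINT = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed";

async function hasFieldData(url: string, apiKey: string): Promise<boolean> {
  const params = new URLSearchParams({ url, strategy: "mobile", key: apiKey });
  const res = await fetch(`${PSI_ENDPOINT}?${params}`);
  const json = await res.json();

  // loadingExperience = URL-level CrUX data, originLoadingExperience = whole-origin fallback.
  // If neither contains metrics, your Chrome traffic is too low or the site is too recent.
  const urlLevel = Boolean(json.loadingExperience?.metrics);
  const originLevel = Boolean(json.originLoadingExperience?.metrics);
  console.log("URL-level field data:", urlLevel, "| origin-level field data:", originLevel);
  return urlLevel || originLevel;
}

hasFieldData("https://www.example.com/", "YOUR_API_KEY").catch(console.error);
```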

To monitor continuously, deploy Google's web-vitals.js (an open-source JavaScript library), which sends real metrics to your analytics stack (Google Analytics 4, Matomo, or a custom endpoint). This lets you cross-reference your own field data with CrUX and catch regressions in near real time, without waiting for CrUX to catch up.
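
Here is a minimal sketch of that instrumentation with the web-vitals library; the `/analytics/vitals` collection endpoint is a placeholder for whatever your analytics stack exposes.

```typescript
import { onCLS, onINP, onLCP, type Metric } from "web-vitals";

// Forward each metric to your own collection endpoint (hypothetical URL).
// sendBeacon survives page unload, so late metrics like CLS are not lost.
function sendToAnalytics(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,     // "LCP" | "INP" | "CLS"
    value: metric.value,   // milliseconds for LCP/INP, unitless score for CLS
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    id: metric.id,         // unique per page load, useful for deduplication
    page: location.pathname,
  });
  (navigator.sendBeacon && navigator.sendBeacon("/analytics/vitals", body)) ||
    fetch("/analytics/vitals", { method: "POST", body, keepalive: true });
}

onLCP(sendToAnalytics);
onINP(sendToAnalytics);
onCLS(sendToAnalytics);
```

With Google Analytics 4 you would send the same fields as custom event parameters instead of posting to a custom endpoint.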

What concrete actions can I take to align bots and users on performance?

The ideal is to serve the same version to bots and users, eliminating any divergence. If you are using dynamic rendering, gradually migrate to SSR (Next.js, Nuxt, SvelteKit) or Static Site Generation (SSG) with partial hydration. If a full-stack migration is out of budget, at least prioritize strategic pages (home page, categories, top product pages).
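
To make the SSG option concrete, here is a minimal sketch of a statically generated page using the Next.js pages router; the route, the `fetchProduct` helper, and the revalidation interval are hypothetical choices, not a prescription.

```typescript
// pages/products/[slug].tsx: a hypothetical product page, statically generated at build time.
import type { GetStaticPaths, GetStaticProps } from "next";

type Product = { slug: string; name: string; description: string };

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: [{ params: { slug: "example-product" } }], // hypothetical slugs, usually pulled from a CMS
  fallback: "blocking",                             // unknown slugs are rendered server-side on first hit
});

export const getStaticProps: GetStaticProps<{ product: Product }> = async ({ params }) => {
  const product = await fetchProduct(String(params?.slug)); // hypothetical data fetcher
  return { props: { product }, revalidate: 3600 };          // regenerate at most once per hour
};

export default function ProductPage({ product }: { product: Product }) {
  // The same fully rendered HTML reaches Googlebot and real users,
  // so what gets indexed and what CrUX measures finally describe the same page.
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </main>
  );
}

// Hypothetical stub for illustration only.
async function fetchProduct(slug: string): Promise<Product> {
  return { slug, name: "Example product", description: "A statically generated product page." };
}
```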

In the meantime, optimize the client version as if your ranking depended on it — because it does. Use a CDN to reduce latency, compress your assets (WebP, AVIF for images; Brotli for text), defer loading non-critical scripts, and eliminate layout shifts by reserving space for images/ads from the initial render.

  • Check for CrUX data for your domain in PageSpeed Insights or Search Console.
  • Deploy web-vitals.js to monitor CWV in real-time from the user perspective and catch regressions before CrUX publication.
  • If you use dynamic rendering, audit the client version (the one that users receive) with Lighthouse and WebPageTest, not just the bot version.
  • Prioritize mobile: CrUX mobile weighs more in ranking, and it’s often where scores are most degraded (slow networks, low CPUs).
  • Plan a gradual migration to SSR or SSG if your current SPA architecture generates mediocre CWV despite optimizations.
  • Document the gap between lab data (Lighthouse) and field data (CrUX) to explain to stakeholders why a good Lighthouse score doesn’t guarantee good ranking.
Core Web Vitals measure the actual experience of Chrome users, not of bots. If you use dynamic rendering, it's the client version that determines your CWV scores and thus part of your ranking. Technical optimization of CWV — compression, lazy-loading, code-splitting, SSR — can be complex to orchestrate alone, especially on modern JS architectures. A specialized SEO agency in web performance can help you diagnose bottlenecks, prioritize technical projects, and reconcile indexing and user experience without a complete overhaul.

❓ Frequently Asked Questions

Do Googlebot crawlers contribute to my site's Core Web Vitals data?
No, never. Core Web Vitals come solely from the Chrome UX Report, fed by real Chrome users. Bots generate no CWV metrics usable for ranking.
If my site uses dynamic rendering, which version counts for Core Web Vitals?
The version served to real users (client-side) is what counts, not the one served to bots. Your CWV scores will therefore reflect the performance of the JavaScript version rendered in the browser.
My site gets little Chrome traffic: can I still have CWV data?
If your Chrome traffic is too low, you won't have URL-level CWV data in CrUX. Google may fall back on origin-level data (the whole domain) or ignore the CWV signal for those URLs.
Why are my Lighthouse scores good while my Search Console CWV are poor?
Lighthouse measures in a controlled lab environment (lab data), whereas the Search Console CWV come from CrUX (field data, real users). Real conditions (network, device, cache) often differ radically from the lab.
How long does it take for a CWV optimization to show up in Google's data?
CrUX reflects a rolling window of about 28 days. A technical fix deployed today will only appear fully in Search Console after about a month.
🏷 Related Topics
Crawl & Indexing · JavaScript & Technical SEO · Links & Backlinks · Web Performance

