Official statement
Google claims to use an evergreen version of Chrome for rendering, updated a few weeks after each stable release. Essentially, your sites are crawled with a modern browser that supports ES6, JavaScript modules, and recent APIs. The catch? This lag of a few weeks can create discrepancies between what a visitor sees and what Googlebot indexes — especially if you're deploying experimental features without polyfills.
What you need to understand
What does "Evergreen Chrome" mean in the context of Google rendering?
An evergreen browser updates automatically without user intervention. Unlike older versions of Chrome that are frozen in time, Googlebot uses a version that continuously evolves, aligned with the public releases of Chrome.
Martin Splitt specifies that this version follows Chrome's stable updates with a lag of a few weeks. If Chrome 120 is released on December 5, Googlebot will likely switch to this version by the end of December or early January. It's not instantaneous, but it's predictable.
Why does this "few weeks" lag exist?
Google doesn't want to break the indexing of millions of sites with every micro-evolution of Chrome. The lag allows for testing the stability of the rendering engine in a controlled environment before mass deployment on Search infrastructure.
The rendering pipeline also includes automatic retry mechanisms: if a page fails during rendering, Googlebot retries several times before giving up. A failed render does not mean zero indexing; the system is built to be resilient.
Which JavaScript features are actually supported by Googlebot?
With an evergreen version of Chrome, all recent stable APIs are supported: ES6+, fetch, IntersectionObserver, JavaScript modules, async/await, Web Components v1, and even most modern CSS features like grid and custom properties.
On the other hand, experimental and Origin Trial APIs are generally inactive on Googlebot. If your site relies on a bleeding-edge feature that is still behind a flag, it won't run during rendering.
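To make this concrete, here is a small hypothetical snippet built only on features named above (async/await, fetch, IntersectionObserver) that an evergreen Chrome, and therefore Googlebot, can execute natively without transpilation. The endpoint and the selectors are illustrative.

```js
// Illustrative sketch: only stable APIs listed above are used, so an evergreen
// Chrome (and therefore Googlebot) executes this without transpilation.
// The /api/products endpoint and the selectors are hypothetical.
async function renderProducts() {
  const response = await fetch('/api/products');
  const products = await response.json();
  document.querySelector('#products').innerHTML = products
    .map((product) => `<li>${product.name}</li>`)
    .join('');
}
renderProducts();

// IntersectionObserver, supported natively, lazy-loads below-the-fold images.
const lazyImages = document.querySelectorAll('img[data-src]');
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.src = entry.target.dataset.src;
      obs.unobserve(entry.target);
    }
  }
});
lazyImages.forEach((img) => observer.observe(img));
```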
- Evergreen Chrome: Googlebot tracks stable Chrome releases with a delay of a few weeks
- Modern JavaScript support: ES6+, modules, async/await, fetch, IntersectionObserver are fully functional
- Automatic retry: in case of rendering failure, the system retries before giving up
- No experimental APIs: flagged features or those in Origin Trial are not available on the bot side
- Predictability: you can anticipate the bot's capabilities by following the stable Chrome release schedule
SEO Expert opinion
Is this statement consistent with real-world observations?
Overall, yes. Regular tests on Search Console and through comparative crawls show that Googlebot does indeed execute modern JavaScript without a hitch. Sites using React, Vue, or Angular with standard Babel transpilation index without issue.
But beware: the "few weeks" lag is vague. Splitt does not give a precise number (two weeks? six? eight?), and this lack of transparency complicates planning for sites that adopt new features quickly. [To be verified]: Google does not appear to publish a public changelog of the Chrome versions deployed on Googlebot, which would make this much easier to track.
What nuances should be added to this statement?
First nuance: evergreen does not mean instantaneous. If you deploy a SPA that leverages an API that appeared in Chrome 119 while Googlebot is still on Chrome 117, your content may not display correctly during rendering.
Second nuance: the automatic retry system is good news, but it has its limits. If your JavaScript consistently fails (syntax error, blocked resource, timeout), even after multiple attempts, Googlebot will give up. The retry does not fix bugs — it only compensates for network hiccups or server load.
In what cases does this rule not apply?
If your site uses conditional polyfills detecting the user-agent, you risk serving different code to Googlebot than to real visitors. Some frameworks detect "legacy" browsers and load appropriate code — but if Googlebot is detected as a modern Chrome, it will receive the optimized bundle, which may sometimes lack fallbacks.
Another edge case: sites that perform aggressive feature detection and display content only if a very recent API is available. If this API is not yet in Googlebot's Chrome version, the content remains invisible. Let's be honest: this is rare, but it happens on sites heavily oriented towards PWA or experimental WebAssembly.
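As a hedged illustration of that edge case, the sketch below contrasts gating critical content behind a check for a very recent API with a progressive-enhancement version. Here, showOpenFilePicker is just an example of a newer API, and the helper functions are hypothetical stand-ins for your real rendering code.

```js
// Hypothetical helpers standing in for your real rendering code.
function renderContent() {
  document.querySelector('#app').textContent = 'Critical content rendered.';
}
function enableAdvancedFeatures() {
  console.log('Optional enhancements enabled.');
}

// Anti-pattern: critical content is gated behind a very recent API check.
// If Googlebot's Chrome build predates the API, nothing renders at all.
// if ('showOpenFilePicker' in window) { renderContent(); }

// Safer: always render the critical content, then enhance when the API exists.
renderContent();
if ('showOpenFilePicker' in window) {
  enableAdvancedFeatures();
}
```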
Practical impact and recommendations
What concrete steps should you take to ensure optimal compatibility?
First, audit your JavaScript stack. If you use Babel or another transpiler, configure it to target roughly the last two stable Chrome versions, since Googlebot lags stable by a few weeks. Avoid transpiling down to ES5 if you don't need to support IE11: it needlessly bloats your bundles.
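A minimal sketch of such a configuration, assuming @babel/preset-env; the browserslist query is an assumption meant to approximate the "few weeks behind stable" window and should be adjusted to your real audience.

```js
// babel.config.js — minimal sketch assuming @babel/preset-env is installed.
module.exports = {
  presets: [
    [
      '@babel/preset-env',
      {
        // Browserslist query approximating "current Chrome minus a couple of
        // versions"; broaden it if you also need to cover older visitors.
        targets: 'last 2 Chrome versions',
        // Apply targeted fixes instead of whole-feature transpilation.
        bugfixes: true,
      },
    ],
  ],
};
```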
Next, systematically test with the URL inspection tool in Search Console after each major deployment. This tool shows you exactly what Googlebot managed to render, including console errors and blocked resources. If rendering fails, you’ll see why.
What mistakes should you absolutely avoid?
Never block your JavaScript or CSS resources in robots.txt. This is the classic mistake that prevents rendering even when Googlebot supports your code: even with a modern Chrome, if the JS files are disallowed, the bot cannot execute anything.
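For reference, a robots.txt sketch with purely illustrative paths: the point is simply that the directories serving your JS and CSS bundles must remain crawlable.

```
# Illustrative paths only; adapt to your own asset structure.
# Everything is crawlable by default: the Allow lines matter only if broader
# Disallow rules exist elsewhere in the file.
User-agent: Googlebot
Allow: /assets/js/
Allow: /assets/css/

# A rule like the following would block rendering entirely:
# Disallow: /assets/js/
```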
Also avoid relying solely on bleeding-edge APIs without a fallback. If your site displays critical content via an API that is still a TC39 stage 3 proposal and not yet in stable Chrome, have a plan B: rendering will fail silently, and your content will stay out of the index.
How can I check if my site is compliant with Google rendering?
Use Puppeteer or Playwright in headless mode with a Chrome build one or two versions behind current stable to simulate what Googlebot sees. If your content displays correctly, you're probably safe. Otherwise, debug before Google discovers the issue.
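A minimal Puppeteer sketch along those lines; the URL and the user-agent string are placeholders, and the Chrome build is whichever Chromium your local Puppeteer installation ships with, so pin the version if you want to match a specific release.

```js
// Minimal rendering check with Puppeteer (npm install puppeteer assumed).
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Approximation of Googlebot's evergreen user-agent; the Chrome version is illustrative.
  await page.setUserAgent(
    'Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; ' +
      'Googlebot/2.1; +http://www.google.com/bot.html) Chrome/120.0.0.0 Safari/537.36'
  );

  // Surface the same signals the URL inspection tool reports: JS errors and console output.
  page.on('pageerror', (error) => console.error('Page error:', error.message));
  page.on('console', (message) => console.log('Console:', message.text()));

  await page.goto('https://www.example.com/', { waitUntil: 'networkidle0' });
  const html = await page.content();
  console.log(`Rendered HTML: ${html.length} characters`);

  await browser.close();
})();
```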
Also activate JavaScript error monitoring via Sentry or an equivalent tool, including the Googlebot user-agent in your filters. If the bot generates errors that real users don’t see, you’ll know there’s a specific compatibility issue.
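A sketch using the Sentry browser SDK, assuming @sentry/browser and a placeholder DSN: tagging Googlebot events in beforeSend lets you isolate them in the dashboard, and the same idea applies to any equivalent tool.

```js
import * as Sentry from '@sentry/browser';

Sentry.init({
  dsn: 'https://examplePublicKey@o0.ingest.sentry.io/0', // placeholder DSN
  beforeSend(event) {
    // Tag events coming from Googlebot so they can be isolated in dashboards
    // and alerting rules, separately from errors hit by real users.
    if (/Googlebot/i.test(navigator.userAgent)) {
      event.tags = { ...event.tags, crawler: 'googlebot' };
    }
    return event;
  },
});
```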
- Configure Babel to target at least the last two stable Chrome versions
- Test each major deployment with the Search Console URL inspection tool
- Never block JS/CSS in robots.txt — check your file today
- Add fallbacks for any recent API not yet stabilized in Chrome
- Implement JS error monitoring filtered on Googlebot user-agent
- Simulate rendering locally with Puppeteer and a Chrome build one or two versions behind stable
❓ Frequently Asked Questions
Does Googlebot use exactly the same Chrome version as the one installed on my computer?
If my JavaScript crashes during rendering, does Googlebot automatically retry?
Are experimental JavaScript APIs supported by Googlebot?
Do I still need to transpile my JavaScript code for Googlebot in 2025?
How can I find out which Chrome version Googlebot is currently using for my site?