
Official statement

Googlebot uses a recent version of Chrome, called an evergreen browser, to render web pages in a way that reflects what users would see on their own browsers.
🎥 Source video

Extracted from a Google Search Central video (EN · published 08/08/2019 · duration 2:41 · 3 statements).
Watch on YouTube (1:07) →
Other statements from this video (2):
  1. 1:37 Are JavaScript redirects really equivalent to server-side 301s for Google?
  2. 2:08 Should you really favor automatic URL discovery over sitemaps?
TL;DR

Google claims that Googlebot uses a recent version of Chromium to render web pages, similar to an evergreen browser. This means that JavaScript, modern CSS, and recent web features are supposed to be interpreted correctly during crawling. However, be aware: this promise does not guarantee instant rendering or an unlimited crawl budget for your heavy JS resources.

What you need to understand

What is an evergreen browser and why does it matter?

An evergreen browser updates automatically without user intervention. Chrome, Firefox, or Edge have been in this category for several years. Google announces that Googlebot relies on recent Chromium, which theoretically supports modern web standards: ES6+, JavaScript modules, fetch API, CSS Grid, etc.

For SEOs, this changes the game compared to the years when Googlebot was stuck on a Chrome 41 base. Sites built with JS frameworks like React, Vue, or Angular are now expected to be crawlable and indexable without special tricks.
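To make the "modern features" claim concrete, here is a minimal sketch of the ES6+ syntax a recent evergreen Chromium handles natively, and which the old Chrome 41 renderer could not parse (the product data is purely illustrative):

```javascript
// ES6+ features an evergreen Chromium renders natively, which the
// legacy Chrome 41 renderer did not support: classes, destructuring,
// arrow functions, template literals, async/await.

class Product {
  constructor(name, price) {
    this.name = name;
    this.price = price;
  }
}

const products = [new Product('shirt', 20), new Product('hat', 10)];

// Destructuring + arrow function + template literal
const labels = products.map(({ name, price }) => `${name}: $${price}`);

// async/await (Promise-based APIs like fetch follow the same pattern)
async function total(items) {
  return items.reduce((sum, p) => sum + p.price, 0);
}

total(products).then((t) => console.log(labels.join(', '), '| total:', t));
```

A page shipping this syntax unbundled would have rendered blank on the Chrome 41 stack; on recent Chromium it parses without transpilation.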

Does rendering really happen like it does in a user's browser?

Google claims that rendering reflects what a typical internet user sees. In practice, several differences persist. Googlebot does not load resources blocked by robots.txt, does not always execute tracking or advertising scripts, and may have different timeouts than a real browser.

The bot also does not use an identical viewport each time: it simulates both a mobile and a desktop device, but the exact dimensions fluctuate. Complex CSS animations, user interactions (hover, scroll events), or poorly configured lazy-loading can thus cause issues, even if the underlying engine is Chromium.

What is the difference between raw HTML crawl and JavaScript rendering?

Googlebot first performs a classic HTML crawl, then queues pages requiring JavaScript for deferred rendering. This two-step process introduces a delay that can range from a few hours to several days, or even weeks on low-authority sites.

If your main content only appears after JS execution, you risk delayed or incomplete indexing. Links discovered only after JS rendering are not always followed immediately, which fragments crawling and dilutes the allocated budget.
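The link-fragmentation problem can be made visible with a quick check: extract the `<a href>` targets from the raw HTML and compare them with the rendered version. This is a simplified sketch (a regex extractor, not a real parser, with made-up HTML):

```javascript
// Sketch: extract <a href> targets from raw HTML with a simple regex,
// then flag URLs that only appear after JS rendering. Those links are
// invisible to the first, HTML-only crawl pass.

function extractLinks(html) {
  const links = [];
  const re = /<a\s[^>]*href=["']([^"']+)["']/gi;
  let m;
  while ((m = re.exec(html)) !== null) links.push(m[1]);
  return links;
}

// Hypothetical page: the raw HTML ships an empty #app container,
// the rendered HTML contains a JS-injected link.
const rawHtml = '<nav><a href="/home">Home</a></nav><div id="app"></div>';
const renderedHtml =
  '<nav><a href="/home">Home</a></nav>' +
  '<div id="app"><a href="/products">Products</a></div>';

const rawLinks = new Set(extractLinks(rawHtml));
const jsOnlyLinks = extractLinks(renderedHtml).filter((u) => !rawLinks.has(u));

console.log('Discovered only after JS rendering:', jsOnlyLinks);
// -> ['/products']
```

Every URL in that "JS-only" bucket waits for the deferred rendering pass before Googlebot can even queue it for crawling.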

  • Googlebot uses recent Chromium, but JS rendering remains deferred and subject to server resource constraints.
  • Modern web features (ES6, CSS Grid, Fetch API) are theoretically supported, provided that resources are accessible.
  • The viewport and execution context differ from a real browser: no systematic third-party cookies, no user interactions, specific timeouts.
  • The delay between HTML crawl and JS rendering can negatively impact the freshness of indexing and the discovery of internal links.
  • Resources blocked by robots.txt or inaccessible (poorly configured CORS, 3xx redirects on JS/CSS) prevent complete rendering, even with the evergreen Chromium.

SEO Expert opinion

Does this statement align with field observations?

Yes and no. Google has indeed modernized its rendering stack — tests show that Chromium 109+ is now used in most cases. Modern JS frameworks work better than before, that’s undeniable. But claiming that Googlebot "reflects what users would see" remains a misleading approximation.

In practice, JavaScript rendering delays create indexing lag. A site that relies heavily on client-side rendering (pure CSR) often experiences several days of delay before the rendered content appears in the index. Server logs and Search Console confirm this pattern: HTML crawl arrives quickly, rendering follows much later, or may never occur on pages deemed low priority.

What nuances should be added to this statement?

Google never specifies how much CPU time or how long a wait is allocated to rendering each page. If your JS bundle is 2 MB and takes 8 seconds to execute, there is no guarantee Googlebot will wait for it to finish. The exact timeout remains unconfirmed, as Google declines to provide public figures.

Another point: JavaScript redirections (window.location, pushState) are not always followed as Chrome would do in a user’s hand. Links dynamically generated after interaction (click, infinite scroll) may completely escape the bot. And if your JS relies on third-party cookies or pre-filled Web Storage, Googlebot’s rendering will start from a blank state.

In which cases does this rule not fully apply?

Sites with low crawl budgets or those hosted on slow servers will see JS rendering deprioritized, or even skipped altogether. Google allocates its resources based on domain authority, historical content freshness, and user demand. A personal blog in React without solid backlinks will wait a long time for each page to be rendered.

The same goes for pages protected by login or paywall: Googlebot can crawl the initial HTML, but if the JS requires an authenticated session to display content, rendering will remain partial. Poorly architected Single Page Apps (SPAs) — URLs without pre-rendered HTML state on the server side — remain a minefield despite evergreen Chromium.

Caution: Never block CSS or JavaScript files via robots.txt. Even with recent Chromium, Googlebot cannot properly render a page if the necessary resources are inaccessible. This classic mistake literally kills the indexing of JS-heavy sites.
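As an illustration of that caution (the paths here are hypothetical), compare a robots.txt that sabotages rendering with one that keeps rendering resources reachable:

```txt
# BAD: hides rendering resources from Googlebot
# User-agent: *
# Disallow: /assets/js/
# Disallow: /assets/css/

# GOOD: keep JS/CSS reachable, restrict only what truly must stay private
User-agent: *
Allow: /assets/js/
Allow: /assets/css/
Disallow: /admin/
```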

Practical impact and recommendations

What concrete steps should be taken to optimize Googlebot rendering?

Start by testing your site in Search Console using the "URL Inspection Tool". Compare the raw HTML received during crawling and the rendered version. If critical elements (titles, content, internal links) only appear after rendering, you are vulnerable to indexing delays.
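That raw-vs-rendered comparison can be scripted before reaching for Search Console. A minimal sketch (the helper and the markers are assumptions, adapt them to your templates): check whether your critical markers already exist in the HTML an HTML-only crawler receives.

```javascript
// Sketch (assumed helper): verify that critical markers are already
// present in the raw HTML, before any JavaScript runs. Anything
// reported as missing depends on the deferred rendering pass.

function missingCriticalMarkers(rawHtml, markers) {
  return markers.filter((m) => !rawHtml.includes(m));
}

// Hypothetical raw HTML for a client-side-rendered shop page:
const rawHtml =
  '<html><head><title>Shop</title></head>' +
  '<body><div id="app"></div></body></html>';

const markers = ['<title>Shop</title>', '<h1>', 'href="/products"'];

console.log(missingCriticalMarkers(rawHtml, markers));
// -> ['<h1>', 'href="/products"'] : content that only exists after rendering
```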

Implement Server-Side Rendering (SSR) or static generation (SSG) for priority content. Next.js, Nuxt, Gatsby, or custom solutions allow you to send pre-rendered HTML to Googlebot, eliminating dependence on deferred JS rendering. If SSR/SSG is too heavy, opt for progressive hydration: the initial HTML contains content, while JS enhances interactivity afterward.
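The SSR idea itself is framework-agnostic. This sketch (hypothetical data and markup, not the API of Next.js or Nuxt) shows the principle: the server builds the full HTML string before responding, so the first HTML-only crawl pass already sees the content and the internal links.

```javascript
// Minimal SSR sketch: all critical content and internal links exist in
// the HTML response itself; JS only hydrates interactivity afterward.

function renderProductPage({ title, description, related }) {
  const links = related
    .map((r) => `<a href="${r.url}">${r.label}</a>`)
    .join('');
  return `<!doctype html>
<html>
  <head><title>${title}</title></head>
  <body>
    <h1>${title}</h1>
    <p>${description}</p>
    <nav>${links}</nav>
    <script src="/assets/js/hydrate.js" defer></script>
  </body>
</html>`;
}

const html = renderProductPage({
  title: 'Blue running shoes',
  description: 'Lightweight trail shoes.',
  related: [{ url: '/shoes/red', label: 'Red running shoes' }],
});

// Critical content is present without executing any JavaScript:
console.log(html.includes('<h1>Blue running shoes</h1>')); // true
```

With this shape, deferred JS rendering becomes a progressive enhancement rather than a prerequisite for indexing.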

What mistakes should be absolutely avoided?

Never block JS/CSS resources in robots.txt — this is the number one mistake that sabotages modern rendering. Also check your HTTP headers: if your JS files are served with Cache-Control: no-store or restrictive CORS, Googlebot may fail to load them.

Avoid bloated JS bundles. A 3 MB file that takes 10 seconds to parse can exceed Googlebot’s patience threshold. Code-splitting, smart lazy-loading, and tree-shaking become essential. Be cautious of frameworks that generate thousands of nested divs — a complex DOM slows down rendering even with recent Chromium.

How can I check if my site is compliant and rendered correctly?

Use Screaming Frog in JavaScript mode to simulate rendering with Chromium. Compare the results with a raw HTML crawl: differences will show you what Googlebot sees in two stages. Monitor server logs to identify delays between initial crawl and rendering.

In Search Console, regularly inspect strategic pages and check the rendered source code. If internal links or content are missing, it’s a red flag. Core Web Vitals must also remain green: an LCP > 4s may discourage Googlebot from fully rendering the page, despite evergreen Chromium.

  • Test each critical page with the URL Inspection Tool in Search Console and compare raw HTML vs. rendered
  • Implement SSR or SSG for priority content (categories, product sheets, pillar articles)
  • Never block JS/CSS files via robots.txt, and check CORS and Cache-Control headers
  • Reduce the size of JavaScript bundles: aim for < 500 KB initially, code-split the rest
  • Monitor Core Web Vitals: LCP < 2.5s, CLS < 0.1, INP < 200ms
  • Crawl your site with Screaming Frog in JS mode and compare with a classic HTML crawl
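The Core Web Vitals thresholds from the checklist can be wired into a simple monitoring check. The metric values below are hypothetical; feed in your real field or lab data:

```javascript
// Sketch: compare measured Core Web Vitals against the thresholds
// recommended above (LCP < 2.5s, CLS < 0.1, INP < 200ms).

const THRESHOLDS = { lcp: 2500, cls: 0.1, inp: 200 }; // ms, score, ms

function checkVitals({ lcp, cls, inp }) {
  return {
    lcp: lcp < THRESHOLDS.lcp,
    cls: cls < THRESHOLDS.cls,
    inp: inp < THRESHOLDS.inp,
  };
}

// Hypothetical measurements for one page:
const result = checkVitals({ lcp: 3100, cls: 0.05, inp: 180 });
console.log(result); // { lcp: false, cls: true, inp: true }
```

A failing LCP like this one is exactly the kind of signal that can make Googlebot deprioritize full rendering of the page.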
Googlebot uses recent Chromium, certainly, but JS rendering remains deferred and conditioned by your technical architecture. SSR/SSG eliminates the risk, lightweight bundles speed up rendering, and a robots.txt that leaves JS/CSS open guarantees access to resources.

These optimizations require sharp expertise in web architecture and performance. If your team lacks the resources or specific skills, enlisting an SEO agency specialized in JavaScript environments may prove wise to secure indexing without compromising user experience.

❓ Frequently Asked Questions

Does Googlebot support ES6 and native JavaScript modules?
Yes. Since Googlebot uses recent Chromium (109+), it supports ES6, ES modules, async/await, and most modern features. Still, make sure your polyfills don't block rendering if you target exotic compatibility.
Does JavaScript rendering happen in real time during the crawl?
No. Googlebot first crawls the raw HTML, then queues pages that require JS for deferred rendering. The delay can range from a few hours to several weeks depending on crawl budget and site authority.
Should I still use Server-Side Rendering if Googlebot understands JavaScript?
Yes, strongly recommended. SSR eliminates the deferred-rendering delay, guarantees that content is crawled immediately, and improves Core Web Vitals. Evergreen JS does not exempt you from an SEO-friendly architecture.
Can I block CSS files in robots.txt without impacting rendering?
No, never. Even with recent Chromium, blocking CSS or JS via robots.txt prevents Googlebot from rendering the page correctly. The bot will only see a bare HTML skeleton with no styling or dynamic content.
How do I know if Googlebot managed to render my page completely?
Use the URL Inspection tool in Search Console and check the "Rendered page" view. Compare it with what you see in your browser. If critical elements are missing (links, content, meta tags), rendering failed or was incomplete.