Official statement
Other statements from this video
- 1:02 Are JavaScript links really crawlable by Google if the code is clean?
- 3:43 Are JavaScript redirects really as effective as 301s for SEO?
- 7:17 Should you ignore timeout errors in the Mobile-Friendly Test?
- 8:59 Can a 2.7 MB JavaScript bundle really get through Google without problems?
- 10:05 Should you really abandon complete unbundling of your JavaScript files?
- 14:28 Why does your structured data disappear intermittently in Search Console?
- 18:27 Does Googlebot still crawl your site with an obsolete Chrome 41 user-agent?
- 24:22 Should you really avoid multiple H1 tags on the same page?
- 36:57 Can renaming a URL parameter really force Google to reindex your duplicate pages?
- 39:40 Should you really abandon dynamic rendering for JavaScript indexing?
- 41:20 Why does Google ignore my structured FAQ markup in the SERPs?
- 49:18 Should you really fix every technical imperfection on a site that performs well in SEO?
Rendertron first runs the page through Puppeteer to generate a complete rendering, then removes all JavaScript from the final HTML sent to the crawlers. Scripts like Google Analytics run during rendering but disappear from the source code. In practical terms: bots receive a clean HTML snapshot without any <script> tags, which raises concerns for tracking and the detection of certain technical implementations.
What you need to understand
What exactly is Rendertron and why is Google discussing it?
Rendertron is an open-source tool developed by Google to facilitate server-side rendering (SSR) of JavaScript applications. It is a middleware solution that intercepts bot requests, executes the page in a headless browser (Puppeteer), and then returns the generated HTML.
The important point highlighted by Martin Splitt is that Rendertron actively removes all JavaScript from the final document. In other words, the bot receives only a static HTML file devoid of <script> tags, even though these scripts did run during the rendering phase. This radical approach prevents any attempt to execute JS on the bot side, which in practice makes the HTML lighter and potentially faster to index.
What is the exact rendering lifecycle with Rendertron?
The process occurs in four distinct stages. First, the bot sends its request to the server — Googlebot's User-Agent, for example. Next, Rendertron detects that it is a crawler and triggers Puppeteer to load the page in a full headless browser.
During this phase, all JavaScript executes normally: API calls, React/Vue hydration, analytics events, tracking pixels. Once the DOM stabilizes, Rendertron captures the resulting HTML, then removes all <script> tags before serving this snapshot to the bot.
The bot thus receives HTML that reflects the final state of the page after JS execution, but without any trace of executable code. It is a static snapshot, not an interactive page.
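The stripping step can be sketched in a few lines of Node-style JavaScript. Rendertron actually removes the tags via DOM manipulation inside the headless page; this regex version only illustrates the outcome and is not robust HTML parsing for production use.

```javascript
// Minimal sketch of Rendertron's final step: serve the rendered DOM
// minus every <script> tag. Rendertron does this through the DOM in the
// headless browser; the regex here is for illustration only.
function stripScripts(renderedHtml) {
  return renderedHtml
    // remove paired <script>…</script> blocks (external or inline)
    .replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, '')
    // remove any stray unclosed or self-closing script tags
    .replace(/<script\b[^>]*\/?>/gi, '');
}

const rendered = `<html><head>
  <script src="https://www.googletagmanager.com/gtag/js"></script>
</head><body>
  <h1>Product page</h1>
  <script>console.log("hydration done");</script>
</body></html>`;

const snapshot = stripScripts(rendered);
console.log(snapshot.includes('<script')); // false: the bot sees content only
```

The visible content (the H1, the product text) survives; only the executable and tracking code disappears from the snapshot.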
What are the implications for third-party scripts and tracking?
Scripts like Google Analytics, GTM, Facebook Pixel, or any other tracking tags run during the Puppeteer rendering. They send their HTTP requests normally — so your analytics events may theoretically trigger during bot rendering.
But these scripts disappear from the final HTML. If you inspect the source code served to the bot (via a Fetch as Google test or a headless crawl), you will see no <script src="analytics.js"> tag. This complicates manual verification and can skew some automated audits that parse HTML to detect installed tools.
- Rendertron executes the JS then removes it — the bot sees the result, not the code
- Third-party scripts run during rendering but are invisible in the final HTML
- The snapshot is static: no interactivity, no JS events on the bot side
- Indexing relies on the final DOM, not on the bot's ability to execute JavaScript
- This approach simplifies crawling but masks some technical implementations from manual audits
SEO Expert opinion
Is this statement consistent with field observations?
Yes and no. Rendertron is a specific tool, not the rendering solution used by Googlebot in production. Google currently crawls with its own Chromium-based engine, capable of natively executing JavaScript without going through a middleware like Rendertron.
What Splitt is referring to applies to sites that choose to implement Rendertron themselves to serve pre-rendered HTML to bots — a practice still common among certain JS-heavy sites that want to ensure full control over the rendering. But be careful: modern Googlebot doesn't need Rendertron to index React or Vue. It does it directly.
The complete removal of JavaScript on the Rendertron side is documented in the official repo. It avoids double executions and bugs related to client hydration after a partial SSR. On the native Googlebot side, however, JS remains present in the crawled HTML — it is simply executed in Google's internal headless browser.
What nuances should be added to this statement?
First nuance: not all sites go through Rendertron. If your React application uses Next.js with native SSR or Nuxt.js, you are already serving complete HTML with client-side JS hydration — Rendertron is never involved. Therefore, Splitt's statement only applies to manual implementations of Rendertron as a proxy.
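For those manual implementations, the core of the setup is a proxy decision: known crawlers get routed to the pre-render service, everyone else gets the normal JS application. A minimal sketch of that decision, with an illustrative (not exhaustive) crawler list:

```javascript
// Sketch of the routing decision a hand-rolled Rendertron proxy makes.
// The UA token list is illustrative; real setups usually match more bots.
const CRAWLER_UA = /googlebot|bingbot|yandex|duckduckbot|baiduspider/i;

function renderTarget(userAgent) {
  return CRAWLER_UA.test(userAgent || '') ? 'rendertron' : 'app';
}

console.log(renderTarget('Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)')); // rendertron
console.log(renderTarget('Mozilla/5.0 (Macintosh) Safari/605.1')); // app
```

This is exactly the "dynamic rendering" pattern: two render paths keyed on the User-Agent, which is why the whole discussion only concerns sites that opted into it.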
Second point: saying that "scripts execute but disappear" can be misleading. The events triggered during rendering (analytics, pixels) are indeed sent to third-party servers — so your stats may count bot visits if you don’t filter User-Agents. But these hits do not reflect actual user behavior. [To verify]: no public data from Google specifies whether these bot analytics events are automatically filtered by GA4 or if this needs to be managed manually.
Third nuance: the HTML without JS is lighter, certainly, but it can also mask implementation errors. If a critical script fails during Puppeteer rendering, the HTML snapshot will be incomplete — and without the <script> tags in the source code, diagnosing the problem becomes more challenging.
In which cases does this approach pose problems?
If you are using lazy-loading JS for critical content — for instance, loading products via fetch after the onload event — Rendertron might capture the snapshot too early. As a result: the bot sees an empty or incomplete page. You then need to configure a rendering delay or a "page ready" signal to ensure all content is present before capturing.
Another problematic scenario: sites that rely on critical inline scripts to inject schema.org or structured data. If these <script type="application/ld+json"> tags are removed by Rendertron, Google will never see your structured data — unless you directly inject them into the server-side HTML before rendering.
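Injecting the JSON-LD server-side, before any bot-side rendering, can be sketched as follows. The helper name and the product object are illustrative; the point is that the structured data lives in the server HTML rather than being added by client JS:

```javascript
// Hypothetical sketch: insert a JSON-LD block into the server-rendered
// HTML head so structured data never depends on client-side JavaScript.
function injectJsonLd(html, data) {
  const tag = `<script type="application/ld+json">${JSON.stringify(data)}</script>`;
  return html.replace('</head>', `${tag}\n</head>`);
}

const page = '<html><head><title>Product</title></head><body></body></html>';
const withSchema = injectJsonLd(page, {
  '@context': 'https://schema.org',
  '@type': 'Product',
  name: 'Example product',
});

console.log(withSchema.includes('application/ld+json')); // true
```

Whether a given Rendertron configuration preserves `application/ld+json` blocks should be verified against its documentation; injecting them server-side removes the dependency either way.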
Practical impact and recommendations
Should you implement Rendertron on your site in 2025?
Let’s be honest: most sites no longer need Rendertron. Googlebot has been executing JavaScript natively for years, and modern frameworks (Next.js, Nuxt, SvelteKit) offer SSR or SSG out-of-the-box. Rendertron remains relevant for ultra-specific cases: sites with complex legacy SPAs, need for full control over bot rendering, or extreme performance constraints.
If you are starting from scratch on a React/Vue/Angular project, prioritize a native SSR framework instead of an external middleware. You gain maintainability, compatibility with Core Web Vitals, and avoid pitfalls related to JavaScript removal.
How to check that bot rendering meets your expectations?
First step: use the Search Console URL Inspection Tool and compare the "HTML received" vs "HTML rendered". If you are going through Rendertron, the rendered HTML should contain no <script> tags but all visible content should be present. A discrepancy between the two versions signals a rendering issue.
Second check: crawl your site with Screaming Frog, first with JavaScript rendering enabled, then in text-only mode. Both crawls should return the same titles, descriptions, H1s, and main content. If the crawl without JS is empty or incomplete, your SSR (Rendertron or otherwise) is not working correctly.
Third verification: manually inspect the source code of a strategic page using curl with a Googlebot User-Agent. If you see <script> tags, you are not using Rendertron — or it is poorly configured. If they have disappeared, check that your structured data is properly injected server-side.
What errors should you absolutely avoid with Rendertron?
Error number one: not testing the rendering delay. By default, Puppeteer waits for the load event, but some content loads later through timers or observers. Configure an appropriate waitUntil (networkidle0, or a custom signal) to capture the full DOM.
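The "custom signal" option can be sketched with a small poller. This assumes your app exposes some readiness state (here a `ready` flag and an in-flight request counter; both names are illustrative, not a real Rendertron or Puppeteer API):

```javascript
// Sketch: wait for an app-defined "page ready" signal before capturing
// the snapshot, instead of relying only on the load event.
async function waitForReady(page, { timeout = 10000, interval = 100 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (page.ready && page.pendingRequests === 0) return true;
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  throw new Error('render timed out before the page signalled ready');
}

// Simulated page: becomes ready after 300 ms, like a late fetch resolving.
const fakePage = { ready: false, pendingRequests: 1 };
setTimeout(() => { fakePage.ready = true; fakePage.pendingRequests = 0; }, 300);

waitForReady(fakePage).then(() => console.log('snapshot can be captured'));
```

With real Puppeteer, the equivalent is `page.goto(url, { waitUntil: 'networkidle0' })` or a `page.waitForSelector(...)` on an element your app renders last.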
Second error: forgetting to filter analytics User-Agents. If your tracking scripts run during bot rendering, you pollute your stats with non-human hits. Implement server-side detection or configure GA4 to exclude known bots.
Third trap: believing that Rendertron solves all JS SEO issues. If your application architecture is poorly designed — aggressive lazy-loading, unmanaged client-side routes, no SSR fallback — Rendertron will only capture a snapshot of a broken page. The problem lies in the JS implementation itself, not the bot rendering.
- Prefer a modern SSR framework (Next.js, Nuxt) over Rendertron unless in very specific cases
- Test bot rendering via Search Console and Screaming Frog with JS rendering enabled
- Ensure structured data is injected server-side, not via client JS
- Configure an appropriate rendering delay to capture all async content
- Filter bot User-Agents in your analytics tools to avoid data pollution
- Regularly crawl with and without JS to detect rendering discrepancies
❓ Frequently Asked Questions
Does Googlebot use Rendertron to crawl JavaScript sites?
If Rendertron removes the JavaScript, is my JSON-LD structured data lost?
Do Google Analytics events fire when Rendertron renders the page for a bot?
Does Rendertron improve crawl time and indexing?
Can I use Rendertron only for certain bots and serve normal JS to users?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 56 min · published on 05/05/2020
🎥 Watch the full video on YouTube →