Official statement
Googlebot performs indexing in two waves: the first captures server-side content immediately, while the second, delayed wave processes client-side JavaScript rendering. This delay can prolong indexing by several days for heavily JS-oriented sites. If your critical content relies on client-side rendering, you're losing indexing time compared to competitors who serve static HTML.
What you need to understand
How does this two-phase indexing actually work?
Googlebot starts by crawling and indexing the raw HTML returned by the server. This is the first wave. If your page loads content via client-side JavaScript (React, Vue, Angular without SSR), that content won't appear in this initial analysis.
The second wave occurs later, sometimes several days after the initial crawl. Googlebot then queues the page for rendering in a headless Chrome browser, executes the JS, and discovers dynamically generated content. In the meantime, your page may be indexed partially or with incomplete content.
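To make this concrete, here is a minimal sketch (TypeScript/React, with a hypothetical /api/article endpoint) of the client-side pattern described above: the server only ships an empty shell, so the first wave has nothing to index, and the text exists only once the script has run.

```tsx
// Pure client-side rendering (sketch). The server only ships an empty
// <div id="root">, so the first wave indexes an empty shell; the article text
// below only exists after this script runs, i.e. after the second wave.
// The /api/article endpoint is a placeholder.
import { useEffect, useState } from "react";

export default function Article() {
  const [body, setBody] = useState("");

  useEffect(() => {
    // Content is fetched in the browser and never appears in the source HTML.
    fetch("/api/article")
      .then((res) => res.json())
      .then((data: { body: string }) => setBody(data.body));
  }, []);

  return <main>{body || "Loading…"}</main>;
}
```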
What is the difference between server-side and client-side content?
Server-side content is immediately available in the source HTML that Googlebot receives during the initial request. There's no need to execute JavaScript: the text, links, and meta tags are there.
Client-side content requires script execution before it is displayed. If you do a "View Source" and your main text is missing, that text is rendered on the client side. Googlebot then has to wait for the second wave to see it, which inevitably lengthens the indexing delay.
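That "View Source" test can also be scripted. A minimal sketch, assuming Node 18+ (built-in fetch); the URL and the test phrase are placeholders to adapt:

```ts
// "View Source" test, scripted: fetch the raw HTML (no JavaScript executed)
// and check whether a sentence from your main content is already there.
// Requires Node 18+ (built-in fetch); URL and phrase are placeholders.
const url = "https://example.com/article";
const phrase = "a sentence from your main content";

const html = await (await fetch(url)).text();

console.log(
  html.includes(phrase)
    ? "Found in raw HTML: server-side content, visible to the first wave."
    : "Missing from raw HTML: client-side content, only the second wave will see it."
);
```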
What is the actual delay between the two waves?
Google doesn't provide an official figure, but field observations indicate gaps ranging from a few hours to several days depending on the crawl budget allocated to the site. A high-authority site may have its JS rendered in just a few hours, while a small site could wait a week.
This delay becomes critical for news, product launches, or any time-sensitive page. If your competitor serves static HTML while you rely on client-side JS, they will appear in the SERPs before you, even if your content was published first.
- First wave: crawling raw HTML returned by the server, immediate indexing of available content
- Second wave: queuing, JavaScript rendering in headless Chrome, indexing of dynamically generated content
- Variable delay: from a few hours to several days depending on crawl budget and site authority
- Risk of partial indexing: between the two waves, the page may appear in the index with incomplete or empty content
SEO Expert opinion
Is this statement consistent with field observations?
Absolutely. Tests with modern JS frameworks without SSR consistently show this delay. If you publish a React page with pure client-side rendering, you may see it appear in Search Console as "crawled" but without the main content for days.
What Google doesn’t clearly say is that the delay varies greatly depending on the crawl budget. An authoritative site with many backlinks and high freshness will see its JS rendered quickly. A poorly interlinked small site will wait much longer. Google remains vague on the exact criteria that determine the priority of the second wave.
What nuances should be added to this claim?
Firstly, not all JavaScript sites are equal. If you're using server-side rendering (SSR) or static site generation (SSG), your initial HTML already contains the complete content. The second wave then becomes trivial, just a check that the client render matches the server render.
Furthermore, Google can deliberately deprioritize certain JS sites if it detects suspicious patterns (cloaking, hidden content visible only after interaction). [To be validated]: Google claims to treat modern JS "like a conventional browser", but reality shows limitations (short timeouts, no infinite script execution, partial handling of aggressive lazy-loading).
In what cases is this delay not really a penalty?
If your site prioritizes user experience over raw indexing speed, client-side rendering may still be viable. For example, a member area, SaaS, or web app where SEO isn't the main traffic source. The indexing delay becomes secondary.
Additionally, some sites compensate with such a generous crawl budget that the second wave arrives almost instantaneously. But let's be honest: this is the exception, not the rule. Most sites lose indexing time with poorly implemented JS.
Practical impact and recommendations
What concrete steps should be taken to minimize this delay?
The most effective solution is to migrate to server-side rendering or static generation. Use Next.js for React, Nuxt for Vue, Angular Universal for Angular. Your initial HTML already contains the complete content, allowing Googlebot to index everything in the first wave.
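As an illustration, a minimal Next.js sketch (Pages Router, getStaticProps, with a placeholder data-fetching helper) where the content is already present in the initial HTML:

```tsx
// Next.js Pages Router sketch: the article is fetched at build/regeneration
// time, so the full text is in the HTML of the very first response.
import type { GetStaticProps } from "next";

type Article = { title: string; body: string };

// Placeholder for a real CMS or database call.
async function fetchArticleFromCms(): Promise<Article> {
  return { title: "Example title", body: "Full article text, already in the raw HTML." };
}

export const getStaticProps: GetStaticProps<Article> = async () => {
  const article = await fetchArticleFromCms();
  // revalidate keeps the static page fresh without giving up server-rendered HTML.
  return { props: article, revalidate: 60 };
};

export default function ArticlePage({ title, body }: Article) {
  // Googlebot receives this markup in the first response (first wave).
  return (
    <article>
      <h1>{title}</h1>
      <p>{body}</p>
    </article>
  );
}
```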
If you stay on client-side rendering, optimize as much as you can: reduce JS execution time, limit blocking resources, and use pre-rendering for strategic pages. Tools like Rendertron or Prerender.io generate static HTML snapshots for crawlers, but that is a patch, not a real fix.
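For that pre-rendering patch, here is a rough sketch of dynamic rendering with an Express front server; the bot list, the origin, and the Rendertron-style render endpoint are placeholder assumptions, not a turnkey setup:

```ts
// Dynamic-rendering sketch: serve a prerendered snapshot to known crawlers,
// the normal SPA to everyone else. Assumes an Express front server (Node 18+);
// ORIGIN and RENDER_ENDPOINT (a Rendertron-style /render/<url> service) are placeholders.
import express from "express";

const app = express();
const ORIGIN = "https://example.com";                   // placeholder
const RENDER_ENDPOINT = "http://localhost:3000/render"; // placeholder
const BOT_UA = /googlebot|bingbot|yandexbot|baiduspider|duckduckbot/i;

app.use(async (req, res, next) => {
  if (!BOT_UA.test(req.headers["user-agent"] ?? "")) return next();

  // Crawler detected: fetch a static HTML snapshot and return it directly.
  const target = `${RENDER_ENDPOINT}/${encodeURIComponent(ORIGIN + req.originalUrl)}`;
  const snapshot = await fetch(target);
  res.status(snapshot.status).type("html").send(await snapshot.text());
});

// ...static assets and the SPA fallback would be mounted here...

app.listen(8080);
```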
What mistakes should be absolutely avoided?
Never rely on aggressive lazy-loading for critical content. If your main text loads only on scroll, Googlebot may never see it in the second wave (timeout before full execution). Load immediately what matters for SEO.
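A sketch of that anti-pattern versus the fix, assuming React; component names are illustrative:

```tsx
// Anti-pattern vs. fix (React sketch, component names are illustrative).
import { useEffect, useState } from "react";

// ❌ Critical text only mounts when the user scrolls near it: the renderer
// may time out before the IntersectionObserver ever fires.
export function LazyCriticalText() {
  const [visible, setVisible] = useState(false);

  useEffect(() => {
    const el = document.getElementById("critical");
    if (!el) return;
    const io = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) setVisible(true);
    });
    io.observe(el);
    return () => io.disconnect();
  }, []);

  return <section id="critical">{visible ? "Main SEO text…" : null}</section>;
}

// ✅ Critical text is rendered immediately; reserve lazy loading for images.
export function EagerCriticalText() {
  return (
    <section>
      <p>Main SEO text, present from the very first render.</p>
      <img src="/hero.jpg" alt="Illustration" loading="lazy" />
    </section>
  );
}
```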
Also avoid relying solely on client-side rendering for meta tags (title, description, canonical). Even if Google eventually sees them in the second wave, the delay can skew initial indexing. Always inject them server-side.
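For example, with Next.js (Pages Router) the critical tags can be emitted server-side via next/head; the values below are placeholders:

```tsx
// Critical tags emitted server-side with Next.js (Pages Router sketch);
// all values are placeholders.
import Head from "next/head";

export default function ProductPage() {
  return (
    <>
      <Head>
        <title>Product name – Brand</title>
        <meta name="description" content="Short, unique description of the page." />
        <link rel="canonical" href="https://example.com/product-name" />
      </Head>
      <main>{/* page content */}</main>
    </>
  );
}
```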
How can I check if my site is properly indexed despite the JS?
Use the URL inspection tool in Search Console. Compare the "raw HTML" and the "final render": if the main content only appears in the final render, you're on pure CSR and are experiencing delays. If both versions are identical, you're good.
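The same comparison can be scripted outside Search Console, for example with Puppeteer (headless Chrome); the URL and the test phrase are placeholders:

```ts
// Compare what the first wave sees (raw HTML) with what the second wave sees
// (rendered HTML). Requires Node 18+ and the puppeteer package;
// the URL and the test phrase are placeholders.
import puppeteer from "puppeteer";

const url = "https://example.com/article"; // placeholder
const phrase = "your critical sentence";   // placeholder

const rawHtml = await (await fetch(url)).text(); // no JS executed: first-wave view

const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url, { waitUntil: "networkidle0" });
const renderedHtml = await page.content();       // after JS execution: second-wave view
await browser.close();

console.log("Phrase in raw HTML:     ", rawHtml.includes(phrase));
console.log("Phrase in rendered HTML:", renderedHtml.includes(phrase));
// true / true  -> content is server-side, indexed in the first wave
// false / true -> pure client-side rendering, you depend on the delayed second wave
```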
Also monitor discovery-versus-indexing delays in your server logs. If Googlebot crawls your page on day D but it only appears in the index on D+5, that's the classic symptom of deferred JS rendering. A gap of more than 48 hours should be a red flag.
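A possible starting point for that monitoring, sketched for a combined-format access log; the log path is a placeholder, and the comparison with the actual indexing date (Search Console or site: checks) is left to you:

```ts
// First-Googlebot-hit extractor for a combined-format access log.
// Compare these dates with when each URL actually shows up in the index.
// The log path is a placeholder.
import { readFileSync } from "node:fs";

const log = readFileSync("/var/log/nginx/access.log", "utf8");
const firstCrawl = new Map<string, string>();

for (const line of log.split("\n")) {
  if (!/Googlebot/i.test(line)) continue;
  // e.g. 66.249.66.1 - - [10/May/2018:09:12:45 +0000] "GET /page HTTP/1.1" 200 ...
  const m = line.match(/\[([^\]]+)\]\s+"GET\s+(\S+)/);
  if (m && !firstCrawl.has(m[2])) firstCrawl.set(m[2], m[1]);
}

for (const [path, date] of firstCrawl) {
  console.log(`${path} first crawled by Googlebot on ${date}`);
}
```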
- Migrate to SSR/SSG if SEO is strategic (Next.js, Nuxt, Angular Universal)
- Inject critical meta tags (title, description, canonical) server-side, never just in JS
- Load the main content immediately, without waiting for scroll or user interaction
- Regularly test with the Search Console URL inspection: compare raw HTML vs final render
- Monitor crawl/indexing delays in logs: gap > 48h = JS issue
- Limit blocking JS resources and optimize execution time (budgets < 500ms ideally)
❓ Frequently Asked Questions
Is the delay between the two indexing waves the same for all sites?
Does server-side rendering (SSR) completely eliminate this problem?
Can you force Googlebot to render JavaScript faster?
Are meta tags injected via JavaScript taken into account?
Can a pure React site without SSR still rank well?