Official statement
Google indexes your site in two waves: the first completely ignores JavaScript. The result: any content that depends on JS rendering simply doesn't exist during this first pass, and on a large or frequently updated site it may never be considered at all. In practice, a site that delivers its content as raw HTML has a massive indexing advantage over a competitor that relies on React or Vue to display its titles, descriptions, or internal links.
What you need to understand
What's the deal with these 'two waves' of indexing?
Googlebot crawls your page and retrieves the initial HTML — the one that comes directly from the server. At this stage, nothing is executed: no JavaScript, no front-end frameworks, no fetch API. This first wave forms the backbone of indexing.
The second wave comes later — sometimes hours or days later — and executes JavaScript to complete the rendering. But here's the catch: if your critical content (titles, meta, internal linking, paragraphs) is only available after this second pass, you've already wasted precious time. And that’s where the trouble begins.
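To make the failure mode concrete, here is a minimal sketch of a client-only React page in TypeScript; the component and the `/api/products` endpoint are hypothetical, not taken from any real site. None of the markup it produces exists in the initial HTML that the first wave sees.

```tsx
import React, { useEffect, useState } from "react";

type Product = { name: string; description: string };

// Hypothetical client-only product page. The server ships little more
// than an empty <div id="root"></div>; everything below is built in the
// browser, after JavaScript executes.
export function ProductPage({ id }: { id: string }) {
  const [product, setProduct] = useState<Product | null>(null);

  useEffect(() => {
    // Data arrives via a client-side fetch: invisible to the first wave.
    fetch(`/api/products/${id}`)
      .then((res) => res.json())
      .then(setProduct);
  }, [id]);

  // Until the fetch resolves, the page renders nothing at all.
  if (!product) return null;

  return (
    <article>
      <h1>{product.name}</h1> {/* title: second wave only */}
      <p>{product.description}</p> {/* body: second wave only */}
      <a href={`/products/${id}/reviews`}>Reviews</a> {/* link discovered late */}
    </article>
  );
}
```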
Why does this strategy pose problems for large sites?
A site with 10,000 pages that relies on JavaScript to display its product URLs or internal navigation links creates a bottleneck. Googlebot cannot re-crawl everything in JS rendering mode: it's too resource-intensive.
The result: some pages may never go through the second wave, or go through it so late that frequent updates are never taken into account. Your site becomes a ghost for Google: technically crawled but incompletely indexed.
Does server-side rendering really solve everything?
SSR (Server-Side Rendering) and static pre-rendering become survival mechanisms in this ecosystem: they ensure that the initial HTML already contains the critical content, without waiting for a browser to execute any code.
But be careful: poorly configured SSR may still send an empty skeleton to the bot, especially if you serve different versions based on user-agent. Google doesn't take your word for it: it analyzes what arrives in the initial HTTP response, period.
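As an illustration, here is a minimal Next.js sketch (pages router, TypeScript) of the same kind of product page rendered server-side; `api.example.com` is a placeholder endpoint, not a real API. The title, description, and internal link are now present in the very first HTTP response.

```tsx
// pages/products/[id].tsx -- the same page, rendered on the server.
import type { GetServerSideProps } from "next";

type Product = { name: string; description: string };
type Props = { product: Product; id: string };

export const getServerSideProps: GetServerSideProps<Props> = async (ctx) => {
  const id = ctx.params?.id as string;
  // Placeholder endpoint: swap in your real data source.
  const res = await fetch(`https://api.example.com/products/${id}`);
  const product: Product = await res.json();
  return { props: { product, id } };
};

// The first-wave HTML already contains the <h1>, the paragraph, and the
// internal link: no second wave needed for the critical content.
export default function ProductPage({ product, id }: Props) {
  return (
    <article>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
      <a href={`/products/${id}/reviews`}>Reviews</a>
    </article>
  );
}
```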
- Initial HTML = first impression: anything not included is considered secondary by Googlebot during the first wave.
- Limited crawl budget: on a large site, the second wave may never reach certain pages or may arrive too late.
- Frequent updates = increased risk: if your content changes rapidly (e-commerce, news), the gap between the two waves becomes a major handicap.
- SSR/SSG mandatory: to ensure critical content is present from the first pass, server-side rendering becomes essential.
- No JS magic: a pure SPA (Single Page Application) site without HTML fallback takes a huge risk on complete indexing.
SEO Expert opinion
Is this statement consistent with real-world observations?
Yes, and it’s even a stark realization for anyone who has migrated a site to a full-JS stack without SSR. We regularly see massive indexing drops after poorly managed React or Vue migrations — pages disappearing from the index simply because their content no longer arrives in the initial HTML.
What's interesting is that Google has been clear about this mechanism for years, yet many developers and even SEOs continue to treat JS rendering as a technical detail. Martin Splitt sets the record straight: it's not a detail; it's the heart of the issue.
What nuances should we add to this statement?
The term 'two waves' is a pedagogical simplification. In reality, Googlebot manages a complex pipeline where some pages are re-crawled in JS rendering mode almost immediately, while others never are. Priority depends on crawl budget, content freshness, and page popularity.
Second nuance: the absence of content in the initial HTML doesn't necessarily mean a total blackout. It does, however, cause indexing delays, a loss of context (internal links not discovered during the first pass), and a heightened risk of updates going unaccounted for. [To be verified]: Google does not publish precise statistics on the percentage of pages that actually pass the second wave on a large site.
In what cases can this rule be circumvented?
On a small static site (fewer than 500 pages, few updates), the impact is negligible: Googlebot has the resources to fully re-render everything. But as soon as you're dealing with a product catalog, a blog that publishes at high frequency, or a directory, the problem becomes structural.
Some have attempted to bypass this via dynamic rendering (serving pre-rendered HTML only to bots). Google tolerates this approach… but officially discourages it. And this is where Splitt's message makes perfect sense: rather than hack workaround solutions, it's better to design your architecture so that the initial HTML is rich right from the start.
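For reference only, since Google officially discourages it, dynamic rendering typically boils down to a user-agent switch like the following Express sketch; `renderToHtml` is a hypothetical stand-in for a headless-browser pre-renderer such as Puppeteer or Rendertron.

```ts
import express from "express";

const BOT_UA = /googlebot|bingbot|baiduspider|yandexbot/i;
const app = express();

// Hypothetical stand-in: a real setup would call Puppeteer, Rendertron,
// or a prerender service here.
async function renderToHtml(url: string): Promise<string> {
  return `<html><body><h1>Pre-rendered for ${url}</h1></body></html>`;
}

app.get("*", async (req, res, next) => {
  // Bots get pre-rendered HTML; humans fall through to the SPA bundle.
  if (BOT_UA.test(req.headers["user-agent"] ?? "")) {
    res.type("html").send(await renderToHtml(req.originalUrl));
  } else {
    next();
  }
});

app.use(express.static("dist")); // SPA static files for regular visitors
app.listen(3000);
```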
Practical impact and recommendations
What should you concretely do on an existing site?
First step: audit the initial HTML. Disable JavaScript in your browser (or use a tool like curl) and check what's left. If your H1 titles, paragraphs, and internal links disappear… you have a problem.
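If you'd rather script that check, here is a rough audit sketch in TypeScript (Node 18+ for the built-in `fetch`); the regex tests are deliberately naive and the default URL is a placeholder.

```ts
// Run with: npx ts-node audit.ts https://your-site.example/
const checks: Record<string, RegExp> = {
  "title tag": /<title>[^<]+<\/title>/i,
  "h1 heading": /<h1[^>]*>[^<]+<\/h1>/i,
  "meta description": /<meta[^>]+name=["']description["'][^>]*>/i,
  "internal links": /<a[^>]+href=["']\//i,
};

async function auditInitialHtml(url: string): Promise<void> {
  // Fetch the raw HTML the way a non-rendering crawler would: no JS runs.
  const res = await fetch(url, {
    headers: { "User-Agent": "Mozilla/5.0 (compatible; html-audit)" },
  });
  const html = await res.text();
  for (const [label, pattern] of Object.entries(checks)) {
    console.log(`${pattern.test(html) ? "OK     " : "MISSING"}  ${label}`);
  }
}

auditInitialHtml(process.argv[2] ?? "https://example.com/").catch(console.error);
```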
Then, implement SSR (Server-Side Rendering) or SSG (Static Site Generation) depending on your stack. Next.js, Nuxt, Gatsby, Eleventy — it doesn’t matter what tool; the goal is the same: send complete content from the first HTTP request. If you are on WordPress with a classic theme, you’re probably good to go. If you're using a headless CMS with a React front, that’s where it gets complicated.
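For content that changes rarely, SSG is often the simpler option. A hedged Next.js sketch, where `getPosts` and `getPost` are hypothetical data helpers you would replace with your own:

```tsx
// pages/blog/[slug].tsx -- pages are rendered to static HTML at build
// time, so even the first wave sees complete content.
import type { GetStaticPaths, GetStaticProps } from "next";
import { getPosts, getPost, type Post } from "../../lib/posts"; // hypothetical helpers

export const getStaticPaths: GetStaticPaths = async () => ({
  paths: (await getPosts()).map((p) => ({ params: { slug: p.slug } })),
  fallback: "blocking", // unknown slugs are rendered server-side on first hit
});

export const getStaticProps: GetStaticProps<{ post: Post }> = async (ctx) => ({
  props: { post: await getPost(ctx.params?.slug as string) },
  revalidate: 3600, // ISR: regenerate the page at most once an hour
});

export default function BlogPost({ post }: { post: Post }) {
  return (
    <article>
      <h1>{post.title}</h1>
      <div dangerouslySetInnerHTML={{ __html: post.html }} />
    </article>
  );
}
```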
What mistakes should you absolutely avoid?
Don't just check the homepage. Deep pages — product sheets, blog articles, category pages — suffer the most from the crawl budget deficit; that's where the second wave is most likely never to arrive.
Another classic pitfall: enabling SSR but forgetting to properly configure server-side caching. Result: every Googlebot request generates a complete rendering, your servers crash, and you disable SSR in a panic. Set up smart caching (Varnish, Cloudflare, Redis) to serve pre-rendered HTML without overloading your infrastructure.
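Reduced to a sketch, the principle is: render once per TTL window, serve from cache otherwise. In production you would put Varnish, a CDN, or Redis in front; `renderPage` below is a hypothetical stand-in for your real SSR renderer.

```ts
import express from "express";

const TTL_MS = 5 * 60 * 1000; // serve cached HTML for 5 minutes
const cache = new Map<string, { html: string; expires: number }>();

// Hypothetical stand-in for your real SSR renderer (Next.js, Nuxt, ...).
async function renderPage(url: string): Promise<string> {
  return `<html><body><h1>Rendered ${url}</h1></body></html>`;
}

const app = express();

app.get("*", async (req, res) => {
  const hit = cache.get(req.originalUrl);
  if (hit && hit.expires > Date.now()) {
    res.type("html").send(hit.html); // cache hit: zero rendering cost
    return;
  }
  // Cache miss: render once, then reuse for the whole TTL window.
  const html = await renderPage(req.originalUrl);
  cache.set(req.originalUrl, { html, expires: Date.now() + TTL_MS });
  res.type("html").send(html);
});

app.listen(3000);
```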
How can you check if your site meets this requirement?
Use Google Search Console and inspect a representative URL. Compare the 'Fetched HTML' to the 'Rendered HTML'. If both are identical (or nearly so), you’re in the clear. If the rendering adds 80% of the content… you have an issue.
Also, conduct a crawl with Screaming Frog in 'JavaScript rendering disabled' mode. Count the number of pages with empty content, missing links, absent title tags. It’s a brutal but reliable indicator of your exposure to incomplete indexing risk.
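You can also reproduce the fetched-vs-rendered comparison outside Search Console with a short Puppeteer script; the counters below are illustrative heuristics, not an official Google metric.

```ts
// Requires: npm install puppeteer (and Node 18+ for the built-in fetch).
import puppeteer from "puppeteer";

// Crude counters: enough to spot a large fetched-vs-rendered gap.
function stats(html: string) {
  return {
    bytes: html.length,
    h1s: (html.match(/<h1[\s>]/gi) ?? []).length,
    links: (html.match(/<a\s[^>]*href/gi) ?? []).length,
  };
}

async function compare(url: string): Promise<void> {
  const raw = await (await fetch(url)).text(); // what the first wave sees

  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const rendered = await page.content(); // what the second wave sees
  await browser.close();

  console.table({ fetched: stats(raw), rendered: stats(rendered) });
}

compare(process.argv[2] ?? "https://example.com/").catch(console.error);
```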
- Audit the initial HTML on a representative sample of pages (homepage, categories, products, articles)
- Implement SSR or SSG if critical content currently depends on JavaScript
- Check consistency between fetched HTML and rendered HTML in Google Search Console
- Configure server-side caching to avoid overload from dynamic rendering
- Crawl the site with JavaScript disabled to measure the extent of the problem
- Prioritize high-SEO-value pages (top landing pages, conversion-generating pages)
❓ Frequently Asked Questions
Does Google still index content that arrives via JavaScript?
Is dynamic rendering (serving pre-rendered HTML only to bots) an acceptable solution?
Is my WordPress site affected by this problem?
How can I quickly check whether my initial HTML contains the critical content?
Do Next.js or Nuxt automatically solve this problem?