Official statement
Google indexes JavaScript websites in two distinct phases: first the static HTML, and then the JavaScript-generated content in a second wave that can take several days or even weeks. This delay between the two phases can postpone the indexing of critical content and create discrepancies between what you see and what Google is actually indexing. For an SEO, this means that a fully JavaScript site may experience a temporary visibility handicap on fresh content, particularly in news or e-commerce.
What you need to understand
What really happens during these two phases of indexing?
When Googlebot crawls a JavaScript page, it first retrieves the raw HTML sent by the server. This first phase is almost instantaneous—the static content present in the source HTML is analyzed and indexed quickly.
The second phase comes later: Google places the page in a rendering queue. A headless browser (based on Chromium) executes the JavaScript and builds the final DOM, and Google then indexes the content that only appears after the scripts have run. This step can take anywhere from a few hours to several weeks depending on crawl budget, the load on Google's rendering infrastructure, and the complexity of the page.
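To make the gap concrete, here is a minimal sketch of a purely client-side rendered page (the /api/article/123 endpoint and the app container are hypothetical): nothing the script builds exists in the phase 1 HTML.

```typescript
// Hypothetical client-side rendered article. The server ships only an empty
// <div id="app"></div>, so phase 1 indexes an empty shell; the content below
// exists only after a headless Chromium has executed this script (phase 2).
async function renderArticle(): Promise<void> {
  const root = document.getElementById("app");
  if (!root) return;

  // Placeholder endpoint: the article text lives behind an API call.
  const response = await fetch("/api/article/123");
  const article: { title: string; body: string } = await response.json();

  // This is the first moment the title and body exist in any DOM Google can read.
  root.innerHTML = `<h1>${article.title}</h1><p>${article.body}</p>`;
}

renderArticle();
```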
Why does this delay between the two waves pose a problem for SEO?
The real issue is the time gap. If your main content only exists in the post-JavaScript DOM, it remains invisible to Google for the entire period while your page is waiting for its turn in the rendering queue.
For time-sensitive content—news articles, promotional products, events—this delay can kill visibility. Google indexes an empty shell in phase 1, and by the time the content finally arrives in phase 2, the search peak has already passed. You end up invisible exactly when it matters.
How can you check what Google really sees on your site?
Search Console offers the URL inspection tool, which shows a snapshot of the final rendering. But be careful: this tool forces an on-demand rendering and does not reflect the actual delay a page experiences in production.
The most reliable test? Compare the raw source HTML (Ctrl+U in Chrome) with the inspected DOM (F12). If critical content only appears in the latter, you are in a risk zone. Google will see it, yes—but when?
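To automate that comparison, a small Node script along these lines can flag the risk zone. It assumes Node 18+ (for the global fetch) and the puppeteer package; the URL and the phrase to look for are placeholders.

```typescript
// Sketch: compare the raw server HTML (phase 1 view) with the DOM produced by a
// headless Chromium (phase 2 view) for one critical phrase, e.g. your H1 text.
import puppeteer from "puppeteer";

async function comparePhases(url: string, criticalPhrase: string): Promise<void> {
  // Phase 1 equivalent: the HTML exactly as the server sends it.
  const rawHtml = await (await fetch(url)).text();

  // Phase 2 equivalent: the DOM after scripts have executed.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: "networkidle0" });
  const renderedHtml = await page.content();
  await browser.close();

  const inRaw = rawHtml.includes(criticalPhrase);
  const inRendered = renderedHtml.includes(criticalPhrase);

  if (!inRaw && inRendered) {
    console.warn("Risk zone: this content only exists after JavaScript rendering.");
  } else if (!inRendered) {
    console.warn("Content missing even after rendering: check scripts and timeouts.");
  } else {
    console.log("Content is present in the raw HTML: indexable from phase 1.");
  }
}

comparePhases("https://example.com/article", "Your main H1 text").catch(console.error);
```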
- Phase 1 of indexing: Static HTML analyzed immediately, without JavaScript execution
- Phase 2 of indexing: Deferred JavaScript rendering, variable delay from a few hours to several weeks
- Limited crawl budget: Sites with low authority or very large volumes of content wait longer for their turn in the rendering queue
- Critical content: Any essential SEO element (titles, text, links) should ideally be present from phase 1
- Verification tools: Search Console (URL inspection), source HTML vs. inspected DOM comparison, server logs to identify Googlebot crawls
SEO Expert opinion
Is this statement consistent with real-world observations?
Yes, and it's even a diplomatic understatement. When Martin Splitt says, "it can take time," we are sometimes talking about 10-15 days on sites with limited crawl budgets. I've seen cases where JavaScript-heavy pages remained partially indexed for three weeks after publication.
Field tests show that Google prioritizes JavaScript rendering based on domain authority and the historical freshness of the content. An established news site gets rendered faster than an average corporate blog. Google never states this explicitly, but the log data is clear on this point.
What nuances should be added to this two-phase view?
Google does not systematically render all JavaScript. If a resource takes too long to load, if a script runs into an infinite loop, or if the time budget for rendering (a few seconds) is exceeded, Google indexes what it managed to build and moves on.
Another rarely mentioned point: asynchronous API calls. If your content arrives via fetch() after the load event, Google may never see it, even in phase 2. It doesn’t wait indefinitely for all promises to resolve. [To be confirmed]: Google has never published the exact timeout applied to rendering, but tests suggest between 5 and 10 seconds max.
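As an illustration, this is roughly what the risky pattern looks like; the /api/late-content endpoint is hypothetical.

```typescript
// Risky pattern: content fetched only after the load event, from a potentially
// slow endpoint. If the response lands after Google's rendering time budget,
// the paragraph below is never indexed.
window.addEventListener("load", () => {
  fetch("/api/late-content")
    .then((res) => res.json())
    .then((data: { text: string }) => {
      const p = document.createElement("p");
      p.textContent = data.text; // may arrive seconds later, or never for Googlebot
      document.body.appendChild(p);
    });
});
```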
In what cases does this rule not really apply?
If you are doing Server-Side Rendering (SSR) or Static Site Generation (SSG), this issue almost entirely disappears. The HTML sent on the first hit already contains the complete content—Google doesn’t need to wait for a second wave.
Similarly, modern frameworks (Next.js, Nuxt, SvelteKit) with progressive hydration send pre-rendered HTML. JavaScript only enriches interactivity, not indexable content. In these architectures, Google's statement becomes less critical—you benefit from near-instant indexing.
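As a minimal sketch, assuming the Next.js pages router (the API URL and the props shape are placeholders): the article text is fetched on the server and shipped in the first HTML response, so there is nothing left for a second indexing wave to discover.

```tsx
// Sketch of SSR with Next.js: getServerSideProps runs on the server, so the
// <h1> and <p> below are already present in the HTML Google fetches in phase 1.
import type { GetServerSideProps } from "next";

type ArticleProps = { title: string; body: string };

export const getServerSideProps: GetServerSideProps<ArticleProps> = async () => {
  // Placeholder data source; in a real project this would be your CMS or API.
  const res = await fetch("https://api.example.com/article/123");
  const article: ArticleProps = await res.json();
  return { props: article };
};

export default function Article({ title, body }: ArticleProps) {
  // Client-side JavaScript only hydrates this markup for interactivity.
  return (
    <article>
      <h1>{title}</h1>
      <p>{body}</p>
    </article>
  );
}
```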
Practical impact and recommendations
What concrete steps should you take to minimize the impact of this delay?
First action: audit the raw HTML of your strategic pages. Open the raw source code (view-source: in the URL) and check that critical elements—title, meta description, H1, first paragraphs, priority internal links—are present before any JavaScript execution.
If these elements only appear after rendering, you have two options. Either you migrate to SSR or pre-rendering (the clean but heavier solution), or you set up dynamic rendering, which detects Googlebot and serves it complete HTML (pragmatic, but it means maintaining two versions).
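For the second option, here is a minimal sketch of dynamic rendering with Express, assuming a hypothetical renderSnapshot() helper (it could wrap Puppeteer or a prerender cache).

```typescript
// Dynamic rendering sketch: crawlers get a complete HTML snapshot, regular
// visitors get the normal client-side app served from the dist/ folder.
import express, { NextFunction, Request, Response } from "express";

const app = express();
const BOT_PATTERN = /Googlebot|bingbot/i;

// Hypothetical helper: returns a cached or freshly pre-rendered snapshot.
async function renderSnapshot(url: string): Promise<string> {
  return `<html><body><h1>Pre-rendered content for ${url}</h1></body></html>`;
}

app.use(async (req: Request, res: Response, next: NextFunction) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (BOT_PATTERN.test(userAgent)) {
    // Crawlers receive finished HTML: no need to wait for a rendering wave.
    res.send(await renderSnapshot(req.originalUrl));
    return;
  }
  next(); // humans keep the regular JavaScript experience
});

app.use(express.static("dist"));
app.listen(3000);
```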
What mistakes should you absolutely avoid with JavaScript content?
The classic mistake: loading the main content via an external API with no HTML fallback. If the API is slow or times out during Google's rendering pass, your content is never indexed. Always ship an HTML skeleton that contains at least a textual fallback.
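A minimal sketch of that fallback pattern, assuming a hypothetical /api/product/123/description endpoint: the container ships with indexable text, and the script only replaces it when the API answers in time.

```typescript
// The HTML already contains an indexable summary:
// <div id="product-description">
//   <p>Short product summary written directly in the server HTML.</p>
// </div>
async function enhanceDescription(): Promise<void> {
  const container = document.getElementById("product-description");
  if (!container) return;

  try {
    const res = await fetch("/api/product/123/description", {
      signal: AbortSignal.timeout(3000), // do not hang on a slow API
    });
    if (!res.ok) return; // keep the static fallback
    const data: { html: string } = await res.json();
    container.innerHTML = data.html; // richer content replaces the fallback
  } catch {
    // Timeout or network error: the indexable fallback stays in place.
  }
}

enhanceDescription();
```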
Another trap: aggressive lazy-loading that gates content behind IntersectionObserver triggers. Google rarely scrolls beyond the initial viewport when rendering. If your content sits below the fold and requires a scroll to load, it may remain invisible.
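A safer lazy-loading sketch keeps all text in the initial DOM and defers only the images; injecting whole sections on intersection is the pattern to avoid.

```typescript
// Only image bytes are deferred; the surrounding text is already in the HTML,
// so Google can index it without scrolling or triggering the observer.
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src ?? ""; // swap in the real image when visible
    obs.unobserve(img);
  }
});

document.querySelectorAll<HTMLImageElement>("img[data-src]").forEach((img) => {
  observer.observe(img);
});
```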
How can you check that your site is being crawled and indexed correctly?
Set up server log monitoring filtered for Googlebot. Compare hits from the standard crawler (phase 1) with those from the renderer (user agent containing "Chrome" and coming from Google IPs). A gap of several days between the two indicates a prioritization issue.
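A rough sketch of that log check, following the user-agent heuristic above; the log path, target URL, and combined-log regex are assumptions to adapt to your own setup.

```typescript
// Print the first Googlebot hit without "Chrome" in the user agent (plain crawl)
// and the first one with it (rendering pass) for a given URL, from an access log.
import { readFileSync } from "node:fs";

const LINE = /\[(?<time>[^\]]+)\] "GET (?<path>\S+)[^"]*" \d{3} \S+ "[^"]*" "(?<ua>[^"]*)"/;
const TARGET = "/article-123"; // placeholder URL

let firstCrawl: string | null = null;
let firstRender: string | null = null;

for (const line of readFileSync("access.log", "utf8").split("\n")) {
  const m = LINE.exec(line)?.groups;
  if (!m || m.path !== TARGET || !m.ua.includes("Googlebot")) continue;

  if (m.ua.includes("Chrome")) {
    firstRender ??= m.time;
  } else {
    firstCrawl ??= m.time;
  }
}

console.log(`First plain crawl:   ${firstCrawl ?? "not seen"}`);
console.log(`First rendering hit: ${firstRender ?? "not seen"}`);
```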
Use the Search Console to track pages "Crawled, currently not indexed." If this status lingers beyond two weeks on strategic pages, it often signifies problematic JavaScript rendering. Google saw the page in phase 1, but can’t render it in phase 2—or the rendering does not produce enough unique content to justify indexing.
- Verify that critical content (H1, main text, links) is present in the raw HTML before JavaScript execution
- Implement Server-Side Rendering (SSR) or static pre-rendering for strategic pages
- Regularly test the rendering with the Search Console's URL inspection tool
- Monitor server logs to identify the time gap between crawl and rendering
- Avoid lazy-loading that blocks content outside the initial viewport
- Plan HTML fallbacks for any content loaded via external API
❓ Frequently Asked Questions
How long does Google take, on average, to index a page's JavaScript content?
Is content loaded from an API after the load event indexed by Google?
Does Server-Side Rendering completely solve this two-wave indexing problem?
Are SPAs (Single Page Applications) incompatible with good SEO?
How can you tell whether Google managed to render your JavaScript page correctly?
🎥 Watch the full video on YouTube →