
Official statement

If Google only indexes the non-rendered version of a JavaScript page, it can lead to incorrect page merging if the meta tags are identical.
🎥 Source video

Extracted from a Google Search Central video

⏱ 56:43 💬 EN 📅 04/09/2019 ✂ 10 statements
Watch on YouTube (53:20) →
Other statements from this video (9)
  1. 1:45 Why doesn't Google index content it can't render in JavaScript?
  2. 3:01 Why doesn't Google index all the pages of large sites?
  3. 5:45 Do Core Updates really change rankings continuously between two updates?
  4. 9:48 Is internal linking really enough to boost the ranking of all your pages?
  5. 10:20 Do blogs rank faster than static pages in Google?
  6. 14:37 Why does Google sometimes display M-Dot URLs in desktop results?
  7. 23:54 Do prolonged 500 errors really make your pages disappear from Google's index?
  8. 29:06 Does a misconfigured Vary header really impact the indexing of your responsive site?
  9. 32:09 Should you really use the change-of-address tool to migrate subdomains?
📅 Official statement from 04/09/2019 (6 years ago)
TL;DR

Google can index the non-rendered version of your JavaScript pages, meaning the raw HTML before JS execution. If multiple pages have identical meta tags in this initial HTML, Google may merge them into a single canonical URL, even if the final rendered content differs. In practice, make sure each page has unique meta tags right from the initial HTML, without relying solely on JS to differentiate them.

What you need to understand

What does the term "non-rendered version" of a JavaScript page actually mean?

When Googlebot crawls a page, it first retrieves the raw HTML served by the server. This is the non-rendered version. If your site relies on JavaScript to generate the main content or meta tags, these elements do not exist at this stage.

Google will then attempt to render the page, that is, execute the JavaScript to see the final content. However, rendering takes time, consumes resources, and does not always happen immediately. As a result, indexing is sometimes based, temporarily or even permanently, on the initial HTML alone.
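To make this concrete, here is a minimal sketch in TypeScript (Node 18+, which ships a global fetch; the URL is a placeholder) that retrieves exactly what the "non-rendered version" is: the HTML as the server returns it, before any JavaScript executes.

```typescript
// Fetch the raw, non-rendered HTML that the server returns, i.e. what
// Googlebot sees on its first pass, before any JavaScript runs.
const url = "https://example.com/products/blue-widget"; // hypothetical page

async function main(): Promise<void> {
  const response = await fetch(url);
  const rawHtml = await response.text();

  // On a client-rendered SPA, the raw HTML is often just an empty shell like
  //   <title>My App</title> ... <div id="root"></div>
  // The real title and content only exist after JS execution in the browser.
  const title = rawHtml.match(/<title>(.*?)<\/title>/is)?.[1];
  console.log("Raw <title> before JS:", title ?? "(none found)");
}

main();
```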

Why do identical meta tags cause page merging?

Google uses meta tags (title, description, canonical, robots, etc.) as signals to understand a page's content and uniqueness. If two URLs return exactly the same meta tags in the initial HTML, Google may treat them as duplicate or redundant content.

Google then looks to merge these pages under a single canonical URL. This deduplication mechanism is meant to keep the index clean. However, if your pages are genuinely different but only diverge after JS execution, Google has no way of knowing this before rendering, and that's where the problem arises.

When does indexing occur without full rendering?

JavaScript rendering is resource-intensive for Google. The engine prioritizes sites based on their crawl budget, popularity, and technical complexity. If your site has speed issues, a poor crawl budget, or a very heavy JS architecture, rendering can be delayed by several days — or may never occur.

Moreover, some pages may be indexed quickly based on the initial HTML, especially if Google detects a high freshness rate (news, e-commerce). In this context, relying solely on JS to differentiate your pages is a risky bet.

  • Unique meta tags from the initial HTML: don’t rely on JS to generate title, description, canonical
  • Rendering is not instantaneous: Google may index before executing JavaScript
  • Incorrect merging: two URLs with identical meta tags may be treated as duplicates even if the rendered content differs
  • Limited crawl budget: JS-heavy sites consume more resources, potentially slowing or blocking rendering
  • SSR or pre-rendering recommended: to ensure Google sees the correct content from the first crawl

SEO Expert opinion

Is this statement consistent with real-world observations?

Absolutely. There are regular instances where Google indexes the raw HTML before rendering JavaScript, especially on high-volume sites or Single Page Applications (SPAs). Martin Splitt confirms a well-documented phenomenon here: indexing can occur in two stages, with the first pass based solely on the initial HTML.

What's interesting is that Google explicitly acknowledges that this delay can cause incorrect merging. This validates the historical recommendations of experienced SEOs: never rely solely on JS to differentiate pages, especially for critical meta tags.

What nuances should be considered regarding this statement?

Google does not specify which criteria trigger immediate versus delayed rendering. We know that crawl budget, URL popularity, and loading speed play a role, but the exact thresholds remain opaque. It also remains unverified how much time can elapse between the initial crawl and full rendering: some tests show delays of several weeks for low-priority sites.

Another point: the statement mentions "incorrect merging" but does not say anything about the reversibility of this process. If Google mistakenly merges two pages, how long does it take to rectify this after adding unique meta tags? Real-world observations show that this can take weeks and may even require a forced re-indexing request.

Caution: sites relying on frameworks like React, Vue, or Angular without SSR (Server-Side Rendering) are particularly vulnerable. If your initial HTML is almost empty and all content is generated client-side, you risk incomplete or incorrect indexing for days, or even permanently if the crawl budget is too tight.

In what cases does this rule not apply?

If you're using Server-Side Rendering (SSR) or static pre-rendering, the problem largely disappears. The initial HTML already contains the final content and meta tags, so Google doesn't need to execute JavaScript to understand the page. This is the recommended solution for high-stakes SEO sites.

Similarly, if you have very few pages and a generous crawl budget, Google will likely have time to render each page correctly. However, this is a risky bet: it's better to play it safe from the outset with unique meta tags in the initial HTML.

Practical impact and recommendations

What specific actions should be taken to avoid this issue?

The most robust solution is to implement Server-Side Rendering (SSR) or Static Site Generation (SSG). This ensures that the initial HTML already contains unique meta tags and complete content, without depending on client-side JavaScript. Frameworks like Next.js (React), Nuxt.js (Vue), or Angular Universal facilitate this transition.
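As an illustration, here is a minimal sketch of what this looks like with the Next.js 14 App Router (a file like app/products/[slug]/page.tsx); the getProduct helper and URLs are hypothetical, but generateMetadata is the framework's server-side mechanism for emitting meta tags in the initial HTML.

```typescript
import type { Metadata } from "next";

// Hypothetical data-fetching helper for this example.
async function getProduct(slug: string): Promise<{ name: string; summary: string }> {
  const res = await fetch(`https://api.example.com/products/${slug}`);
  return res.json();
}

// Runs on the server, so the unique title, description, and canonical
// are already present in the initial HTML, before any client-side JS.
export async function generateMetadata(
  { params }: { params: { slug: string } }
): Promise<Metadata> {
  const product = await getProduct(params.slug);
  return {
    title: `${product.name} | Example Store`,
    description: product.summary,
    alternates: { canonical: `https://example.com/products/${params.slug}` },
  };
}

export default async function ProductPage(
  { params }: { params: { slug: string } }
) {
  const product = await getProduct(params.slug);
  return <h1>{product.name}</h1>;
}
```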

If SSR is not feasible in the short term, opt for dynamic pre-rendering: a service like Prerender.io or Rendertron detects bots and serves them a pre-rendered HTML version. This is an effective intermediate solution, but ensure that it is not seen as cloaking — the content served to bots must be strictly identical to what users see.
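Here is a rough sketch of the idea, assuming an Express server sitting in front of the SPA; the bot list and prerender endpoint are placeholders, not the actual Prerender.io or Rendertron APIs, though both expose comparable "render this URL" endpoints.

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();
const BOT_UA = /googlebot|bingbot|yandexbot|duckduckbot|baiduspider/i;
const PRERENDER_ENDPOINT = "https://prerender.example.com/render"; // hypothetical

app.use(async (req: Request, res: Response, next: NextFunction) => {
  const userAgent = req.headers["user-agent"] ?? "";
  if (!BOT_UA.test(userAgent)) return next(); // humans: serve the SPA as usual

  try {
    // Bots: fetch a fully rendered snapshot of the requested URL. The snapshot
    // MUST be identical to what users see, otherwise this becomes cloaking.
    const target = `https://example.com${req.originalUrl}`;
    const snapshot = await fetch(
      `${PRERENDER_ENDPOINT}?url=${encodeURIComponent(target)}`
    );
    res.status(snapshot.status).type("html").send(await snapshot.text());
  } catch {
    next(); // on prerender failure, fall back to the normal SPA shell
  }
});

app.listen(3000);
```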

What mistakes should be absolutely avoided?

Never generate the title, meta description, or canonical tags solely via client-side JavaScript. This is a classic SPA mistake: the initial HTML contains a generic title ("My App") and the real title only appears after the JS executes. Google may index the generic title, or even merge all your pages under the same URL if they share these identical tags.
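The anti-pattern looks like this in a React component (names illustrative): the title is set in a client-side effect, so it never exists in the raw HTML that Googlebot retrieves first.

```typescript
import { useEffect } from "react";

export function ProductPage({ name }: { name: string }) {
  useEffect(() => {
    // Runs only in the browser, after hydration. Googlebot's first pass
    // (raw HTML) never sees this title, and pages that all share the generic
    // "My App" title from index.html risk being merged as duplicates.
    document.title = `${name} | Example Store`;
  }, [name]);

  return <h1>{name}</h1>;
}
```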

Another common mistake: assuming that Google will render your JS quickly. Tests show that on low-priority sites, rendering can be delayed by several days. If you launch a new section of the site, it may remain invisible in the SERPs during this period. Always validate that the raw HTML contains the critical signals.

How can I check if my site is compliant?

Use the URL Inspection tool in Search Console and compare the HTML Google crawled (the raw version) with the rendered result after JS execution. If you see major differences in the meta tags or main content, that's a warning sign. Ideally, both versions should be nearly identical.

Also test with curl from the command line to retrieve the raw HTML of your critical pages. If two URLs return the same title or meta description, you have a problem. Finally, monitor the index coverage reports: pages excluded as "Duplicate without user-selected canonical" may indicate an incorrect merge caused by identical meta tags.
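The same check can be scripted. Here is a minimal sketch (Node 18+, placeholder URLs) that fetches the raw server HTML of a few critical pages and flags duplicate titles or descriptions, the equivalent of the curl check above.

```typescript
const urls = [
  "https://example.com/products/blue-widget",
  "https://example.com/products/red-widget",
];

// Extract the first capture group of a pattern from the raw HTML.
function extract(html: string, pattern: RegExp): string {
  return html.match(pattern)?.[1]?.trim() ?? "(missing)";
}

async function main(): Promise<void> {
  const seenTitles = new Map<string, string>(); // title -> first URL using it
  for (const url of urls) {
    const html = await (await fetch(url)).text();
    const title = extract(html, /<title>(.*?)<\/title>/is);
    const description = extract(
      html,
      /<meta\s+name=["']description["']\s+content=["'](.*?)["']/is
    );
    if (seenTitles.has(title)) {
      console.warn(`Duplicate <title> "${title}": ${seenTitles.get(title)} and ${url}`);
    } else {
      seenTitles.set(title, url);
    }
    console.log(`${url} -> title: "${title}" | description: "${description}"`);
  }
}

main();
```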

  • Implement SSR or SSG on strategic pages (categories, product sheets, articles)
  • Verify that title, meta description, and canonical are unique in the initial HTML
  • Use the URL Inspection tool in Search Console to compare raw HTML and rendered content
  • Test with curl or a simple scraper to validate server-side HTML
  • Regularly audit coverage reports to detect incorrect merges
  • Avoid generic titles or meta tags generated only on the client side

Managing JavaScript rendering and unique meta tags can quickly become complex, especially on high-volume SPA architectures. If you lack internal technical resources or face recurring indexing problems, engaging an SEO agency specialized in JavaScript sites can save you valuable time and secure your organic visibility. A personalized approach allows you to identify specific friction points related to your technical stack and implement the most suitable solutions.

❓ Frequently Asked Questions

Does Google always index JavaScript, or only sometimes?
Google attempts to render JavaScript, but the process can be delayed by several days, or even skipped if the crawl budget is limited. Indexing can therefore be based, temporarily or permanently, on the raw HTML alone.
Which meta tags absolutely must be present in the initial HTML?
At a minimum: title, meta description, canonical, robots, and hreflang where applicable. These tags are critical for Google to understand and deduplicate pages.
Is dynamic pre-rendering considered cloaking by Google?
No, as long as the content served to bots is strictly identical to what users see. Google tolerates pre-rendering to make JS easier to index, provided there is no manipulation.
Can an incorrect merge be fixed after the fact?
Yes, by adding unique meta tags to the initial HTML and requesting re-indexing via Search Console. But it can take several weeks before Google re-evaluates and separates the pages.
Can a 100% client-side site (React, Vue) rank properly?
It's possible, but risky. Without SSR or pre-rendering, you depend entirely on Google's goodwill to render your JS quickly. High-stakes SEO sites systematically favor SSR or SSG.
🏷 Related Topics
Domain Age & History · Crawl & Indexing · JavaScript & Technical SEO
