
Official statement

Google does not retain the state of applications. When a user clicks on a search result, they are taken directly to the indexed page. JavaScript applications must have unique URLs that can be accessed directly. Avoid using hash URLs; ensure that the server can deliver the correct content directly.
🎥 Source video

Extracted from a Google Search Central video

⏱ 16:39 💬 EN 📅 06/06/2019 ✂ 6 statements
Watch on YouTube (4:13) →
Other statements from this video (5)
  1. 3:14 Does Google really index JavaScript as well as classic HTML?
  2. 7:16 Do AJAX calls really consume your crawl budget?
  3. 9:22 Does Googlebot crawl your JavaScript links before even rendering the page?
  4. 10:55 Does pre-rendering really improve crawling and the user experience?
  5. 14:59 Are Lighthouse and PageSpeed Insights really enough to optimize performance for SEO?
Official statement from 06/06/2019 (6 years ago)
TL;DR

Google states that its bot has no memory of state: each URL must be directly accessible, without requiring prior navigation within the application. Hash URLs (#) pose a structural problem for indexing. For a SPA to be properly indexed, each route must return the correct content server-side — which requires SSR or pre-rendering, not just an empty shell that will fill in with JS.

What you need to understand

What does stateless really mean for Googlebot?

When Splitt talks about stateless, he refers to a simple technical constraint: Googlebot remembers nothing between requests. Unlike a user navigating through your app, clicking, scrolling, and changing pages, the bot goes directly to the URL it wants to index. It does not go through your homepage, trigger your JavaScript router, or load your global state.

If your SPA relies on a logic where “everything goes through index.html and then JS loads the route,” Google sees the same empty shell on all your URLs. Result: duplicate content or total absence of indexable content. The bot does not necessarily execute the JS during the initial crawl, and even if it does, it does not reconstruct the browsing history.

Why are hash URLs (#) a problem?

Fragments (everything that follows the #) are never sent to the server. When Googlebot fetches https://example.com/#/product/123, the server only receives https://example.com/. It is impossible to distinguish this request from another one towards https://example.com/#/product/456.
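The standard URL API makes this easy to verify yourself: the fragment lives in a separate `hash` property that browsers never include in the HTTP request.

```javascript
// What the server actually receives for a hash URL: the fragment is
// stripped by the browser before the request goes out.
const hashUrl = new URL("https://example.com/#/product/123");
console.log(hashUrl.pathname); // "/" — the only path the server sees
console.log(hashUrl.hash);     // "#/product/123" — client-side only

// Two "different" hash routes produce the exact same server request:
const a = new URL("https://example.com/#/product/123");
const b = new URL("https://example.com/#/product/456");
console.log(a.origin + a.pathname === b.origin + b.pathname); // true
```

Since both routes resolve to the same request, the server has no way to serve distinct content for them.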

Historically, Google tried to mitigate this problem with the hashbang (#!) AJAX crawling scheme, which it deprecated in 2015. Today the stance is clear: hash URLs cannot be indexed as distinct content. If your router still uses this pattern, Google sees every page as the same one, and all indexing granularity is lost.

What should the server return when Googlebot accesses a URL?

The server must deliver the final content of the requested page, not an empty template. If Googlebot accesses /product/eames-chair, the HTML response must contain the title, description, price, and images of that product — even before the JS executes.

This requires either Server-Side Rendering (SSR) or static pre-rendering (Static Site Generation). Modern frameworks (Next.js, Nuxt, SvelteKit) handle this natively. For legacy SPAs (pure React, Vue CLI without SSR), an additional pre-rendering layer or migrating to a hybrid solution is necessary.

  • Every URL must be directly accessible without prior navigation within the app
  • Avoid hash URLs (#): they do not allow indexing of distinct content
  • The server must return the complete content of the page, not an empty shell
  • SSR or pre-rendering is essential for SPAs that want robust SEO
  • Googlebot does not retain any state: it does not “navigate” through your app like a user

SEO Expert opinion

Does this statement align with on-the-ground observations?

Yes, and it is even one of the rare cases where Google leaves no ambiguity. Tests show that badly configured SPAs end up with a disastrous indexing rate. I have seen React sites with hash URLs where 80% of their pages were ignored by Google, simply because each URL returned the same HTML shell.

The notion of stateless is not new — it is at the heart of the HTTP protocol. What Splitt reminds us is that Googlebot adheres to this protocol strictly. No persistent cookies between crawls, no localStorage, no session. Each request is isolated. If your JavaScript app reconstructs the state client-side, Google will not see it during the first pass.

What nuances should we add to this directive?

Splitt does not say “ban all JavaScript.” He says: ensure that the content exists in the initial HTML. A well-designed SPA with SSR or pre-rendering can work perfectly. Vue, React, Angular — all can be SEO-friendly if the server returns complete HTML.

However, caution is needed with pre-rendering solutions like Prerender.io or Rendertron if they are not well configured. Google detects cloaking when the content served to the bot differs radically from that served to users. Pre-rendering must produce exactly what a human sees, just faster. [To be verified]: some sites report penalties after poorly implementing dynamic rendering — Google does not communicate a precise threshold of “acceptable difference.”

In what cases does this rule not apply?

If your SPA is a private application (admin dashboard, internal CRM, post-login customer area), SEO has no relevance. No need for SSR, nor even clean URLs. Google will never crawl these pages as they are behind authentication.

For Progressive Web Apps (PWAs) where the offline experience is paramount, the logic changes too. You can use a service worker that serves cached content, but again, the initial indexing requires the server to return complete HTML on the first load. Once the app is installed, Googlebot's statelessness has no impact — the user no longer goes through search.

If you are migrating a legacy SPA to SSR, first test with a subset of pages. All-in migrations often cause regressions: temporary ranking loss, unmanaged URL changes, exploding server response times. Plan for an immediate rollback if Core Web Vitals metrics degrade.

Practical impact and recommendations

What should be done concretely to make a SPA indexable?

First, audit the current architecture. Run your main URLs through a tool that simulates Googlebot (Screaming Frog in text-only mode, or curl without JavaScript execution). If the response is an empty shell (e.g. a bare root <div> with no text content), it's dead: the content only exists client-side.
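A rough heuristic for this audit (not an official Google check) is to ask whether the served HTML contains any real text inside the body, or only an empty mount point waiting for client-side JS:

```javascript
// Heuristic "empty shell" detector: strips scripts and tags from the
// <body> and checks whether any visible text content remains.
function looksLikeEmptyShell(html) {
  const bodyMatch = html.match(/<body[^>]*>([\s\S]*?)<\/body>/i);
  if (!bodyMatch) return true;
  const text = bodyMatch[1]
    .replace(/<script[\s\S]*?<\/script>/gi, "") // ignore JS bundles
    .replace(/<[^>]+>/g, " ")                   // strip remaining tags
    .trim();
  return text.length === 0;
}

console.log(looksLikeEmptyShell('<body><div id="app"></div><script src="bundle.js"></script></body>')); // true
console.log(looksLikeEmptyShell("<body><h1>Eames Chair</h1><p>€899</p></body>")); // false
```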

Next, choose between SSR, pre-rendering, or migrating to a hybrid framework. Next.js and Nuxt are the most common, but SvelteKit and Astro are rising. If your stack is stuck (large legacy React), a pre-rendering service may suffice — but verify that the generated HTML contains everything: meta tags, structured data, complete textual content.

What mistakes should be avoided during migration?

Do not confuse hydration with SSR. Hydration is when JS takes over from the already rendered HTML. If your initial HTML is empty and JS “hydrates” an absent DOM, you have gained nothing. SSR must produce complete HTML before hydration.

Another trap: forgetting the redirects. If you abandon hash URLs for clean URLs, each old route must redirect to the new one with a 301. Otherwise, you lose all accumulated SEO juice. And be cautious with canonical tags: they must point to the SSR version, not the old hash URL.
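One nuance when leaving hash URLs behind: since the fragment never reaches the server, old #/ routes cannot be 301-redirected server-side. A common workaround (sketched here with illustrative names) is a small client-side script on the legacy shell page that maps the hash route to its clean URL:

```javascript
// Hash routes never hit the server, so the redirect has to happen
// client-side. location.replace avoids leaving a history entry.
function hashToCleanPath(hash) {
  // "#/product/123" -> "/product/123"; null if not a legacy route
  const m = /^#\/(.+)$/.exec(hash);
  return m ? "/" + m[1] : null;
}

if (typeof window !== "undefined") {
  const target = hashToCleanPath(window.location.hash);
  if (target) window.location.replace(target);
}
```

The clean URLs themselves, once live, can then carry proper server-side 301s if they ever change again.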

How to check that the server is delivering complete content?

Use Search Console, in the URL Inspection section. Google shows you exactly what it sees during the crawl. If the rendered HTML is empty or generic, SSR is not working. Compare with a manual curl: curl -A "Googlebot" https://yoursite.com/page. The returned HTML must be readable and complete.

Also test server response times (TTFB). SSR can be slow if poorly optimized; mitigate with a Redis cache, edge rendering on a CDN, or static pre-generation for less dynamic pages. A TTFB above 600 ms starts to penalize crawl budget on large sites.
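The simplest of these mitigations can be sketched in a few lines: an in-memory cache of rendered HTML keyed by path, with a TTL (illustrative only; production setups typically reach for Redis or a CDN instead):

```javascript
// Tiny in-memory SSR cache: repeated crawls of the same path within
// the TTL skip the expensive render call entirely.
const cache = new Map();
const TTL_MS = 60_000; // 1 minute; tune per page volatility

function cachedRender(path, render) {
  const hit = cache.get(path);
  if (hit && Date.now() - hit.at < TTL_MS) return hit.html;
  const html = render(path); // the expensive SSR call
  cache.set(path, { html, at: Date.now() });
  return html;
}
```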

  • Abandon hash URLs (#) and migrate to clean URLs (/page instead of #/page)
  • Implement SSR or pre-rendering so that each URL returns its complete content
  • Test each key URL with curl or Screaming Frog in “no JS” mode
  • Configure 301 redirects if you change the URL structure
  • Check the rendered HTML in Search Console (URL Inspection)
  • Monitor TTFB: slow SSR can hurt crawl budget
Making a SPA indexable requires a non-trivial technical overhaul. Between choosing the framework, managing redirects, optimizing response times, and validating the rendered HTML, the pitfalls are numerous. If your team lacks experience in these areas, consulting an SEO agency specialized in JavaScript architectures can save you months of regression and secure the migration from the start.

❓ Frequently Asked Questions

Does Google really index SPA JavaScript?
Yes, Google executes JavaScript, but not systematically on the first crawl. If the initial HTML is empty, the page may never be correctly indexed. SSR guarantees the content exists from the very first request.
Can hash URLs be used for internal sections of a page?
Yes, internal anchors (#section) work perfectly well for intra-page navigation. The problem concerns hash URLs used as the main routing mechanism (#/page), which cannot be distinguished server-side.
Is pre-rendering considered cloaking by Google?
No, as long as the content served to the bot is identical to what the user sees. Pre-rendering simply speeds up rendering; it must not display different content or hide elements from humans.
Are Next.js or Nuxt mandatory for SSR?
No, they are popular frameworks but not the only ones. SvelteKit, Astro, Remix, or even custom SSR with Express + React all work. What matters is that the server returns complete HTML.
Does every page of a SPA need SSR?
Not necessarily. Public pages meant to be indexed should use SSR or pre-rendering. Private pages (dashboards, customer areas) can remain pure CSR: they will never be crawled.


