
Official statement

For dynamic URLs generated by JavaScript (HTML5 pushState), ensure that you include a fallback in the form of a regular static link so that Google can properly index them.
🎥 Source video

Extracted from a Google Search Central video

⏱ 1h00 💬 EN 📅 01/05/2018 ✂ 12 statements
Watch on YouTube (2:10) →
Other statements from this video (11)
  1. 1:05 Are hash (#) URLs really ignored by Google during indexing?
  2. 3:10 Does Googlebot really wait for JavaScript before indexing your pages?
  3. 5:50 Why do your new pages bounce around the SERPs for weeks?
  4. 13:08 Do you really need to optimize meta description length for Google?
  5. 16:45 Do you really need to use rel="next" and rel="prev" for pagination?
  6. 21:30 Does content hidden behind tabs really hurt mobile SEO?
  7. 28:46 Should you really include Googlebot in your A/B tests, or do you risk an SEO penalty?
  8. 29:22 Does Googlebot miss entire pages because of geolocation?
  9. 33:34 Do you really need to separate family-friendly and adult content by URL for SafeSearch?
  10. 35:05 Which speed metric does Google really favor for ranking?
  11. 56:58 Are 301 redirects really enough to protect your visibility after a URL change?
Official statement from 01/05/2018 (8 years ago)
TL;DR

Google recommends including traditional static links as a fallback for all dynamic URLs created via pushState. Without this fallback, crawling and indexing can be incomplete, especially if JavaScript rendering fails. Essentially, a SPA that relies solely on client-side URL manipulation risks leaving its pages orphaned from Googlebot's perspective.

What you need to understand

Why does Google emphasize a fallback for pushState?

The HTML5 pushState API allows you to modify the URL displayed in the browser without reloading the page. This is the foundation of modern Single Page Applications (SPAs): React Router, Vue Router, Angular… all rely on this technique. The problem for Google? If a page has no classic HTML links pointing to these dynamic URLs, Googlebot cannot discover them during the initial parsing of the raw HTML.

Google operates in two stages: crawling static HTML, then JavaScript rendering in a separate queue. If JS rendering fails or is delayed, dynamic URLs remain unseen. A fallback in the form of a classic <a href> ensures that Googlebot discovers the URL even if JavaScript crashes or is not executed.
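
For illustration (the URL below is purely hypothetical), this is all it takes to change the address bar without producing anything a static crawler can follow:

    // Updates the address bar to /products/red without any navigation request.
    // Nothing crawlable is created: unless a real <a href="/products/red"> also
    // exists in the source HTML, Googlebot's first-pass HTML parse will never
    // learn that this URL exists.
    history.pushState({ color: "red" }, "", "/products/red");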

What exactly is considered a regular static link?

A regular static link is an HTML <a> element with an href attribute present in the source DOM before JavaScript execution. Not a link generated dynamically by an onClick event, nor a <div> with a JavaScript handler. Google must be able to see it in the raw View Source of the page.

If your navigation menu is generated client-side and contains only JavaScript event handlers that call history.pushState(), Google sees no link. You must serve an initial HTML with real <a href> links, even if your framework subsequently intercepts them for client routing.
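
As a sketch of the contrast (element IDs and URLs are hypothetical), compare a click handler Googlebot cannot follow with a real anchor that is merely upgraded by JavaScript:

    // Not discoverable: the source HTML only contains
    //   <div id="pricing-nav">Pricing</div>
    // so there is no href for Googlebot to extract from the raw markup.
    document.getElementById("pricing-nav")?.addEventListener("click", () => {
      history.pushState({}, "", "/pricing");
    });

    // Discoverable: the source HTML contains
    //   <a id="pricing-link" href="/pricing">Pricing</a>
    // The href is visible in View Source; JavaScript only upgrades the behavior.
    document.querySelector<HTMLAnchorElement>("#pricing-link")?.addEventListener("click", (event) => {
      event.preventDefault();                  // skip the full reload when JS is available
      history.pushState({}, "", "/pricing");   // same URL, now handled client-side
    });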

What are the risks if you ignore this recommendation?

The first risk is orphaned pages. An orphan page is technically indexable if Google discovers it via XML sitemap or external backlink, but it does not receive any internal PageRank. As a result: degraded ranking performance, superficial crawling, wasted crawl budget.

The second risk concerns JavaScript rendering errors. If Googlebot encounters a JS exception (incompatibility, timeout, resource blocked by robots.txt), it falls back on the initial HTML. Without a fallback, the page becomes an empty shell. On-the-ground observations indicate that 15 to 20% of JS renders partially fail on poorly configured SPA sites.

  • pushState modifies the URL on the client without reloading the page, making the discovery of new URLs impossible for a static crawler
  • Googlebot first crawls the raw HTML, then queues the JS rendering — potential delays of several days on low authority sites
  • A static <a href> in the source DOM ensures immediate discovery, even if JavaScript crashes
  • Orphaned pages do not receive any internal PageRank and may never be explored if they do not appear in the sitemap
  • SPAs must take a hybrid approach: serve an initial HTML with real links, then progressively enhance with client routing

SEO Expert opinion

Is this directive consistent with observed practices in the field?

Yes, and it’s actually a classic finding in SEO audits of SPAs. Sites that implement only client-side routing without a static fallback systematically lose crawl coverage. This shows up in Search Console: pages discovered but never crawled, pages crawled but not indexed, and a flat discovery graph even though the site contains thousands of pages.

The nuance is that Google does render JavaScript — but not in real time, not exhaustively, and not always successfully. High-authority sites (established e-commerce, media) fare better: their crawl budget allows for frequent re-crawling and fast JS rendering. New or niche sites? They may wait weeks before their dynamic URLs are rendered. Whether Google really ties JS rendering priority to a site's PageRank remains to be verified.

In what cases can we do without a fallback with no major risk?

If your site generates dynamic URLs server-side (Server-Side Rendering, or SSR), the problem does not arise. Next.js, Nuxt, or Gatsby in SSR or SSG mode: the initial HTML already contains all the links, so Google crawls fully formed HTML rather than JavaScript that still has to be executed.

If you use only pushState for filtering or sorting parameters (e.g., /products?color=red becomes /products/red on the client), and your canonical URLs remain static, the risk is limited. But be careful: if these filtered URLs generate unique content that you want to index, the fallback becomes mandatory once again.
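
As a sketch of that limited-risk scenario (URLs and parameter names are illustrative), the filter rewrite happens purely on the client while the canonical URL served in the HTML stays static:

    // Cosmetic client-side rewrite of a filter URL. The HTML keeps a static
    // canonical (<link rel="canonical" href="https://www.example.com/products">),
    // so indexing is unaffected. If /products/red is meant to rank on its own,
    // a static fallback link becomes necessary again.
    const color = new URLSearchParams(window.location.search).get("color");
    if (color) {
      history.pushState({ color }, "", `/products/${encodeURIComponent(color)}`);
    }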

What common mistakes are observed in SPA implementations?

The number one mistake: confusing progressive enhancement with total dependency on JavaScript. Many developers think that "Google crawls JS" means "I can deliver empty HTML". No. Google crawls JS, but with a delay, a limited budget, and a non-zero failure rate.

The second mistake: blocking JavaScript resources in robots.txt. If Google cannot load React, Vue, or Angular, it cannot execute pushState. The static fallback then becomes your only lifeline — and even then, only if you have one. The third mistake: forgetting that third-party crawlers (Bing, SEMrush, Ahrefs) do not always reliably render JavaScript.
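
A hedged sketch of that second anti-pattern (the path is purely illustrative):

    # Anti-pattern: Googlebot is forbidden from fetching the framework bundles,
    # so the SPA's JavaScript (and therefore pushState routing) never executes
    # during rendering.
    User-agent: *
    Disallow: /assets/js/

Removing the Disallow (or explicitly allowing the script directory) restores Google's ability to render the pages.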

Practical impact and recommendations

How to implement an effective static fallback on a SPA?

The most reliable solution is progressive hydration. You serve an initial HTML server-side containing all navigation links as real <a href>. Then, your JavaScript framework intercepts clicks on these links, prevents page reload, and calls history.pushState() to simulate client navigation.

Concretely: your Menu component in React should render <Link> (React Router) or <router-link> (Vue Router), which generate real <a href> in the DOM. No <div onClick={navigate}>. Check the View Source of your page: the URLs must be hard-coded in the HTML, not injected afterwards by a useEffect.
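
A minimal sketch with React Router (component name and routes are illustrative): <Link> renders a real <a href> in the server-delivered HTML and only intercepts the click once the client bundle has loaded.

    import { Link } from "react-router-dom";

    // Rendered server-side, this menu produces real <a href="..."> elements in
    // the initial HTML, so Googlebot discovers the routes without running any JS.
    // On the client, React Router intercepts clicks and calls pushState itself.
    export function MainMenu() {
      return (
        <nav>
          <Link to="/pricing">Pricing</Link>
          <Link to="/blog">Blog</Link>
          {/* Anti-pattern: <div onClick={() => navigate("/pricing")}>Pricing</div> */}
        </nav>
      );
    }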

What checks should be performed to ensure Google sees the links?

First test: disable JavaScript in Chrome DevTools (Settings > Debugger > Disable JavaScript), reload your page, and check that your navigation links are clickable. If they disappear or become inert, it means you have no fallback.

Second test: use the URL inspection tool in Search Console and compare the raw HTML with the rendered HTML. If the links only appear in the rendered HTML, Google discovers them late. Third test: crawl your site with Screaming Frog in "JavaScript Rendering Disabled" mode. The orphan URLs revealed are those lacking a fallback.
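
These checks can also be scripted. The sketch below is a rough smoke test under stated assumptions (Node 18+ for the global fetch, an ESM module for top-level await, an illustrative URL); it fetches the raw HTML exactly as a non-rendering crawler would and lists the <a href> links visible before any JavaScript runs:

    // Fetch the raw HTML (no JavaScript execution) and extract static <a href> links.
    // A naive regex is enough for a smoke test; a real audit should use an HTML parser.
    const response = await fetch("https://www.example.com/");
    const html = await response.text();

    const staticLinks = [...html.matchAll(/<a\s[^>]*href="([^"]+)"/gi)].map((m) => m[1]);

    if (staticLinks.length === 0) {
      console.warn("No static <a href> found: navigation has no crawlable fallback.");
    } else {
      console.log(`${staticLinks.length} static links visible before JS:`, staticLinks);
    }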

What to do if my framework enforces 100% client-side routing?

If you’re stuck on a framework that generates everything client-side (certain configurations of Create React App or Vue CLI without SSR), you have three options. Option 1: migrate to an SSR solution (Next.js, Nuxt, SvelteKit). This is the cleanest, but the most costly in terms of redesign.

Option 2: implement prerendering (Prerender.io, Rendertron) which generates static HTML snapshots for crawlers. This is a band-aid, not a real solution, and Google sometimes detects cloaking if the content differs too much between users and bots. Option 3: accept the limitation and compensate with a comprehensive XML sitemap + aggressive internal linking on already indexed pages. You lose crawling efficiency, but it’s better than nothing.

  • Check that all navigation links are present in the source HTML (View Source) before JavaScript execution
  • Use routing components that generate real <a href> (React Router Link, Vue Router router-link)
  • Test the site with JavaScript disabled in DevTools: the links should remain clickable
  • Crawl the site with Screaming Frog in "JS Rendering Disabled" mode to identify orphan pages
  • Compare raw HTML vs rendered HTML in Search Console (URL Inspection Tool) to detect late-injected links
  • Prefer SSR or SSG (Next.js, Nuxt, Gatsby) over pure Client-Side Rendering for SEO-critical sites
Implementing a static fallback for pushState is not a suggestion; it is a technical necessity to ensure the discoverability of your URLs. Google’s JavaScript rendering is real but unpredictable: delays, partial failures, limited budget. An <a href> link in the source DOM is your safety net. If your current architecture makes this difficult to implement — SSR redesign, progressive hybridization, prerendering — an SEO agency specialized in JavaScript SEO can help diagnose the blockers and propose a technical roadmap suited to your stack without compromising user experience.

❓ Frequently Asked Questions

Is pushState compatible with SEO if I use a complete XML sitemap?
A sitemap enables URL discovery, but neither efficient crawling nor the transfer of internal PageRank. Without static HTML links, pages remain orphaned from an internal-linking perspective, which degrades their ranking potential.
Does Google really render the JavaScript of every crawled page?
Google does render JavaScript, but with a delay and a limited budget. Low-authority sites can wait weeks before some pages are rendered. Rendering is neither instant, nor exhaustive, nor 100% guaranteed.
Can a React or Vue site rank well without SSR?
Yes, as long as the initial HTML contains real static links and the JavaScript resources are not blocked. But SSR drastically simplifies indexing and reduces the risk of rendering errors.
How can I tell whether my dynamic URLs are actually being discovered by Google?
Check the index coverage report in Search Console. URLs stuck in "Discovered, currently not indexed" for weeks are often victims of a missing static fallback.
Is prerendering for bots considered cloaking by Google?
No, as long as the content served to bots and to users is equivalent. But if prerendering hides substantial differences (prices, availability, editorial content), Google can treat it as cloaking and penalize the site.
🏷 Related Topics
Content · Crawl & Indexing · JavaScript & Technical SEO · Links & Backlinks · Domain Name

