
Official statement

For Googlebot to access the different views of a single-page app, it is necessary to use the History API and appropriate link markup with href attributes to expose the views as URLs in the links.
🎥 Source video

Extracted from a Google Search Central video

⏱ 5:53 💬 EN 📅 14/10/2020 ✂ 8 statements
Watch on YouTube (2:38) →
Other statements from this video (7)
  1. Can JavaScript really control the entire lifecycle of a single-page app for SEO?
  2. 2:05 Why does Googlebot refuse geolocation, and how can you avoid indexing errors tied to code paths?
  3. 2:38 Why does Googlebot systematically miss your pages if the URL doesn't change?
  4. 3:09 Why does Google insist on unique titles and meta descriptions for each view?
  5. 4:02 Why does returning HTTP 200 on your errors sabotage your crawl budget?
  6. 4:47 How do you correctly handle HTTP error codes in a single-page app?
  7. 4:47 Do JavaScript redirects to error pages really trigger an error signal for Googlebot?
📅 Official statement from 14/10/2020
TL;DR

Google requires the use of the History API and links with href attributes for Googlebot to access different views of a SPA. Without this, URLs are not exposed as distinct resources, and the engine cannot discover or index the content. Specifically: each view must have a unique URL, and internal links must point to these URLs—not just trigger JavaScript events.

What you need to understand

Why does Googlebot need the History API to index a SPA?

Single-page applications load a single initial HTML page and then dynamically modify the content without reloading the page. If no mechanism exposes the different views as distinct URLs, Googlebot only sees one URL—the entry point.

The History API (history.pushState() and history.replaceState()) allows manipulation of the URL in the address bar without triggering a page reload. Each internal navigation can therefore create a new entry in the browser's history, with a unique URL. It is this URL that Googlebot will discover, crawl, and index.
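As a minimal illustration, here is a sketch of History API navigation. The renderView() helper and the route table are hypothetical placeholders; real frameworks (React Router, Vue Router) wrap the same mechanism.

    // Map each view to a unique, crawlable URL (hypothetical routes).
    const routes = {
      '/': () => renderView('home'),
      '/pricing': () => renderView('pricing'),
    };

    function navigateTo(path) {
      // Push a real URL into the browser history without reloading the page.
      history.pushState({}, '', path);
      (routes[path] || (() => renderView('not-found')))();
    }

    // Back/forward buttons fire popstate: re-render the view for the restored URL.
    window.addEventListener('popstate', () => {
      (routes[location.pathname] || (() => renderView('not-found')))();
    });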

What does "appropriate link markup with href attributes" mean?

Splitt emphasizes a commonly overlooked point: internal links must be real HTML links, with href attributes pointing to the URLs of the views, not <div onclick="loadView()"> or <a href="#"> elements that merely trigger JavaScript.

Googlebot follows href links. If your “links” are merely JavaScript events attached to non-semantic elements, the engine cannot discover them. Even if your SPA works perfectly for a user, it remains invisible to the crawler if the URLs are not exposed in the DOM.
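To make the contrast concrete, here is a sketch of non-crawlable versus crawlable markup (loadView() and navigateTo() are the hypothetical helpers from the sketch above):

    <!-- Not crawlable: Googlebot does not click, so no URL is discovered. -->
    <div onclick="loadView('pricing')">Pricing</div>
    <a href="#" onclick="loadView('pricing')">Pricing</a>

    <!-- Crawlable: a real URL in the href, which the router intercepts. -->
    <a href="/pricing">Pricing</a>

    <script>
      // Keep navigation client-side while still exposing real URLs in the DOM.
      document.addEventListener('click', (event) => {
        const link = event.target.closest('a[href^="/"]');
        if (!link) return;
        event.preventDefault();                  // no full page reload
        navigateTo(link.getAttribute('href'));   // hypothetical helper from above
      });
    </script>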

Does this approach guarantee complete indexing?

No, not automatically. The History API and href links are necessary conditions, but not sufficient. Googlebot still has to execute JavaScript, wait for the content to render, and discover the links—which can fail if the JS is blocked, if the render time exceeds the crawl budget, or if the links are injected too late.

It is also crucial to ensure that each URL returns a 200 status code, that the main content is present in the DOM after JavaScript rendering, and that the meta tags (title, description, canonical) are correctly updated for each view.
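On the status-code point, here is a minimal sketch assuming a Node/Express 4 server sits in front of the SPA (the route list and file paths are hypothetical):

    // Serve the SPA shell, but with a real 404 status for unknown views,
    // so Googlebot gets an error signal instead of a soft 404.
    const express = require('express');
    const path = require('path');
    const app = express();

    const KNOWN_ROUTES = ['/', '/pricing', '/products']; // hypothetical list

    app.use(express.static('dist')); // built SPA assets

    app.get('*', (req, res) => {
      const status = KNOWN_ROUTES.includes(req.path) ? 200 : 404;
      res.status(status).sendFile(path.join(__dirname, 'dist', 'index.html'));
    });

    app.listen(3000);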

  • Each view of the SPA must have a unique URL managed via the History API.
  • Internal links must be <a href="..."> pointing to these URLs, not JavaScript events.
  • Googlebot follows href links and crawls exposed URLs—but the content must also render correctly after JS execution.
  • A SPA without the History API or href links remains a single page for Google, with only one indexable entry point.
  • Ensure that the URLs of the views are discoverable in the DOM and return distinct content after rendering.

SEO Expert opinion

Is this recommendation consistent with real-world observations?

Yes, largely. SPA site audits regularly show that the absence of distinct URLs and href links is the number one cause of under-indexing. Modern frameworks (React Router, Vue Router, Angular Router) already implement the History API by default—but it is still common to see developers break this logic with poorly formed links or client-side redirects without updating the URL.

Where it gets tricky: many sites use the History API properly but forget to make the links crawlable. A <button onClick={navigate}> works for the user, but Googlebot doesn't click buttons; it follows href. Splitt is reminding us of a fundamental that is often overlooked in modern JS stacks.
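For illustration, a sketch assuming React Router v6 (the same pattern applies to Vue Router or Angular Router): <Link> renders a real <a href> in the DOM, while a button wired to navigate() exposes nothing to the crawler.

    import { Link, useNavigate } from 'react-router-dom';

    function Nav() {
      const navigate = useNavigate();
      return (
        <nav>
          {/* Invisible to Googlebot: no href appears in the DOM. */}
          <button onClick={() => navigate('/pricing')}>Pricing</button>
          {/* Crawlable: renders <a href="/pricing"> and still navigates client-side. */}
          <Link to="/pricing">Pricing</Link>
        </nav>
      );
    }

    export default Nav;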

What nuances should be added?

This statement is correct, but it says nothing about render timing. A SPA can expose perfect URLs and href links, but if the content takes 5 seconds to load after JS execution, Googlebot may give up before capturing the final DOM. [To verify] — Google has never communicated an official threshold for JS render timeout.

Another point: the History API is not enough if the meta tags are not updated dynamically. A SPA showing 10 different views with the same <title> and the same meta description will create duplicate content issues and relevance problems. Libraries like React Helmet or Vue Meta exist precisely for this purpose.
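A vanilla sketch of per-view meta updates (libraries like React Helmet wrap the same idea; the function name and fields below are hypothetical):

    function updateMeta({ title, description, canonicalUrl }) {
      document.title = title;

      // Create or update the meta description for the current view.
      let desc = document.querySelector('meta[name="description"]');
      if (!desc) {
        desc = document.createElement('meta');
        desc.setAttribute('name', 'description');
        document.head.appendChild(desc);
      }
      desc.setAttribute('content', description);

      // Same for the canonical link element.
      let canonical = document.querySelector('link[rel="canonical"]');
      if (!canonical) {
        canonical = document.createElement('link');
        canonical.setAttribute('rel', 'canonical');
        document.head.appendChild(canonical);
      }
      canonical.setAttribute('href', canonicalUrl);
    }

    // Call on every route change, e.g. inside navigateTo().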

In what cases does this approach show its limits?

SPAs with client-side navigation only (no server-rendering) remain dependent on JavaScript execution by Googlebot. If the JS fails—blocked resources, network errors, timeouts—the content is not indexed, even with perfect URLs and hrefs. That’s why server-side rendering (SSR) or static generation remain more reliable solutions.

Another limitation: sites with a very large number of dynamic views (e-commerce with thousands of products, for example) can saturate the crawl budget if each view requires a full JS execution. In these cases, a hybrid architecture (critical pages in SSR, interactions in CSR) is preferable.

Practical impact and recommendations

What concrete steps should you take to comply with this recommendation?

Audit all internal links of your SPA. Each link must be an <a href="/path"> element pointing to a real URL, not a clickable element without an href. Use DevTools to inspect the DOM and check that the links are present before user interaction, not injected after a click.
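A quick way to spot pseudo-links, to paste into the DevTools console (it only catches inline handlers; listeners attached via addEventListener will not show up):

    // Flag clickable elements that expose no URL to the crawler.
    const fakeLinks = [
      ...document.querySelectorAll('a:not([href]), a[href="#"], a[href^="javascript:"]'),
      ...document.querySelectorAll('[onclick]:not(a)'),
    ];
    console.table(fakeLinks.map((el) => ({
      tag: el.tagName,
      text: el.textContent.trim().slice(0, 40),
      href: el.getAttribute('href'),
    })));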

Then, verify that each internal navigation updates the URL in the address bar through history.pushState() or history.replaceState(). Test by navigating within your SPA: each view must have a unique URL, and a page refresh should reload the same view (not redirect to the home page).

How can you check that Googlebot is correctly crawling the views of your SPA?

Use Google Search Console to check the indexing of your view URLs. If some URLs are discovered but not indexed, inspect them with the URL Inspection tool: does the rendered content match what you expect? Are the internal links present in the DOM captured by Google?

Supplement this with a JavaScript-enabled crawl using Screaming Frog or OnCrawl. Compare the number of URLs discovered with and without JS execution. If the gap is significant, it means your links are not crawlable without JS—this is a problem. Also, verify that the render times remain reasonable: delays longer than 3-4 seconds can cause crawl abandonment.

What mistakes should you absolutely avoid?

Do not use href="#" or href="javascript:void(0)" to trigger internal navigations. Googlebot does not follow them. Avoid encapsulating all navigation in onClick events without an href attribute—even if it works for the user, it is invisible to the crawler.

Another common mistake: forgetting to update the meta tags (title, description, canonical) for each view. A SPA with 50 views and a single page title creates a nightmare of duplicate content. Use a meta tag management library or implement a dynamic updating system in your router.

  • Audit all internal links: each one must be an <a href="..."> with a valid URL.
  • Ensure that each navigation updates the URL via history.pushState() or history.replaceState().
  • Test page refresh on each view: it should reload the same content, not redirect.
  • Use Google Search Console to check the indexing of your view URLs.
  • Crawl the site with JS enabled and compare discovered URLs with a crawl without JS.
  • Dynamically update the meta tags (title, description, canonical) for each view.
The correct implementation of the History API and href links is fundamental for any SPA but requires careful attention to technical details—router architecture, meta tag management, render timing. If your technical stack is complex or you encounter persistent indexing issues, engaging an SEO agency specializing in JavaScript architectures can save you months of trial and error and ensure a robust implementation from the start.

❓ Frequently Asked Questions

Can a SPA without the History API be indexed by Google?
No, not correctly. Without the History API, Googlebot sees only one URL: that of the entry page. Internal views are not exposed as distinct resources, so they cannot be crawled or indexed individually.
Can I use hash URLs (#/page) instead of the History API?
Technically yes, but it is not recommended. Google generally ignores the part of a URL after the # (the fragment identifier). The History API with clean URLs (/page) is the recommended way to expose distinct views.
Does Googlebot follow a link with onClick but no href?
No. Googlebot follows href links. A clickable element without an href attribute is not recognized as a link by the crawler, even if its JavaScript triggers a client-side navigation.
Is server-side rendering mandatory for SPAs?
No, but it greatly facilitates indexing. A SPA with the History API and correct href links can be indexed without SSR, but it then depends on Googlebot's JavaScript execution, which can fail. SSR provides an extra safety margin.
How can I test whether my links are crawlable by Googlebot?
Use the URL Inspection tool in Google Search Console, or crawl your site with Screaming Frog with JavaScript rendering enabled. Check that internal links appear in the captured DOM and point to valid URLs with an href attribute.

