
Official statement

Googlebot uses URLs to locate different pages or views. If the application does not change the URL during navigation between views, Googlebot will only see the homepage and nothing else.
🎥 Source video

Extracted from a Google Search Central video

⏱ 5:53 💬 EN 📅 14/10/2020 ✂ 8 statements
Watch on YouTube (2:38) →
Other statements from this video (7)
  1. Can JavaScript really control the entire lifecycle of a Single Page App for SEO?
  2. 2:05 Why does Googlebot refuse geolocation, and how can you avoid indexing errors tied to code paths?
  3. 2:38 How do you make a single-page app crawlable by Google without losing its indexing?
  4. 3:09 Why does Google insist on unique titles and meta descriptions for each view?
  5. 4:02 Why does returning HTTP 200 on your error pages sabotage your crawl budget?
  6. 4:47 How do you correctly handle HTTP error codes in a single-page app?
  7. 4:47 Do JavaScript redirects to error pages actually trigger an error signal for Googlebot?
Official statement from October 2020 (about 5 years ago)
TL;DR

Martin Splitt confirms that Googlebot relies solely on URLs to discover and index the various pages of a site. If your web application does not change the URL while navigating between views, the bot will remain stuck on the homepage, never detecting the rest of the content. For SPAs and JavaScript applications, this means strict management of routing and URL history — otherwise, your crawl budget is wasted on a single page.

What you need to understand

Does Googlebot navigate like a human user?

No. That's the first thing to keep in mind. Googlebot uses URLs as the only entry point to identify and rank distinct content. Unlike a human visitor who can click, scroll, and interact with a rich interface without ever leaving the same URL, Google's bot requires an explicit signal: a new URL = a new page.

This logic stems from the historical architecture of the web, where each resource had its unique identifier. But with the rise of Single Page Applications (SPAs) built with React, Vue, or Angular, many applications load content dynamically without altering the URL. The result: Googlebot arrives at the homepage, sees no internal links pointing to other URLs, and leaves without indexing anything else.

What really happens when the URL stays fixed?

Let's take a classic case: an e-commerce site built as an SPA where product pages are displayed through AJAX calls, but the URL remains stuck on https://example.com/. The user clicks on a product, sees the full product page, can even add it to their cart, yet the URL in the address bar does not change.

For Googlebot, this site has only one page: the homepage. All products, all categories, all strategic content are invisible. The bot cannot guess that other views exist: it has neither a clickable link nor a distinct URL to explore. Your crawl budget is entirely spent on a single page, and your product listings will never rise in the SERPs.

What is the difference between URL and application view?

An application view is a user interface rendered client-side by JavaScript. In a modern SPA, you can have dozens of different views (user profile, dashboard, results list, product detail) all served from the same root URL. This is convenient for user experience but catastrophic for SEO if no distinct URL is associated.

Google needs URLs to build its index. Each URL is treated as a standalone entity: it receives its own PageRank, its own relevance signals, its own positioning in the results. Without a unique URL, no differentiation is possible. Your application may contain 10,000 products — if everything is behind the same URL, Google will index none of them.

  • Googlebot identifies pages solely by their URL — not by the content displayed client-side.
  • JavaScript applications that do not modify the URL during navigation are only crawled at their entry point.
  • Even if the content is visible to the user, it remains invisible to the bot without a distinct URL.
  • Managing history (pushState, replaceState) is essential for exposing views to Googlebot.
  • An XML sitemap does not compensate for the lack of internal URLs: the bot must be able to discover pages via links.
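The routing requirement above can be sketched in TypeScript. This is a minimal, hypothetical illustration (names like navigateTo and renderView are ours, not a real library API): every view change pushes a distinct, clean URL via the History API, and popstate restores the matching view so deep links keep working.

```typescript
// Hypothetical SPA routing sketch: expose each view as a distinct,
// crawlable URL through the History API.

type View = { name: string; id?: string };

// Map an application view to a clean URL path (no hash fragments).
function viewToPath(view: View): string {
  return view.id
    ? `/${view.name}/${encodeURIComponent(view.id)}`
    : `/${view.name}`;
}

function renderView(view: View): void {
  // Placeholder: swap the DOM for the requested view.
  console.log(`rendering ${view.name}`);
}

// Render the view, then expose it to Googlebot as a unique URL.
function navigateTo(view: View): void {
  renderView(view);
  if (typeof history !== "undefined") {
    history.pushState({ view }, "", viewToPath(view));
  }
}

// Handle back/forward so each URL restores its view (deep linking).
if (typeof window !== "undefined") {
  window.addEventListener("popstate", (e) => {
    if (e.state?.view) renderView(e.state.view);
  });
}
```

The key design point is that viewToPath is the single source of truth: the same mapping should drive client navigation, server routes, and the sitemap, so every view is reachable at one stable URL.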

SEO Expert opinion

Is this statement consistent with observed practices in the field?

Absolutely. For years, we've observed that poorly configured SPAs suffer from catastrophic indexing. SEO audits regularly reveal sites where only the homepage is indexed, while hundreds of theoretically accessible pages remain off the radar. This statement from Splitt only confirms what we empirically know: no distinct URL = no indexing.

Where it becomes interesting is when we cross this assertion with Google's JavaScript rendering tests. The bot can execute JS and display dynamic content — but only if it has a reason to crawl the given URL. If the URL never changes, the bot has no incentive to revisit the page to see if the content has evolved. It considers it static.

What nuances should be added to this rule?

Google does not say that all application views must be indexed. Some interfaces (private dashboards, payment tunnels, settings pages) have no interest in appearing in the index. What Splitt points out is the situation where strategic public content is hidden behind a single URL.

Another point: the statement does not specify how Google treats URL fragments (hash #). Historically, Google ignores fragments — but with frameworks like Angular that used hash routing, workarounds were introduced (the infamous _escaped_fragment_, now deprecated). Today, the recommendation is clear: use the History API (pushState) to manipulate clean URLs, without hashes.
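For teams migrating away from hash routing, the mapping from a legacy fragment URL to a clean path is usually mechanical. A hedged sketch (the "#/path" convention is assumed; adapt to your router's actual scheme):

```typescript
// Illustrative migration helper: convert a legacy hash route
// ("https://example.com/#/products/42") to the clean path Googlebot
// will treat as a distinct URL ("/products/42").
function hashToCleanPath(url: string): string {
  const u = new URL(url);
  if (!u.hash.startsWith("#/")) return u.pathname; // nothing to migrate
  return u.hash.slice(1); // "#/products/42" -> "/products/42"
}
```

Pairing this with server-side 301 redirects from the old hash URLs (via a small script on the legacy page, since fragments never reach the server) lets existing bookmarks keep working.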

In what situations does this rule present a problem?

Sites that built their frontend architecture before considering SEO find themselves stuck. Rewriting a complete routing system to add distinct URLs can represent months of development. Many attempt hybrid solutions: pre-rendering critical pages server-side (SSR or SSG) so that Googlebot receives static HTML with clean URLs, while maintaining the SPA for user experience.

The real trap is when technical teams do not understand the stakes involved. They see an application that "works" perfectly for users and fail to realize that Googlebot experiences it radically differently. [To be verified]: Google claims to execute JavaScript "like a modern browser", but the reality is more nuanced — the bot does not scroll automatically, does not wait indefinitely for API calls, and does not trigger all user events. If your content only appears after interaction, even with a distinct URL, it may remain invisible.

Attention: Frameworks like Next.js or Nuxt.js simplify this management, but do not resolve everything automatically. Always check that your URLs are properly exposed in the initial HTML, not just after JavaScript execution.

Practical impact and recommendations

What should you do to expose your pages to Googlebot?

The first step: audit your current routing. Use Search Console and check how many URLs are actually indexed vs. the number of pages you think you have. A massive gap signals a problem. Next, manually inspect a few URLs in Google's URL testing tool — look at the rendered HTML, not just the source HTML. If your internal links do not appear in the rendered HTML, Googlebot will never see them.
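The audit step above can be reduced to a simple set comparison. This sketch assumes you export two lists manually: the URLs you expect (for example from your sitemap) and the URLs Search Console reports as indexed; the function name is illustrative.

```typescript
// Quantify the gap between expected URLs and indexed URLs.
function indexingGap(expected: string[], indexed: string[]) {
  const indexedSet = new Set(indexed);
  const missing = expected.filter((u) => !indexedSet.has(u));
  return {
    missing, // URLs Google never indexed
    coverage: (expected.length - missing.length) / expected.length,
  };
}
```

A coverage far below 1.0, with hundreds of entries in `missing`, is the "massive gap" described above and points at a discovery problem rather than a quality problem.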

For SPAs, you must implement the History API (the pushState and replaceState methods) to change the URL at every view change. Each application state must correspond to a unique, directly accessible URL (deep linking). Test by pasting the URL into a new tab: if the page does not load correctly, Googlebot will have the same issue.

What mistakes to avoid when migrating to SEO-friendly routing?

A classic mistake: using hash fragments (#) for routing. Google does not interpret them as distinct URLs. Another trap: implementing clean URLs but forgetting to generate an up-to-date XML sitemap with all these new URLs. The sitemap helps Google discover your pages faster, especially if your internal linking is weak.
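Generating the sitemap from the same list of clean URLs your router exposes keeps the two in sync. A minimal sketch (real sitemaps often add <lastmod> or <changefreq>; omitted here for brevity):

```typescript
// Build a minimal XML sitemap from a base URL and a list of paths.
function buildSitemap(baseUrl: string, paths: string[]): string {
  const entries = paths
    .map((p) => `  <url><loc>${new URL(p, baseUrl).href}</loc></url>`)
    .join("\n");
  return (
    `<?xml version="1.0" encoding="UTF-8"?>\n` +
    `<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n` +
    `${entries}\n</urlset>`
  );
}
```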

Do not overlook pure HTML internal links. Even if your URLs change correctly, Googlebot must be able to discover them via <a href> tags present in the initial HTML. Links generated solely by JavaScript after a user event remain invisible during the first crawl. Ensure that your main navigation and critical links are in the DOM from the initial load.
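You can sanity-check this with a quick pass over the initial HTML your server returns. This is a simplified regex scan, not a full HTML parser, so treat it as a rough check only:

```typescript
// Extract the <a href> targets present in the *initial* HTML, i.e. the
// links Googlebot can discover before any JavaScript runs.
function extractInitialLinks(html: string): string[] {
  const links: string[] = [];
  const re = /<a\s[^>]*href=["']([^"']+)["']/gi;
  let m: RegExpExecArray | null;
  while ((m = re.exec(html)) !== null) links.push(m[1]);
  return links;
}
```

Run it against `curl`-fetched HTML (not the browser-rendered DOM): if your product and category URLs are absent from the result, they are invisible on first crawl.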

How can you verify that your site meets Googlebot's expectations?

Use the URL Inspection Tool in Search Console for each type of strategic page (homepage, product page, category, article). Look at the screenshot rendered by Googlebot: if the displayed content does not match what you see in a browser, it's a red flag. Also, check the server logs to trace requests from Googlebot — if the bot only visits a handful of URLs while you have hundreds, your architecture is blocking discovery.

Finally, test the rendering speed. Google has a limited crawl budget and a timeout for JavaScript execution. If your application takes 10 seconds to display content, Googlebot may give up before the links are rendered. Optimize the initial loading: code splitting, lazy loading, server-side rendering — everything counts.

  • Audit the number of indexed URLs vs. the actual number of site pages
  • Implement the History API (pushState/replaceState) to create distinct URLs for each view
  • Generate a comprehensive XML sitemap including all public URLs
  • Ensure that internal links are present in the initial HTML, not just after JS
  • Test each type of page with Google's URL Inspection Tool
  • Analyze server logs to detect Googlebot's crawling patterns
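The log-analysis step can start as small as this sketch, which counts Googlebot requests per URL from common-log-format lines. Field positions and the user-agent match are assumptions; adapt them to your actual log format, and note that serious audits should also verify Googlebot IPs via reverse DNS.

```typescript
// Count Googlebot hits per URL from common-log-format access log lines.
function googlebotHits(logLines: string[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const line of logLines) {
    if (!line.includes("Googlebot")) continue; // naive UA filter
    // Common log format: ... "GET /path HTTP/1.1" ...
    const m = line.match(/"(?:GET|HEAD)\s+(\S+)/);
    if (m) counts.set(m[1], (counts.get(m[1]) ?? 0) + 1);
  }
  return counts;
}
```

If the resulting map contains only a handful of URLs while your site has hundreds, the architecture is blocking discovery, exactly the symptom described above.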
Managing URLs in a modern application is a demanding technical task. Between client-side routing, server rendering, JavaScript hydration, and crawl constraints, there are many friction points. If you find your indexing stagnating despite internal efforts, it may be wise to consult a technical SEO agency specializing in JavaScript architectures — an external perspective often helps identify invisible blockages for the project team.

❓ Frequently Asked Questions

Can Googlebot index content loaded via AJAX if the URL does not change?
No. Even though Googlebot executes JavaScript and can theoretically see dynamically loaded content, without a distinct URL it considers it all to be the same page. There is therefore no new URL to index.
Is an XML sitemap enough to compensate for the absence of URLs in the navigation?
The sitemap helps with URL discovery, but it does not replace solid internal linking. Googlebot uses internal links to assess the relative importance of pages and to distribute PageRank. A sitemap alone is not enough.
Does Google take URL fragments (hash #) into account to identify distinct pages?
No. Google ignores URL fragments in most cases. The _escaped_fragment_ scheme once used to work around this problem is obsolete. Use the History API to create clean URLs.
Is Server-Side Rendering mandatory to get an SPA indexed?
Not mandatory, but strongly recommended. Google can execute JavaScript, but SSR guarantees that content and links are available immediately, without depending on crawl budget or the JavaScript timeout.
How can you tell whether Googlebot sees the same internal links as a human user?
Use the URL Inspection tool in Search Console and look at the rendered HTML. Compare it with the source HTML: if the links only appear in the rendered version after JS execution, make sure they are also present in the initial DOM.

