
Official statement

Google recommends using the pushState feature for URLs as it allows for proper indexing of routes. Hash-based URLs are not indexed by Google.
🎥 Source video

Extracted from a Google Search Central video (statement at 5:53)

⏱ 14:02 💬 EN 📅 27/06/2019
Other statements from this video (4)
  1. 2:37 Does Googlebot really execute JavaScript as well as a modern browser does?
  2. 4:28 How does Search Console actually help debug mobile rendering errors?
  3. 8:16 Why should every modal have its own URL to be indexable?
  4. 12:59 Does the number of HTTP requests really sink your crawl budget?
TL;DR

Google states that hash-based URLs (#) are not indexed, unlike those using pushState. This technical limitation directly impacts single-page applications (SPAs) that still rely on URL fragments for navigation. Specifically, if your routes use hashes, Google only sees one page — your content remains invisible for indexing.

What you need to understand

What is the technical difference between hash and pushState?

Hash-based URLs use the # symbol to define anchors or client-side routes (e.g., example.com/#/product). By design, browsers never send the part after the # to the server; it is a purely client-side mechanism. Google ignores this portion during crawling because it treats it as a fragment identifier, not a distinct resource.

The pushState method, introduced with the HTML5 History API, allows modification of the displayed URL without reloading the page (e.g., example.com/product). The server receives this complete URL during a direct access, and Google can crawl it like any standard page. It is the standard for modern JavaScript applications that want to remain crawlable.
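A minimal sketch of the difference, using the standard WHATWG URL API (the example.com paths are illustrative):

```javascript
// What a server (and Googlebot) actually receives for each URL style.
// The fragment (everything after #) is stripped by the browser before
// the request is sent, so all hash routes collapse to the same resource.
function serverSees(fullUrl) {
  const url = new URL(fullUrl);
  return url.origin + url.pathname + url.search; // fragment never transmitted
}

console.log(serverSees('https://example.com/#/product-1')); // https://example.com/
console.log(serverSees('https://example.com/#/product-2')); // https://example.com/  (same page!)
console.log(serverSees('https://example.com/product-1'));   // https://example.com/product-1

// With the History API, a SPA can switch to crawlable URLs without a reload
// (browser-only, shown here for context):
// history.pushState({}, '', '/product-1');
```

This is exactly why a hash-routed catalog of 50 products looks like a single page to the crawler: every request that reaches the server is identical.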

Why does this distinction pose a problem for SPAs?

Single-page applications (React, Vue, Angular) have long used hashes to manage navigation without refreshing the page. It was simple to implement and required no server configuration. But this ease comes at a cost: Google sees only one URL, the one without the hash.

The result? A site with 50 different product pages, all accessible via example.com/#/product-1, #/product-2, etc., appears in the index as a single page: example.com. The dynamic content loaded via JavaScript after the hash remains invisible to the engine. This is a critical problem for e-commerce or media sites that rely on organic traffic.

Has Google ever tried to crawl hashes?

Yes, and this is where it gets historically interesting. Google introduced a temporary scheme with #! (hashbang) to allow for the indexing of AJAX content. This system required the server to provide a static HTML version of the page during a special request (_escaped_fragment_). It was clunky, difficult to maintain, and Google officially deprecated it in 2015.

Since then, the position has been clear: no indexing of hashes. Google pushes developers towards pushState and server-side rendering (SSR) or hydration. Splitt's statement just confirms a rule that has been in place for years, but many sites still ignore it.

  • Hash-based URLs (#) are not sent to the server and remain invisible to Google
  • pushState allows for creating crawlable URLs without reloading the page
  • The old hashbang (#!) system has been abandoned by Google
  • SPAs must use client-side routing with pushState or implement SSR
  • Without pushState, a site with 100 routes appears in the index as only one page

SEO Expert opinion

Is this statement consistent with observed practices in the field?

Absolutely. SEO audits regularly reveal SPA sites with zero indexed pages despite rich content, solely because they use hashes. Google Search Console then shows only one URL in the index, and developers are puzzled by the lack of organic traffic. This is not a bug; it is the documented behavior of the crawler.

However, a nuance is needed: some hybrid sites use hashes for secondary navigation elements (tabs, modals, filters) without negative SEO impact. The problem arises when the hash defines the main content. If your blog article is only accessible via example.com/#article-123, it does not exist in Google’s eyes.

What are the exceptions or edge cases to watch out for?

Google indexes classic anchors (links to #section-2 on the same page) for featured snippets and internal navigation, but it does not treat them as distinct pages. If you use hashes for scrolling to a section, there is no issue. The problem concerns only application routes.

Another rarely mentioned point: some modern JavaScript frameworks (Next.js, Nuxt) automatically handle pushState and SSR, making this issue invisible for the developer. But if you are building a custom SPA or using an older version of React Router in hash mode, you are directly affected. [To verify]: Could Google one day crawl hashes in the context of progressive web apps? Nothing suggests that today.

Should you migrate an existing hash site to pushState?

If your site generates significant SEO traffic, yes, it is a technical priority. But be careful: migrating to pushState requires correct server configuration. All routes must return the same JavaScript application with a 200 code, otherwise direct accesses (via an external link or bookmark) generate 404s.

Specifically, if a user accesses example.com/product/shoes, the server must serve the application, and then JavaScript loads the corresponding content. Without this config (Apache/Nginx redirects to index.html), the migration breaks the site. This is a technical project that requires testing and coordination between developers and SEO.

⚠️ Attention: Switching from hash to pushState changes all your URLs. Plan 301 redirects if hash pages were linked from outside (unlikely, but to check). Implement crawl monitoring before/after to detect 404 errors.

Practical impact and recommendations

What should you do concretely to migrate to pushState?

First step: audit your architecture. If you are using a modern framework (Next.js, Nuxt, Angular in HTML5 mode), pushState is likely already active. Check the router configuration: look for options like mode: 'history' (Vue Router) or useHash: false (Angular). If you find mode: 'hash', that's where you need to act.
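As a hedged illustration of what to look for, here is how the two modes appear in a Vue Router 4 setup (a configuration sketch, not a drop-in file; Vue Router 3 expressed the same choice via the mode: 'hash' / mode: 'history' option mentioned above):

```javascript
import { createRouter, createWebHistory, createWebHashHistory } from 'vue-router';

// Hash mode: routes live after the #, Google only ever sees the root URL.
// const history = createWebHashHistory();

// History mode (pushState): clean, crawlable URLs, but the server must
// serve the SPA shell on every application route (see the next step).
const history = createWebHistory();

const router = createRouter({
  history,
  routes: [{ path: '/product/:id', component: { template: '<div />' } }],
});
```

If your audit turns up createWebHashHistory (or mode: 'hash'), that is the line to change.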

Second step: configure the server. All routes must point to your main HTML file. With Apache, add a rewrite rule in .htaccess. With Nginx, use try_files to return index.html on any unknown route. Without that, a direct access to /product/123 generates a 404 server-side, even if JavaScript can handle that route.
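For Nginx, the try_files fallback mentioned above might look like this (a minimal sketch; server_name and the root path are placeholders for your own setup, and Apache achieves the same with a mod_rewrite rule falling back to /index.html):

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www/app/dist;

    location / {
        # Serve the file or directory if it exists, otherwise fall back to
        # the SPA shell, so a direct hit on /product/123 returns 200, not 404.
        try_files $uri $uri/ /index.html;
    }
}
```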

How to check that Google is crawling correctly after migration?

Use the URL inspection tool in Search Console. Test a specific route (e.g., /category/seo) and check that Google sees the expected content in the HTML render. If the render shows a 404 error or remains empty, your server config is incorrect. The live URL test simulates the actual crawl.

Also monitor coverage reports. After migration, you should gradually see new URLs appearing in the index. If the number of indexed pages stagnates or drops, it’s a red flag. Google should never encounter a 404 error on your pushState routes. Log file analysis lets you verify the HTTP codes actually returned to the bot.
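As a quick sketch of that log check (assuming the common combined log format, where the status code is the ninth whitespace-separated field; the sample log lines below are fabricated for illustration):

```shell
# Fabricated access-log sample in combined log format (illustration only)
cat > /tmp/access_sample.log <<'EOF'
66.249.66.1 - - [01/Jul/2024:10:00:00 +0000] "GET /product/shoes HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
66.249.66.1 - - [01/Jul/2024:10:00:05 +0000] "GET /old-route HTTP/1.1" 404 312 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
203.0.113.5 - - [01/Jul/2024:10:00:07 +0000] "GET /product/shoes HTTP/1.1" 200 5120 "-" "Mozilla/5.0"
EOF

# Count status codes served to Googlebot only; any 404 here is a red flag
codes=$(grep 'Googlebot' /tmp/access_sample.log | awk '{print $9}' | sort | uniq -c)
echo "$codes"
```

On real logs, point the grep at your server's access log and verify the user agent with a reverse DNS lookup, since the string alone can be spoofed.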

What errors to avoid during implementation?

The classic error: forgetting the base href tag or mismanaging relative paths. When the URL changes client-side, CSS/JS resources may load from an incorrect path. Test each route with direct access (F5 or new tab) to ensure everything loads correctly, not just during internal navigation.

Another trap: hardcoded internal links with hash. If your old code contains <a href="#/product">, replace them with router components that generate pushState URLs. Otherwise, you are mixing the two systems, creating inconsistencies. Also consider sitemaps: generate them with the new URLs without hashes.
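To catch visitors arriving on old hash URLs (from bookmarks or stale external links), a small client-side redirect can map the fragment to the new route. A sketch, assuming your old routes followed the #/path shape; plain in-page anchors must be left alone:

```javascript
// Map an old hash route ("#/product-1") to its clean pushState path
// ("/product-1"). Returns null for plain in-page anchors ("#section-2"),
// which should keep their default scroll behavior.
function hashToPath(hash) {
  if (!hash.startsWith('#/')) return null; // plain anchor, not a route
  return hash.slice(1); // drop the leading '#'
}

console.log(hashToPath('#/product-1')); // "/product-1"
console.log(hashToPath('#section-2'));  // null

// In the browser, run once at startup (browser-only, shown for context):
// const target = hashToPath(window.location.hash);
// if (target) window.location.replace(target);
```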

  • Audit the current JavaScript router configuration (hash or history)
  • Configure the server to return index.html on all application routes
  • Test each route with direct access to detect server 404s
  • Validate the HTML rendering using the Search Console inspection tool
  • Update sitemaps and internal links
  • Monitor crawl logs to identify errors post-migration

Migrating to pushState is a technical undertaking that involves both JavaScript code and server configuration. For complex sites with hundreds of routes, this transformation can be challenging to orchestrate without regressions. If you lack internal resources or want to secure the migration without loss of traffic, engaging a specialized SEO agency can help you avoid costly mistakes and accelerate the return on investment of this technical overhaul.

❓ Frequently Asked Questions

Does Google index URLs with hashes if I use server-side rendering?
No. Even with SSR, the portion after the # stays client-side and is never transmitted to the server during a crawl. Google only sees the URL before the hash.

Can I use hashes for filters or tabs without SEO impact?
Yes, as long as the main content remains accessible without a hash. Hashes for secondary navigation (tabs, modals) pose no problem, provided every indexable page has its own clean URL.

What is the difference between pushState and replaceState?
pushState adds an entry to the browser history; replaceState modifies the current entry without creating a new one. For SEO, both change the visible URL, but pushState is recommended for navigation.

My site uses Angular in hash mode. What happens if I change nothing?
Google will keep indexing only the root URL. All your internal routes will remain invisible in search results, drastically limiting your organic traffic.

Should old hash URLs be redirected after migration?
Technically, hash URLs were never indexed, so there is no server-side redirect to set up. However, a JavaScript snippet can detect a hash in the URL and redirect client-side to the new clean route.
