Official statement
Other statements from this video
- 2:37 Does Googlebot really execute JavaScript as well as a modern browser?
- 4:28 How does Search Console actually help debug mobile display errors?
- 8:16 Why does every modal need its own URL to be indexable?
- 12:59 Does the number of HTTP requests really hurt your crawl budget?
Google states that hash-based URLs (#) are not indexed, unlike those using pushState. This technical limitation directly impacts single-page applications (SPAs) that still rely on URL fragments for navigation. Specifically, if your routes use hashes, Google only sees one page — your content remains invisible for indexing.
What you need to understand
What is the technical difference between hash and pushState?
Hash-based URLs use the # symbol to define anchors or client-side routes (e.g., example.com/#/product). Historically, browsers do not send the part after the # to the server — it’s a purely client-side mechanism. Google ignores this portion during crawling, as it considers it a fragment identifier, not a distinct resource.
The pushState method, introduced with the HTML5 History API, allows modification of the displayed URL without reloading the page (e.g., example.com/product). The server receives this complete URL during a direct access, and Google can crawl it like any standard page. It is the standard for modern JavaScript applications that want to remain crawlable.
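The core of a pushState-based router fits in a few lines. The sketch below makes the history object injectable so the logic can be shown outside a browser; in a real page you would pass `window.history`. All names here are illustrative, not a specific library's API:

```javascript
// Minimal sketch of pushState-based routing (names are illustrative).
function createRouter(history, onRoute) {
  return {
    navigate(path) {
      // Update the address bar to a real, server-visible URL
      // (e.g. /product) without reloading the page.
      history.pushState({}, '', path);
      onRoute(path); // render the matching view client-side
    },
  };
}

// Stand-in for window.history, for demonstration only.
const fakeHistory = {
  url: '/',
  pushState(_state, _title, path) { this.url = path; },
};

const rendered = [];
const router = createRouter(fakeHistory, (path) => rendered.push(path));

router.navigate('/product');
console.log(fakeHistory.url); // '/product' — a crawlable URL, no '#'
```

A real implementation would also listen for `popstate` events so the back button re-renders the right view, but the crawlability point is the same: the URL shown is one the server can answer.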
Why does this distinction pose a problem for SPAs?
Single-page applications (React, Vue, Angular) have long used hashes to manage navigation without refreshing the page. It was simple to implement and required no server configuration. But this ease comes at a cost: Google sees only one URL, the one without the hash.
The result? A site with 50 different product pages, all accessible via example.com/#/product-1, #/product-2, etc., appears in the index as a single page: example.com. The dynamic content loaded via JavaScript after the hash remains invisible to the engine. This is a critical problem for e-commerce or media sites that rely on organic traffic.
Has Google ever tried to crawl hashes?
Yes, and this is where it gets historically interesting. Google introduced a temporary scheme with #! (hashbang) to allow for the indexing of AJAX content. This system required the server to provide a static HTML version of the page during a special request (_escaped_fragment_). It was clunky, difficult to maintain, and Google officially abandoned it.
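The URL translation at the heart of that deprecated scheme can be sketched as a pure function (the exact escaping details varied; this is a simplified illustration):

```javascript
// Sketch of the URL mapping from Google's deprecated AJAX crawling scheme:
// a #! (hashbang) URL was requested by the crawler as an _escaped_fragment_
// query parameter, and the server had to answer with a static HTML snapshot.
function toEscapedFragmentUrl(hashbangUrl) {
  const url = new URL(hashbangUrl);
  if (!url.hash.startsWith('#!')) return hashbangUrl; // not a hashbang URL
  const fragment = url.hash.slice(2); // drop '#!'
  url.hash = '';
  url.searchParams.set('_escaped_fragment_', fragment);
  return url.toString();
}

console.log(toEscapedFragmentUrl('https://example.com/#!/product-1'));
// 'https://example.com/?_escaped_fragment_=%2Fproduct-1'
```

Every site using hashbangs had to implement the server side of this handshake, which is precisely the maintenance burden that led Google to abandon it.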
Since then, the position has been clear: no indexing of hashes. Google pushes developers towards pushState and server-side rendering (SSR) or hydration. Martin Splitt's statement simply confirms a rule that has been in place for years, yet many sites still ignore it.
- Hash-based URLs (#) are not sent to the server and remain invisible to Google
- pushState allows for creating crawlable URLs without reloading the page
- The old hashbang (#!) system has been abandoned by Google
- SPAs must use client-side routing with pushState or implement SSR
- Without pushState, a site with 100 routes appears in the index as only one page
SEO Expert opinion
Is this statement consistent with observed practices in the field?
Absolutely. SEO audits regularly reveal SPA sites with zero indexed pages despite rich content, solely because they use hashes. Google Search Console then shows only one URL in the index, and developers are puzzled by the lack of organic traffic. This is not a bug; it is the documented behavior of the crawler.
However, a nuance is needed: some hybrid sites use hashes for secondary navigation elements (tabs, modals, filters) without negative SEO impact. The problem arises when the hash defines the main content. If your blog article is only accessible via example.com/#article-123, it does not exist in Google’s eyes.
What are the exceptions or edge cases to watch out for?
Google indexes classic anchors (links to #section-2 on the same page) for featured snippets and internal navigation, but it does not treat them as distinct pages. If you use hashes for scrolling to a section, there is no issue. The problem concerns only application routes.
Another rarely mentioned point: some modern JavaScript frameworks (Next.js, Nuxt) handle pushState and SSR automatically, making the issue invisible to the developer. But if you are building a custom SPA or using an older version of React Router in hash mode, you are directly affected. An open question: could Google one day crawl hashes in the context of progressive web apps? Nothing suggests so today.
Should you migrate an existing hash site to pushState?
If your site generates significant SEO traffic, yes, it is a technical priority. But be careful: migrating to pushState requires correct server configuration. All routes must return the same JavaScript application with a 200 code, otherwise direct accesses (via an external link or bookmark) generate 404s.
Specifically, if a user accesses example.com/product/shoes, the server must serve the application, and then JavaScript loads the corresponding content. Without this config (Apache/Nginx redirects to index.html), the migration breaks the site. This is a technical project that requires testing and coordination between developers and SEO.
Practical impact and recommendations
What should you do concretely to migrate to pushState?
First step: audit your architecture. If you are using a modern framework (Next.js, Nuxt, Angular in HTML5 mode), pushState is likely already active. Check the router configuration: look for options like mode: 'history' (Vue Router) or useHash: false (Angular). If you find mode: 'hash', that's where you need to act.
Second step: configure the server. All routes must point to your main HTML file. With Apache, add a rewrite rule in .htaccess. With Nginx, use try_files to return index.html on any unknown route. Without that, a direct access to /product/123 generates a 404 server-side, even if JavaScript can handle that route.
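For the Nginx case mentioned above, the fallback can be sketched as follows (server name, root, and file names are illustrative placeholders to adapt to your build):

```nginx
# Serve static assets if they exist; otherwise fall back to the SPA entry
# point so client-side routing can take over (illustrative paths).
server {
    listen 80;
    server_name example.com;
    root /var/www/app/dist;

    location / {
        # try the exact file, then a directory, then index.html (HTTP 200)
        try_files $uri $uri/ /index.html;
    }
}
```

The key detail is that the fallback returns index.html with a 200 status, not a redirect: the URL in the address bar stays intact so the JavaScript router can match it.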
How to check that Google is crawling correctly after migration?
Use the URL inspection tool in Search Console. Test a specific route (e.g., /category/seo) and check that Google sees the expected content in the HTML render. If the render shows a 404 error or remains empty, your server config is incorrect. The live URL test simulates the actual crawl.
Also monitor coverage reports. After migration, new URLs should gradually appear in the index. If the number of indexed pages stagnates or drops, that is a red flag: Google should never encounter a 404 on your pushState routes. Log file analysis lets you spot the HTTP codes actually returned to the bot.
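That log check can be automated with a few lines. A minimal sketch for access logs in combined format, with a naive user-agent match on "Googlebot" (the sample lines below are made up):

```javascript
// Sketch: flag non-2xx responses served to Googlebot in an access log
// (combined log format; sample data is fabricated for illustration).
function botErrors(logLines) {
  return logLines
    .filter((line) => line.includes('Googlebot'))
    .map((line) => {
      // combined format request section: "METHOD /path HTTP/1.1" STATUS
      const match = line.match(/"[A-Z]+ (\S+) [^"]+" (\d{3})/);
      return match && { path: match[1], status: Number(match[2]) };
    })
    .filter((hit) => hit && hit.status >= 400);
}

const sample = [
  '66.249.66.1 - - [01/07/2019] "GET /product/shoes HTTP/1.1" 404 0 "-" "Googlebot/2.1"',
  '66.249.66.1 - - [01/07/2019] "GET /category/seo HTTP/1.1" 200 5123 "-" "Googlebot/2.1"',
  '203.0.113.7 - - [01/07/2019] "GET /product/shoes HTTP/1.1" 404 0 "-" "Mozilla/5.0"',
];

console.log(botErrors(sample));
// [ { path: '/product/shoes', status: 404 } ]
```

In production you would also verify the bot's IP via reverse DNS, since the user-agent string alone can be spoofed.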
What errors to avoid during implementation?
The classic error: forgetting the base href tag or mismanaging relative paths. When the URL changes client-side, CSS/JS resources may load from an incorrect path. Test each route with direct access (F5 or new tab) to ensure everything loads correctly, not just during internal navigation.
Another trap: hardcoded internal links with hash. If your old code contains <a href="#/product">, replace them with router components that generate pushState URLs. Otherwise, you are mixing the two systems, creating inconsistencies. Also consider sitemaps: generate them with the new URLs without hashes.
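The href cleanup can be sketched as a one-off migration helper; note that it deliberately leaves real in-page anchors alone, since those pose no SEO problem (a hypothetical utility, not a substitute for using the router's own link components):

```javascript
// Sketch: convert old hash-route hrefs to path-based ones so internal
// links match the new pushState routes (illustrative migration helper).
function dehashHref(href) {
  // '#/product' -> '/product'; plain anchors like '#section-2' are kept
  return href.startsWith('#/') ? href.slice(1) : href;
}

console.log(dehashHref('#/product'));  // '/product'
console.log(dehashHref('#section-2')); // '#section-2' (real anchor, untouched)
console.log(dehashHref('/about'));     // '/about'
```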
- Audit the current JavaScript router configuration (hash or history)
- Configure the server to return index.html on all application routes
- Test each route with direct access to detect server 404s
- Validate the HTML rendering using the Search Console inspection tool
- Update sitemaps and internal links
- Monitor crawl logs to identify errors post-migration
❓ Frequently Asked Questions
Does Google index hash URLs if I use server-side rendering?
Can I use hashes for filters or tabs without any SEO impact?
What is the difference between pushState and replaceState?
My site uses Angular in hash mode; what happens if I change nothing?
Should old hash URLs be redirected after migration?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 14 min · published on 27/06/2019