Official statement
Google disregards URL fragments (everything after the #) when indexing, which poses a major issue for Angular applications using hash-based routing. Specifically, if your SPA generates URLs like example.com/#/products, Googlebot will only index the root page. The fix: switch to PathLocationStrategy for clean, indexable URLs, or implement a dynamic pre-rendering system.
What you need to understand
What is the technical origin of this issue with the hash?
The URL fragment (anything following the #) was historically designed solely for client-side navigation. Browsers never send it to the server during an HTTP request. When you access example.com/#/products, the server only receives example.com.
The same logic applies to Googlebot: it treats everything after the # as outside the canonical URL. Two URLs like example.com/#/products and example.com/#/contact are therefore the same resource in its eyes: example.com. This is a holdover from the static web, where the fragment served only as an anchor for in-page navigation.
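A quick way to see this mechanism at work is the standard URL API, in the browser console or Node (a minimal sketch, nothing Angular-specific):

```typescript
// The WHATWG URL API parses the fragment into its own component:
// it is never part of the path the server (or Googlebot) receives.
const hashUrl = new URL("https://example.com/#/products");
console.log(hashUrl.pathname); // "/"          <- all the server ever sees
console.log(hashUrl.hash);     // "#/products" <- stays on the client

const cleanUrl = new URL("https://example.com/products");
console.log(cleanUrl.pathname); // "/products" <- a real, indexable path
```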
Why did Angular apps long use this system?
AngularJS, like many early SPA frameworks, used hash-based routing by default, and modern Angular still ships it as HashLocationStrategy. The advantage? No server configuration required: whatever URL is requested, the server always returns index.html, and JavaScript handles the routing on the client.
This approach was convenient for development and for environments where configuring server rewrites was not easy. However, it created an SEO nightmare: all of the application's pages shared a single URL in the eyes of search engines.
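For reference, this is roughly what the problematic setup looks like in an Angular routing module (a condensed sketch; the route and component are hypothetical):

```typescript
// app-routing.module.ts: `useHash: true` opts into HashLocationStrategy,
// so every route lives under example.com/#/... and collapses to the root
// URL in the eyes of search engines.
import { Component, NgModule } from "@angular/core";
import { RouterModule, Routes } from "@angular/router";

@Component({ template: "<h1>Products</h1>" })
class ProductsComponent {}

const routes: Routes = [{ path: "products", component: ProductsComponent }];

@NgModule({
  declarations: [ProductsComponent],
  imports: [RouterModule.forRoot(routes, { useHash: true })], // <- the culprit
  exports: [RouterModule],
})
export class AppRoutingModule {}
```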
What does this imply for crawling and indexing?
If your Angular site keeps URLs with a hash, Google will only index a single page: your root. All your subpages, product pages, and framework-rendered blog articles will remain invisible in the SERPs. Crawling stops dead at the root, since Googlebot does not treat internal links with # as distinct pages.
Even though Google executes JavaScript, it applies this rule before rendering. Client-side rendering changes nothing: the fragment is discarded as soon as the URL is parsed. This is an architectural decision of the web, not a technical limitation of Googlebot.
- HashLocationStrategy produces non-indexable URLs (example.com/#/page)
- PathLocationStrategy generates clean URLs (example.com/page) but requires server configuration
- Server-side rendering (SSR) or static pre-rendering can circumvent the issue by serving ready-made HTML for each route
- Google makes no exceptions: even with an XML sitemap listing URLs with #, they will not be indexed
- Other engines apply the same rule: Bing and Yandex also ignore URL fragments (see the sketch below)
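To make the deduplication concrete, here is a hypothetical sketch of the normalization a crawler applies before indexing anything (the function name is ours, not Google's):

```typescript
// Hypothetical illustration: an indexer keyed on the URL *minus* its
// fragment collapses every hash route of a SPA into a single entry.
function canonicalKey(rawUrl: string): string {
  const url = new URL(rawUrl);
  url.hash = ""; // the fragment is dropped before anything else happens
  return url.toString();
}

const discovered = [
  "https://example.com/#/products",
  "https://example.com/#/contact",
  "https://example.com/products", // a clean PathLocationStrategy URL
];

console.log(new Set(discovered.map(canonicalKey)));
// Set { "https://example.com/", "https://example.com/products" }
// -> both hash routes became one page; only the clean URL survived.
```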
SEO Expert opinion
Is this statement consistent with real-world observations?
Yes, unequivocally. Hundreds of Angular migrations have confirmed it: switching from HashLocationStrategy to PathLocationStrategy unlocks indexing almost instantly. The tests are reproducible and the results consistent. This is not a gray area of SEO.
What still surprises some clients is how drastic the impact is. We are not talking about a slight ranking handicap, but about total invisibility: Google literally indexes only one page. Server logs corroborate this, with Googlebot making a single request to the root regardless of the number of internal links.
What nuances should be added to this rule?
The only historical exception concerned AJAX crawlable URL patterns with #! (hashbang), which Google supported between 2009 and 2015. This system has been obsolete for years and should not be used anymore. If you come across legacy code with #!, migrate immediately.
Also, be wary of poorly configured hybrids. Some sites use PathLocationStrategy but still generate links with # in the rendered HTML, often because of third-party components or sloppy routing practices. An audit of internal links is necessary: if you see href="#/something", it's a bug hindering your crawl.
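One way to run that audit directly in the browser console (a minimal sketch; it only inspects the DOM as currently rendered):

```typescript
// Flags any rendered link whose href starts with "#/", i.e. a leftover
// hash route that will dead-end the crawl.
const hashLinks = Array.from(
  document.querySelectorAll<HTMLAnchorElement>('a[href^="#/"]')
);

if (hashLinks.length > 0) {
  console.warn(`${hashLinks.length} hash-route link(s) found:`);
  hashLinks.forEach(a => console.warn(a.getAttribute("href"), a));
} else {
  console.log("No #/ links detected in the rendered DOM.");
}
```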
Should SSR always be prioritized for Angular?
SSR (Server-Side Rendering) with Angular Universal is not mandatory if you switch to PathLocationStrategy and correctly configure your server to serve index.html on all routes. Googlebot reliably executes JavaScript today. SSR mainly provides a gain in perceived speed and compatibility with less advanced bots.
That said, for an e-commerce or editorial site with thousands of pages, SSR significantly improves crawl budget and indexing responsiveness. Google prefers receiving ready-made HTML over waiting for JavaScript execution, and at large volumes that difference can amount to weeks of indexing delay. [To be verified]: Google has never published official metrics on the impact of SSR on crawl budget for SPAs, but field observations strongly argue for it.
Practical impact and recommendations
How can you check if your Angular site uses URLs with a hash?
Open your browser and navigate through several pages of your application. Look at the address bar: if you see # followed by paths (/products, /contact, etc.), your app is running on HashLocationStrategy. Also check the source of your AppRoutingModule: the presence of {useHash: true} in RouterModule.forRoot() confirms the issue.
Then test actual indexing: run a site:yourdomain.com search in Google. If only the homepage appears while you have dozens of pages, that's a strong signal. Use Google Search Console to compare discovered versus indexed pages: a massive gap points to a routing issue.
What concrete modifications should be made in your Angular application?
In your main routing file (app-routing.module.ts), ensure that RouterModule.forRoot() does NOT include the {useHash: true} option. Modern Angular defaults to PathLocationStrategy, but check whether a developer has explicitly enabled hash mode.
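The corrected call then looks like this (a sketch; plug in your real routes):

```typescript
// app-routing.module.ts after the fix: with no `useHash` option, Angular
// uses PathLocationStrategy and emits clean URLs such as /products.
import { NgModule } from "@angular/core";
import { RouterModule, Routes } from "@angular/router";

const routes: Routes = []; // your real routes go here

@NgModule({
  imports: [RouterModule.forRoot(routes)], // no { useHash: true }
  exports: [RouterModule],
})
export class AppRoutingModule {}
```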
On the server side, configure a fallback to index.html for all non-file routes. For Nginx: try_files $uri $uri/ /index.html; for Apache: FallbackResource /index.html in .htaccess. Without this configuration, refreshing example.com/products returns a 404, because the server looks for a physical /products resource that does not exist.
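If you serve the app from Node instead of Nginx or Apache, the same fallback logic can be expressed with Express (a minimal sketch for Express 4; the dist path is an assumption about your build output):

```typescript
// Static files (js, css, images) are served as-is; every other route
// falls back to index.html so deep links like /products load the SPA.
import express from "express";
import path from "path";

const app = express();
const dist = path.join(__dirname, "dist"); // assumed build output folder

app.use(express.static(dist));
app.get("*", (_req, res) => {
  res.sendFile(path.join(dist, "index.html")); // SPA fallback
});

app.listen(4000);
```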
What mistakes should be avoided during the migration?
Don't migrate without putting redirects in place if your old site with # was partially indexed or linked by other sites. Even if Google ignored the fragments, some users may have bookmarked URLs with #. And since the fragment never reaches the server, a classic server-side 301 cannot catch these URLs: redirect example.com/#/products to example.com/products on the client side with a detection script.
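A hedged example of such a detection script, to run before the app bootstraps (it assumes old hash routes map one-to-one onto the new clean paths):

```typescript
// If the URL still carries a legacy hash route (example.com/#/products),
// rewrite it to its clean equivalent (example.com/products).
// location.replace() avoids adding the old URL to the history.
const { hash, origin } = window.location;

if (hash.startsWith("#/")) {
  const cleanPath = hash.slice(1); // "#/products" -> "/products"
  window.location.replace(origin + cleanPath);
}
```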
Don't neglect the XML sitemap after migration. Generate it with your new clean URLs and submit it in Search Console to speed up discovery. Monitor 404 errors in the following weeks: they often reveal badly migrated internal links or missing routes in your server configuration.
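A small sketch of regenerating the sitemap from the new route list (domain and routes are placeholders):

```typescript
// Builds a sitemap.xml string from the app's clean URLs; write the
// result to /sitemap.xml and submit it in Search Console.
const domain = "https://example.com";
const routes = ["/", "/products", "/contact"]; // hypothetical routes

const sitemap =
  '<?xml version="1.0" encoding="UTF-8"?>\n' +
  '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
  routes.map(r => `  <url><loc>${domain}${r}</loc></url>`).join("\n") +
  "\n</urlset>";

console.log(sitemap);
```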
- Check for the absence of {useHash: true} in RouterModule.forRoot()
- Configure the server fallback to index.html on all routes
- Test browser refresh on several internal URLs to confirm the absence of 404s
- Update the XML sitemap with new URLs without hashes
- Implement client-side redirects for old URLs with # if necessary (the server never sees the fragment, so a true 301 cannot handle them)
- Monitor Google Search Console for indexing errors post-migration
❓ Frequently Asked Questions
Can Google index URLs with # if you submit an XML sitemap?
Do other SPA frameworks (React, Vue) have the same problem?
Is Server-Side Rendering strictly necessary to get an SPA indexed?
What happens if URLs with and without a hash are mixed on the same site?
Can URLs with # still hurt rankings even once you no longer use them?