Official statement
Google completely ignores URL fragments (everything after the #) during crawling and link tracking. This means if your navigation relies on anchors like example.com/page#section2, crawlers will never discover the targeted content. For SEO, this means rethinking the architecture of single-page applications (SPAs) and JavaScript applications that overuse fragment routing.
What you need to understand
What is a URL fragment and why does it exist?
A URL fragment is the portion of a web address that follows the hash symbol (#). Originally designed to allow intra-page navigation to specific HTML anchors, this mechanism has been largely hijacked by modern JavaScript frameworks (React, Angular, Vue.js in hash mode) to simulate client-side routing.
The problem? Crawlers treat fragments as client-side state, not as identifiers for distinct content: browsers never even send the fragment in the HTTP request. When Googlebot encounters example.com/blog#article-5, it processes only example.com/blog. Everything after the # disappears from the crawl equation.
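A quick way to visualize the split, using the standard URL API (available in any modern browser or Node.js):

```typescript
// The WHATWG URL API exposes the split: the hash stays on the client.
const url = new URL("https://example.com/blog#article-5");

console.log(url.pathname); // "/blog"       -> what the server (and Googlebot) sees
console.log(url.hash);     // "#article-5"  -> never sent in the HTTP request

// Two "pages" of a hash-routed SPA are a single URL from the crawler's perspective:
const a = new URL("https://example.com/blog#article-5");
const b = new URL("https://example.com/blog#article-6");
console.log(a.origin + a.pathname === b.origin + b.pathname); // true
```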
How does this limitation actually affect indexing?
If your site uses fragments to dynamically load different content, you create ghost URLs that Google cannot follow or index. A navigation menu with links like /products#category-shoes or /services#pricing generates dead links for bots.
Older Single Page Applications (SPAs) built in hash router mode are particularly vulnerable. Each internal "page" remains invisible to traditional crawling unless you implement prerendering or Server-Side Rendering (SSR) that turns these fragments into real URLs the server can answer.
What exceptions and edge cases should be noted?
Fragments remain useful and legitimate for their original purpose: anchoring to sections of the same page. An internal link to /guide#chapter-3 works perfectly for users, but it does not distribute PageRank to a separate page, because there is no separate page.
Some believe JavaScript can compensate by manipulating the History API to turn fragments into real URLs. True, but it requires a rigorous implementation plus server-side rendering that produces crawlable URLs from the start. You cannot count on the crawler executing your client-side JavaScript to magically discover your content.
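To make "rigorous implementation" concrete, here is a minimal client-side sketch of History API routing replacing hash updates. renderRoute is a hypothetical render function, and the server must also be able to answer these paths directly:

```typescript
// Hash routing only touches the fragment: the crawler keeps seeing one URL.
// window.location.hash = "#/products";

// History API routing changes the path itself, producing a distinct,
// crawlable URL -- provided the server can also answer /products directly.
function navigate(path: string): void {
  window.history.pushState({}, "", path);
  renderRoute(path);
}

// Back/forward buttons fire popstate instead of reloading the page.
window.addEventListener("popstate", () => {
  renderRoute(window.location.pathname);
});

// Hypothetical renderer: swaps the client-side view for the given path.
function renderRoute(path: string): void {
  document.title = `Page: ${path}`;
}

navigate("/products");
```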
- Fragments (#) are ignored by Googlebot during crawling and internal link tracking
- Any content accessible only via fragment remains invisible without SSR or prerendering
- Hash router SPAs must migrate to history mode with real server URLs
- The legitimate use of fragments is limited to intra-page anchoring for user experience
- Client-side JavaScript does not compensate for the lack of crawlable URLs on the server side
SEO Expert opinion
Is this statement consistent with field observations?
Absolutely. Empirical tests consistently confirm that fragments disappear from crawl logs and Search Console coverage reports. No surprise here: this behavior has been documented since the early days of the web, but is regularly rediscovered by each generation of developers who reinvent client-side routing.
The real issue is that many modern JavaScript frameworks long encouraged hash routing by default because it is simple to deploy (no server configuration needed). As a result, thousands of sites believe they have 50 indexable pages when they expose only one to crawlers.
What nuances must be considered for this rule?
The essential nuance concerns the use of fragments for A/B testing or tracking. Some tools add session parameters after the # to avoid polluting the canonical URL. In this specific case, Google’s ignorance of the fragment is a blessing, not a problem.
A second nuance: fragments can coexist with crawlable content if the underlying architecture relies on real URLs. A well-structured site can have example.com/article-5 as an indexable URL AND offer example.com/article-5#comment-section for UX. The fragment enhances the experience without breaking indexing.
In what situations might this rule evolve?
[To be verified] Google has experimented in the past with the #! (hashbang) scheme to make fragments crawlable via a snapshot system. This approach was officially abandoned in 2015, but we cannot exclude the possibility of a new method emerging in response to the omnipresence of modern JavaScript.
That said, the industry has clearly moved towards cleaner solutions like Server-Side Rendering (Next.js, Nuxt.js) and Static Site Generation. The trend is not to make fragments crawlable, but to eliminate them completely from SEO-friendly architectures.
Practical impact and recommendations
How can you quickly audit if your site is suffering from this issue?
First step: inspect your server logs and compare them with your internal link structure. If you see internal links with # in your HTML but no corresponding crawl trace in the logs, you have confirmation of the problem. Search Console will never show you these ghost URLs in the coverage report.
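As a rough complement to the log analysis, a small script can flag fragment links directly in your HTML. This sketch assumes a locally saved copy of a page (the file name is hypothetical):

```typescript
import { readFileSync } from "node:fs";

// Rough audit: list internal links whose href contains a fragment, i.e.
// URLs a crawler collapses down to the bare path before fetching.
const html = readFileSync("snapshot-homepage.html", "utf8"); // hypothetical saved page

const hrefs = [...html.matchAll(/href="([^"]+)"/g)].map((match) => match[1]);

for (const href of hrefs) {
  const hashIndex = href.indexOf("#");
  if (hashIndex === -1) continue;
  const crawled = href.slice(0, hashIndex) || "(current page)";
  console.log(`fragment link: ${href}  ->  crawled as: ${crawled}`);
}
```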
Second step: crawl your site with a tool that emulates Googlebot with JavaScript disabled (Screaming Frog in "Render: Text Only" mode). Compare the number of discovered pages with your XML sitemap or your expected inventory. A significant gap often indicates a fragment routing or excessive JavaScript dependency issue.
What technical migrations should be considered concretely?
If you are using a framework like React Router, Vue Router, or Angular Router in hash mode, migrating to "history" mode (or "HTML5 mode") should be your top priority. This requires server configuration to handle rewrites: every URL must point to your application entry point, which then manages the routing.
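With Vue Router 4, for instance, the switch itself is a one-line change in the router factory (the routes below are placeholders):

```typescript
import { createRouter, createWebHistory } from "vue-router";

// Placeholder routes with lazy-loaded page components.
const routes = [
  { path: "/", component: () => import("./pages/Home.vue") },
  { path: "/products", component: () => import("./pages/Products.vue") },
];

// Before (hash mode): history: createWebHashHistory() -- URLs look like
// example.com/#/products and all collapse to example.com/ for Googlebot.

// After (history mode): URLs look like example.com/products -- real server
// paths, provided the server rewrites them to the app entry point.
export const router = createRouter({
  history: createWebHistory(),
  routes,
});
```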
For complex sites, consider a hybrid architecture with partial SSR: high-SEO stakes pages (categories, product sheets, articles) are rendered server-side, while purely UX interactions (modals, tabs, accordions) can remain client-side with fragments. You maintain a fluid experience without sacrificing indexing.
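Whatever the split, history mode only works if unknown paths fall back to the application entry point. A minimal sketch with an Express server (Nginx try_files or Apache mod_rewrite achieve the same result; the folder and port are assumptions):

```typescript
import express from "express";
import path from "node:path";

const app = express();
const dist = path.resolve("dist"); // hypothetical build output folder

// Serve the SPA's built static assets first.
app.use(express.static(dist));

// Catch-all rewrite: any path that matched no static file falls back to the
// app entry point, so a deep link like /products returns the application
// instead of a 404, and the client router takes over from there.
app.get("*", (_req, res) => {
  res.sendFile(path.join(dist, "index.html"));
});

app.listen(3000, () => console.log("listening on :3000"));
```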
What mistakes to avoid during the redesign?
Never remove fragments outright without a redirect plan. If your URLs with # have accumulated external backlinks or social shares (rare but possible), you need to map these old URLs to the new ones. Because the server never receives the fragment, the detection and redirect must happen in client-side JavaScript.
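A minimal sketch of that client-side redirect, with a hypothetical route map:

```typescript
// Map legacy hash routes to their new real URLs (entries are hypothetical).
const legacyRoutes: Record<string, string> = {
  "#/products": "/products",
  "#category-shoes": "/products/shoes",
};

// The fragment never reaches the server, so the lookup has to run in the
// browser, as early as possible in the page load.
const target = legacyRoutes[window.location.hash];
if (target) {
  // replace() keeps the dead hash URL out of the browser history.
  window.location.replace(target);
}
```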
Second trap: not testing server rendering before going live. A poorly configured SSR can generate empty content or JavaScript errors that break indexing even worse than before. Use the URL inspection tool in Search Console to validate that the rendered content matches expectations.
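In addition to the Search Console check, a basic smoke test can fetch the raw server response, with no JavaScript execution, and verify that the critical content is already present (the URL and marker string are assumptions):

```typescript
// Basic SSR smoke test: fetch the raw HTML (no JavaScript execution, like a
// text-only crawler) and check that the critical content is already in it.
async function checkServerRendering(): Promise<void> {
  const response = await fetch("https://example.com/products"); // hypothetical page
  const html = await response.text();

  const expectedMarker = "<h1>Products</h1>"; // hypothetical critical content
  if (html.includes(expectedMarker)) {
    console.log("OK: content is server-rendered, visible without JavaScript");
  } else {
    console.error("FAIL: raw response is missing the expected content");
    process.exitCode = 1;
  }
}

checkServerRendering();
```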
- Audit server logs to identify uncrawled URLs with fragments
- Migrate from hash routing to history mode in your JavaScript frameworks
- Configure server rewrites (Apache, Nginx) to support HTML5 routing
- Implement SSR or prerendering for critical SEO pages
- Set up redirects for old fragment URLs if they have backlinks
- Validate rendering with the Search Console inspection tool before and after migration
❓ Frequently Asked Questions
Do fragments (#) pass PageRank to the targeted sections?
Can an XML sitemap contain URLs with fragments to force indexing?
Do fragments cause problems for internal linking and crawl depth?
Does dynamic prerendering (like Prerender.io) definitively solve the problem?
Do modern frameworks (Next.js, Nuxt) automatically avoid this trap?
🎥 Watch the full video on YouTube → (Google Search Central, 4 min, published on 29/04/2020)