What does Google say about SEO?

Official statement

On JavaScript sites, if content is only loaded when the fragment is present, Google will likely not be able to index that content because fragments are removed during indexing. Only a very small number of historical sites still use fragments for indexing.
48:03
🎥 Source video

Extracted from a Google Search Central video

⏱ 1h03 💬 EN 📅 15/10/2020 ✂ 26 statements
Watch on YouTube (48:03) →
Other statements from this video (25)
  1. 2:16 Why do your Search Console data only tell part of the story?
  2. 3:40 Should you stop optimizing for impressions and clicks in SEO?
  3. 12:12 Is it true that mobile-first indexing completely overlooks your site's desktop version?
  4. 14:15 Why does the mobile-first indexing verification delay cause temporary discrepancies in Google’s index?
  5. 14:47 Should you show the same number of products on mobile and desktop for mobile-first indexing?
  6. 20:35 Can a minor redesign really trigger a Page Layout penalty?
  7. 23:12 Is it true that CLS isn't a ranking factor yet—should you still optimize it?
  8. 24:04 How does Google reassess a site's overall quality when the top pages remain well-ranked?
  9. 27:26 Do Links Without Anchor Text Really Hold Value for SEO?
  10. 29:02 Why do some pages take months to be reindexed after changes?
  11. 29:02 Should you really use sitemaps to speed up the indexing of your content?
  12. 31:06 Can an incomplete or outdated sitemap really harm your SEO?
  13. 33:45 Can you really host your XML sitemap on an external domain?
  14. 34:53 Does each language version really need its own self-referencing canonical?
  15. 37:58 Does structured breadcrumb really enhance your SEO ranking?
  16. 39:33 Do HTML breadcrumbs really enhance crawling and internal linking?
  17. 41:31 Does domain age and the choice of CMS really influence Google rankings?
  18. 43:18 Are backlinks really less important than we think for ranking on Google?
  19. 44:22 Does Google really ignore hidden content instead of penalizing it?
  20. 45:22 Is it really necessary to be 'significantly better' to climb the SERPs?
  21. 47:29 Are URLs with # really invisible for Google SEO?
  22. 50:07 Do words in the URL really still have a true impact on Google rankings?
  23. 51:45 Is it really necessary to list every keyword variation for Google to understand your content?
  24. 55:33 Paired AMP: Is it really the regular HTML that matters for indexing?
  25. 61:49 Does a sudden drop in traffic always signal a quality issue?
📅 Official statement from John Mueller (5 years ago)
TL;DR

Google systematically removes fragments (#hash) during indexing, which poses a major problem for JavaScript sites that load content based on these fragments. Only a handful of historical sites still benefit from an archaic exception for this type of indexing. If your front-end application relies on fragments to load dynamic content, that content remains invisible to Googlebot.

What you need to understand

Why does Google remove URL fragments during indexing?

URL fragments (the part after the #) were historically designed for client-side internal navigation, not for identifying distinct resources on the server side. The browser never sends them to the server during an HTTP request — only client-side JavaScript can read them.
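This split is easy to see by parsing a URL: the request target a browser actually sends to the server contains only the path and query string, while the hash stays on the client. A minimal sketch (runs in Node 10+ or any browser console):

```javascript
// The fragment is a purely client-side concept: parsing a URL shows
// the hash lives apart from the path/query the server ever receives.
const url = new URL('https://example.com/page?tab=1#section-A');

// What the server sees in the HTTP request line:
const requestTarget = url.pathname + url.search;

console.log(requestTarget); // "/page?tab=1" — no fragment
console.log(url.hash);      // "#section-A" — readable only by client-side JS
```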

Google decided to standardize this logic within its indexing system: when Googlebot encounters a URL with a fragment, it removes it before processing. As a result, https://example.com/page#section-A and https://example.com/page#section-B are treated as a single URL for Google's index. This rule applies to almost all current websites.
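The collapsing behavior can be sketched with a hypothetical normalization helper (the function name and logic are illustrative, not Google's actual code): strip the fragment, and every fragment variant maps to the same index key.

```javascript
// Illustrative sketch: an indexer-style normalization that collapses
// fragment variants of the same URL into one key, as described above.
function indexKey(rawUrl) {
  const url = new URL(rawUrl);
  url.hash = ''; // drop the fragment before processing
  return url.href;
}

indexKey('https://example.com/page#section-A'); // "https://example.com/page"
indexKey('https://example.com/page#section-B'); // same key: one URL in the index
```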

What is the exception for 'historical sites' mentioned by Mueller?

There is an archaic indexing mechanism called 'hash-bang' (#!), introduced by Google in 2009 and officially deprecated in 2015. A few very old sites — remnants from the pre-HTML5 web — still use this obsolete system. Google maintains minimal support for them, but it’s a fossil-like exception that practically no one benefits from anymore.

If your site was launched after 2015 and you wonder whether you are affected by this exception, the answer is no. Mueller's mention primarily serves to clarify that there is a minuscule minority of cases where fragments are still processed, but it should not create false hope.

Why do modern JavaScript frameworks create issues with this rule?

Many Single Page Applications (SPAs) built with React, Vue, or Angular use client-side routing based on fragments. When a user clicks on an internal link, the framework loads new content via JavaScript without reloading the page — often changing the URL by adding or modifying a fragment.

The catch: if your content is only rendered when the fragment is present, and Googlebot removes this fragment before indexing, then your content simply does not exist for Google. The bot sees an empty shell. This is precisely what Mueller points out: a technical setup that renders your content invisible by design.
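The anti-pattern looks like this in miniature (a hypothetical hash-router, reduced to a pure function so the failure mode is visible outside a browser; in a real SPA the same logic would run on a `hashchange` listener):

```javascript
// Anti-pattern sketch: content that exists only when a fragment is present.
function contentForUrl(rawUrl) {
  const section = new URL(rawUrl).hash.slice(1); // "" when the fragment is stripped
  // No fragment, nothing to render: this is the "empty shell".
  return section ? `<section id="${section}">content for ${section}</section>` : '';
}

contentForUrl('https://example.com/page#faq'); // renders the FAQ section for a user
contentForUrl('https://example.com/page');     // "" — what Googlebot actually indexes
```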

  • Googlebot removes fragments (#hash) before indexing — this is a nearly universal rule
  • JavaScript sites that load content conditioned by a fragment create invisible content for Google
  • The 'historical sites' exception is an outdated edge case (hash-bang pre-2015) that affects hardly anyone
  • Modern SPAs must use the HTML5 History API (pushState/replaceState) for SEO-compatible routing
  • If your URL contains a #, ask yourself whether the displayed content relies on this fragment — if yes, it's a red flag

SEO Expert opinion

Is this statement consistent with observed practices in the field?

Yes, absolutely. Technical audits on thousands of JavaScript sites confirm that fragments are a recurring blind spot. We often see SPAs with entire sections of content — FAQs, product descriptions, blog articles — rendered only when a specific fragment is present in the URL. Google sees nothing.

The confusion often arises from a misunderstanding between client navigation and server indexing. Developers test their site in a modern browser, see the content loading correctly, and think everything is fine. However, Googlebot operates differently: it removes the fragment before even starting the rendering. What the bot indexes is the initial state of the page without the fragment — often an empty shell.

What nuances should be added to this statement from Mueller?

The statement is accurate but lacks precision on a key point: the timing of when Google removes the fragment. Some practitioners believe that the bot runs the JavaScript first, sees the content load, and then normalizes the URL. This is false. Google removes the fragment before rendering, so the fetched URL never contains it and your JavaScript code never receives the fragment information when executed by Googlebot.

Another nuance: fragments can be useful for user experience (internal navigation anchors, scrolling to a section) as long as they do not condition the display of content. If your content is already present in the initial DOM and the fragment is just for scrolling, no problem. The trap is when the fragment triggers deferred loading or conditional rendering.

In what scenarios does this rule not apply or create unexpected issues?

An interesting edge case: PWAs (Progressive Web Apps) that use fragments to manage application state without reloading the page. If you build a complex PWA with multiple views, each view should ideally have its own clean URL (without fragment) to be indexable. But in practice, certain parts of the application may not need to be indexed — a private user dashboard, for example.

Where it really breaks down: sites that have migrated from an old hash-bang system to a modern SPA without changing their URL structure. They think they have solved the problem by switching to React or Vue, but continue to use fragments because “it has always worked this way.” Result: an invisible SEO regression for months. Technical migration is not enough if the URL architecture remains broken.

Warning: If you audit a JavaScript site and see URLs with fragments in Search Console, immediately check if critical content depends on these fragments. It is often a gaping hole in indexing that no one has detected.

Practical impact and recommendations

What concrete steps should be taken to avoid this indexing pitfall?

The first step: audit your JavaScript routing. Open your browser's DevTools, enable network throttling to simulate a slow bot, and check if your main content loads even without the fragment in the URL. If you load /page and entire sections are empty, you have a fragment dependency issue.
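This manual check can be scripted. A hedged sketch: fetch the page without its fragment (the way Googlebot receives it) and look for a marker string that you know only appears in the fragment-loaded section. The marker and URL are placeholders for your own site.

```javascript
// Does the HTML fetched WITHOUT the fragment still contain the content?
function auditHtml(html, marker) {
  return html.includes(marker);
}

// Usage (Node 18+, global fetch) — URL and marker are illustrative:
// const html = await (await fetch('https://example.com/page')).text();
// if (!auditHtml(html, 'text from your fragment-loaded section')) {
//   console.warn('Content missing without the fragment: indexing risk');
// }
```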

Next, migrate to the HTML5 History API (pushState/replaceState). All modern frameworks natively support it: React Router, Vue Router, Angular Router. Configure your routing to use ‘clean’ URLs without fragments. Instead of /page#section-A, use /page/section-A. The server must be configured to serve the same application on all these routes, then the JavaScript takes over on the client side.
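A minimal vanilla sketch of what those routers do under the hood (route names are illustrative; React Router, Vue Router, and Angular Router handle this for you in history mode):

```javascript
// Path-based route table: clean URLs, no fragments.
const routes = {
  '/page': () => 'overview content',
  '/page/section-a': () => 'content for section A',
};

function resolve(pathname) {
  const view = routes[pathname];
  return view ? view() : 'not found';
}

// In the browser, navigation updates the URL via the History API:
function navigate(path) {
  history.pushState({}, '', path); // URL becomes e.g. /page/section-a
  document.querySelector('#app').innerHTML = resolve(path);
}
// The server must return the same HTML shell for every route so that
// a direct hit (or a Googlebot fetch) on /page/section-a still works.
```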

How can you check that content is indeed indexable by Google?

Use the URL inspection tool in Search Console to test your critical pages. Compare the rendered HTML that Google produces with what you see in your browser: if a section only appears when the fragment is present, it will be missing from the rendered version, a clear sign that the content is not indexable.

❓ Frequently Asked Questions

Are URL fragments (#hash) completely ignored by Google?
Yes, in almost all cases. Google strips fragments before indexing, which means a URL with or without a fragment is treated as identical. Only a tiny minority of historical sites (pre-2015 hash-bang) are an exception.
Can I use fragments for internal navigation without an SEO impact?
Yes, if the content is already present in the initial DOM and the fragment is only used to scroll to a section (a classic anchor). The problem only arises when the fragment conditions the loading of content via JavaScript.
How do you migrate from fragment-based routing to clean URLs?
Use the HTML5 History API (pushState/replaceState), supported by all modern frameworks. Configure your server to serve the application on all clean routes, then adapt your JavaScript code to handle routing without fragments.
Can Google index content loaded after a scroll or a click?
No, unless that content is loaded automatically on first render. Google does not interact with the page (no scrolling, no clicking). Indexable content must be present in the initial HTML or loaded automatically and immediately via JavaScript.
Do server-side redirects work with URL fragments?
No, the server never receives the fragment part of a URL (it is not transmitted in the HTTP request). Fragment-based redirects must be handled client-side in JavaScript if needed for the user experience.
🏷 Related Topics
Domain Age & History · Content · Crawl & Indexing · JavaScript & Technical SEO


