
Official statement

Google generally does not consider URLs with hash fragments (#) for indexing. It is advisable to use the HTML5 History API for navigation in JavaScript applications so that Google can properly index these pages.
🎥 Source video

Extracted from a Google Search Central video

⏱ 56:15 💬 EN 📅 12/06/2018 ✂ 10 statements
Watch on YouTube (16:56) →
Other statements from this video (9)
  1. 13:50 Should you really drop hreflang tags in anchor links?
  2. 18:29 Should you really fix every 404 error reported in Search Console?
  3. 23:48 Do customer reviews and star ratings really affect organic SEO rankings?
  4. 27:56 Why do your rankings drop when you haven't touched your pages?
  5. 29:49 Should you really disavow toxic backlinks, or does Google handle them on its own?
  6. 37:15 Do Search Console impressions really count what you think they do?
  7. 42:12 Does Google treat translated content as duplicate content?
  8. 53:06 Can language parameters in the URL really be indexed correctly by Google?
  9. 54:05 Should you really keep 301 redirects in place for a year after a site migration?
📅 Official statement from 12/06/2018 (7 years ago)
TL;DR

Google generally ignores hash fragments (#) for indexing, which poses challenges for JavaScript applications that use this navigation system. The recommended technical solution involves adopting the HTML5 History API, allowing Google to crawl and index each state of the application as a distinct URL. This is a significant technical undertaking for sites heavily reliant on client-side routing.

What you need to understand

What exactly are hash fragments and why does Google ignore them?

Hash fragments (also known as URL fragments or anchors) are the part of a URL that follows the # symbol. Historically, they were used to point to a specific section of an HTML page through anchors. Their technical peculiarity: they are never sent to the server during an HTTP request.

This characteristic explains why Google does not consider them for indexing. When Googlebot requests example.com/page#section, the server only receives example.com/page. The fragment remains client-side, processed only by the browser. To Google, example.com/page#section1 and example.com/page#section2 refer to the same URL: example.com/page.
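The split is easy to see with the standard URL API, which parses the fragment separately from the path; a minimal sketch:

```ts
// Minimal illustration: the fragment exists only client-side.
const url = new URL("https://example.com/page#section2");

console.log(url.pathname); // "/page"      <- the only path the server receives
console.log(url.hash);     // "#section2"  <- browser-only, never part of the HTTP request
```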

Why have JavaScript applications widely adopted this approach?

Routers for JavaScript frameworks like Angular, React, and Vue long relied on hash routing, historically the default in many of them. This method allows the visible state of the application to change without reloading the page or querying the server. It is simple to implement and requires no specific server configuration.

The issue arises when these applications generate distinct content for each fragment. An online store with shop.com/#/product/123 and shop.com/#/product/456 displays two different products, but Google will only index one URL: shop.com/. The unique content of each product remains invisible to search engines.
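To make the mechanism concrete, here is a minimal hash-routing sketch; renderView and the #app element are illustrative assumptions, not code from any particular framework:

```ts
// Hypothetical render function: swaps the visible content for a given route.
function renderView(route: string): void {
  document.querySelector("#app")!.textContent = `Showing ${route}`;
}

// Classic hash routing: the "route" lives after the #, so every navigation
// below still looks like a request for shop.com/ to the server.
window.addEventListener("hashchange", () => {
  renderView(window.location.hash.slice(1)); // "#/product/123" -> "/product/123"
});
```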

What is the difference with the HTML5 History API recommended by Google?

The HTML5 History API (pushState and replaceState methods) changes the URL shown in the address bar without reloading the page, but this time without using a hash fragment. The application can transform shop.com/#/product/123 into shop.com/product/123, a real URL that the server can receive and process.
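A minimal sketch of the same navigation built on the History API (renderView is again an illustrative placeholder):

```ts
// Hypothetical client-side render for a given path.
function renderView(path: string): void {
  document.querySelector("#app")!.textContent = `Showing ${path}`;
}

// Navigation via the History API: the URL becomes a real path
// (e.g. "/product/123") without a page reload.
function navigate(path: string): void {
  history.pushState({}, "", path);
  renderView(path);
}

// Keep the browser's back/forward buttons working.
window.addEventListener("popstate", () => {
  renderView(window.location.pathname);
});
```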

This approach requires a proper server configuration. All application routes must redirect to the JavaScript entry point (usually index.html) for the application to take over client-side. It's technically more complex, but it's the only way to make the content crawlable and indexable correctly.

  • Fragments (#) are never transmitted to the server, thus invisible to traditional crawlers
  • A URL with a fragment = one single page in Google's eyes, regardless of the content displayed client-side
  • The HTML5 History API generates real distinct URLs that Google can crawl and index separately
  • Transitioning to History requires server configuration to manage the fallback to the application
  • Modern applications overwhelmingly favor History, with hash routing becoming obsolete

SEO Expert opinion

Does this statement really reflect Google's observed behavior?

Mueller's stance is consistent with fifteen years of field observations. Google has always treated fragments as client-side markers, not as distinct URL segments. Empirical tests confirm this: creating unique content behind hashes without a prerendering system leads to a consistent lack of indexing of that content.

Important nuance: Mueller says "generally." In some very specific cases, Google can execute JavaScript and find that the content changes depending on the fragment. But this capability remains limited and unpredictable. Relying on it is a gamble, not a serious SEO strategy.

Which applications really risk losing visibility?

Old Single Page Applications (SPAs) using hash routing are the first affected. Sites built with Angular 1.x or early versions of Backbone.js adopted this approach en masse. If these sites have neither transitioned to History nor implemented a prerendering system, they potentially lose 90% of their indexable content.

Blogs using client-side filtering systems with hashes (blog.com/#category/seo) face the same issue: Google indexes the homepage and ignores the categories. [To be verified]: some report that Google is beginning to handle these cases better in Search Console, but there is no official confirmation or quantitative data.

Attention: If your Google Search Console shows discovered but non-indexed URLs with fragments (#) in the Coverage tab, it's a direct alert signal. Google sees these URLs but refuses to treat them as distinct pages.

Does the History API really solve all indexing issues?

History significantly improves the situation but does not automatically guarantee perfect indexing. Google still needs to execute JavaScript to see the final content, which introduces latency and consumes crawl budget. Sites with thousands of dynamic pages may still encounter difficulties.

The most reliable solution remains Server-Side Rendering (SSR) or static prerendering. Next.js, Nuxt.js, and prerendering solutions like Prerender.io send complete HTML to Googlebot, eliminating any dependency on JavaScript execution. This is technically heavier, but infinitely more reliable for ensuring indexing.

Another seldom mentioned point: even with History, poorly architected applications can generate content duplication issues if the server does not return the correct HTTP status codes. A nonexistent page should return a 404, not a 200 with a client-side error message.
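One commonly cited mitigation for this soft-404 pattern is to flag the error state client-side so the 200-status shell is not indexed as real content; a minimal sketch, intended to be called when the client-side router matches no route:

```ts
// Inject a noindex directive when the router resolves no route, so an error
// shell served with HTTP 200 is not indexed as a real page (a "soft 404").
function markAsSoft404(): void {
  const meta = document.createElement("meta");
  meta.name = "robots";
  meta.content = "noindex";
  document.head.appendChild(meta);
}
```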

Practical impact and recommendations

How can I diagnose if my site suffers from this fragment issue?

First instinct: open your site and navigate. If the URL in the address bar contains # symbols followed by paths (/#/products, /#page/about), you are affected. Don't panic if you see simple anchors (#contact) used for in-page scrolling; that is not the concern here.

Second step: query Google with site:yourdomain.com in the search. Compare the number of indexed pages to the actual number of unique content pages your site generates. A significant gap (50 pages indexed while you have 500) suggests a structural indexing problem, potentially related to fragments or JavaScript execution.

What technical solution should I concretely adopt?

Transitioning to the History API requires two parallel efforts. On the JavaScript side, replace hash routing with History routing in your framework: Angular provides PathLocationStrategy; React Router uses BrowserRouter instead of HashRouter; Vue Router accepts history mode.
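For illustration, the React side of that change might look like this (assuming react-router-dom v6; ProductPage is a placeholder component):

```tsx
import { BrowserRouter, Routes, Route } from "react-router-dom";

// Illustrative placeholder for a product detail view.
function ProductPage() {
  return <p>Product detail</p>;
}

// Before: <HashRouter>    -> shop.com/#/product/123
// After:  <BrowserRouter> -> shop.com/product/123
export function App() {
  return (
    <BrowserRouter>
      <Routes>
        <Route path="/product/:id" element={<ProductPage />} />
      </Routes>
    </BrowserRouter>
  );
}
```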

On the server side, configure a universal fallback. All application URLs should return the main HTML file for the JavaScript to take over. With Apache, use a .htaccess file with RewriteRule. Nginx requires a try_files directive in the configuration. Be careful: this configuration can hide real 404s if implemented poorly.
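The same fallback logic, sketched on a Node/Express server for illustration (paths and port are assumptions; Apache and Nginx achieve the equivalent with the directives above):

```ts
import express from "express";
import path from "path";

const app = express();

// Serve built assets first so real files are not swallowed by the fallback.
app.use(express.static(path.join(__dirname, "dist")));

// Every remaining route returns the SPA shell; the client router takes over.
app.get("*", (_req, res) => {
  res.sendFile(path.join(__dirname, "dist", "index.html"));
});

app.listen(3000);
```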

Should I consider more robust solutions than just History?

If your site heavily depends on organic traffic, History alone remains a fragile compromise. Dynamic prerendering detects crawler user agents and serves them static HTML while real users get the JavaScript application. Cloudflare Workers, Prerender.io, or Rendertron offer this capability.
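A hedged sketch of that user-agent split as Express middleware; the bot pattern and render-service URL are illustrative assumptions (Rendertron-style), not a drop-in configuration:

```ts
import express from "express";

const app = express();
const BOT_UA = /googlebot|bingbot|yandex|baiduspider/i;

app.use(async (req, res, next) => {
  if (!BOT_UA.test(req.headers["user-agent"] ?? "")) {
    next(); // regular visitors fall through to the JavaScript application
    return;
  }
  // Crawlers get static HTML fetched from a prerender service.
  // (Global fetch requires Node 18+.)
  const target = `https://render.example.com/render/${encodeURIComponent(
    `https://shop.example.com${req.originalUrl}`,
  )}`;
  const html = await (await fetch(target)).text();
  res.status(200).send(html);
});
```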

SSR (Server-Side Rendering) represents the most sustainable solution. Next.js for React, Nuxt.js for Vue, or Angular Universal turn your SPA into a hybrid application generating server-side HTML. It's a heavy technical investment but eliminates 90% of the indexing problems related to JavaScript.
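As an illustration, a minimal Next.js page rendered server-side (pages router; the API endpoint is an assumed example):

```tsx
// File: pages/product/[id].tsx
import type { GetServerSideProps } from "next";

type Props = { name: string };

export const getServerSideProps: GetServerSideProps<Props> = async ({ params }) => {
  // The product is fetched on the server, so Googlebot receives complete
  // HTML without executing any client-side JavaScript.
  const res = await fetch(`https://api.example.com/products/${params?.id}`);
  const product: { name: string } = await res.json();
  return { props: { name: product.name } };
};

export default function ProductPage({ name }: Props) {
  return <h1>{name}</h1>;
}
```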

  • Audit all URLs on your site to identify the use of hash fragments for navigation
  • Check in Google Search Console the gap between discovered URLs and indexed URLs
  • Test the "URL Inspection" tool on your pages with fragments to see what Google actually renders
  • Migrate routing to the History API if you are still using hash routing
  • Configure the server to redirect all routes to the application entry point
  • Implement a prerendering or SSR system if crawl budget is critical for your business
Hash fragments condemn a significant portion of your content to invisibility in search results. Moving to the History API is the minimum requirement, but SSR architectures or prerendering solutions offer a much higher guarantee of indexing. This type of technical overhaul requires deep expertise in front-end development and a thorough understanding of how crawlers work. Hiring a specialized technical SEO agency helps secure this transition without degrading user experience or compromising your visibility during migration.

❓ Frequently Asked Questions

Can Google still index content behind a hash fragment?
Technically yes, if Google executes the JavaScript and detects a content change, but this is unpredictable and not guaranteed. Building an SEO strategy on that possibility is a risky bet.
Do internal navigation anchors (#contact, #section2) cause SEO problems?
No, traditional anchors used for in-page scrolling cause no problems; Google simply ignores them. The issue only concerns fragments used for application routing with distinct content.
Should old URLs with # be redirected to the new fragment-free URLs?
Classic server redirects (301, 302) do not work because the fragment never reaches the server. The transition must be handled client-side in JavaScript, detecting legacy hash URLs and programmatically redirecting to the new ones, as sketched below.
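A minimal sketch of that client-side hand-off, assuming legacy routes start with #/:

```ts
// Runs in the SPA entry point. The server never sees the fragment, so the
// browser itself must translate legacy hash URLs into clean paths.
if (window.location.hash.startsWith("#/")) {
  const cleanPath = window.location.hash.slice(1); // "#/product/123" -> "/product/123"
  window.location.replace(cleanPath); // replace() keeps the old URL out of history
}
```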
Does the History API work in all browsers?
Yes, all modern browsers have supported the History API since 2011. Only IE9 and earlier versions are a problem, and their market share is negligible. Polyfills exist for maximum compatibility.
How long does a migration from hash routing to History take?
It depends on the complexity of the application. A simple site can switch in a few hours, while a large SPA requires several days of development plus testing. Server configuration adds at least half a day.