Official statement
Other statements from this video
- 2:45 Does the Google snippet always have to match the landing page exactly?
- 3:45 Does Google really detect the language of your multilingual site on its own?
- 10:01 Do you really need multiple domains for international SEO?
- 12:02 Can Google ignore your language versions if they look too similar?
- 12:41 Do iframes really hurt your site's SEO?
- 19:33 Why does Search Console report structured data errors that appear nowhere else?
- 22:11 How does hreflang really determine which version of your site Google shows?
- 22:25 Do you really need to treat your AMP pages as primary content for them to be indexed?
- 34:12 Why does Google progressively drop pages that redirect to 403 errors?
- 38:24 How does Google really handle duplicate internal links on the same page?
- 51:10 Is loading speed really a Google penalty criterion?
- 61:18 Why can a double AMP/desktop canonical kill the display of your pages?
Google claims that URL structures based on AJAX fragments (hashbangs) harm SEO. The recommendation is clear: favor clean URLs that work without client-side JavaScript. In practical terms, this means switching to traditional server URLs or adopting server-side rendering for your JavaScript applications.
What you need to understand
What is a hashbang and why was it used?
The hashbang (#!) was a popular technique in the 2010s for creating single-page applications (SPAs) with dynamic content. The idea was simple: use the URL fragment after the hash to load different views via JavaScript without reloading the page.
Google even proposed an AJAX crawling scheme in which example.com/#!/article was translated to example.com/?_escaped_fragment_=/article for bots. That scheme has since been deprecated, making hashbangs even more problematic for indexing.
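To make the pattern concrete, here is a minimal sketch of that 2010s-style hashbang router; the route table, the views, and the #app container are hypothetical placeholders, not code from the video:

```javascript
// Minimal sketch of the 2010s hashbang routing pattern (illustration only).
// The server always returns the same shell page; the view is picked client-side.
const routes = {
  '/': () => '<h1>Home</h1>',
  '/article': () => '<h1>Article</h1>',
};

function renderFromHash() {
  // On example.com/#!/article, location.hash is "#!/article"
  const path = window.location.hash.replace(/^#!/, '') || '/';
  const view = routes[path] || (() => '<h1>Not found</h1>');
  document.getElementById('app').innerHTML = view();
}

window.addEventListener('hashchange', renderFromHash);
window.addEventListener('DOMContentLoaded', renderFromHash);
```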
Why do hashbangs pose a technical problem for indexing?
URL fragments (everything following the #) are never sent to the server during an HTTP request. Only client-side JavaScript can read them and act accordingly. For Googlebot, this means extra work: executing JS, waiting for content to load, and hoping everything works.
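A quick way to see this for yourself, as a sketch: run a minimal Node server and navigate to a hashbang URL. The port and shell markup below are arbitrary choices for the demo:

```javascript
// Minimal Node demo: the fragment never reaches the server.
// Start this, then open http://localhost:3000/#!/article in a browser:
// the log shows only "/", because the browser strips "#!/article"
// before sending the HTTP request.
const http = require('http');

http.createServer((req, res) => {
  console.log('Server received:', req.url); // always "/", never the fragment
  res.setHeader('Content-Type', 'text/html');
  res.end('<div id="app"></div>'); // empty shell: content depends entirely on client-side JS
}).listen(3000);
```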
The reality on the ground? The crawl budget is wasted, rendering time skyrockets, and indexing becomes erratic depending on the complexity of your JavaScript code. Not to mention other engines like Bing or Yandex, which have historically performed worse with JS rendering.
What does Google really mean by a “clean URL structure”?
Google refers to URLs that function without client-side scripts. This means a typical HTTP request to the URL should return the complete content, or at minimum, a version of HTML that is usable by bots.
Current technical solutions include Server-Side Rendering (SSR), Static Site Generation (SSG), or progressive hydration. The goal: ensure that each URL corresponds to an identifiable server resource with a 200 HTTP code and content in the initial HTML.
- Avoid URLs like site.com/#!/page/article or site.com/#/product/123
- Prefer URLs like site.com/page/article or site.com/product/123
- Ensure the main content is present in the source HTML, not just injected by JavaScript afterwards
- Test your URLs with a simple curl or by disabling JavaScript: the essential content should be visible (a sketch of this check follows the list)
- Check in the Search Console that your pages are indexed with the correct content, not just an empty shell
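For the curl-style test in the list above, here is a hedged equivalent in Node (18+, which ships a global fetch); the URL and expected text are placeholders for your own pages:

```javascript
// Sketch: verify that the raw server HTML already contains the content
// you expect, without executing any JavaScript.
async function checkRawHtml(url, expectedText) {
  const res = await fetch(url);
  const html = await res.text();
  const found = html.includes(expectedText);
  console.log(`${url} -> HTTP ${res.status}, content in source: ${found}`);
  return res.status === 200 && found;
}

checkRawHtml('https://example.com/product/123', 'Product 123');
```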
SEO Expert opinion
Is this directive still relevant with Google's advances in JavaScript?
Googlebot has indeed made progress in JavaScript rendering since 2015, but claiming it perfectly handles all modern frameworks is a myth. Real-world observations show erratic behaviors: timeouts on heavy resources, issues with asynchronous API requests, and partial indexing on sites with lots of JS.
Müller's position remains relevant. Even if Googlebot can technically index content loaded in JS, it remains resource-intensive and less reliable than a traditional server URL. This difference becomes critical for your visibility on sites with thousands of pages.
What nuances should be considered based on the type of site?
A brochure site with 20 pages can afford extensive JavaScript if crawl budget is not an issue. But an e-commerce site with 10,000 product listings? There, every millisecond of rendering matters, and hashbangs become a structural disadvantage.
Complex web applications (SaaS, platforms) often have a public section that must be indexed and a private section (dashboard) which does not. For the public section, using SSR or SSG becomes non-negotiable. For the private section behind authentication, it doesn't matter: hashbangs have no SEO impact since these pages should not be indexed.
What should you do if your site still uses hashbangs today?
Let’s be honest: migrating a complete architecture is not trivial. It often involves redesigning the routing of your application, implementing SSR, and managing 301 redirects from old URLs. The risk of temporary traffic loss exists.
Two pragmatic approaches: either a progressive migration by sections of the site (start with priority pages), or implementing a hybrid system where clean URLs co-exist temporarily with the old ones. In any case, a prior technical audit is essential to identify dependencies. [To verify]: no Google data precisely quantifies the negative SEO impact of hashbangs versus clean URLs, but case studies show indexing gains of 30 to 50% after migration.
Practical impact and recommendations
How can you check if your site is affected by this issue?
The first step: open your site and look at the address bar. Do you see #! or #/ in your navigation URLs? If yes, you are affected. Second test: disable JavaScript in your browser (DevTools > Settings > Disable JavaScript) and navigate your site.
If the main content disappears or if navigation breaks completely, that's a red flag. Also, use the URL Inspection Tool in the Search Console to compare the raw HTML and the rendered HTML: a significant gap indicates excessive dependency on JavaScript.
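As a complementary check, a snippet like this one, pasted into the browser console on any page of your site, lists the internal links that rely on fragment routing (the #! and #/ patterns discussed above):

```javascript
// Console audit sketch: list links whose href relies on fragment routing.
const suspects = [...document.querySelectorAll('a[href]')]
  .map((a) => a.getAttribute('href'))
  .filter((href) => href.includes('#!') || href.includes('#/'));
console.log(`${suspects.length} fragment-routed links found:`, suspects);
```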
What concrete actions can be taken to correct the architecture?
The solution depends on your tech stack. With React, choose Next.js, which offers SSR and SSG natively. With Vue, Nuxt.js plays the same role. Angular offers Angular Universal for server-side rendering.
If you're starting from scratch or redesigning, favor Static Site Generation when possible: generate all your pages in static HTML at build time, and SEO becomes trivial. For highly dynamic content (changing prices, stock), combine SSG with client-side API requests for volatile data only.
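As an illustration of the SSG approach, here is a minimal sketch assuming the Next.js pages router (getStaticPaths and getStaticProps are Next.js's own data-fetching functions; the product data and file path are placeholders):

```javascript
// pages/product/[id].js: minimal Next.js (pages router) SSG sketch.
// The clean URL /product/123 is rendered to static HTML at build time,
// so it returns full content without any client-side JavaScript.
export async function getStaticPaths() {
  // Placeholder list of IDs; in practice, fetch them from your catalog.
  return { paths: [{ params: { id: '123' } }], fallback: false };
}

export async function getStaticProps({ params }) {
  // Placeholder data; in practice, fetch the product at build time.
  return { props: { product: { id: params.id, name: `Product ${params.id}` } } };
}

export default function ProductPage({ product }) {
  return <h1>{product.name}</h1>;
}
```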
What pitfalls should you avoid during migration?
The classic pitfall: forgetting 301 redirects from the old URLs with hashbang. The problem: hashbangs are not sent to the server, so a standard server redirect can't intercept them. Solution: manage redirects in client-side JavaScript for old URLs, then redirect to the new clean URLs.
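A minimal sketch of that client-side redirect, assuming the legacy URLs all used the #!/path convention; adapt the mapping if your old fragments followed another pattern:

```javascript
// Client-side shim: redirect legacy hashbang URLs to the new clean URLs.
(function redirectLegacyHashbang() {
  const hash = window.location.hash;
  if (hash.startsWith('#!')) {
    const cleanPath = hash.slice(2); // "#!/page/article" -> "/page/article"
    window.location.replace(cleanPath); // replace() keeps the old URL out of history
  }
})();
```

Note that this shim only works for clients that execute JavaScript, which is one more reason to complete the migration rather than keep it in place indefinitely.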
Another common mistake: implementing SSR but forgetting essential meta tags (title, description, canonical) in the initial render. Verify that your SSR generates complete HTML with all SEO metadata in the first server response, not just an empty structure filled in later by JS.
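For the metadata point, a sketch under the same Next.js (pages router) assumption: next/head is Next.js's built-in component for injecting tags into the server-rendered head, and the article fields and domain below are placeholders:

```javascript
// Emit SEO metadata in the first server response, not via client-side JS.
import Head from 'next/head';

export default function ArticlePage({ article }) {
  return (
    <>
      <Head>
        <title>{article.title}</title>
        <meta name="description" content={article.summary} />
        <link rel="canonical" href={`https://example.com/article/${article.slug}`} />
      </Head>
      <h1>{article.title}</h1>
    </>
  );
}
```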
- Audit all your URLs to identify those containing #! or #/ fragments
- Test visible content with JavaScript disabled on a representative sample of pages
- Compare the source HTML (View Source) to the rendered HTML in the inspector to measure JS dependency
- Plan a migration strategy suitable for your framework (SSR, SSG, or hybrid)
- Set up a complete mapping of old/new URL format with 301 redirects managed client-side if necessary
- Monitor indexing in the Search Console after migration and quickly resolve any abnormal drops
❓ Frequently Asked Questions
Are hashbangs (#!) completely obsolete for SEO in 2025?
Can a site using hashbangs still be indexed by Google?
What is the difference between a hashbang (#!) and a plain hash (#)?
Is Server-Side Rendering the only way to replace hashbangs?
How do you handle 301 redirects from hashbang URLs?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 56 min · published on 30/11/2017
🎥 Watch the full video on YouTube →