Official statement
Google has officially abandoned the hashbang (#!) format for prerendering JavaScript content. The recommendation is clear: prerendering should now rely on detecting the crawler's user-agent, not on the presence of escaped fragments in the URL. For sites still using old AJAX architectures, a swift technical migration is necessary to avoid poorly indexed dynamic content.
What you need to understand
Why is Google abandoning the hashbang format?
The hashbang format (#!) was a solution proposed by Google in 2009 to enable the indexing of JavaScript applications. At the time, Googlebot could not execute client-side JavaScript.
The principle was simple: a URL like example.com/#!page=contact would automatically trigger a server-side prerender. Google would transform this URL into example.com/?_escaped_fragment_=page=contact to retrieve a static HTML version.
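For illustration, that legacy mapping can be reproduced in a few lines of TypeScript; the function name and example URL are ours, and Google no longer performs this transformation:

```typescript
// Reproduces the old AJAX crawling scheme's mapping from #! fragments to the
// _escaped_fragment_ query parameter. Illustrative only: this mechanism is retired.
// Note that special characters in the fragment end up percent-encoded.
function toEscapedFragmentUrl(hashbangUrl: string): string {
  const url = new URL(hashbangUrl);
  if (!url.hash.startsWith("#!")) return hashbangUrl; // nothing to rewrite
  const fragment = url.hash.slice(2); // drop the leading "#!"
  url.hash = "";
  url.searchParams.set("_escaped_fragment_", fragment);
  return url.toString();
}

console.log(toEscapedFragmentUrl("https://example.com/#!page=contact"));
// -> https://example.com/?_escaped_fragment_=page%3Dcontact
```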
Since Googlebot can now execute modern JavaScript, this mechanism has become obsolete. Google had already announced the deprecation of this approach back in 2015, but many sites still use it out of habit or ignorance.
What is user-agent-based prerendering?
User-agent-based prerendering involves detecting the crawler (Googlebot, Bingbot, etc.) via its HTTP user-agent and serving it a prerendered HTML version of dynamic content. This approach is technically cleaner.
Specifically, when Googlebot arrives at your site, the server detects its signature (Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)) and returns already generated HTML, without waiting for the JavaScript to execute.
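As a minimal sketch, assuming an Express server and a folder of pre-generated HTML snapshots (the snapshots directory and the list of bot signatures below are assumptions to adapt to your own setup), the detection could look like this:

```typescript
import express from "express";
import { promises as fs } from "fs";
import path from "path";

const app = express();

// Known crawler signatures; extend as needed (social-preview bots, other search engines, ...).
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;

// Assumption: a folder of pre-rendered HTML files, one per route (e.g. snapshots/contact.html).
const SNAPSHOT_DIR = path.join(__dirname, "snapshots");

app.use(async (req, res, next) => {
  const userAgent = req.get("user-agent") ?? "";
  if (!BOT_PATTERN.test(userAgent)) return next(); // regular visitors get the normal SPA

  const name = req.path.replace(/\/$/, "") || "index"; // "/" becomes "index", "/contact/" becomes "/contact"
  try {
    const html = await fs.readFile(path.join(SNAPSHOT_DIR, `${name}.html`), "utf8");
    res.type("html").send(html); // same content as the client-side app, already rendered
  } catch {
    next(); // no snapshot available: fall back to the regular response
  }
});

// The SPA shell and its assets are served as usual for everyone else.
app.use(express.static(path.join(__dirname, "public")));

app.listen(3000);
```

Regular visitors keep getting the normal client-side application; only known crawlers receive the snapshot, and any missing snapshot falls back to the standard response.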
This method works with any frontend architecture: React, Vue, Angular, or even plain JavaScript. It does not depend on a specific URL format and avoids complications related to URL fragments.
What risks do sites still using the hashbang face?
If your site is still using the hashbang format, Google will likely no longer fetch prerendered versions of your pages. Googlebot will attempt to execute the JavaScript itself, which may or may not work.
The issue arises when your JavaScript is heavy, poorly optimized, or generates content asynchronously after several seconds. In this case, Googlebot may index an empty shell or an incomplete intermediate state.
E-commerce sites with AJAX filters, older SPA (Single Page Applications), and certain corporate portals are particularly at risk if they have not migrated their infrastructure.
- The hashbang format (#!) is officially obsolete and no longer triggers automatic prerendering
- User-agent-based prerendering is now the recommended method for serving dynamic content to crawlers
- Sites still using _escaped_fragment_ must migrate or risk losing organic visibility
- This migration primarily affects older JavaScript applications (pre-2015) that have not evolved
- Modern Googlebot knows how to execute JavaScript, but prerendering remains more reliable for ensuring complete indexing
SEO Expert opinion
Is this statement consistent with the evolution of Googlebot?
Yes, it aligns with a logical trajectory. Google has long maintained the hashbang for backward compatibility, even after enhancing its JavaScript execution capabilities. The official deprecation comes late.
What is surprising is that Google has not communicated more aggressively about this migration. Many technical sites from 2010-2015 still operate under this logic, and their owners often remain unaware that they have been in degraded mode for several years.
The real signal here is that Google wants to simplify its crawling stack. Fewer special cases to manage means fewer bugs and fewer resources tied up. The underlying message is: adapt to modern standards or face the consequences.
Should prerendering still be done if Googlebot executes JavaScript?
The question is valid, but the answer depends on your architecture. If you have a modern React site with Server-Side Rendering (SSR) via Next.js or Remix, you probably do not need specific prerendering for crawlers.
However, if you have a traditional SPA with a nearly empty index.html and all content generated by client-side JavaScript, prerendering remains relevant. Googlebot can technically execute your JS, but doing so consumes crawl budget and delays indexing.
[To be verified]: Google does not provide any precise metrics on the performance delta between a 100% client-side site and a site with prerendering. Field tests show indexing gains between 15% and 40% on catalogs of over 10,000 pages, but these figures vary significantly based on implementation.
Which sites should be truly concerned?
E-commerce sites using old JavaScript frameworks (AngularJS 1.x, Backbone.js) with hashbang URLs are in the red zone. The same applies to corporate applications that have not been refactored since 2012-2015.
WordPress blogs with poorly designed AJAX plugins can also be affected, even if they do not officially use the hashbang. Some filter or pagination plugins generate URL fragments that inadvertently triggered the old _escaped_fragment_ mechanism.
If your organic traffic has stagnated or declined without apparent reason over the past 6-12 months, and your site uses a lot of JavaScript, check immediately how Googlebot sees your pages using the URL Inspection tool in Search Console.
Practical impact and recommendations
How can I check that my site no longer uses the hashbang?
First step: inspect your URLs. If you see #! in your internal links or sitemaps, you are affected. Use a crawler like Screaming Frog or Sitebulb in JavaScript mode to capture all dynamically generated URLs.
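If you want a quick check before launching a full crawl, a small script along these lines can flag suspect sitemap entries; the sitemap URL is a placeholder for your own:

```typescript
// Quick audit: flag sitemap entries that still reference the hashbang or the escaped fragment.
// The sitemap URL is a placeholder; point it at your own file (requires Node 18+ for fetch).
async function auditSitemap(sitemapUrl: string): Promise<void> {
  const xml = await (await fetch(sitemapUrl)).text();
  const urls = [...xml.matchAll(/<loc>(.*?)<\/loc>/g)].map((match) => match[1]);

  const suspect = urls.filter((u) => u.includes("#!") || u.includes("_escaped_fragment_"));
  console.log(`${urls.length} URLs checked, ${suspect.length} suspect`);
  suspect.forEach((u) => console.log(`  ${u}`));
}

auditSitemap("https://www.example.com/sitemap.xml").catch(console.error);
```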
Second step: check your .htaccess or server configuration. Look for rewrite rules that transform _escaped_fragment_ into something else. If you find this pattern, your infrastructure is obsolete.
Third step: use the URL Inspection tool in Search Console. Test 10-15 important pages of your site and compare the rendered HTML with what you see in your browser. If entire sections are missing in the crawled version, you have a problem.
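As a complement to those manual checks, a rough parity test can confirm that your server answers a crawler user-agent with complete HTML. Keep in mind that a plain HTTP request does not execute JavaScript, so this sketch only tests what the server returns, not Googlebot's rendered view; the URL and marker phrase are placeholders:

```typescript
// Rough parity check: request the same URL as a regular browser and as Googlebot, then verify
// that a phrase you know belongs in the content is present in both responses.
// A plain fetch does not run JavaScript: this tests what the server returns, nothing more.
const GOOGLEBOT_UA =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";
const BROWSER_UA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0 Safari/537.36";

async function checkUrl(url: string, marker: string): Promise<void> {
  for (const [label, ua] of [
    ["browser", BROWSER_UA],
    ["googlebot", GOOGLEBOT_UA],
  ] as const) {
    const res = await fetch(url, { headers: { "user-agent": ua } });
    const html = await res.text();
    console.log(`${label}: HTTP ${res.status}, marker ${html.includes(marker) ? "found" : "MISSING"}`);
  }
}

// Placeholders: test pages and phrases you actually expect to be indexed.
checkUrl("https://www.example.com/products/blue-widget", "Blue Widget").catch(console.error);
```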
What technical solution should I adopt for the migration?
The solution depends on your current stack. If you are using modern React, Vue, or Angular, switch to Server-Side Rendering (Next.js, Nuxt.js, Angular Universal). This is the best long-term approach, even if it requires refactoring.
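For reference, this is roughly what server-side rendering looks like with the Next.js pages router; the Product type, the API URL, and the route are placeholders, not a prescription:

```typescript
// pages/product/[slug].tsx — minimal sketch of SSR with the Next.js pages router.
// The Product type, the API URL and the route are placeholders for your own data layer.
import type { GetServerSideProps } from "next";

type Product = { slug: string; name: string; description: string };

export const getServerSideProps: GetServerSideProps<{ product: Product }> = async (context) => {
  const slug = context.params?.slug as string;
  const res = await fetch(`https://api.example.com/products/${slug}`); // hypothetical API
  if (!res.ok) return { notFound: true };
  const product: Product = await res.json();
  return { props: { product } }; // rendered to HTML on the server, for users and crawlers alike
};

export default function ProductPage({ product }: { product: Product }) {
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </main>
  );
}
```

The HTML sent over the wire already contains the product name and description, so indexing no longer depends on client-side JavaScript execution.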
If an SSR migration is too heavy in the short term, implement a user-agent-based prerendering service. Rendertron (Google's open-source tool) or Prerender.io (a commercial service) work well. Your server detects Googlebot and serves it a static HTML version generated on the fly.
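A hedged sketch of that pattern, assuming a rendering service reachable over HTTP: the endpoint, query format, and hostnames are placeholders, since Rendertron and Prerender.io each define their own URL scheme and authentication, so follow your provider's documentation:

```typescript
import express from "express";

const app = express();

// Same crawler detection as before; extend the list to the bots you care about.
const BOT_PATTERN = /googlebot|bingbot|yandex|baiduspider|duckduckbot/i;

// Placeholder endpoint: Rendertron and Prerender.io each expose their own URL format and auth.
const RENDER_ENDPOINT = "https://prerender.internal.example.com/render";

app.use(async (req, res, next) => {
  if (!BOT_PATTERN.test(req.get("user-agent") ?? "")) return next();

  // Ask the rendering service for a fully rendered version of the requested page.
  const target = `https://www.example.com${req.originalUrl}`; // placeholder public hostname
  const rendered = await fetch(`${RENDER_ENDPOINT}?url=${encodeURIComponent(target)}`);
  if (!rendered.ok) return next(); // on failure, fall back to the normal SPA response

  res.type("html").send(await rendered.text());
});

// Everyone else gets the regular single-page application.
app.use(express.static("dist"));

app.listen(3000);
```

The difference with static snapshots is that the HTML is generated on demand by the rendering service, which suits large or frequently changing catalogs better.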
A third, more rudimentary yet effective option: generate static HTML snapshots of your main pages and serve them only to crawlers. This approach is suitable for sites with a relatively stable catalog (fewer than 1,000 active pages).
What mistakes should be avoided during migration?
A classic mistake is serving differing content to users and crawlers without justification. Google calls this cloaking and can penalize you. Prerendering is tolerated as long as the content is strictly identical; only the format changes (HTML vs. JavaScript).
Another trap is forgetting to update the XML sitemap after removing hashbang URLs. If your sitemap still contains URLs with #!, Googlebot will crawl them and may index them as duplicates with the new clean URLs.
Finally, do not abruptly remove old URLs without 301 redirects. Even if Google no longer favors them, they may have accumulated authority and backlinks. Properly redirect each old hashbang URL to its modern equivalent.
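One nuance to keep in mind: the #! fragment is never sent to the server, so server-side 301s can only target the ?_escaped_fragment_= form that crawlers used to request. A sketch under that assumption, where mapFragmentToPath is a placeholder for your own routing logic:

```typescript
import express from "express";

const app = express();

// Placeholder mapping, e.g. "page=contact" -> "/contact"; replace with your real routing table.
function mapFragmentToPath(fragment: string): string {
  const params = new URLSearchParams(fragment);
  return "/" + (params.get("page") ?? "");
}

// Crawlers used to request /?_escaped_fragment_=page=contact; answer with a permanent redirect.
app.use((req, res, next) => {
  const fragment = req.query._escaped_fragment_;
  if (typeof fragment !== "string") return next();
  res.redirect(301, mapFragmentToPath(fragment));
});

app.listen(3000);
```

For visitors (and residual backlinks) landing directly on a #! deep link, a small script in the browser bundle can forward them to the clean URL:

```typescript
// Runs in the browser: legacy #! deep links never reach the server, so redirect client-side.
// mapFragmentToPath is the same placeholder mapping as in the server sketch above.
if (window.location.hash.startsWith("#!")) {
  const fragment = window.location.hash.slice(2); // e.g. "page=contact"
  window.location.replace(mapFragmentToPath(fragment));
}
```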
- Audit your URLs to detect the presence of #! or _escaped_fragment_ configurations
- Test the rendering of your pages with the URL Inspection tool in Search Console
- Migrate to Server-Side Rendering if your stack allows it (Next.js, Nuxt, Angular Universal)
- Install a user-agent-based prerendering service if SSR is too heavy (Rendertron, Prerender.io)
- Implement 301 redirects from old hashbang URLs to new clean URLs
- Update your XML sitemap to exclude obsolete URLs and include only modern versions
❓ Frequently Asked Questions
Does the hashbang (#!) format still work for Google indexing?
Do I absolutely have to switch to Server-Side Rendering (SSR)?
How can I tell whether Googlebot sees my JavaScript pages correctly?
Is user-agent-based prerendering considered cloaking?
What should I do with old hashbang URLs after the migration?