
Official statement

Google supports JavaScript redirects and follows them similarly to server-side redirects. It is recommended to submit the exact URL you want indexed rather than the one that redirects.
🎥 Source video

Extracted from a Google Search Central video

⏱ 2:41 💬 EN 📅 08/08/2019 ✂ 3 statements
Watch on YouTube (1:37) →
Other statements from this video (2)
  1. 1:07 Does Googlebot really render pages the way Chrome does?
  2. 2:08 Should you really favor automatic URL discovery over sitemaps?
📅 Official statement published 08/08/2019 (6 years ago)
TL;DR

Google claims to follow JavaScript redirects in a manner similar to server-side redirects. This statement suggests a technical equivalence that simplifies the management of migrations and SPA redesigns. However, the explicit recommendation to submit the final URL directly rather than the redirecting one reveals a nuance: "similar" does not mean "identical," and this difference can impact crawl budget and indexing speed.

What you need to understand

Why did Google need to clarify its stance on JavaScript redirects?

For years, the SEO doctrine held that server-side 301 redirects were the only reliable solution for passing PageRank and avoiding duplicate content. This approach worked perfectly in a world where sites served static HTML, but it came up against the rise of Single Page Applications (SPAs) and JavaScript frameworks like React, Angular, or Vue.js.

These modern architectures often handle navigation and redirects entirely on the client side, via window.location or equivalents. Development teams do not always have server access to set up traditional HTTP redirects. The question thus became critical: does Googlebot understand and follow these JS redirects, and more importantly, does it treat them as 301s?
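As a concrete illustration, a client-side redirect in an SPA often boils down to something like the sketch below. The paths and the redirect map are invented for the example, not taken from Google's statement:

```javascript
// Illustrative client-side redirect map, as an SPA might maintain one.
// These paths are made up for the example.
const redirects = {
  "/old-pricing": "/pricing",
  "/blog/old-post": "/blog/new-post",
};

// Resolve a path against the redirect map (identity if no entry exists).
function resolveRedirect(path) {
  return redirects[path] || path;
}

// In the browser, the app would then trigger the redirect, e.g.:
//   const target = resolveRedirect(window.location.pathname);
//   if (target !== window.location.pathname) window.location.replace(target);
// (window.location.replace avoids leaving the old URL in browser history.)
```

This is exactly the kind of redirect Googlebot can only discover after downloading the page and executing its JavaScript.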

What does "similarly" actually mean in this statement?

The choice of the word "similar" instead of "identical" is not trivial. Google acknowledges that its JavaScript rendering engine detects and executes client-side redirects and then follows the destination URL as it would with an HTTP redirect. Therefore, the transfer of signals (PageRank, authority, backlinks) is theoretically assured.

But "similar" leaves a gray area. A 301 redirect is instantaneous: the server responds before the content is even downloaded. A JS redirect requires Googlebot to download the HTML, load the JavaScript, execute it, and then detect the redirect instruction. This process consumes rendering time and crawl budget — two resources that Google allocates sparingly.

Why does Google still recommend submitting the final URL directly?

This recommendation reveals Google's true position: yes, JS redirects work, but they remain a costly technical detour. By submitting the final URL directly through the sitemap or internal links, you spare Googlebot from going through the JavaScript rendering phase to discover the true destination.

In practical terms, this means that if your migration or redesign requires JS redirects, they won't break your SEO. But if you have a choice, always prefer a server-side 301 redirect or, failing that, clean up your sitemaps and internal links to point directly to the canonical URLs.

  • JavaScript redirects are followed by Googlebot, which executes JS to detect the final destination.
  • The transfer of SEO signals (PageRank, authority) is ensured, but the process is slower than a traditional HTTP redirect.
  • Submitting the final URL directly (sitemap, linking) remains best practice to optimize crawl budget and speed up indexing.
  • JS redirects are not a deal-breaker, but they should not become the norm for technical convenience.

SEO Expert opinion

Is this statement consistent with on-the-ground observations?

On paper, yes. Since the rollout of JavaScript rendering in two waves (the first wave is almost instantaneous, the second wave is deferred), Googlebot indeed detects JS redirects and follows them. Tests in Search Console using the "Inspect URL" tool clearly show that the final URL is identified as canonical after executing the JavaScript.

But in practice, timing poses a problem. JS redirects can take several days to be integrated into the index, while a server 301 is recognized within a few hours. For a website migration or massive URL change, this latency can lead to a temporary loss of organic traffic — long enough to panic a client or a marketing director.

What risks does this approach entail for large sites?

The main risk is the crawl budget. On a site with several tens of thousands of pages, each JS redirect forces Googlebot to load a page, execute JavaScript, then reload the destination. Multiply this by thousands of URLs, and you saturate your JavaScript rendering quota — which is much more limited than the classic HTML crawl quota.

The second risk: redirect chains. If a JS redirect points to a URL that itself performs a server redirect, you create a cascade that is costly in both time and crawl budget. Google may decide not to follow the entire chain and treat an intermediate URL as canonical, which breaks your indexing strategy. [To be verified]: Google has never communicated a precise limit on the number of chained JS redirects it will follow.
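Chains like this are easy to surface from crawl data before Googlebot trips over them. A minimal sketch, assuming you have exported a URL-to-destination redirect map from your crawler (the map below is a toy example):

```javascript
// Walk a redirect map and return the full chain for a starting URL.
// The map is a toy stand-in for exported crawl data.
function redirectChain(redirects, start, maxHops = 10) {
  const chain = [start];
  let current = start;
  while (redirects[current] && chain.length <= maxHops) {
    current = redirects[current];
    if (chain.includes(current)) break; // guard against redirect loops
    chain.push(current);
  }
  return chain;
}

const map = {
  "/a": "/b", // e.g. a JS redirect
  "/b": "/c", // e.g. a server 301
};
// redirectChain(map, "/a") returns ["/a", "/b", "/c"]:
// two hops, a candidate for flattening into a single redirect.
```

Any chain longer than two entries is a candidate for flattening: point the source directly at the final URL.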

In what cases does this statement not fully apply?

Conditional JS redirects (based on geo-location, user-agent, or cookies) can be problematic. Googlebot does not send the same signals as a real user: no persistent cookies, no precise geo-location. If your JS redirect depends on these parameters, Googlebot may never see the destination URL you want indexed.

Another edge case: redirects triggered after a user event (click, scroll, timeout of several seconds). Googlebot does not simulate these interactions. If your JS redirect waits for a click to execute, it simply will never be detected. Let's be honest: these situations are more about design error than advanced SEO, but they do exist in the field.

Caution: JavaScript redirects do not carry HTTP status codes (301, 302, 307). Google interprets a JS redirect as permanent by default, but some third-party tools (crawlers, analytics) may not detect it at all, which complicates SEO audits.

Practical impact and recommendations

What should you do if your site uses JavaScript redirects?

First step: audit your existing redirects. Crawl your site with Screaming Frog with JavaScript rendering enabled and compare the results with a classic HTML crawl. If URLs appear indexable in HTML mode but redirect in JS mode, you have found a potential friction point for Googlebot.
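Comparing the two crawls can be reduced to a simple set intersection. A sketch, assuming you have exported two URL lists from your crawler (the sample data is invented):

```javascript
// URLs indexable in the raw-HTML crawl but redirecting in the
// JS-rendered crawl are the friction points to investigate.
function jsOnlyRedirects(htmlIndexable, jsRedirecting) {
  const redirecting = new Set(jsRedirecting);
  return htmlIndexable.filter((url) => redirecting.has(url));
}

// Illustrative crawl exports, not real data.
const indexableInHtml = ["/a", "/b", "/c"];
const redirectsWhenRendered = ["/b", "/d"];
// jsOnlyRedirects(indexableInHtml, redirectsWhenRendered) returns ["/b"]:
// the URL looks fine in plain HTML but redirects once JavaScript runs.
```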

Second step: clean up your sitemaps and internal linking. If a URL redirects in JavaScript, do not submit it in your XML sitemap — instead, submit the direct destination URL. The same logic applies to internal links: point to the true final URL. This prevents Googlebot from wasting time and crawl budget executing unnecessary JavaScript.
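That cleanup step is mechanical once you have the redirect map. A minimal sketch (URL names and the map are illustrative): rewrite every sitemap entry to its final destination and drop the duplicates this creates:

```javascript
// Replace redirecting URLs in a sitemap URL list with their final
// destinations, then deduplicate. The redirect map is illustrative.
function cleanSitemapUrls(urls, redirects) {
  const resolved = urls.map((url) => redirects[url] || url);
  return [...new Set(resolved)];
}

const sitemapUrls = ["/old-pricing", "/pricing", "/about"];
const redirects = { "/old-pricing": "/pricing" };
// cleanSitemapUrls(sitemapUrls, redirects) returns ["/pricing", "/about"]:
// the redirecting URL is gone and its destination appears only once.
```

The same pass can be run over your internal-link targets before a release.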

What mistakes should absolutely be avoided with JavaScript redirects?

Never create mixed redirect chains (JS → 301 → 302 → final URL). Google may decide to stop mid-way, and you will lose control over the indexed canonical URL. If you need to migrate a site, unify the method: either everything in server-side 301, or everything in JS, but never a chaotic mix.

Another common mistake: deploying JS redirects without a canonical tag on the source page. Even if Google follows the redirect, the absence of a canonical can create temporary ambiguity and slow down the consolidation of signals. And this is where it becomes troublesome: during this period of ambiguity, your traffic may drop.

How can you check that Google is correctly handling your JavaScript redirects?

Use the "URL Inspection" tool in Google Search Console. Test the URL that contains the JS redirect, then click on "Test live URL". In the "Rendered Page" section, verify that the detected canonical URL matches the desired destination URL.

Complement this with server log tracking. If Googlebot continues to massively crawl the old URLs that redirect in JS weeks after implementation, it is a signal that the popularity transfer is not as smooth as with a classic 301. Adjust your strategy accordingly: switch to server redirect or strengthen internal linking towards the final URLs.
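Counting those Googlebot hits is straightforward from access logs. A sketch assuming combined-log-style lines; the log lines, IPs, and URLs below are invented for the example:

```javascript
// Count Googlebot requests hitting known-redirecting (old) URLs.
// Assumes combined-log-style access log lines; sample data is invented.
function googlebotHitsOnOldUrls(logLines, oldUrls) {
  const old = new Set(oldUrls);
  const counts = {};
  for (const line of logLines) {
    if (!/Googlebot/.test(line)) continue; // keep only Googlebot traffic
    const m = line.match(/"(?:GET|HEAD) (\S+)/); // extract the request path
    if (m && old.has(m[1])) counts[m[1]] = (counts[m[1]] || 0) + 1;
  }
  return counts;
}

const lines = [
  '66.249.66.1 - - [10/May/2025] "GET /old-page HTTP/1.1" 200 "-" "Googlebot/2.1"',
  '66.249.66.1 - - [10/May/2025] "GET /new-page HTTP/1.1" 200 "-" "Googlebot/2.1"',
  '203.0.113.5 - - [10/May/2025] "GET /old-page HTTP/1.1" 200 "-" "Mozilla/5.0"',
];
// googlebotHitsOnOldUrls(lines, ["/old-page"]) returns { "/old-page": 1 }:
// the non-Googlebot hit on /old-page is ignored.
```

If those counts stay high weeks after the migration, the redirects are not being consolidated as quickly as a server 301 would be. (In production, verify Googlebot by reverse DNS rather than trusting the user-agent string alone.)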

  • Crawl the site in "JavaScript Rendering" mode to identify all active JS redirects
  • Remove URLs that redirect from XML sitemaps and replace them with their final destinations
  • Update internal linking to point directly to the canonical URLs
  • Add a rel="canonical" tag on source pages that redirect in JS
  • Test each critical redirect via the "URL Inspection" tool in Search Console
  • Monitor server logs to verify that Googlebot stops crawling the old URLs
JavaScript redirects work and pass SEO signals, but they remain a second-choice solution compared to server redirects. If your technical architecture requires JS, ensure your internal linking and sitemaps point directly to the final URLs. These optimizations can be complex to manage at scale, especially on SPA sites or hybrid architectures. If you lack the time or internal resources to audit and correct these points, engaging a specialized SEO agency can help you avoid costly traffic losses and ensure a controlled technical transition.

❓ Frequently Asked Questions

Do JavaScript redirects pass PageRank like a 301 redirect?
Yes. Google states that JS redirects pass SEO signals in a way similar to server-side 301s. However, the process is slower because it requires JavaScript execution, which can delay signal consolidation by several days.
Should URLs that redirect in JavaScript be included in the sitemap?
No. Google explicitly recommends submitting the final destination URL directly in the sitemap, not the one performing the redirect. This avoids wasting crawl budget on intermediate pages.
Do JavaScript redirects behave the same on mobile and desktop?
In theory yes, since mobile Googlebot also executes JavaScript. But if your JS redirect depends on specific conditions (user-agent, screen size), it may not be detected uniformly. Always test both versions in Search Console.
Can a JavaScript redirect be used to handle a full site migration?
It is possible but strongly discouraged. A site migration requires fast, reliable recognition of the new URLs. Server-side 301 redirects remain the safest method to avoid traffic loss or indexing latency.
How can I tell whether Google is correctly following my JavaScript redirects?
Use the "URL Inspection" tool in Search Console and run a live test of the URL. In the "Rendered page" section, check that the detected canonical URL matches the intended destination. Complement this with a server-log analysis to see whether Googlebot keeps crawling the old URL.