
Official statement

Links that go through JavaScript redirects can pass weight and signals if Googlebot is capable of following them. Otherwise, they are treated as if they do not exist.
🎥 Source video

Extracted from a Google Search Central video

⏱ 1h01 💬 EN 📅 31/01/2020 ✂ 21 statements
Watch on YouTube (30:43) →
Other statements from this video (20)
  1. 1:04 Does URL length really affect ranking in Google?
  2. 2:06 Does the language of backlinks really influence SEO?
  3. 4:17 Do full-screen interstitials really kill your SEO?
  4. 5:32 Can redirect interstitials really kill your indexing?
  5. 9:16 Should nofollow links in spam examples really worry us?
  6. 13:10 Why can pointing to AMP cache URLs compromise your SEO?
  7. 15:16 Can DMCA complaints really penalize your site in the SERPs?
  8. 16:16 Must breadcrumbs absolutely be duplicated on mobile to stay indexed?
  9. 18:01 Why does a URL overhaul take longer to index than a domain change?
  10. 19:15 Is site speed really a negligible ranking factor in Google?
  11. 24:07 Why does Google index non-canonical pages despite correct rel=canonical markup?
  12. 28:31 Why does Googlebot still render old versions of your pages?
  13. 33:09 Why do your pages fight each other in the SERPs when they target the same query?
  14. 34:17 Will structured data become an unmanageable headache for SEOs?
  15. 36:58 Should single-product sites really concentrate all their content on the homepage?
  16. 38:01 Does poorly implemented structured data mislead Google?
  17. 41:13 Do URLs blocked by robots.txt really consume your crawl budget?
  18. 42:15 Can featured snippets come from URLs outside position #1?
  19. 44:37 Do URLs with recent dates really boost your SEO?
  20. 46:30 Does Google really need to recrawl a page to take your link changes into account?
📅 Official statement dated 31/01/2020
TL;DR

Google claims that links passing through JavaScript redirects can pass weight and SEO signals, but only if Googlebot can follow them. Otherwise, they are simply ignored. The challenge for SEOs is to ensure these redirects are technically crawlable; otherwise, the entire link structure that depends on them becomes invisible to the engine.

What you need to understand

Why does Google differentiate JavaScript redirects from traditional redirects?

Traditional HTTP redirects (301, 302, 307, 308) are understood by Googlebot immediately, during the initial crawl: the redirect is signaled at the level of the HTTP protocol itself, before the page content is even downloaded.

JavaScript redirects, on the other hand, require the engine to execute client-side code to detect that a redirect exists. This involves an additional step: rendering the page. If Googlebot does not execute the JavaScript, or if the code is poorly implemented, the redirect remains invisible.
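
To make the difference concrete, here is a minimal sketch of a client-side redirect (the URLs are hypothetical). Nothing in the raw HTML or the HTTP response reveals it; Googlebot only discovers the forward by executing the script during rendering:

```typescript
// A JavaScript redirect only exists once this code runs. Unlike an
// HTTP 301, the response status and headers give no hint of it.
if (window.location.pathname === "/old-page") {
  // replace() swaps the current history entry instead of pushing a
  // new one, so the dead URL does not pollute the back button.
  window.location.replace("/new-page");
}
```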

What does "if Googlebot can follow them" really mean?

This phrasing covers several technical scenarios: the script may be blocked by robots.txt, the JavaScript may be malformed or depend on external resources that never load, or execution may run longer than the renderer's timeout.

In all these cases, Googlebot considers the link to be non-existent. No PageRank is passed, no discovery of the target page, no indexing. The link becomes technically dead from an SEO perspective.

What is the difference between a "non-existent link" and a "nofollow link"?

A nofollow link remains visible in the DOM and is detected by Google, which may or may not choose to ignore it depending on its algorithms. A link lost in a non-executed JavaScript redirect is never seen at all.

This is a critical distinction. A nofollow leaves a trace, potentially allowing for URL discovery. A failed JavaScript redirect makes the link disappear from the link graph, as if it were never coded.

  • JavaScript redirects require client-side rendering to be detected by Googlebot
  • A link lost in an unexecuted JavaScript redirect is treated as non-existent, with no PageRank passed and no page discovery
  • A robots.txt block, faulty JS code, or a rendering timeout is enough to make detection fail
  • A nofollow remains detectable, unlike a failed JavaScript redirect that disappears completely
  • The performance of JavaScript execution directly impacts Google's ability to follow these links

SEO Expert opinion

Is this statement consistent with practices observed in the field?

Yes, and this is the entire problem. We regularly observe sites losing internal link juice or incoming PageRank because part of their navigation relies on poorly implemented JavaScript redirects. Tools like Search Console or Screaming Frog do not always detect these losses, as they crawl differently from Googlebot.

Rendering tests via the URL inspection tool sometimes reveal glaring discrepancies between the raw HTML and what Google actually sees after execution. When a link goes through a JS redirect that silently fails, no error message appears: the link simply disappears.

What nuances should be added to this claim from Google?

The phrasing "can pass weight and signals" remains vague: Google does not specify whether transmission is complete or discounted compared to a traditional HTML link, a point that remains to be verified. Field observations suggest that a link via a JavaScript redirect performs worse than a direct HTML link, even when it is technically followed.

Another friction point: the delay. If JavaScript rendering takes several seconds and the crawl budget is tight, Googlebot may abandon before executing the redirect. Result? A link that technically exists but remains invisible in practice.

Warning: Sites with thousands of JavaScript redirects risk a bottleneck at the rendering level. Google does not render all pages instantly — some wait in a queue for days or even weeks.

In what contexts do these types of redirects pose the most problems?

Single-page applications (SPAs) and React/Vue/Angular sites without SSR (Server-Side Rendering) are the primary concern. If all navigation relies on JavaScript and server-side rendering is absent, every URL change becomes a potentially ignored JS redirect.

E-commerce sites with product filters or sorting options managed in JavaScript also encounter this problem. If a filter triggers a JS redirect to an indexable URL, and Google does not execute it, a whole part of the catalog remains orphaned from a crawl perspective.
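
A framework-agnostic sketch of that pattern (the routes and the commented-out router hook are hypothetical): the URL changes through the History API, so there is never an HTTP 3xx status for Googlebot to see.

```typescript
// SPA-style client-side "redirect". The URL changes without any HTTP
// request, so discovering the target depends entirely on the renderer
// executing this code.
const legacyRoutes: Record<string, string> = {
  "/old-category": "/catalog/new-category",
  "/promo-2019": "/catalog/current-promo",
};

function clientSideRedirect(path: string): void {
  const target = legacyRoutes[path];
  if (target !== undefined) {
    history.replaceState(null, "", target); // URL swap, no HTTP 3xx
    // renderRoute(target); // hypothetical router hook to re-render the view
  }
}

clientSideRedirect(window.location.pathname);
```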

Practical impact and recommendations

How can I check if my JavaScript redirects are properly followed by Google?

First step: use the URL inspection tool in Search Console and compare raw HTML with the rendered version. If a link with a JS redirect appears in the rendering but not in the source HTML, it depends on JavaScript execution. Then check that the target URL is indeed discovered and indexed.

Second verification: analyze the server logs to confirm that Googlebot is actually crawling the target URLs after visiting the pages containing JS redirects. If the target URLs are never crawled despite receiving links, it's a warning signal.
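
A sketch of that log check in Node (the log path, URL list, and log format are assumptions to adapt): it filters lines claiming a Googlebot user agent and flags redirect targets that never appear.

```typescript
// Flag JS-redirect target URLs that Googlebot never requested.
import { readFileSync } from "node:fs";

const targets = ["/new-page", "/catalog/new-category"]; // hypothetical targets

const log = readFileSync("/var/log/nginx/access.log", "utf8");
// Naive filter: trusts the user-agent string. Verified Googlebot
// identification requires a reverse DNS check on the requesting IP.
const googlebotLines = log.split("\n").filter((l) => l.includes("Googlebot"));

for (const url of targets) {
  const crawled = googlebotLines.some((l) => l.includes(`GET ${url}`));
  console.log(`${url}: ${crawled ? "crawled" : "never crawled (warning signal)"}`);
}
```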

What mistakes should absolutely be avoided in the implementation?

Never block critical JavaScript resources via robots.txt. This seems obvious, but it's the most common mistake. If Google cannot load the JS file containing the redirect logic, everything collapses.
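
As a quick self-check, a sketch along these lines can surface the problem (Node 18+ in an ES module; the domain and path patterns are assumptions, and this is a rough string match, not a full robots.txt parser):

```typescript
// Fetch robots.txt and flag Disallow rules that look like they cover
// JavaScript assets. Domain and patterns are hypothetical.
const res = await fetch("https://www.example.com/robots.txt");
const lines = (await res.text()).split("\n");

const suspicious = lines.filter(
  (line) =>
    line.trim().toLowerCase().startsWith("disallow:") &&
    /\.js\b|\/js\/|\/assets\/|\/static\//i.test(line),
);

if (suspicious.length > 0) {
  console.warn("Rules that may block critical JavaScript:", suspicious);
} else {
  console.log("No obvious JS-blocking rules found.");
}
```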

Avoid redirects that depend on user events (clicks, scrolls, hovers). Google does not emulate these interactions. A redirect that only triggers when clicking a button remains invisible to the bot. Favor redirects that execute automatically upon page load.
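
The contrast in code (hypothetical URLs; illustrative only, since the first statement would fire immediately on load):

```typescript
// Followed: executes automatically while Googlebot renders the page.
window.location.replace("/new-page");

// Invisible to Googlebot: it never clicks, scrolls, or hovers, so a
// redirect gated behind a user event is dead weight for crawling.
document.querySelector("#go-button")?.addEventListener("click", () => {
  window.location.href = "/new-page";
});
```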

Should JavaScript redirects always be replaced by HTTP redirects?

Ideally, yes — but this is not always technically possible. If your site uses a modern framework with client-side routing, forcing HTTP redirects may break the user experience. In this case, opt for Server-Side Rendering (SSR) or static prerendering.
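
For reference, the HTTP alternative is trivial server-side. A sketch with Express (the routes are hypothetical): the 301 lives in the response status, so Googlebot resolves it on the initial crawl with no rendering step.

```typescript
import express from "express";

const app = express();

// Protocol-level permanent redirect: visible to Googlebot before any
// HTML is parsed or any JavaScript is executed.
app.get("/old-page", (_req, res) => {
  res.redirect(301, "/new-page");
});

app.listen(3000);
```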

A hybrid solution is to use standard HTML links for main navigation, reserving JavaScript for secondary interactions. This way, the link structure remains crawlable even without JavaScript execution, and Google can follow critical redirects without depending on rendering.
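
A sketch of that hybrid pattern (the selector and the commented-out router call are hypothetical): the crawlable href stays in the raw HTML, and JavaScript only enhances the click.

```typescript
// HTML assumed in the page:
// <a id="product-link" href="/catalog/new-category">New category</a>
const link = document.querySelector<HTMLAnchorElement>("#product-link");

link?.addEventListener("click", (event) => {
  event.preventDefault(); // enhancement only; crawlers still read the href
  // spaRouter.navigate(link.href); // hypothetical client-side router call
  window.location.assign(link.href); // fallback: same destination as the plain link
});
```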

  • Always check JavaScript rendering via the URL inspection tool in Search Console
  • Analyze server logs to confirm that Googlebot crawls the target URLs after JS redirects
  • Never block critical JavaScript files via robots.txt
  • Favor automatic redirects over those triggered by user interactions
  • Opt for SSR or standard HTML links when technically feasible
  • Monitor the crawl budget: too many JS redirects slow down rendering and create bottlenecks
JavaScript redirects can work for SEO, but they introduce a technical fragility that HTTP redirects do not have. If your architecture relies heavily on this type of link, a thorough audit is essential to avoid silent PageRank losses. These optimizations often require sharp technical expertise and fine-grained analysis of Googlebot's behavior, skills that a specialized SEO agency can provide to secure your linking structure and maximize signal transmission.

❓ Frequently Asked Questions

Does a JavaScript redirect pass as much PageRank as a 301 redirect?
Google does not officially say, but field observations suggest that a well-implemented JavaScript redirect passes PageRank, probably with a slight discount compared to a 301. The main risk remains execution failure, which cancels all transmission.
Does Googlebot always execute the JavaScript of every page it crawls?
No. JavaScript rendering is a costly operation that Google puts in a queue. Some pages can wait days or weeks before being rendered, especially when the crawl budget is tight. Low-priority pages may never be rendered at all.
How do I know if my JavaScript links are blocked by robots.txt?
Use the robots.txt testing tool in Search Console and check that critical .js files are not blocked. Also check that external resources (CDNs, third-party libraries) are not disallowed, as they may contain the redirect logic.
Do redirects via React or Vue frameworks cause SEO problems?
Yes, if the site relies solely on client-side rendering (CSR). Without Server-Side Rendering (SSR) or prerendering, Google must execute all the JavaScript to discover the links, which slows down indexing and can block some pages. SSR solves this problem.
Is a JavaScript link better for SEO than a nofollow link?
It depends. A correctly followed JavaScript link passes PageRank, unlike a nofollow. But a JavaScript link that fails becomes invisible, whereas a nofollow remains detectable and can help Google discover the URL. A standard HTML link is always preferable.
