Official statement
Other statements from this video (42)
- 42:49 Can you really use hreflang across several distinct domains?
- 48:45 Can you really use hreflang across several distinct domains?
- 58:47 Should you really avoid duplicating your content across two distinct sites?
- 58:47 Should you really avoid creating several sites for the same content?
- 91:16 Should you really index your site's internal search pages?
- 91:16 Should you block internal search pages to avoid indexing an infinite space?
- 125:44 Do Core Web Vitals really influence Google's crawl budget?
- 125:44 Does reducing page size really improve crawl budget?
- 152:31 Does the internal links report in Search Console really reflect the state of your internal linking?
- 152:31 Why does Search Console's internal links report only show a sample?
- 172:13 Should you really worry about redirect chains when it comes to Google's crawl?
- 201:37 How does Google really segment your Core Web Vitals into page groups?
- 248:11 AMP or canonical: which one really collects the SEO signals?
- 257:21 Does the Chrome UX Report really count your cached AMP pages?
- 272:10 Should you really redirect your AMP URLs when making a change?
- 272:10 Should you really redirect your old AMP URLs to the new ones?
- 294:42 Is AMP really neutral for Google rankings, or does it hide an invisible visibility lever?
- 296:42 Is AMP really a Google ranking factor, or just an entry ticket to certain features?
- 342:21 Why does copied content sometimes outrank the original despite the DMCA?
- 342:21 Is the DMCA really effective at protecting your duplicated content on Google?
- 359:44 Why does copied content outrank your original content in Google?
- 409:35 Why do your featured snippets disappear with no technical reason?
- 409:35 Do featured snippets and rich results really fluctuate at random?
- 455:08 Is content hidden on mobile via responsive design really indexed by Google?
- 455:08 Is content hidden with responsive CSS really indexed by Google?
- 563:51 Can structured data really force a knowledge panel to appear?
- 563:51 Is there any structured markup that guarantees a Knowledge Panel?
- 583:50 Why do most sites never get sitelinks in Google?
- 583:50 Can you really force sitelinks to appear in Google?
- 649:39 Do 301 redirects really transfer 100% of the SEO juice without loss?
- 649:39 Do 301 redirects really transfer 100% of PageRank and SEO signals?
- 722:53 Should you really delete or redirect expired content rather than keep it indexable?
- 722:53 Should you really delete expired pages, or can you leave them up with an 'expired' label?
- 859:32 Keywords in the URL: ranking factor or mere temporary crutch?
- 859:32 Do words in the URL really influence Google rankings?
- 908:40 Should you really add structured data to embedded YouTube videos?
- 909:01 Should you really add video structured data when you already embed YouTube?
- 932:46 Do Core Web Vitals really impact desktop SEO?
- 932:46 Why does Google ignore desktop Core Web Vitals in its ranking algorithm?
- 952:49 Do the Search Console API and interface really show the same data?
- 963:49 Can you use different templates per language version without hurting your international SEO?
Google follows up to five consecutive redirects and treats them as a single crawl request. Beyond this threshold, each additional redirect becomes a separate request, which directly impacts your crawl budget. In practice: limit your redirect chains to a maximum of four hops to avoid wasting crawl resources unnecessarily.
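The hop-capping behavior Mueller describes can be sketched in a few lines. This is a hypothetical simulation (the function, URLs, and redirect map are invented for illustration), not Googlebot's actual code:

```python
# Minimal sketch: follow a chain of redirects, but stop after MAX_HOPS
# consecutive hops, mirroring the five-redirect limit Mueller describes.
# `redirects` stands in for server behavior (url -> redirect target).

MAX_HOPS = 5  # Google follows up to five consecutive redirects per fetch

def resolve_chain(start_url, redirects, max_hops=MAX_HOPS):
    """Return (final_url, hops, completed).

    `completed` is False when the chain exceeds max_hops and the
    remaining fetch would be split into a separate crawl request,
    or when a redirect loop is detected.
    """
    url, hops = start_url, 0
    seen = {start_url}                  # loop protection
    while url in redirects:
        if hops == max_hops:
            return url, hops, False     # chain fragments here
        url = redirects[url]
        hops += 1
        if url in seen:                 # redirect loop detected
            return url, hops, False
        seen.add(url)
    return url, hops, True

# A four-hop chain stays within a single crawl operation.
chain = {"/a": "/b", "/b": "/c", "/c": "/d", "/d": "/final"}
print(resolve_chain("/a", chain))  # ('/final', 4, True)
```

A six-hop chain run through the same function comes back with `completed=False`, which is the point of the four-hop recommendation above.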
What you need to understand
What does it actually mean to 'treat as a single request'?

When Googlebot encounters a URL that redirects to another, it automatically follows the chain of redirects to reach the final URL. Mueller clarifies here that Google follows up to five consecutive redirects without interrupting this process. These five hops are treated as a single crawl operation, counted only once against your crawl budget.

Beyond this limit, the process fragments. Each additional redirect becomes a separate request, which ties up more server resources and consumes your crawl budget disproportionately. For a site with thousands of pages, this difference is not trivial: it can delay the indexing of priority content.

Why does Google impose this limit of five redirects?

The main reason is performance. Following endless redirect chains would expose Googlebot to redirect loops, configuration errors, and malicious cloaking attempts. By capping at five, Google strikes a balance between technical flexibility and protection against abuse.

The limit also matches what happens in the field. In 99% of healthy configurations you never exceed three redirects, and if you reach five, it is usually a sign of an architecture problem or of successive migrations that were never cleaned up. Google leaves you some leeway but does not endorse chaos.

Does this rule apply to all types of redirects?

Mueller does not say whether the distinction between 301/302/307/308 influences this treatment. In practice, the type of redirect has no impact on how hops are counted: what matters is the number of consecutive HTTP requests. Whether you chain five 301s or a mix of 302s and 307s, Google stops at the fifth hop.

JavaScript and meta-refresh redirects, however, are a different story. They require client-side rendering, and therefore a different processing path. Mueller is talking here only about classic server-side HTTP redirects. If your chain mixes server and JS redirects, it is even more costly in resources, and potentially not counted the same way.
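To make the server-side versus client-side distinction concrete, here is a minimal sketch of how the two families of redirects are detected differently. The response data is a stand-in (status, headers, body tuples invented for illustration), not output from a real crawler:

```python
import re

# Only these HTTP status codes are server-side redirects; they are what
# Mueller's five-hop limit applies to. A meta-refresh lives in the HTML
# body and requires the page to be fetched and parsed (or rendered)
# before the next hop is even known.
HTTP_REDIRECT_CODES = {301, 302, 303, 307, 308}

def classify_redirect(status, headers, body):
    """Return (kind, target) for a response, or (None, None)."""
    if status in HTTP_REDIRECT_CODES and "Location" in headers:
        return "http", headers["Location"]
    m = re.search(
        r'<meta[^>]+http-equiv=["\']refresh["\'][^>]*url=([^"\'>]+)',
        body, re.IGNORECASE)
    if m:
        return "meta-refresh", m.group(1)
    return None, None

print(classify_redirect(301, {"Location": "/new"}, ""))
# ('http', '/new')
print(classify_redirect(
    200, {}, '<meta http-equiv="refresh" content="0; url=/new">'))
# ('meta-refresh', '/new')
```

The asymmetry is visible in the code: the HTTP target is known from the headers alone, while the meta-refresh target only exists after the body has been retrieved and parsed, which is why mixed chains cost more.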
SEO Expert opinion
Is this statement consistent with on-the-ground observations?

Yes, and it is one of the rare occasions where Google gives a precise number. Empirical tests show that Googlebot really does follow up to five hops before treating subsequent redirects as fragmented requests. The behavior shows up in server logs: a chain of four redirects generates a single crawl entry, while a chain of six generates several.

That said, Mueller says nothing about crawl speed or lost PageRank. Even if Google 'follows' the five redirects, each hop dilutes the link equity passed along a little more. Tests show that beyond two redirects you start to lose juice, even if the final URL is technically indexed properly.

What nuances should be added to this rule?

First point: Mueller talks about 'consecutive' redirects. If your architecture mixes server redirects and client redirects (JS, meta-refresh), the counter does not work the same way. Google must first render the JS page, which adds latency and consumes more budget. A chain of three server redirects plus one JS redirect can cost as much as a chain of six classic redirects.

Second nuance: this limit concerns crawling, not necessarily final indexing. Google can perfectly well index the destination URL even if the chain exceeds five hops, but the process will be slower, less prioritized, and the link equity diminished. [To be verified]: no official data specifies whether a chain of seven redirects blocks indexing entirely or merely delays it.

In what cases does this rule not strictly apply?

Google may raise this limit for very high-authority domains. This is a plausible hypothesis, but one Mueller does not confirm. Sites like Wikipedia or Amazon have near-unlimited crawl budgets; if their technical architecture forces six redirects on some pages, Google likely follows them anyway. But for 99% of sites, don't count on it.

Another edge case: temporary redirects (302, 307) are not always treated the same way as permanent ones. If you chain five 302s, Google may decide to recrawl the chain several times to check whether it has changed, which doubles the actual budget consumption. Mueller does not address this point, but it is consistent with behavior observed in logs.
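The log signature mentioned in this section (repeated Googlebot requests landing on redirecting URLs) can be surfaced with a short script. The log format below is a simplified stand-in and the lines are fabricated for illustration; adapt the regex to your actual access-log format:

```python
import re
from collections import Counter

# Simplified access-log-style lines (fabricated sample data).
LOG_LINES = [
    '66.249.66.1 "GET /old-page HTTP/1.1" 301 "Googlebot/2.1"',
    '66.249.66.1 "GET /old-page-2 HTTP/1.1" 301 "Googlebot/2.1"',
    '66.249.66.1 "GET /new-page HTTP/1.1" 200 "Googlebot/2.1"',
    '203.0.113.9 "GET /old-page HTTP/1.1" 301 "Mozilla/5.0"',
]

LINE_RE = re.compile(r'"GET (\S+) HTTP/[\d.]+" (\d{3}) "([^"]*)"')

def googlebot_redirect_hits(lines):
    """Count 3xx responses served to a Googlebot user agent, per path.

    Note: in a real audit, verify Googlebot via reverse DNS, since the
    user-agent string alone can be spoofed.
    """
    hits = Counter()
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue
        path, status, agent = m.group(1), int(m.group(2)), m.group(3)
        if "Googlebot" in agent and 300 <= status < 400:
            hits[path] += 1
    return hits

print(googlebot_redirect_hits(LOG_LINES))
# Counter({'/old-page': 1, '/old-page-2': 1})
```

Paths that accumulate many such hits over time are the ones worth tracing hop by hop.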
Practical impact and recommendations
What should you do to clean up your redirect chains?

First, audit your site with Screaming Frog or Sitebulb with the 'Redirect Chains' option enabled. These tools automatically detect chains of two or more redirects and list them for you. Export the report, sort by chain length, and focus on chains longer than two hops.

Next, fix each chain by redirecting straight to the final URL. If A → B → C → D, replace it with A → D and B → D. This reduces user latency, preserves link equity, and frees up crawl budget. In 80% of cases these chains are the residue of poorly documented successive migrations that nobody ever cleaned up afterwards.

What mistakes should be avoided when managing redirects?

Never redirect a redirect: that is the golden rule. If you migrate a site that has already been migrated, don't just stack another layer of redirects on top of the existing ones. Trace the full chain, identify the original source URL, and redirect it directly to the new final destination.

Second common mistake: forgetting external backlinks. You can fix your internal redirects, but if thousands of inbound links point to a URL that redirects twice, you are wasting equity. Contact the most significant referring sites to update their links, or use disavow if those links are toxic anyway.

How can I check if my site complies with this limit?

Inspect your server logs to spot crawl patterns. If you see Googlebot making several requests on the same redirect chain, it is splitting the process, a sign that you are exceeding the limit. Compare with your Search Console reports: if some pages take weeks to be indexed despite being crawled, redirect chains are often to blame.

Also use the 'URL Inspection' tool in Search Console to manually test your problematic URLs. Google shows you the indexed final URL and the redirect path followed. If you see more than two hops, that's an immediate red flag. Prioritize fixes based on the current organic traffic of those pages.
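The cleanup recommended in this section, pointing every URL in a chain straight at its final destination, can be sketched as follows. The url-to-target map stands in for a crawler export (the paths are invented for illustration):

```python
def flatten_redirects(redirects):
    """Given a url -> redirect-target map, return a map where every
    source URL points directly to its chain's final destination.
    A URL caught in a loop is left pointing at itself (flagged for
    manual review rather than silently flattened)."""
    def final(url, seen=()):
        if url in seen or url not in redirects:
            return url                      # end of chain (or loop)
        return final(redirects[url], seen + (url,))
    return {src: final(src) for src in redirects}

# A three-hop chain A -> B -> C -> D collapses to direct redirects.
chain = {"/a": "/b", "/b": "/c", "/c": "/d"}
print(flatten_redirects(chain))
# {'/a': '/d', '/b': '/d', '/c': '/d'}
```

The output is exactly the replacement rule described above (A → D and B → D), ready to be turned into your server's redirect rules.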
❓ Frequently Asked Questions
What exactly happens after the fifth redirect?
Are 301 and 302 redirects counted the same way?
Does a chain of four redirects hurt rankings?
How can I quickly identify redirect chains on my site?
Do internal redirects within a CDN count toward this limit?
🎥 From the same video (42)
Other SEO insights extracted from this same Google Search Central video · duration 996h50 · published on 12/03/2021
🎥 Watch the full video on YouTube →
Related statements