Official statement
Google confirms that X-Robots-Tag HTTP headers do not prevent following 301 or 302 redirects. These directives only affect links found in a page's HTML content, not server-side redirects. In practice, a server-side configured redirect will always be followed by Googlebot, even if an X-Robots-Tag: nofollow header is present in the HTTP response.
What you need to understand
What’s the difference between an X-Robots-Tag header and a meta robots tag?
The HTTP X-Robots-Tag header acts as the server-side equivalent of the meta robots tag found in HTML. It allows you to convey indexing and crawl directives directly through HTTP headers without modifying the page's code.
This approach offers a technical advantage: it works on all file types (PDFs, images, videos), not just HTML pages. This way, you can prevent a PDF from being indexed without having to modify it.
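As a minimal sketch of how such a header is read, here is a small parser for X-Robots-Tag values taken from a list of (name, value) response-header pairs. The function name and input shape are illustrative assumptions, not part of any official tooling; note that the header may appear several times and each value may carry a comma-separated list of directives.

```python
def robots_directives(headers):
    """Collect X-Robots-Tag directives from (name, value) header pairs.

    The header may appear multiple times, and each value may hold a
    comma-separated list of directives (noindex, nofollow, noarchive...).
    """
    directives = set()
    for name, value in headers:
        if name.lower() == "x-robots-tag":
            directives.update(part.strip().lower() for part in value.split(","))
    return directives
```

For example, a PDF served with `X-Robots-Tag: noindex, nofollow` would yield the set `{"noindex", "nofollow"}` without the file itself ever being modified.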
Why is there confusion between redirects and crawled links?
Many SEOs believe that an X-Robots-Tag: nofollow on an HTTP response prevents Googlebot from following redirects. This is false. The nofollow directive only applies to hyperlinks contained within the document: it means "do not use the links on this page for your link graph."
A 301 or 302 redirect is not a link. It is a server instruction that says, "the content you are looking for is located elsewhere." Googlebot follows it systematically, since resolving redirect chains to reach the final content is precisely its job.
In what context does this rule apply practically?
Imagine a classic scenario: you redirect an old URL to a new one, but for technical reasons, the HTTP response of the old URL contains an X-Robots-Tag: noindex, nofollow. You might worry that Google will abandon the crawl and never discover the new URL.
Mueller's statement clears up this ambiguity: Google will follow the redirect. The X-Robots-Tag header on the redirect response has no effect on redirect-following behavior. However, if the final destination page itself carries a noindex, then it will indeed not be indexed.
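A header like this often comes from a site-wide server rule rather than a deliberate choice. A minimal Apache sketch (hypothetical paths) of how the scenario above arises; note that mod_headers needs the `always` condition for the header to be attached to 3xx responses at all:

```apache
# mod_headers: "always" makes the header apply to ALL responses,
# including the 301 below; yet Googlebot still follows the redirect.
Header always set X-Robots-Tag "noindex, nofollow"
Redirect 301 /old-page /new-page
```

Scoping the `Header` directive to the files that actually need it (for example inside a `<FilesMatch>` block) avoids leaking it onto redirect responses.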
- X-Robots-Tag does not block the following of 301/302 redirects
- These directives only affect HTML links within the page content
- Server-side redirects are always followed, regardless of any robots headers on the response
- The noindex header on a final destination page is still respected
- This rule applies to all types of server-side redirects (301, 302, 307, 308)
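The behavior summarized above can be sketched as a toy crawler. The response table and URLs below are hypothetical stand-ins for a real site; the point is that a robots header on a 3xx response never stops the hop, while a noindex on the final 200 page still decides indexability.

```python
# Toy response table: url -> (status, headers, redirect target or None)
RESPONSES = {
    "/old": (301, {"X-Robots-Tag": "noindex, nofollow"}, "/new"),
    "/new": (200, {}, None),
}

def resolve(url, max_hops=5):
    """Follow server-side redirects the way the statement describes.

    Returns (final URL, indexable?). Robots headers on redirect
    responses are ignored; only the destination's headers count.
    """
    for _ in range(max_hops):
        status, headers, target = RESPONSES[url]
        if status in (301, 302, 307, 308) and target:
            url = target  # robots headers on the 3xx hop are ignored
            continue
        indexable = "noindex" not in headers.get("X-Robots-Tag", "")
        return url, indexable
    raise RuntimeError("redirect chain exceeds max_hops")
```

Running `resolve("/old")` lands on `/new` and reports it as indexable, despite the noindex, nofollow header sitting on the old URL's 301 response.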
SEO Expert opinion
Is this statement consistent with field observations?
Yes, and it's reassuring. Empirical tests confirm that Googlebot is never blocked by an X-Robots-Tag during a redirect. We observe this during complex migrations where robot headers mistakenly remain on old URLs: PageRank transfer continues regardless.
One nuance Mueller does not address: an X-Robots-Tag: noindex on the source URL of a redirect can create odd behavior. Google may follow the redirect, but if the old URL remains in the index with a noindex, it will eventually disappear without any guarantee of immediate signal transfer. [To verify]: how quickly signals consolidate in this specific case remains unclear.
What nuances should be added to this rule?
Mueller's wording is clear, but it hides a subtlety: JavaScript redirects are not covered. If you redirect via window.location, Google must first render the page, discover the redirect, and then follow it. In that context, an X-Robots-Tag: nofollow could in theory block the crawl, although in practice Google tends to follow these JS redirects during rendering.
Another point: Mueller speaks of 'server-side redirects,' which includes 301, 302, 307, and 308. But what about meta refresh redirects? They exist in a gray area: they are client-side, but Google often treats them as light redirects. [To verify]: their behavior when faced with an X-Robots-Tag: nofollow is not officially documented.
When does this rule become a trap?
The main risk is believing that an X-Robots-Tag protects against the indexing of a redirecting URL. If Google has already indexed the URL before you add the redirect, it can remain visible in the SERPs for a while, even with a noindex. Deindexing takes time, and during that period, you may have duplicates.
Another pitfall: chaining multiple redirects with conflicting robot headers. For example, URL A (noindex) → URL B (nofollow) → URL C (indexable). Google will follow the entire chain, but ranking signals will dilute, and consolidation will be slower. Avoid these baroque configurations — a direct redirect is always preferable.
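To spot these baroque chains programmatically, here is a small sketch that walks a redirect map (a hypothetical url-to-target dictionary, e.g. exported from a crawl) and counts hops, so anything beyond one or two hops can be flattened into a direct redirect.

```python
def chain_hops(redirect_map, url, limit=10):
    """Walk a url -> target redirect map and return (hop count, final URL).

    Raises on loops or runaway chains; chains longer than 2 hops
    are worth collapsing into a single direct redirect.
    """
    hops, seen = 0, set()
    while url in redirect_map:
        if url in seen:
            raise ValueError(f"redirect loop detected at {url}")
        seen.add(url)
        url = redirect_map[url]
        hops += 1
        if hops > limit:
            raise ValueError("redirect chain exceeds limit")
    return hops, url
```

For the A → B → C example above, `chain_hops({"/a": "/b", "/b": "/c"}, "/a")` reports 2 hops ending at `/c`, flagging a chain that should be replaced by a single A → C redirect.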
Practical impact and recommendations
What should you check on a live site?
Start by auditing the HTTP headers of all your active redirects. Use a crawler like Screaming Frog, or inspect manually with curl, to list the 301/302 URLs that carry an X-Robots-Tag. If you find noindex or nofollow headers on redirect responses, it is not critical for redirect following, but it creates unnecessary noise.
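As a sketch of that audit step, assuming a crawl export already loaded as a list of dicts (the column names here are hypothetical, not a Screaming Frog schema), you can filter for redirect responses that still carry the header:

```python
def flag_redirects_with_robots(rows):
    """Return (url, header value) pairs for redirect responses that still
    carry an X-Robots-Tag: harmless for redirect following, but noise
    worth cleaning up after a migration.
    """
    return [
        (row["url"], row["x_robots_tag"])
        for row in rows
        if row["status"] in (301, 302, 307, 308) and row.get("x_robots_tag")
    ]
```

Each flagged URL is then a candidate for a server-config cleanup rather than an emergency: the redirect itself keeps working either way.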
Next, check the final destination pages. This is where robot directives really count. A redirect can point to a noindex page: Google will follow it but will not index the target. Identify these cases — they are often the result of configuration errors during migrations.
What mistakes should be avoided during a migration or redesign?
Never leave X-Robots-Tag: noindex on old URLs that redirect. Even if Google follows the redirect, these headers can slow down index consolidation. Clean them up as soon as the migration is validated.
Another classic mistake: adding a nofollow to a destination page in the hope of limiting the crawl of its internal links. If that page receives link juice through redirects or backlinks, you are wasting PageRank: nofollow on a page does not block indexing, but it cuts off the transfer of popularity to its outbound links.
How can you test if redirects are being properly followed by Google?
Use Google Search Console: inspect the source URL and the destination URL. If Google indicates 'Redirected URL' on the source and 'Indexed URL' on the target, that's a good sign. However, if the old URL remains 'Excluded' with an ambiguous status, dig deeper: maybe a robot header or a complex redirect chain is disrupting the crawl.
You can also force a recrawl via the inspection tool on the old URL. Google will follow the redirect in real-time and you will immediately see if the path is clean. If the report mentions 'Nofollow detected,' but the redirect is still followed, it is real-world confirmation of what Mueller says.
- Audit the HTTP headers of all active redirects (301, 302, 307, 308)
- Remove X-Robots-Tag: noindex or nofollow from redirect responses
- Check that destination pages do not carry accidental noindex
- Test redirect following via Google Search Console (URL inspection)
- Clean up multiple redirect chains (max 1-2 hops)
- Monitor server logs to spot unusual crawl behaviors
❓ Frequently Asked Questions
Does an X-Robots-Tag: nofollow on a 301 redirect prevent PageRank transfer?
Should you remove all X-Robots-Tag headers from URLs that redirect?
Are JavaScript redirects affected by an X-Robots-Tag: nofollow?
Can X-Robots-Tag be used to prevent the indexing of a redirecting URL that is still visible in the history?
Do 307 and 308 redirects behave the same way as 301s and 302s?
Other SEO insights were extracted from this same Google Search Central video (duration 54 min, published on 19/02/2019).