Official statement
Google still follows a 301 or 302 redirect even if an HTTP X-Robots-Tag header is present on the response. During a server-side redirect the bot does not read the content of the source page, so a nofollow directive there is ineffective. Only the destination URL matters for indexing and signal transfer.
What you need to understand
Why is Mueller clarifying this now?
This statement addresses a recurring confusion among SEO practitioners: many believe that the X-Robots-Tag header can stop Google from following a redirect. It cannot. Mueller states it clearly: in the case of a server-side redirect (301, 302, 307, 308), Googlebot never downloads the body of the HTTP response.
It only reads the status code and the Location header, then immediately follows the new URL. As a result, everything in the HTML of the source page (meta robots, nofollow, textual content) is completely ignored. The X-Robots-Tag header is read, but it only applies to the returned resource; it does not change whether the redirect is followed.
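The behavior described above can be sketched as a small decision function: given only the status code and headers of a response, decide whether a bot follows a Location header or processes the body. This is an illustrative model, not Google's actual implementation.

```python
# Illustrative model of the crawl behavior described above -- not Google's
# actual code. For a server-side redirect (3xx + Location), only the status
# code and Location header matter; the body is never read.

def crawl_step(status: int, headers: dict) -> dict:
    """Decide what a bot does with a single HTTP response."""
    location = headers.get("Location")
    if 300 <= status < 400 and location:
        # Server-side redirect: follow immediately. Any X-Robots-Tag or
        # meta robots on this response is irrelevant to the follow decision.
        return {"action": "follow", "next_url": location}
    # Only for non-redirect responses is the body (and meta robots) parsed.
    return {"action": "process_body", "next_url": None}

# A 301 carrying X-Robots-Tag: nofollow is still followed:
step = crawl_step(301, {"Location": "https://example.com/new",
                        "X-Robots-Tag": "nofollow"})
print(step)  # {'action': 'follow', 'next_url': 'https://example.com/new'}
```

The key point the model captures: the X-Robots-Tag header never enters the follow/no-follow decision for a 3xx response.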
How does the nofollow attribute actually work in this context?
A nofollow directive, whether via meta robots or X-Robots-Tag, only applies to the hyperlinks present in a page's HTML. A server-side redirect is not a link: it is an instruction at the protocol level, so Googlebot has no link to ignore.
Even if you add X-Robots-Tag: nofollow to a 301 response, the bot will follow the redirect. The only ways to stop it are to return a 4xx or 5xx code instead, or to put X-Robots-Tag: noindex, nofollow on the destination URL to keep it out of the index; directives on the source change nothing.
What is the difference between X-Robots-Tag and meta robots directives?
Both are used to convey indexing and crawling instructions, but X-Robots-Tag applies at the HTTP level, which allows for coverage of non-HTML resources (PDFs, images, videos). Meta robots requires parsed HTML.
In the case of a redirect, Googlebot never parses the HTML of the source page, so the meta robots tag is never read. X-Robots-Tag could theoretically be read, but it does not change whether the redirect is followed; it only applies to the resource itself, which is never indexed since it redirects.
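The difference between the two mechanisms can be illustrated with a short sketch: the header is read straight from the HTTP response (so it works for PDFs, images, or any resource type), while the meta tag only exists after HTML parsing. Function names below are illustrative, not part of any real crawler API.

```python
# Sketch: where each kind of robots directive can be read from. The HTTP
# header works for any resource type; the meta tag requires parsed HTML.
from html.parser import HTMLParser

def header_directives(headers: dict) -> list:
    """Read X-Robots-Tag from HTTP headers (works for PDFs, images, etc.)."""
    raw = headers.get("X-Robots-Tag", "")
    return [d.strip() for d in raw.split(",") if d.strip()]

class MetaRobotsParser(HTMLParser):
    """Read <meta name="robots" content="..."> -- only possible for HTML."""
    def __init__(self):
        super().__init__()
        self.directives = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives += [d.strip() for d in a.get("content", "").split(",")]

parser = MetaRobotsParser()
parser.feed('<html><head><meta name="robots" content="noindex, nofollow"></head></html>')
print(header_directives({"X-Robots-Tag": "noindex"}))  # ['noindex']
print(parser.directives)  # ['noindex', 'nofollow']
```

On a server-side redirect, the second path (HTML parsing) simply never runs, which is why meta robots on a redirecting URL is dead code.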
- A server-side redirect bypasses any processing of the source content — HTML, CSS, JS, everything is ignored.
- X-Robots-Tag applies to the returned resource, not to whether the bot follows the redirect.
- Nofollow only concerns HTML links, never HTTP redirects.
- To block indexing of the destination, apply directives to the target URL, not to the source.
- 301/302 codes transfer ranking signal to the destination — no robots directive can prevent that.
SEO Expert opinion
Is this statement consistent with real-world observations?
Yes, and it aligns with the fundamental principles of the HTTP protocol. In practice, we consistently observe that 301 redirects transfer PageRank signal, and the source URL disappears from the index in favor of the destination. No documented case shows an X-Robots-Tag blocking this mechanism.
But be careful: Mueller is only talking about server-side redirects here. JavaScript or meta refresh redirects require Googlebot to download and parse the HTML. In this case, meta robots or X-Robots-Tag directives can indeed interfere — if you put noindex on the source page, the bot may decide not to follow the JS redirect. [To be verified] in edge cases with late JS and tight crawl budget.
What nuances should be considered in complex cases?
The statement holds for standard HTTP redirects, but it does not cover all scenarios. For example: what happens if you put X-Robots-Tag: none on a 301, and then change the redirect to point to a different URL? Will the bot recrawl the chain or remain stuck on the old destination?
Mueller does not specify. In high-volume sites, we see that Google sometimes takes several weeks to detect a change in the 301 destination. Adding an X-Robots-Tag on the source could theoretically slow down recrawl, even if this is not officially documented. [To be verified] with server logs on large-scale migrations.
In what cases could this rule be bypassed?
It cannot be bypassed with standard server redirects. But some edge cases deserve attention: conditional redirects (varying by User-Agent, geolocation, language) can create situations where Googlebot sees a 200 while the user sees a 301. In this case, the robots directives apply normally.
Another point: redirect chains. If A redirects to B which redirects to C, and B contains X-Robots-Tag: noindex, Google will still follow to C, but B will never be indexed. Practically, this has no impact since B is just an intermediate step — but it can complicate debugging if you don't understand the bot's path.
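The A → B → C chain can be simulated with a simple map of URL to canned response. The walk below is a hypothetical model consistent with the behavior described: B carries noindex, yet the bot still passes through it to C. All URLs are invented for the example.

```python
# Hypothetical simulation of the A -> B -> C chain described above: B carries
# X-Robots-Tag: noindex, yet the bot still follows through to C.
RESPONSES = {
    "https://a.example/": (301, {"Location": "https://b.example/"}),
    "https://b.example/": (301, {"Location": "https://c.example/",
                                 "X-Robots-Tag": "noindex"}),
    "https://c.example/": (200, {}),
}

def walk_chain(url: str, responses: dict, max_hops: int = 10):
    """Follow redirects, recording each hop and its indexability."""
    hops = []
    for _ in range(max_hops):
        status, headers = responses[url]
        indexable = "noindex" not in headers.get("X-Robots-Tag", "")
        hops.append((url, status, indexable))
        location = headers.get("Location")
        if not (300 <= status < 400 and location):
            return hops
        url = location  # the redirect is followed regardless of directives
    raise RuntimeError("redirect loop or chain too long")

for url, status, indexable in walk_chain("https://a.example/", RESPONSES):
    print(url, status, "indexable" if indexable else "noindex")
```

Tracing the hops this way is exactly the kind of path reconstruction that makes debugging a chain tractable.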
Practical impact and recommendations
What concrete steps should be taken to secure your redirects?
First, audit your redirect chains: each additional hop dilutes the signal and increases the risk of error. Use Screaming Frog or another crawler to detect redirect chains (A → B → C) and flatten them into direct redirects (A → C).
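The flattening step can be scripted: given a redirect map (for instance exported from a crawl), compute the final destination of each source so every chain can be replaced by a single direct redirect. This is a sketch over an assumed `{source: target}` dict, not tied to any particular tool's export format.

```python
# Sketch of chain flattening: given a redirect map, compute the final
# destination of each source URL so every chain (A -> B -> C) can be
# replaced by a single direct redirect (A -> C).
def flatten_redirects(redirects: dict) -> dict:
    """Map each source URL straight to its final destination."""
    flat = {}
    for src in redirects:
        seen, url = set(), src
        while url in redirects:
            if url in seen:            # guard against redirect loops
                raise ValueError(f"redirect loop at {url}")
            seen.add(url)
            url = redirects[url]
        flat[src] = url
    return flat

chains = {"/old-a": "/old-b", "/old-b": "/final", "/legacy": "/final"}
print(flatten_redirects(chains))
# {'/old-a': '/final', '/old-b': '/final', '/legacy': '/final'}
```

The loop guard matters in practice: accidental A → B → A cycles are a common side effect of stacked rewrite rules.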
Next, ensure that your HTTP headers are consistent. If you use X-Robots-Tag on static resources (PDFs, images), make sure it does not inadvertently apply to redirects. A faulty mod_rewrite or nginx rule can propagate a header across all responses, including 301s.
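The consistency check suggested above can be automated: flag any 3xx response that carries an X-Robots-Tag header, a common symptom of a blanket rewrite rule adding the header to every response. The crawl data shape below is an assumption for illustration.

```python
# Sketch of the header-consistency check: flag any 3xx response carrying an
# X-Robots-Tag header -- a common sign that a blanket mod_rewrite/nginx rule
# is adding the header to every response, including redirects.
def leaked_headers(responses: list) -> list:
    """responses: list of (url, status, headers) tuples from a crawl."""
    return [url for url, status, headers in responses
            if 300 <= status < 400 and "X-Robots-Tag" in headers]

crawl = [
    ("/pdf/guide.pdf", 200, {"X-Robots-Tag": "noindex"}),   # intended use
    ("/old-page", 301, {"Location": "/new-page",
                        "X-Robots-Tag": "noindex"}),        # leaked: no effect
    ("/new-page", 200, {}),
]
print(leaked_headers(crawl))  # ['/old-page']
```

A leaked header on a 301 is harmless to the redirect itself, but it signals that your server configuration is broader than you intended.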
What mistakes should be avoided when setting up redirects?
Never mix server redirects and JavaScript redirects on the same URL. If you return a 200 with JS that then redirects, Googlebot may index the source page before discovering the redirect. The result: duplicate content, signal dilution, unwanted indexing.
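The 200-plus-client-side-redirect pattern warned about above can be caught with a simple heuristic: scan the body of 200 responses for a meta refresh or a JavaScript location assignment. A real audit tool would render the page; this regex sketch only catches the obvious cases.

```python
# Heuristic sketch: detect pages returned with a 200 whose body triggers a
# client-side redirect (meta refresh or window.location) -- the anti-pattern
# described above. A real audit tool would render the page instead.
import re

META_REFRESH = re.compile(r'<meta[^>]+http-equiv=["\']?refresh', re.I)
JS_REDIRECT = re.compile(r'(window\.location|location\.href)\s*=', re.I)

def client_side_redirect(status: int, body: str) -> bool:
    """True if a 200 page redirects via meta refresh or JavaScript."""
    if status != 200:
        return False
    return bool(META_REFRESH.search(body) or JS_REDIRECT.search(body))

html = '<html><head><script>window.location = "/new";</script></head></html>'
print(client_side_redirect(200, html))   # True
print(client_side_redirect(301, html))   # False: already a server redirect
```

Any URL flagged this way is a candidate for conversion to a clean server-side 301.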
Avoid adding robots directives to redirecting URLs in the hope of blocking the redirect: as explained above, an X-Robots-Tag on a 301 has no effect on whether it is followed. If you need to keep the destination out of the index, apply noindex to the destination URL itself.
❓ Frequently Asked Questions
Can the X-Robots-Tag header prevent Googlebot from following a 301 redirect?
Does nofollow in X-Robots-Tag apply to redirects?
What is the difference between X-Robots-Tag and meta robots on a redirect?
How do you block indexing of a destination URL after a redirect?
Are JavaScript redirects handled the same way?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 54 min · published on 19/02/2019