
Official statement

You should not block noindex URLs in robots.txt, as this prevents Google from seeing the noindex directive, and these pages may stay indexed. Instead, use the URL Parameters Tool to reduce crawling of unwanted pages.
🎥 Source video

Extracted from a Google Search Central video

⏱ 53:08 💬 EN 📅 29/10/2020 ✂ 26 statements
Watch on YouTube (20:01) →
Other statements from this video (25)
  1. 1:41 Should you really use cross-domain canonicals to consolidate several thematic sites?
  2. 2:00 Do 302 redirects pass PageRank the same way 301s do?
  3. 2:00 Does the canonical tag really transfer 100% of the PageRank without any loss?
  4. 14:00 Should you really avoid setting all your outbound links to nofollow?
  5. 14:10 Should you really avoid setting all your outbound links to nofollow?
  6. 16:16 The URL parameters tool in Search Console: living dead or still useful for your SEO?
  7. 16:36 Does Google's URL Parameters tool still work despite its broken interface?
  8. 22:03 Are Core Web Vitals really the only speed criterion that matters for ranking?
  9. 23:03 Core Web Vitals: why does Google ignore the other performance metrics for Page Experience?
  10. 25:15 Do PageSpeed tests lie about your Core Web Vitals?
  11. 26:50 Is alt text really decisive for your visibility in Google Images?
  12. 26:50 Does image alt text really matter for organic search?
  13. 28:26 Do 302 redirects really pass as much PageRank as 301s?
  14. 30:17 Should you really hide cookie consent banners from Googlebot?
  15. 30:57 Should you really block cookie banners for Googlebot?
  16. 34:46 Why does Google still display old content in your meta descriptions?
  17. 34:46 Why does Google sometimes display your old meta descriptions in the SERPs?
  18. 36:57 Should you really show cookie banners to Googlebot?
  19. 37:56 Do 302 redirects really become 301s over time?
  20. 40:01 Should you really return a 404 for permanently unavailable products?
  21. 40:01 Should you return a 404 or a 200 for an out-of-stock product page?
  22. 43:37 Should you synchronize visible dates and technical dates to boost your crawl?
  23. 43:38 Should you really differentiate the visible date from the one in the structured data?
  24. 46:46 Why does Google still crawl your old deleted URLs?
  25. 47:09 Why does Google keep crawling your old 404 URLs?
TL;DR

Google cannot see the noindex tag if you block the URL in robots.txt — as a result, the page remains indexed despite your directive. This classic configuration error creates a technical conflict: crawling is denied before Googlebot can even read the HTML. The solution lies in the URL parameters tool to control crawling without compromising de-indexing.

What you need to understand

What is the technical conflict between robots.txt and noindex?

The robots.txt file acts like a door lock: it prevents Googlebot from crawling a URL. If you block access, the bot never downloads the page's HTML.

However, the noindex directive is a tag located in the <head> of the HTML — or in the HTTP header. To read it, Google must first crawl the page. Blocking in robots.txt is like locking the door before the bot can read the "do not index" sign inside.
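
To make this concrete, here is a minimal Python sketch (the URLs are hypothetical) of what any well-behaved crawler does: it checks robots.txt before requesting the page, so a Disallow rule means the HTML, and any noindex inside it, is never read.

```python
# Minimal sketch with hypothetical URLs: a polite crawler consults robots.txt
# before fetching a page, so a Disallow rule hides any noindex directive.
from urllib import robotparser

import requests

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()  # download and parse the robots.txt rules

url = "https://www.example.com/private/page.html"
if not rp.can_fetch("Googlebot", url):
    # Crawl denied: the HTML is never downloaded, so a
    # <meta name="robots" content="noindex"> on that page stays invisible.
    print("Blocked by robots.txt -- noindex cannot be read")
else:
    html = requests.get(url, timeout=10).text
    # Only a crawlable page can expose its indexing directive.
    print("noindex found:", "noindex" in html.lower())
```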

What happens in practice if you combine both directives?

Googlebot hits the robots.txt block, stops crawling, and may still keep the URL in the index, flagged "Blocked by robots.txt" in Search Console. The page can then appear in the results without a snippet or title — just the naked URL.

Worse: if the URL was already indexed before blocking, it may remain indefinitely. Google will not come back to check the noindex tag since you are denying access. The status stays frozen.

What alternative does Google propose to reduce crawling?

The URL Parameters Tool in Search Console allows you to inform Google that a parameter does not generate unique content. Example: ?sessionID=, ?utm_source=, ?color=.

You indicate that these variations do not need to be crawled intensively. Google adjusts its behavior without blocking access — the noindex directive remains readable if it exists. It’s a fine-tuning of crawl budget, not a brute lock.

  • Robots.txt blocks access before reading the HTML — the noindex tag becomes invisible
  • Noindex alone allows crawling but prohibits indexing — this is the correct configuration
  • URL Parameters reduce crawling of variants without preventing reading of directives
  • A page blocked by robots.txt may appear in the index with the naked URL, without a snippet
  • If an indexed URL is then blocked by robots.txt, it may remain indefinitely without an update

SEO Expert opinion

Is this statement consistent with real-world observations?

Yes, and it's a classic finding in SEO audits. Sites regularly layer Disallow and noindex on the same URLs, often out of overzealousness. The webmaster wants to "be sure" that the page is not indexed, so they pile on the directives.

The problem is that Google Search Console keeps reporting URLs "Blocked by robots.txt" in the coverage report. These pages sometimes appear in the SERP as naked URLs — a classic symptom of this conflict. The crawl logs confirm it: Googlebot fetches robots.txt, sees the Disallow rule, and skips the page entirely, so it never reads the HTML.

Is the URL Parameters Tool really the optimal solution?

It's a recommendation from Google, but how actively this tool is still maintained remains an open question. Google has already deprecated several Search Console tools without warning, and the URL parameters tool has not evolved in years.

In practice, canonical tags and clean sitemaps often do a better job. If you have 50,000 pagination or facet URLs, a consistent canonicalization strategy is better than a hack in a Google tool that could vanish overnight. The tool remains useful for edge cases — sessions, tracking, minor variants — but it’s not a magic wand.
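
As an illustration of that canonicalization route, here is a small Python sketch; the list of ignored parameters is an assumption to adapt to your own site. The resulting URL is the one to emit in the rel=canonical tag and in the sitemap.

```python
# Sketch: normalize tracking/session parameters away to obtain the canonical
# URL. The IGNORED_PARAMS set is an assumption -- adapt it to your site.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

IGNORED_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "sessionid", "fbclid"}

def canonical(url: str) -> str:
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in IGNORED_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonical("https://www.example.com/product?color=red&utm_source=newsletter"))
# -> https://www.example.com/product?color=red
```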

In what cases does this rule not apply?

If you want to permanently remove a URL from the index AND prevent crawling, the correct sequence is: (1) keep the page accessible with noindex while Google removes it, (2) monitor Search Console until complete disappearance, (3) block in robots.txt afterwards if necessary.

Another case: sensitive files (personal data PDFs, admin, etc.). Here, robots.txt is not enough — an X-Robots-Tag: noindex in the HTTP header + server authentication is required. Relying solely on robots.txt to protect sensitive content is a mistake: the URL can be discovered through other means (backlinks, shares) and appear in the index without the content being crawled.

Warning: robots.txt is NOT a security measure. It neither prevents indexing of the naked URL nor discovery by third parties. For confidential content, use server authentication + noindex in the HTTP header.
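
As a quick audit of that setup, a sketch along these lines (the URL is an assumption) checks what the server actually answers: a noindex in the X-Robots-Tag header on a crawlable response, or an authentication wall (401/403), is what you are looking for.

```python
# Sketch with an assumed URL: inspect the HTTP response of a sensitive document.
import requests

resp = requests.head(
    "https://www.example.com/docs/confidential.pdf",
    allow_redirects=True,
    timeout=10,
)
x_robots = resp.headers.get("X-Robots-Tag", "").lower()
print("X-Robots-Tag noindex:", "noindex" in x_robots)
print("behind authentication:", resp.status_code in (401, 403))
# Note: some servers answer HEAD poorly; fall back to requests.get if needed.
```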

Practical impact and recommendations

What should you do if you combine robots.txt and noindex?

First, identify the affected URLs. In Google Search Console, go to the Coverage section, look for pages "Blocked by robots.txt". Export the list. Cross-reference it with your sitemap or CMS to spot those that also have a noindex.
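
A rough sketch of that cross-referencing step, assuming the Search Console export was saved as blocked.csv with one URL per row in the first column: fetch each blocked URL directly and flag those that carry a noindex Googlebot can no longer see.

```python
# Sketch: cross-reference a Search Console export (assumed file blocked.csv)
# with a direct noindex check on each URL.
import csv

import requests

with open("blocked.csv", newline="", encoding="utf-8") as f:
    for row in csv.reader(f):
        url = (row[0] if row else "").strip()
        if not url.startswith("http"):
            continue  # skip the header row and empty lines
        resp = requests.get(url, timeout=10)
        header_noindex = "noindex" in resp.headers.get("X-Robots-Tag", "").lower()
        html = resp.text.lower()
        meta_noindex = 'name="robots"' in html and "noindex" in html  # crude check
        if header_noindex or meta_noindex:
            # Conflict: the directive exists but robots.txt hides it from Googlebot.
            print(url, "-> noindex present but blocked in robots.txt")
```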

Next, remove the robots.txt block for these URLs. Leave the noindex in place. Submit the URLs via the Search Console URL inspection tool to force a re-crawl. Monitor the coverage report: the pages should move from "Blocked" to "Excluded (noindex)" within 2 to 4 weeks depending on crawl frequency.

What mistakes should you avoid during the fix?

Do not strip all the rules out of robots.txt at once if you have thousands of them. Proceed in segments: identify patterns (e.g., /admin/*, /?sessionid=*) and test them against a sample of URLs before global deployment.
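
Before shipping new rules, a local dry run can catch patterns that block more than intended. The rules and URLs below are made up, and the matching only approximates Google's wildcard handling; the Search Console tester remains the reference.

```python
# Rough dry run of candidate Disallow patterns against a URL sample.
# '*' is translated to a regex wildcard; '$' end anchors are not handled.
import re
from urllib.parse import urlparse

candidate_rules = ["/admin/*", "/*?sessionid=*"]
sample_urls = [
    "https://www.example.com/admin/login",
    "https://www.example.com/product?sessionid=42",
    "https://www.example.com/blog/robots-vs-noindex",
]

def blocks(rule: str, path: str) -> bool:
    pattern = "^" + ".*".join(re.escape(part) for part in rule.split("*"))
    return re.search(pattern, path) is not None

for url in sample_urls:
    p = urlparse(url)
    path = p.path + (f"?{p.query}" if p.query else "")
    hits = [r for r in candidate_rules if blocks(r, path)]
    print("BLOCK" if hits else "allow", url, hits)
```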

Another pitfall: removing the noindex too soon. If you lift the robots.txt block AND remove the noindex simultaneously, Google will index pages you wanted to exclude. Keep the noindex active, lift robots.txt, wait for complete de-indexing, then decide if you want to allow indexing or maintain the noindex.

How can you verify that your configuration is correct?

Use the robots.txt testing tool in Search Console: paste a URL, check that it is not blocked. Then inspect the URL with the inspection tool: Google should be able to crawl the page and detect the noindex tag in the "Coverage" tab.

On the server logs side, filter Googlebot requests: if you see 200 OK with Googlebot user-agent but the URL remains "Blocked" in Search Console, it means there is a cache delay or a dynamic robots.txt rule causing the issue. Compare robots.txt locally vs. what Googlebot sees (using tools like Screaming Frog + rendering).
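
For the log side, a sketch like this one (log path and combined log format are assumptions) isolates what Googlebot actually receives so you can compare it with the Search Console status. Matching the user-agent string alone does not prove the hits come from Google; reverse-DNS verification is the stricter check.

```python
# Sketch: summarize the status codes served to a "Googlebot" user-agent,
# assuming a standard combined access log at the path below.
import re
from collections import Counter

LOG_LINE = re.compile(
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) .*"(?P<ua>[^"]*)"$'
)

hits = Counter()
with open("/var/log/nginx/access.log", encoding="utf-8", errors="replace") as log:
    for line in log:
        m = LOG_LINE.search(line)
        if m and "Googlebot" in m.group("ua"):
            hits[(m.group("status"), m.group("path"))] += 1

for (status, path), count in hits.most_common(20):
    print(status, count, path)
```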

  • Export the "Blocked by robots.txt" URLs from Search Console and cross-reference with noindex pages
  • Remove the robots.txt block for noindex URLs, without touching the noindex itself
  • Submit a sample of URLs via the inspection tool to quickly force a re-crawl
  • Monitor the coverage report: expected transition from "Blocked" to "Excluded (noindex)" within 2-4 weeks
  • Never combine Disallow and noindex on the same URLs — choose one or the other based on the goal
  • Test with the robots.txt tool in Search Console + URL inspection to validate the configuration

The rule is simple: robots.txt controls access, noindex controls indexing. The two should never overlap on the same URLs. If you want to exclude a page from the index, keep it crawlable with noindex. If you want to reduce the crawl of parameters, use the URL Parameters Tool or canonical tags instead of a brute block.

These technical trade-offs can quickly become complex on sites with thousands of pages — with legacy CMS, inherited robots.txt rules, plugins that automatically generate noindex. If you notice persistent inconsistencies or drops in crawl after correction, it might be wise to consult a specialized SEO agency for a complete configuration audit and tailored support during deployment.

❓ Frequently Asked Questions

Can you block a URL in robots.txt if it already contains a noindex?
No. The robots.txt block prevents Google from seeing the noindex tag, so the page can stay in the index as a naked URL. Leave the URL crawlable and rely solely on the noindex for de-indexing.
What happens if I block an already indexed page in robots.txt?
The URL can remain in the index indefinitely with the note "Blocked by robots.txt", without a snippet. Google can no longer crawl the page to check for a noindex or a removal, so the status freezes.
Does the URL parameters tool replace robots.txt for managing crawl budget?
No, they are complementary. The parameters tool tells Google that a parameter does not generate unique content, which reduces crawling without blocking access. Robots.txt is a brute lock; the URL parameters tool is a fine-tuning knob.
How do you force Google to remove a URL blocked by robots.txt from the index?
Lift the robots.txt block, add a noindex to the page, and submit the URL via the Search Console inspection tool. Once it is de-indexed (status "Excluded - noindex"), you can reinstate the robots.txt block if necessary.
Does a noindex in the HTTP header work if the page is blocked by robots.txt?
No. Whether the noindex is in an HTML tag or an HTTP header, Google must be able to crawl the page to read it. Blocking access via robots.txt makes any noindex directive invisible to the bot.
