
Official statement

Do not change the configuration of the robots.txt file during a migration. If certain URLs were blocked by robots.txt for good reasons before the migration, they must remain blocked after the migration. A migration is not a reason to change crawling access rules.
🎥 Source video

Extracted from a Google Search Central video

⏱ 20:15 💬 EN 📅 27/08/2020 ✂ 12 statements
Watch on YouTube (17:21) →
Other statements from this video (11)
  1. Should you really redirect all images during a site migration?
  2. 2:01 Does a domain migration really cause traffic loss?
  3. 3:03 Does the history of a purchased domain really weigh down an SEO migration?
  4. 6:42 Merging two websites: why doesn't Google treat it like a classic migration?
  5. 8:14 How does Google actually transfer signals during a domain migration?
  6. 9:47 How long does it really take to transfer SEO signals during a migration?
  7. 10:18 Should you really use the Google Search Console change of address tool during a migration?
  8. 11:23 Does a migration trigger a quality reassessment by Google?
  9. 15:05 Should you really roll back after a failed site migration?
  10. 18:42 Should you really avoid changing everything at once during an SEO migration?
  11. 19:43 Does migrating domains really erase SEO penalties and bad signals?
TL;DR

Google states that a website migration is not a valid reason to modify the rules of the robots.txt file. If certain URLs were blocked before the migration, they should remain blocked afterwards. The reasoning: a migration changes the technical structure, not the crawling strategy. For an SEO, this means auditing the robots.txt in advance of the migration, not during.

What you need to understand

Why does Google emphasize stability in robots.txt so much?

The statement from Martin Splitt reminds us of a principle often overlooked in practice: a technical migration should never trigger a revision of crawling rules. The robots.txt file defines what engines can or cannot crawl, and these decisions are based on a prior editorial or technical strategy.

If you had blocked certain sections (filter facets, archives, internal search pages) before the migration, it’s because you had identified a crawl budget, duplicate content, or performance issue. Changing these rules mid-migration introduces an uncontrollable variable into an already risky process.

What justifies a robots.txt block before migration?

Classic reasons include: keeping low-value content out of the crawl (tags, filters with no added value), protecting sensitive areas (client accounts, APIs), and preserving crawl budget by excluding dynamically generated URLs with no SEO interest.
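To make this concrete, here is a minimal sketch in Python (standard urllib.robotparser module) of what such a pre-migration file can look like and how to spot-check which URLs it blocks. The paths (/search, /tag/, /my-account/, /api/) are hypothetical examples, not rules quoted from Google.

    from urllib import robotparser

    # Hypothetical pre-migration robots.txt: each block reflects a deliberate
    # decision (internal search, thin tag pages, private areas, API endpoints).
    old_rules = [
        "User-agent: *",
        "Disallow: /search",
        "Disallow: /tag/",
        "Disallow: /my-account/",
        "Disallow: /api/",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(old_rules)

    # Strategic pages must stay crawlable; low-value or sensitive ones stay blocked.
    for path in ["/", "/product/red-shoes", "/search?q=shoes", "/tag/promo", "/api/cart"]:
        allowed = rp.can_fetch("Googlebot", "https://example.com" + path)
        print(f"{path:22} -> {'crawlable' if allowed else 'blocked'}")

The tooling matters less than the principle: these decisions exist before the migration and are simply carried over, whether you check them with a script or with the Search Console tester.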

These strategic decisions do not change with the underlying technology. Whether you are moving from WordPress to Shopify or from an Apache server to Nginx, the URLs to exclude remain the same; only their technical structure evolves. Let's be honest: many SEOs take advantage of a migration to 'start fresh', and this is often where things go wrong.

In what context can this rule be problematic?

Google's statement assumes that your initial robots.txt was relevant. But if the old file contained errors (overly broad blocks, deprecated directives, poorly calibrated wildcards), you are stuck. Should you really postpone corrections until after the migration?

Google does not provide a clear answer here. The instruction 'do not change anything' primarily applies to sites where the robots.txt was properly configured. For others, you must weigh the stability of the migration process against the need to correct critical errors before the switch.
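To illustrate the kind of error that can hide in a legacy file, here is a hedged sketch built around a hypothetical overly broad rule: Disallow: /cat was meant for filter pages but also swallows /catalogue/. Note that Python's urllib.robotparser does simple prefix matching and ignores Google's wildcard extensions (* and $), so treat it as a first-pass check rather than an exact emulation of Googlebot.

    from urllib import robotparser

    # Hypothetical legacy rules: "/cat" was meant to block /cat-filter/ pages,
    # but as a prefix it also swallows /catalogue/, a strategic section.
    legacy_rules = [
        "User-agent: *",
        "Disallow: /cat",
        "Disallow: /old-promo/",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(legacy_rules)

    # Running a sample of strategic URLs against the old rules surfaces
    # this kind of overly broad block before the switch.
    for path in ["/catalogue/chairs", "/cat-filter/color-red", "/old-promo/2019"]:
        verdict = "blocked" if not rp.can_fetch("Googlebot", path) else "crawlable"
        print(f"{path:28} -> {verdict}")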

  • A migration changes the technical structure, not the crawling strategy
  • robots.txt blocks should be decided beforehand, not during the migration
  • Correcting a faulty robots.txt during a migration adds an additional risk
  • Auditing the robots.txt file before the project prevents unpleasant surprises
  • Google assumes your initial configuration was relevant

SEO Expert opinion

Is this directive realistic in the face of complex migrations?

Google's position makes sense in theory, but it becomes problematic in real-world scenarios. Many sites migrate precisely because their SEO configuration was flawed. Asking to keep a problematic robots.txt means perpetuating mistakes to avoid disrupting crawl.

Specifically, if your old site mistakenly blocks entire categories or contains now-unnecessary directives, you have two options: correct them before the migration (and risk destabilizing an already fragile site) or correct them afterwards (and endure a period where the new site inherits the same limitations). Neither option is ideal, and Google does not specify how to handle such scenarios.

What contradictions do we observe in practice?

In practice, many successful migrations actually include a complete review of the robots.txt. The argument 'don’t change anything' overlooks that some platforms automatically generate unwanted URLs (Shopify with its filtered collections, WordPress with its tag archives) that did not exist on the old site.

And this is where it gets tricky. If you are migrating from a custom site to a CMS that massively generates new URLs, should you really wait 'until after the migration' to block these new URLs? Google’s response remains vague. In the field, SEOs who waited often saw their crawl budget explode before they could react.

When does this rule clearly not apply?

If your migration involves a radical change of structure (switching from an e-commerce site to a showcase site, completely overhauling the architecture), keeping the robots.txt unchanged makes no sense. The blocked URLs may not even exist anymore or correspond to abandoned features.

In such cases, it’s essential to map the old rules, identify which still remain relevant, and build a new file before the switch. Yes, this adds a step. But pretending that a migration is neutral with respect to crawl is a fiction in many projects.

Warning: If your robots.txt is currently blocking critical sections by mistake (like /category/ or /product/), do not transfer it exactly as is to the new site citing 'stability'.

Practical impact and recommendations

What should you check before launching a migration?

Before any switch, audit your robots.txt line by line. Identify each Disallow and User-agent directive, and ask yourself if it still makes sense. Many files contain rules added years ago by service providers who are no longer around, with no documentation.

Use Google Search Console to check which URLs are currently blocked. Compare them with your migration plan: will these URLs still exist? Under the same structure? If the answer is no, you cannot just copy-paste the file.
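A minimal sketch of that comparison, assuming you export the new site's URL plan to a text file (one path per line). The file name and the Disallow values below are hypothetical.

    # Flag old Disallow rules whose paths no longer match anything
    # in the new site's URL plan (e.g. an export from the migration mapping).
    old_disallows = ["/search", "/tag/", "/wp-content/uploads/private/", "/boutique-v1/"]

    with open("new_url_plan.txt", encoding="utf-8") as f:  # one URL path per line
        new_paths = [line.strip() for line in f if line.strip()]

    for rule in old_disallows:
        still_matches = any(path.startswith(rule) for path in new_paths)
        status = "still relevant" if still_matches else "matches nothing on the new site -> review"
        print(f"Disallow: {rule:35} {status}")

Rules that match nothing are exactly the ones you should document and decide on before the switch, not in the middle of it.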

How to manage changes in CMS or platform?

A platform change often generates new technical URLs: Shopify creates /collections/, PrestaShop generates /index.php?id_category=, Magento adds sorting parameters. These URLs did not exist on the old site, so your current robots.txt does not cover them.

You must anticipate these new patterns and prepare corresponding rules ahead of the migration. Testing on a staging environment is essential: deploy the new site in pre-production, crawl it with Screaming Frog or Oncrawl, and verify that your robots.txt properly blocks what needs to be blocked.
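As a complement to a full crawler, a quick script can already confirm that the staging robots.txt covers the new patterns. A hedged sketch, assuming a hypothetical staging host that serves its robots.txt without authentication:

    from urllib import robotparser

    # Point the parser at the staging robots.txt (hypothetical URL).
    rp = robotparser.RobotFileParser()
    rp.set_url("https://staging.example.com/robots.txt")
    rp.read()

    # URL patterns that did not exist on the old site and must be covered
    # by the new rules before the switch (illustrative examples).
    new_patterns = [
        "/collections/shoes?filter.v.option.color=red",  # Shopify filtered collection
        "/index.php?id_category=12",                     # PrestaShop category parameter
        "/chairs.html?product_list_order=price",         # Magento sorting parameter
    ]

    # Caveat: urllib.robotparser only does prefix matching, so rules written
    # with Google-style wildcards will not be honored here; a real crawl of
    # the staging site remains the reference test.
    for path in new_patterns:
        allowed = rp.can_fetch("Googlebot", "https://staging.example.com" + path)
        print(f"{path:50} -> {'NOT blocked, add a rule?' if allowed else 'blocked'}")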

What mistakes should you absolutely avoid during the transition?

Never remove all rules 'just in case', thinking you can restore them later. You will open a highway for Google's crawler straight into low-quality or duplicate content, and the damage can be lasting.

Conversely, do not block everything out of fear of making a mistake. Some SEOs put a temporary Disallow: / in place 'for the time being', which cuts Googlebot off from the entire site and quickly wipes out its visibility. The robots.txt is not an on/off switch for managing visibility: use meta robots tags for that.
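A tiny sketch of why that panic move is so destructive, using the same standard-library parser and hypothetical paths:

    from urllib import robotparser

    # A blanket "Disallow: /" blocks crawling of every URL for every bot.
    panic_rules = [
        "User-agent: *",
        "Disallow: /",
    ]

    rp = robotparser.RobotFileParser()
    rp.parse(panic_rules)

    for path in ["/", "/product/red-shoes", "/blog/"]:
        print(path, "->", "crawlable" if rp.can_fetch("Googlebot", path) else "blocked")
    # Every line prints "blocked": Googlebot can no longer fetch anything,
    # including the pages whose visibility you wanted to keep. To keep a page
    # out of the index, a <meta name="robots" content="noindex"> tag is the
    # right tool, and it only works if the page stays crawlable.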

  • Audit the current robots.txt at least 30 days before the migration
  • Map the new technical URLs generated by the target platform
  • Test the new robots.txt on a staging environment
  • Compare current rules with the new architecture (matching or obsolete)
  • Document each directive (why it exists, what it blocks)
  • Prepare a rollback plan if a critical block appears post-migration
Google's directive is clear: do not modify your robots.txt during the migration. However, this rule assumes that your current file is relevant and that your migration does not radically change the site's structure. In reality, many projects require an anticipatory review, tested in advance. If you find these compromises complex or if your migration has technical grey areas, consulting a specialized SEO agency can help you avoid costly mistakes and secure the transfer of your organic visibility.

❓ Frequently Asked Questions

Can you fix a faulty robots.txt just before a migration?
Yes, but only if you test the changes on a staging environment and leave at least 2 weeks between the correction and the switchover. Otherwise, you stack two variables and lose all traceability.
What if the new platform generates URLs that the old robots.txt does not block?
Add the new rules to the robots.txt before the migration and test them in pre-production. Google recommends not modifying it during the migration, but failing to block junk content can ruin your crawl budget.
How can I check that my current robots.txt does not contain critical errors?
Use the robots.txt testing tool in Google Search Console, and cross-check with a Screaming Frog crawl in 'respect robots.txt' mode. Verify that strategic URLs are not blocked by mistake.
Does a domain name change affect the robots.txt at all?
No. A domain change alone (with 301 redirects) does not justify any modification of the robots.txt. The blocking rules remain identical; only the domain changes.
Should you keep the robots.txt and XML sitemap in sync during a migration?
Absolutely. If your sitemap contains URLs blocked by robots.txt, Google will ignore them anyway. Audit both files together to avoid inconsistencies that slow down the indexing of the new site.
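On that last point, here is a hedged sketch of the cross-check, assuming local copies of the new robots.txt and sitemap (file names are hypothetical; adapt them to your migration deliverables):

    from urllib import robotparser
    import xml.etree.ElementTree as ET

    # Cross-check the new sitemap against the new robots.txt and list
    # sitemap URLs that the rules would block.
    rp = robotparser.RobotFileParser()
    with open("new_robots.txt", encoding="utf-8") as f:
        rp.parse(f.read().splitlines())

    ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
    tree = ET.parse("new_sitemap.xml")
    locs = [loc.text.strip() for loc in tree.findall(".//sm:loc", ns) if loc.text]

    conflicts = [url for url in locs if not rp.can_fetch("Googlebot", url)]
    print(f"{len(conflicts)} sitemap URL(s) blocked by robots.txt")
    for url in conflicts:
        print(" -", url)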