
Official statement

Googlebot does not use the 'crawl-delay' directive from robots.txt files; Google considers it obsolete now that modern servers can handle far more traffic. Instead, Google automatically adjusts its crawling frequency based on server responsiveness: if a server shows signs of overload or returns errors, Googlebot reduces its crawl rate.

🎥 Source: Google Search Central video (EN, 1:39, published 21/12/2017); the statement appears at 0:37.
TL;DR

Google confirms that Googlebot does not consider the crawl-delay directive in the robots.txt file, deeming it outdated for modern servers. Instead, the crawler automatically adjusts its frequency based on server responsiveness. Specifically, if your server shows signs of overload, Googlebot slows down on its own without any manual intervention from you.

What you need to understand

Why does Google reject this directive?

The crawl-delay directive historically let webmasters impose a minimum delay between two successive requests from a crawler. Google considers this mechanism ill-suited to current infrastructure.

Modern servers handle thousands of simultaneous requests effortlessly. Imposing a fixed pause of several seconds between requests reflects 2000s-era thinking, when a poorly configured Apache server could be overwhelmed by an overly aggressive bot.

How does Googlebot adjust its crawl speed?

The crawler continuously monitors response times and error signals (503 responses, timeouts). When the server struggles to respond, Googlebot automatically reduces the number of parallel connections and lengthens the interval between requests.

This reactive system is based on real-time signals rather than a static instruction. If your server responds in 50ms, Googlebot speeds up. If responses take 3 seconds, it slows down.
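
To make this feedback loop concrete, here is a minimal sketch of the same kind of adaptive throttling in Python. It only illustrates the principle described above, not Googlebot's actual implementation; the function name, thresholds, and timeout are invented for the example.

    import time
    import urllib.error
    import urllib.request

    def adaptive_crawl(urls, delay=1.0, min_delay=0.1, max_delay=30.0):
        # Fetch each URL in turn, adjusting the pause between requests
        # according to how the server responds (illustrative thresholds).
        for url in urls:
            start = time.monotonic()
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    status = resp.status
            except urllib.error.HTTPError as err:
                status = err.code      # e.g. 503 when the server is overloaded
            except Exception:
                status = None          # timeout or network failure
            elapsed = time.monotonic() - start

            if status in (None, 503) or elapsed > 3.0:
                delay = min(delay * 2, max_delay)   # back off: server struggling
            elif elapsed < 0.05:
                delay = max(delay / 2, min_delay)   # speed up: ~50 ms responses
            time.sleep(delay)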

Does this statement mean that robots.txt is becoming obsolete?

Not at all. Google still respects the Disallow and Allow directives, which define what may be crawled. The crawl-delay directive was a non-standard extension, honored mainly by Bing and other smaller engines.

Googlebot has always prioritized its own adjustment logic over this directive. This official clarification simply puts to rest a stubborn myth that adding crawl-delay to robots.txt would control server load on Google's side.
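
As an illustration, in a hypothetical robots.txt like the one below, Google applies the Disallow and Allow rules but silently skips the Crawl-delay line, whereas Bingbot applies all three:

    User-agent: *
    Disallow: /admin/
    Allow: /admin/public-report.html
    Crawl-delay: 5    # honored by Bingbot and Yandex, ignored by Googlebot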

  • Googlebot ignores crawl-delay because automatic adjustment is more effective than static instructions
  • The crawler monitors server responsiveness and error codes to adjust its frequency in real time
  • The Disallow and Allow directives remain fully respected and essential
  • This position only concerns Google — Bing and other engines continue to honor crawl-delay

SEO Expert opinion

Is this statement consistent with field observations?

Absolutely. SEO professionals have known for years that crawl-delay has no effect on Googlebot. Real-life testing shows that Google crawls at the rate it deems optimal, regardless of the specified value.

What actually worked for controlling load was Search Console's crawl rate setting, which allowed manual frequency capping. But Google has removed this feature for most sites, believing that its algorithm does the job better.

What nuances should be added to this position?

Google claims that modern servers can handle load without issue. This is true for recent cloud infrastructure, but false for sites hosted on low-end shared servers or poorly optimized CMSs.

A WordPress site with 40 active plugins and an unindexed database can crash under 10 requests per second, even on recent hardware. And while Google says it adjusts its frequency to signs of overload, webmasters of under-provisioned sites regularly report aggressive crawl bursts that end in 503 errors.

If Search Console shows repeated 503 errors during crawl peaks, Googlebot's automatic adjustment is not enough. You need to optimize your infrastructure or rate-limit the crawler yourself with server-level rules.

What should you do if Googlebot still overloads your server?

First step: check your server logs to identify the URLs that trigger long response times. Often, the problem comes from dynamically generated pages with heavy SQL queries.
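
As a sketch, assuming an nginx-style access log with the request time appended as the last field and Googlebot identified by its user agent, the slowest crawled URLs can be surfaced like this (field positions will vary with your log format):

    from collections import defaultdict

    totals = defaultdict(lambda: [0, 0.0])      # url -> [hits, total seconds]

    with open("access.log") as log:
        for line in log:
            if "Googlebot" not in line:
                continue
            parts = line.split()
            url = parts[6]                      # request path in a common log layout
            totals[url][0] += 1
            totals[url][1] += float(parts[-1])  # assumes $request_time as last field

    # Print the 20 URLs with the worst average response time.
    slowest = sorted(totals.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
    for url, (hits, total) in slowest[:20]:
        print(f"{total / hits:6.2f}s avg  {hits:5d} hits  {url}")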

Then, block unnecessary sections in robots.txt (filter facets, URLs with infinite parameters). If the problem persists, consider a smart caching CDN that serves static versions to Googlebot. As a last resort, contact Search Console support to report abnormally aggressive crawling — but responses are often generic copy-paste replies.

Practical impact and recommendations

Should you remove crawl-delay from your robots.txt?

No, unless Google is the only engine that matters to you. Bing, Yandex, and most alternative crawlers still respect this directive, and removing it risks letting those third-party bots overload your server.

Keep the directive if your SEO traffic partly comes from Bing or if commercial scraping crawlers regularly hit your site. For Google specifically, that line is simply ignored.
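
If you keep it, you can scope the directive to the engines that actually honor it rather than leaving it in the global group; a hypothetical layout:

    # Bingbot and Yandex honor Crawl-delay; Googlebot ignores it either way
    User-agent: bingbot
    Crawl-delay: 10

    User-agent: Yandex
    Crawl-delay: 10

    User-agent: *
    Disallow: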

How can you verify that Googlebot is not affecting your performance?

Analyze your server logs by correlating the timestamps of Googlebot requests with your load metrics (CPU, response time). If latency spikes line up with the crawler's visits, that's a clear signal that crawling is contributing to the load.
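
Under the same log assumptions as the earlier sketch, a quick hourly correlation can be computed like this (the timestamp format matches a common access-log layout and may need adapting):

    from collections import defaultdict
    from datetime import datetime

    by_hour = defaultdict(lambda: [0, 0.0])   # hour -> [hits, total seconds]

    with open("access.log") as log:
        for line in log:
            if "Googlebot" not in line:
                continue
            parts = line.split()
            ts = datetime.strptime(parts[3].lstrip("["), "%d/%b/%Y:%H:%M:%S")
            key = ts.strftime("%Y-%m-%d %H:00")
            by_hour[key][0] += 1
            by_hour[key][1] += float(parts[-1])   # $request_time as last field

    for hour in sorted(by_hour):
        hits, total = by_hour[hour]
        print(f"{hour}  {hits:5d} Googlebot hits  {total / hits:6.2f}s avg response")

Line the hourly output up against your CPU or latency dashboards for the same windows; sustained spikes that coincide with crawl bursts confirm the correlation.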

Search Console also displays crawl statistics: pages crawled per day, average download time, response codes. An average download time above 500 ms indicates that your server is struggling.

What concrete actions can you implement?

First, optimize your page generation times: object cache, SQL optimization, lazy loading of heavy resources. Then, clean up your URL architecture to prevent Googlebot from wasting time on duplicate or worthless pages.

If your site generates thousands of filtered or paginated pages, use strategic robots.txt and noindex to channel the crawl budget towards priority content. A well-structured XML sitemap also helps Googlebot to concentrate its efforts intelligently.
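
For example, a hypothetical robots.txt that keeps crawlers out of filter facets and infinite parameter combinations while pointing them at the sitemap:

    User-agent: *
    # Hypothetical facet and search patterns; adapt them to your own URL scheme.
    Disallow: /*?color=
    Disallow: /*?sort=
    Disallow: /search/

    Sitemap: https://www.example.com/sitemap.xml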

  • Keep crawl-delay if you target Bing or other engines besides Google
  • Regularly monitor crawl statistics in Search Console
  • Block unnecessary sections (facets, parameter URLs) in robots.txt
  • Optimize server response times to under 300ms for strategic pages
  • Cache frequently crawled pages with a CDN or server-side cache
  • Analyze logs to detect correlations between crawling and overload

Googlebot autonomously manages its crawl frequency based on your server's responsiveness. Your priority is therefore to ensure fast response times and a clean architecture. For high-volume sites or complex infrastructure, these optimizations can require dedicated expertise: a specialized SEO agency can audit your setup, identify bottlenecks, and implement solutions suited to your technical constraints.

❓ Frequently Asked Questions

Does Bing respect the crawl-delay directive?
Yes, Bingbot and most other crawlers (Yandex, DuckDuckBot) still respect crawl-delay. Do not remove it from robots.txt if you target engines other than Google.
How does Google adapt its crawl frequency in practice?
Googlebot monitors response times and error signals such as 503s and timeouts. If the server slows down or returns errors, the crawler automatically reduces the number of parallel requests.
Can you still manually limit the crawl rate in Search Console?
Google has removed this option for most sites, judging that automatic adjustment is more effective. Only a few specific cases still have access to it.
What should you do if Googlebot causes repeated 503 errors?
Start by optimizing your server response times and blocking unnecessary sections via robots.txt. If the problem persists, contact Search Console support with detailed logs.
Should you remove crawl-delay to speed up indexing by Google?
No, because Googlebot already ignores it. Removing it will not speed anything up on Google's side, but it risks overloading your server with other bots that still honor the directive.