Official statement
Google confirms that Googlebot does not consider the crawl-delay directive in the robots.txt file, deeming it outdated for modern servers. Instead, the crawler automatically adjusts its frequency based on server responsiveness. Specifically, if your server shows signs of overload, Googlebot slows down on its own without any manual intervention from you.
What you need to understand
Why is Google abandoning this directive?
The crawl-delay directive historically allowed webmasters to set a minimum delay between two requests from the crawler. Google considers this mechanism unsuitable for current infrastructures.
Modern servers now handle thousands of simultaneous requests effortlessly. Imposing a fixed pause of several seconds between each crawl reflects a logic from the 2000s, when a poorly configured Apache server could be overwhelmed by an overly aggressive bot.
How does Googlebot adjust its crawl speed?
The crawler continuously monitors response times and error codes (503, timeout). When the server struggles to respond, Googlebot automatically reduces the number of parallel connections and the interval between requests.
This reactive system is based on real-time signals rather than a static instruction. If your server responds in 50ms, Googlebot speeds up. If responses take 3 seconds, it slows down.
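Googlebot's actual algorithm is not public, but the feedback loop described above can be sketched in a few lines. Everything here — class name, thresholds, scaling factors — is an illustrative assumption, not Google's real implementation:

```python
# Illustrative sketch of response-time-driven crawl throttling.
# All thresholds and multipliers below are invented for the example.

class AdaptiveCrawler:
    def __init__(self, delay=0.1):
        self.delay = delay  # seconds to wait between requests

    def record_response(self, status_code, response_time):
        """Adjust the inter-request delay based on server feedback."""
        if status_code == 503 or response_time > 3.0:
            # Server struggling: back off sharply, capped at 60s
            self.delay = min(self.delay * 2, 60.0)
        elif response_time < 0.05:
            # Fast responses: speed up gradually, floored at 10ms
            self.delay = max(self.delay * 0.8, 0.01)

crawler = AdaptiveCrawler()
crawler.record_response(200, 3.2)  # slow response: delay doubles to 0.2s
crawler.record_response(503, 0.5)  # error code: delay doubles again to 0.4s
```

The point of the sketch is the asymmetry: backing off is fast (multiplicative), speeding up is cautious, which matches the behavior webmasters observe in their logs.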
Does this statement mean that robots.txt is becoming obsolete?
Not at all. Google still respects Disallow and Allow directives, which define what can be crawled. The crawl-delay directive was a non-standard extension, mainly used by Bing and other minor engines.
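To make the distinction concrete, here is a minimal robots.txt mixing the directives discussed above (the paths are illustrative):

```text
User-agent: *
Disallow: /internal-search/   # respected by Googlebot
Allow: /internal-search/help  # respected by Googlebot
Crawl-delay: 10               # ignored by Googlebot; honored by Bing and Yandex
```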
Googlebot has always prioritized its own adjustment logic over this directive. This official clarification simply puts to rest a stubborn myth that adding crawl-delay to robots.txt would control server load on Google's side.
- Googlebot ignores crawl-delay because automatic adjustment is more effective than static instructions
- The crawler monitors server responsiveness and error codes to adjust its frequency in real-time
- The Disallow and Allow directives remain fully respected and essential
- This position only concerns Google — Bing and other engines continue to honor crawl-delay
SEO Expert opinion
Is this statement consistent with field observations?
Absolutely. SEO professionals have known for years that crawl-delay has no effect on Googlebot. Real-life testing shows that Google crawls at the rate it deems optimal, regardless of the specified value.
What actually worked to control load was Search Console: its crawl rate setting allowed manual frequency capping. But Google has since removed this feature for most sites, believing that its algorithm does the job better.
What nuances should be added to this position?
Google claims that modern servers can handle load without issue. This is true for recent cloud infrastructure, but false for sites hosted on low-end shared servers or poorly optimized CMSs.
A WordPress site with 40 active plugins and a database missing key indexes can crash under 10 requests per second, even on a recent server. One claim deserves scrutiny here: Google says it adjusts its frequency to signs of overload, yet webmasters of under-equipped sites regularly report aggressive crawls leading to 503 errors.
What should you do if Googlebot still overloads your server?
First step: check your server logs to identify the URLs that trigger long response times. Often, the problem comes from dynamically generated pages with heavy SQL queries.
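As a sketch, those slow URLs can be pulled out of an access log with a few lines of Python. The log format (combined format extended with a trailing request time in seconds) and the 1-second threshold are assumptions — adapt both to your server's configuration:

```python
import re
from collections import defaultdict

# Assumes each log line ends with the request time in seconds, e.g.:
# 66.249.66.1 - - [21/12/2017:10:00:00] "GET /page HTTP/1.1" 200 512 2.84
LINE = re.compile(r'"(?:GET|POST) (\S+) HTTP/[^"]*" (\d{3}) \S+ ([\d.]+)$')

def slow_urls(log_lines, threshold=1.0):
    """Return {url: worst response time} for requests above the threshold."""
    worst = defaultdict(float)
    for line in log_lines:
        m = LINE.search(line)
        if m:
            url, seconds = m.group(1), float(m.group(3))
            if seconds > threshold:
                worst[url] = max(worst[url], seconds)
    return dict(worst)

sample = [
    '66.249.66.1 - - [21/12/2017:10:00:00] "GET /products?color=red HTTP/1.1" 200 512 2.84',
    '66.249.66.1 - - [21/12/2017:10:00:01] "GET /about HTTP/1.1" 200 2048 0.07',
]
print(slow_urls(sample))  # only the slow, dynamically generated URL shows up
```

Sorting the result by worst response time usually points straight at the heavy SQL-backed pages mentioned above.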
Then, block unnecessary sections in robots.txt (filter facets, URLs with infinite parameters). If the problem persists, consider a smart caching CDN that serves static versions to Googlebot. As a last resort, contact Search Console support to report abnormally aggressive crawling — but responses are often generic copy-paste replies.
Practical impact and recommendations
Should you remove crawl-delay from your robots.txt?
No, unless Google is the only engine that matters for your traffic. Bing, Yandex, and most alternative crawlers still respect this directive. Removing it risks overloading your server with these third-party bots.
Keep the directive if your SEO traffic partly comes from Bing or if commercial scraping crawlers regularly hit your site. For Google specifically, that line is simply ignored.
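In practice, a split configuration like the following is safe (paths illustrative): Googlebot simply skips the Crawl-delay line, while Bing and Yandex honor it.

```text
# Googlebot ignores Crawl-delay but still honors Disallow
User-agent: Googlebot
Disallow: /internal-search/

# bingbot and Yandex will wait 10 seconds between requests
User-agent: bingbot
User-agent: Yandex
Crawl-delay: 10
Disallow: /internal-search/
```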
How can you verify that Googlebot is not affecting your performance?
Analyze your server logs by correlating the timestamps of Googlebot requests with your load metrics (CPU, response time). If you notice latency spikes correlated with the crawler’s visits, that’s a clear signal.
The Search Console also displays crawl statistics: number of pages crawled per day, average download time, response codes. An average time over 500ms indicates that your server is struggling.
What concrete actions can you implement?
First, optimize your page generation times: object cache, SQL optimization, lazy loading of heavy resources. Then, clean up your URL architecture to prevent Googlebot from wasting time on duplicate or worthless pages.
If your site generates thousands of filtered or paginated pages, use strategic robots.txt and noindex to channel the crawl budget towards priority content. A well-structured XML sitemap also helps Googlebot to concentrate its efforts intelligently.
- Keep crawl-delay if you target Bing or other engines besides Google
- Regularly monitor crawl statistics in Search Console
- Block unnecessary sections (facets, parameter URLs) in robots.txt
- Optimize server response times to under 300ms for strategic pages
- Cache frequently crawled pages with a CDN or server-side cache
- Analyze logs to detect correlations between crawling and overload
❓ Frequently Asked Questions
Does Bing respect the crawl-delay directive?
How does Google adapt its crawl frequency in practice?
Can you still manually limit the crawl rate in Search Console?
What should you do if Googlebot causes repeated 503 errors?
Should you remove crawl-delay to speed up indexing by Google?
Source: Google Search Central video · duration 1 min · published on 21/12/2017