
Official statement

Googlebot does not interpret the crawl-delay directive in robots.txt files because servers are dynamic enough to handle more traffic without needing a fixed pause between requests. We automatically adjust crawl frequency based on server responsiveness. If your server shows errors or becomes slow, we will reduce our crawl activity.
🎥 Source video

Extracted from a Google Search Central video

⏱ 1:39 💬 EN 📅 21/12/2017 ✂ 4 statements
Watch on YouTube →
Other statements from this video (3)
  1. 0:37 Does Googlebot really ignore the crawl-delay directive in your robots.txt?
  2. 1:09 Can you really control Google's crawl frequency via Search Console?
  3. 1:09 Can you really steer Google's crawl frequency via Search Console?
📅 Official statement from 21/12/2017 (8 years ago)
TL;DR

Google claims that Googlebot does not take the crawl-delay directive into account in robots.txt files. The engine automatically adjusts its crawl speed based on how responsive your servers are. Essentially, this means you have no manual control over the crawl rate through this directive, and Google alone decides when to slow down in case of server overload.

What you need to understand

What is the crawl-delay directive and why does it exist?

The crawl-delay directive was introduced in the robots.txt standard to allow webmasters to define a minimum delay between two consecutive requests from a bot. Its purpose is to protect fragile servers from being overloaded by overly aggressive crawling.

Initially supported by engines like Bing or Yandex, this directive has always been ignored by Google. The official statement confirms this unambiguously: Googlebot does not interpret it, period.
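
For illustration, here is what the directive looks like in practice and how a standards-following client reads it. The snippet below is a sketch using Python's standard urllib.robotparser; the 10-second value and the paths are placeholder examples. Note that the parser happily returns the value for any user agent, but the real Googlebot never consults this field.

```python
from urllib.robotparser import RobotFileParser

# A minimal robots.txt containing a crawl-delay directive (placeholder values).
# Bingbot and Yandex honor it; Googlebot does not read this field at all.
ROBOTS_TXT = """\
User-agent: *
Crawl-delay: 10
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# crawl_delay() returns the pause a compliant crawler should leave
# between requests, or None when no directive applies.
print(parser.crawl_delay("Bingbot"))    # 10 -> respected by Bing
print(parser.crawl_delay("Googlebot"))  # also 10 as parsed, but ignored by the real Googlebot
```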

How does Google handle crawl frequency then?

Google utilizes a dynamic adjustment system based on the health signals of your infrastructure. If your server responds quickly without errors, Googlebot speeds up. If 5xx errors appear or response times lengthen, the bot automatically slows down.

This approach assumes that modern servers are capable of handling traffic spikes. Google believes that a fixed pause between requests is obsolete compared to a reactive mechanism.
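
As an illustration of that reactive principle, here is a minimal sketch of an adaptive crawl-rate controller. This is not Google's actual algorithm, and the thresholds and multipliers are invented for the example; it only shows the idea of ramping up on fast, healthy responses and backing off on 5xx errors or slow ones.

```python
class AdaptiveCrawlRate:
    """Toy model of a crawler that adapts its request rate to server health.

    Illustrative only: Google's real thresholds and factors are not public.
    """

    def __init__(self, requests_per_second: float = 1.0):
        self.rps = requests_per_second

    def record_response(self, status_code: int, response_time_s: float) -> float:
        if status_code >= 500 or response_time_s > 2.0:
            # Server is struggling: back off sharply.
            self.rps = max(0.1, self.rps * 0.5)
        elif response_time_s < 0.5:
            # Server is fast and healthy: ramp up gently.
            self.rps = min(10.0, self.rps * 1.1)
        return self.rps


crawler = AdaptiveCrawlRate()
print(crawler.record_response(200, 0.2))  # fast 200 -> rate increases
print(crawler.record_response(503, 3.0))  # 503 / slow -> rate halves
```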

Why does Google maintain this position?

The answer can be summed up in one word: efficiency. Google wants to crawl the web as quickly as possible without waiting for arbitrary delays. A well-configured server with CDN, cache, and scalable infrastructure doesn’t need a crawl-delay.

The problem? Not all sites have modern infrastructure. Small sites on shared hosting or undersized servers may suffer from this policy.

  • Googlebot ignores crawl-delay: no exceptions to this rule
  • The adjustment is automatic: based on actual server performance
  • No direct manual control: you cannot enforce a slowdown via robots.txt
  • Search Console remains your only lever: modifying the crawl rate for critical cases
  • Infrastructure is key: slow servers are penalized by default

SEO Expert opinion

Does this statement reflect reality on the ground?

Yes, and it is consistent with what has been observed for years. Googlebot has never respected crawl-delay, even when webmasters set values of 5 or 10 seconds. Server logs confirm that the bot sends requests without regard to this directive.

However, the claim that "servers are dynamic enough" needs nuance. In practice, thousands of sites run on shared hosting for €5/month that cannot handle 20 requests/second. Google either underestimates this reality or prefers to ignore it.

Does automatic adjustment really work?

In most cases, yes. When your server starts returning 503 or 504 errors, Googlebot indeed reduces its intensity. I have observed reductions of up to 70% of crawl activity after a series of server errors across multiple clients.

The catch: this mechanism is reactive, not preventive. Google waits for your server to show signs of weakness before slowing down. In the meantime, your site may have already experienced an overload, potentially impacting real users. [To be verified]: Google does not communicate the exact thresholds that trigger a slowdown.

What concrete alternatives exist to control crawling?

Search Console still allows (for how long?) the crawl rate to be adjusted, but only downward and only in exceptional cases. This option is gradually disappearing from the interface, replaced by a message urging you to improve your infrastructure instead.

Real solutions involve technical optimization: putting a CDN in front of static resources, configuring aggressive server-side caching, and monitoring response times. If your server handles normal load comfortably, Googlebot has no reason to slow down, and you have no way to force it to.

Attention: Some hosts that artificially throttle performance may trigger a permanent slowdown of Google's crawl. Check your access logs to identify any suspicious patterns.

Practical impact and recommendations

What should you do if your server is suffering from aggressive crawling?

The first step: analyze your server logs to confirm that Googlebot is indeed responsible for the overload. Sometimes, other bots (Bing, SEMrush, Ahrefs) are more resource-intensive. Isolate the user-agent Googlebot and count requests per hour.
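
Here is a minimal sketch of that log analysis, assuming an nginx/Apache combined-format access log at /var/log/nginx/access.log (adjust the path and parsing to your setup). It counts hourly requests from user agents containing "Googlebot" and reports their 5xx rate; a production version should also verify the bot via reverse DNS, since the user-agent string can be spoofed.

```python
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # adjust to your server

# Combined log format, e.g.: ... [21/Dec/2017:10:15:32 +0000] "GET /page HTTP/1.1" 200 ...
HOUR_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2})")
STATUS_RE = re.compile(r'" (\d{3}) ')

requests_per_hour = Counter()
errors_5xx = 0
total = 0

with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        if "Googlebot" not in line:  # naive filter; verify via reverse DNS in production
            continue
        total += 1
        hour = HOUR_RE.search(line)
        if hour:
            requests_per_hour[hour.group(1)] += 1
        status = STATUS_RE.search(line)
        if status and status.group(1).startswith("5"):
            errors_5xx += 1

for hour_key, count in sorted(requests_per_hour.items()):
    print(f"{hour_key}h  {count} Googlebot requests")
if total:
    print(f"5xx rate served to Googlebot: {100 * errors_5xx / total:.1f}%")
```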

If Google's crawl is indeed problematic, focus on optimizing response times. Enable GZIP compression, optimize your database queries, reduce TTFB. The faster your server responds, the less time Googlebot will spend connected.
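
A quick way to sanity-check two of those points from the outside is sketched below: it measures an approximate TTFB and reports whether compression is advertised. The URL is a placeholder, the measurement includes DNS and TLS setup, and it uses only the standard library, so treat the number as a rough indicator rather than a lab-grade figure.

```python
import time
import urllib.request

URL = "https://example.com/"  # replace with one of your own pages

request = urllib.request.Request(URL, headers={"Accept-Encoding": "gzip"})

start = time.perf_counter()
with urllib.request.urlopen(request, timeout=10) as response:
    response.read(1)  # first body byte received
    ttfb = time.perf_counter() - start
    encoding = response.headers.get("Content-Encoding", "none")

print(f"Approximate TTFB (incl. DNS/TLS): {ttfb * 1000:.0f} ms")
print(f"Content-Encoding: {encoding}")  # 'gzip' or 'br' means compression is active
```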

What mistakes should you absolutely avoid?

Do not block Googlebot in robots.txt hoping to reduce load: you will kill your SEO. Do not configure overly aggressive rate limiting at the firewall either: you risk temporarily banning the bot and slowing the indexing of your new content.

Also, avoid believing that adding crawl-delay will change anything for Google. This directive only serves to control Bingbot, Yandex, or third-party crawlers. For Googlebot, it is invisible.

How can you check if your infrastructure is suitable?

Check the Crawl Stats section in Search Console. If you see spikes in server errors (5xx) correlated with increases in crawling, your infrastructure is undersized. The goal: keep the error rate below 1%.

Test load capacity with tools like Apache Bench or Load Impact. Simulate 50 concurrent requests and observe response times. If they explode beyond 2 seconds, invest in a server upgrade or migrate to scalable hosting.
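
If you prefer a scripted check over Apache Bench, the sketch below fires 50 concurrent requests with Python's standard library and summarizes response times. The URL and request counts are placeholders, and you should only run this against infrastructure you control.

```python
import statistics
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"  # only load-test servers you own
CONCURRENCY = 50
TOTAL_REQUESTS = 200


def timed_request(_):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(URL, timeout=15) as response:
            response.read()
        return time.perf_counter() - start
    except Exception:
        return None  # treat timeouts and errors as failures


with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    durations = list(pool.map(timed_request, range(TOTAL_REQUESTS)))

ok = [d for d in durations if d is not None]
print(f"Completed: {len(ok)}, failed: {len(durations) - len(ok)}")
if len(ok) >= 2:
    p95 = statistics.quantiles(ok, n=20)[18]
    print(f"Median: {statistics.median(ok):.2f}s  p95: {p95:.2f}s")
```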

  • Analyze logs to isolate Googlebot requests and quantify the load
  • Optimize TTFB, compression, and server cache to reduce crawl time per page
  • Monitor 5xx errors in Search Console and immediately fix the sources
  • Never rely on crawl-delay to slow down Google — invest in infrastructure
  • Consider a CDN to relieve your origin server of static resources
  • Document crawl patterns to anticipate spikes and adjust resources

Google's lack of support for crawl-delay necessitates an infrastructure-first approach. You do not control the crawl rate, so your server must withstand whatever Google sends. For sites on complex infrastructures or tight budgets, this optimization can quickly become technically demanding. Engaging a specialized SEO agency allows you to obtain a precise diagnosis of your server bottlenecks and a tailored optimization strategy based on your actual resources.

❓ Frequently Asked Questions

Can I use crawl-delay to slow down Googlebot on my site?
No, Googlebot completely ignores this directive. It only works for other engines such as Bing or Yandex. For Google, the only solution is to optimize your infrastructure or temporarily block certain sections via robots.txt.
How does Google detect that a server is overloaded?
Google monitors HTTP 5xx errors (503, 504), timeouts, and lengthening response times. If these signals increase, Googlebot automatically reduces its crawl activity.
Is there an official way to limit Google's crawling?
Search Console still allows the crawl rate to be adjusted downward in certain cases, but Google is gradually removing this feature. The recommended approach is to improve server performance.
Can overly intense crawling harm my rankings?
Indirectly, yes. If crawling saturates your server to the point of slowing down real users or generating errors, Google can downgrade your rankings. A performant server is an indirect ranking factor.
Should I still include crawl-delay in my robots.txt?
Only if you want to control other bots (Bing, Yandex, SEO crawlers). For Googlebot the line will be ignored, but it does no harm. You might as well keep it for the other engines if your server is limited.

