
Official statement

Even when pages return 404 or 410 errors, Googlebot may continue to crawl them intermittently to check if they become active again, particularly to counter temporary errors.
🎥 Source video

Extracted from a Google Search Central video (duration 1h00, in English, published 23/10/2017, 9 statements extracted), at timestamp 40:19.
Watch on YouTube (40:19) →
Other statements from this video (8)
  1. 3:40 How will the new Google Search Console transform your day-to-day SEO work?
  2. 5:43 Will Search Console finally go beyond 90 days of history?
  3. 7:47 Will mobile-first indexing really shake up your SEO strategy?
  4. 15:11 Does 304 Not Modified really boost your crawl budget?
  5. 19:51 How should you structure pagination to maximize indexing by Google?
  6. 31:49 Can Googlebot really fill in forms to crawl your hidden content?
  7. 57:00 Do links below the fold carry less weight for Google?
  8. 59:56 Why is Google recruiting a Search evangelist to talk about SEO?
Official statement from 23/10/2017 (8 years ago)
TL;DR

Google regularly crawls pages that return 404 or 410 codes, even after their deindexation. The aim is to detect whether a deleted resource comes back online, either intentionally or due to a temporary technical error. For an SEO, this means that these URLs constantly consume crawl budget and need to be actively managed, not just ignored.

What you need to understand

Does Google really revisit all error pages?

Yes, and it's a core mechanism of the crawler. When Googlebot encounters a 404 or 410, it does not completely write off that URL. It marks it as deleted in the index, of course, but keeps it in its address book with a reduced crawl frequency.

This logic addresses two operational needs of Google. First, to distinguish between a one-time server error and an actual deletion. A site can return a temporary 404 due to a deployment bug or overload. Second, to capture resources that come back online after restoration, migration, or content republication.

What’s the difference between a 404 and a 410 from the crawler's perspective?

On paper, the 410 should signal a permanent deletion, while the 404 remains ambiguous (page not found, temporarily or not). Google itself has long encouraged the use of the 410 for faster deindexation.

In practice, the crawling behavior remains similar. Both codes trigger intermittent revisits, perhaps with a slightly lower frequency for the 410. But Google does not trust blindly: it checks anyway. A site that restores a mistakenly deleted page will be crawled again, regardless of the originally returned code.

How long does Googlebot continue these checks?

Google does not communicate a specific duration. Field observations show that URLs can be revisited for months or even years, with intervals gradually increasing. A popular page with a history of incoming links will be crawled longer than a marginal resource without backlinks.

The frequency decreases exponentially: several visits in the first week, then one per month, then quarterly. But it never strictly drops to zero as long as the URL remains technically accessible (even in error) and has historical interest for Google.

  • 404/410 errors remain crawled intermittently, often for months after deletion
  • The revisit frequency depends on historical PageRank and the number of links pointing to the resource
  • A 410 does not guarantee an immediate stop to crawling, contrary to what the HTTP RFC suggests
  • Temporary errors (overloads, bugs) motivate this logic of repeated checking
  • The crawl budget consumed by these URLs remains significant on sites with many historical errors

SEO Expert opinion

Is this statement consistent with what we observe in the field?

Absolutely. Server logs have confirmed this behavior for years. We often see Googlebot returning to deleted URLs, sometimes with predictable patterns: crawl spikes after external link-building operations, monthly revisits on old pages with strong historical traffic.

The interesting point is that Google does not provide any precise timeframe or stopping criterion. We know that the frequency decreases, but not at what rate or when it stops completely. Observations suggest a logarithmic decay rather than a sharp cutoff. [To be verified]: does the frequency ever actually reach zero, or does it stay at a minimal floor indefinitely?

What are the concrete implications for crawl budget?

On a medium-sized site (under 50,000 pages), the impact remains negligible. Googlebot has plenty of capacity to crawl these errors without penalizing active pages. But on large sites with a heavy history, the cost becomes significant.

Imagine an e-commerce site that has removed 200,000 obsolete listings over the years. If Googlebot revisits just 5% of those URLs each month, that represents 10,000 requests for dead content. That’s 10,000 hits that do not serve to index fresh content. And if those pages return slow 404s (poorly optimized CMS, internal redirects before the error), the crawl time skyrockets.
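
To put an order of magnitude on this scenario, here is a quick back-of-the-envelope calculation reusing the figures above; the per-hit response times are illustrative assumptions drawn from the ranges discussed later, not measured values.

```python
# Rough estimate of the crawl waste described above. The input figures
# (200,000 removed URLs, 5% revisited per month, response times) are the
# article's illustrative assumptions, not measured data.

removed_urls = 200_000
monthly_revisit_rate = 0.05  # share of dead URLs Googlebot re-checks each month

wasted_requests = removed_urls * monthly_revisit_rate
print(f"Requests spent on dead content per month: {wasted_requests:,.0f}")  # 10,000

# Response speed matters as much as request count: a CMS-rendered 404 versus a
# minimal static one changes total fetch time by nearly two orders of magnitude.
for label, seconds_per_hit in [("slow CMS 404", 1.5), ("static 404", 0.02)]:
    total_hours = wasted_requests * seconds_per_hit / 3600
    print(f"{label}: ~{total_hours:.1f} hours of Googlebot fetch time per month")
```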

Should we really be concerned about this or is it unnecessary micromanagement?

It depends on the context. For 90% of sites, ignoring these crawls is acceptable. Google prioritizes effectively, and important pages are crawled first. Intermittent 404s do not cause ranking penalties.

However, three situations require active management. One: sites with a constrained crawl budget (millions of pages, low authority). Two: failed migrations leaving thousands of orphaned URLs crawled in a loop. Three: technically slow 404s (timeouts, chained redirects before the error). In these cases, actively cleaning up via robots.txt or serving fast 410 responses can free up budget for priority content.

Warning: mass-blocking 404s in robots.txt prevents Google from confirming that they are indeed dead. It will continue to consider them potentially active. It’s better to allow the crawl but optimize the response speed of errors.

Practical impact and recommendations

What should you do with old error URLs that drain crawl budget?

First step: identify 404/410 URLs that are still regularly crawled. Google Search Console (Coverage report, Excluded section) provides a partial view, but server logs remain essential for accurate diagnosis. Filter Googlebot hits on 4xx codes and sort by frequency.
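
As a sketch of that log filter, here is a minimal Python script, assuming an access log in the common combined format; the file path, regular expression and user-agent check are assumptions to adapt to your own setup (for a strict audit, also verify Googlebot by reverse DNS rather than trusting the user-agent string alone).

```python
# Minimal sketch: list the 404/410 URLs that Googlebot still requests,
# from a raw access log in the "combined" format. Adapt path and regex
# to your own server; user-agent matching alone can be spoofed.
import re
from collections import Counter

LOG_PATH = "access.log"  # hypothetical path
# ip - - [date] "METHOD /path HTTP/x.x" status size "referer" "user-agent"
LINE_RE = re.compile(r'"[A-Z]+ (?P<path>\S+) HTTP/[^"]+" (?P<status>\d{3}) .*"(?P<ua>[^"]*)"$')

hits = Counter()
with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
    for line in log:
        m = LINE_RE.search(line)
        if not m or "Googlebot" not in m.group("ua"):
            continue
        if m.group("status") in ("404", "410"):
            hits[m.group("path")] += 1

# Most frequently crawled error URLs first: these are the candidates to segment.
for path, count in hits.most_common(50):
    print(f"{count:6d}  {path}")
```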

Next, segment these URLs. Those with active backlinks deserve a 301 redirect to equivalent content or a parent category. Those without links or historical interest can remain with a quick 404 (response time < 100 ms). If they represent a huge volume, a properly configured 410 can speed up the decline of crawling, without a guarantee.
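
A possible sketch of that segmentation step, crossing the crawled error URLs with a backlink export; the file names, column name and decision rule below are hypothetical and only illustrate the logic.

```python
# Decide, URL by URL, between a 301 redirect and a fast 404/410,
# based on whether the URL still receives external links.
import csv

def load_backlinked_urls(path: str) -> set[str]:
    """URLs that still receive external links, e.g. from a backlink tool export."""
    with open(path, newline="", encoding="utf-8") as f:
        return {row["target_url"] for row in csv.DictReader(f)}  # hypothetical column

def decide(url: str, backlinked: set[str]) -> str:
    if url in backlinked:
        return "301 -> closest equivalent page or parent category"
    return "fast 404 (or 410 if the removal is documented and permanent)"

backlinked = load_backlinked_urls("backlinks_export.csv")    # hypothetical export
with open("crawled_error_urls.txt", encoding="utf-8") as f:  # output of the log audit
    for url in (line.strip() for line in f if line.strip()):
        print(f"{url}\t{decide(url, backlinked)}")
```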

How to technically optimize the response of error pages?

Many sites serve heavy 404s: full CMS loading, database queries, complex templates. Result: each error crawl costs 500 ms to 2 seconds, whereas a static 404 responds in 20 ms.

Configure your server (Nginx, Apache, CDN) to return a minimal 404 without going through the application. Capture the patterns of dead URLs (e.g., /product-*, /archive-*) at the reverse proxy level and return a direct HTTP response with minimal HTML body. Googlebot doesn’t care about the content of the error page; it only reads the status code.
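
As an illustration, a minimal Nginx sketch of that idea, answering dead-URL patterns straight from the proxy layer; the location patterns reuse the article's examples and must be adapted and tested against your real URL structure before going live.

```nginx
server {
    # ... existing listen / server_name / proxy_pass configuration ...

    # Obsolete listings removed in past cleanups: answer in a few milliseconds,
    # without touching the CMS or the database.
    location ~ ^/product- {
        return 404;
    }

    # Documented, permanent removals: an explicit 410 Gone.
    location ~ ^/archive- {
        return 410;
    }
}
```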

Should you use 410 instead of 404 to speed up forgetting?

Google has always stated that 410 allows for faster deindexation. This is true for leaving the index, but it doesn’t cut off verification crawls. The difference remains marginal in practice.

Use 410 if you want to explicitly signal a permanent deletion (legally removed content, discontinued product without equivalent). But don’t expect Googlebot to stop its visits immediately. The real savings come from response speed, not the exact code used.
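
A quick way to verify what your removed URLs actually return, and how fast, is a small audit script; this sketch uses the third-party requests package and hypothetical URLs.

```python
# Check the status code and response time of removed URLs: you want a clean,
# fast 404 or 410, not a 200, a redirect chain or a multi-second response.
import requests

removed_urls = [  # hypothetical examples
    "https://www.example.com/product-12345",
    "https://www.example.com/archive-2014-summer-sale",
]

for url in removed_urls:
    resp = requests.head(url, allow_redirects=False, timeout=10)
    elapsed_ms = resp.elapsed.total_seconds() * 1000
    print(f"{resp.status_code}  {elapsed_ms:6.0f} ms  {url}")
```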

  • Audit server logs to identify 404/410 URLs regularly crawled by Googlebot
  • 301 redirect error URLs that still have active backlinks
  • Implement ultra-fast 404s (< 100 ms) at the server level, without going through the CMS
  • Use 410 only for documented permanent deletions, not by default
  • Do not block 404s in robots.txt, as it prevents Google from confirming the deletion
  • Monitor changes in crawl budget before/after optimization through Search Console and logs

404 or 410 pages continue to be crawled, sometimes for years. On small sites, the impact remains anecdotal. On larger volumes, cleaning up dead URLs and speeding up the technical response of errors frees up crawl budget for active content. These optimizations require detailed log analysis and advanced server configuration. If your infrastructure is complex or you lack visibility into Googlebot's actual behavior, support from a specialized SEO agency can help diagnose precisely where your crawl budget is going and prioritize actions with quick impact.

❓ Frequently Asked Questions

How long does Googlebot keep crawling a page that returns a 404?
Google gives no precise duration. Observations show revisits for several months, or even years, with decreasing frequency. Pages with backlinks or a traffic history are crawled for longer.
Does a 410 really stop the crawl faster than a 404?
A 410 speeds up deindexing, but does not cut off verification crawls. Google still revisits these URLs intermittently to detect whether they come back online.
Should I block 404 pages in robots.txt to save crawl budget?
No, it is counterproductive. Blocking a 404 prevents Google from confirming that it is really dead, so Google keeps treating it as potentially active. It is better to let the crawl happen but optimize the response speed.
Do 404s really consume a lot of crawl budget?
On small sites, the impact is negligible. On large sites with thousands of historical error URLs, especially if they respond slowly, the wasted crawl volume becomes significant and is worth optimizing.
How can I tell which 404s Google still crawls?
Google Search Console gives a partial view via the Coverage report. For a complete diagnosis, analyze your server logs: filter Googlebot requests with 4xx codes and sort by frequency to identify the problematic URLs.
