Official statement
Other statements from this video:
- 1:06 Is dynamic rendering really risk-free for SEO?
- 1:38 Does dynamic rendering really slow down your server, or does it improve crawl budget?
- 2:39 Why does Google treat JavaScript redirects as 302s rather than 301s?
- 2:39 Does Google really treat 301 and 302 redirects differently for SEO?
- 3:42 Can Googlebot really crawl links hidden in a hamburger menu?
- 7:01 How do you correctly handle 404 errors in an SPA without risking deindexing?
- 14:57 Why does Googlebot miss content loaded via Web Workers?
- 30:51 Is content hidden in accordions really indexed by Google?
- 31:49 Should you really abandon manual structured data implementation?
Google strongly discourages serving bots webfont-free versions of pages in the name of performance. Bot detection, extra code, and the potential for bugs create complexity that far outweighs the marginal speed benefit. Two simpler, lower-risk alternatives exist: block webfonts via robots.txt, or use font-display: swap to load fonts asynchronously.
What you need to understand
What Makes This Practice Appealing to Some SEOs?
The idea of serving lightweight pages to bots stems from a legitimate observation: Googlebot and other crawlers consume server time and crawl budget. Removing webfonts—which can weigh anywhere from a few dozen to several hundred kilobytes—seems like a logical optimization to speed up rendering and reduce load.
In practice, some high-volume sites or those with a constrained technical infrastructure have been tempted by this approach. The reasoning: why serve a custom font to a bot that doesn't "see" the design? Why not send it a streamlined version, faster to load, to maximize crawled pages?
What Is Google’s Concern?
Martin Splitt points out three major weaknesses in this strategy. First, bot detection is never 100% reliable—spoofed user agents, third-party crawlers, SEO tools masquerading as Googlebot. You risk serving the wrong version to the wrong visitor.
Next, the additional code necessary to manage this conditional logic introduces fragility. Each layer of detection is an opportunity for bugs: misconfiguration errors, improperly invalidated caches, server rules that apply too broadly. And when it breaks, Google might index a degraded version of your site.
Finally, the actual gain is minimal compared to the risk. Modern webfonts are compressed, often cached, and their impact on overall rendering time remains marginal compared to JavaScript, images, or multiple network requests. In other words: you're complicating your technical stack to gain a few milliseconds.
What Alternatives Does Google Recommend?
Two simple approaches without the risk of cloaking. First option: block webfonts via robots.txt. Googlebot will not download them but will continue to crawl and index your content normally. No bot detection needed, no conditional logic on the server side.
Second approach: use font-display:swap in your @font-face declarations. The browser immediately displays the text with a system font, then swaps to the webfont as soon as it’s loaded. Googlebot gets a quick rendering, users do too, and you avoid any unnecessary technical complexity.
- Bot Detection: never 100% reliable, risk of serving the wrong version
- Additional Code: each layer of conditional logic = opportunity for bugs
- Marginal Gains: webfonts are minor compared to JS, images, network requests
- Alternative 1: block webfonts via robots.txt (simple, low-risk)
- Alternative 2: font-display:swap for asynchronous loading
SEO expert opinion
Is This Statement Consistent with Ground Observations?
Yes, and it aligns with a consistent principle at Google: avoid any form of cloaking, even well-intentioned cloaking. Serving different content to bots remains a gray area: technically allowed in specific cases (dynamic serving of mobile vs. desktop HTML, for example), but closely monitored by the algorithm. Here the difference concerns aesthetic resources, yet the mechanism is the same: detection + variation = risk.
In practice, sites that have tried this approach often encounter unpredictable indexing issues. Misconfigured CDN caches serving the bot version to users, overly broad server rules blocking legitimate crawlers, or Google detecting manipulation and applying penalties. The risk is not worth the potential gain.
In What Scenarios Could This Rule Be Nuanced?
It's hard to imagine a legitimate scenario. Even on sites with an ultra-constrained crawl budget (millions of pages, daily updates), blocking webfonts via robots.txt suffices. No need for bot detection, no conditional code, same result.
The only edge case: a site with extremely heavy webfonts (multiple megabytes of unoptimized custom fonts) where download time genuinely hampers crawling. But then the real problem lies upstream: optimize or replace those fonts rather than work around the issue with cloaking. [To be verified] whether Google would tolerate a documented exception in this context; no official communication supports it.
What Is Googlebot's Real Priority Regarding Performance?
Reducing Time to First Byte (TTFB), limiting redirect chains, and optimizing server response speed. Webfonts rank much lower in the hierarchy of optimizations: a server that takes 800 ms to respond penalizes crawling far more than a 50 KB webfont does.
And for user-side Core Web Vitals, font-display:swap elegantly solves layout shift and rendering time issues. You benefit on both fronts—bot and human—without technical compromises. This is exactly what Martin Splitt recommends and aligns with the "single version, optimized for all" approach that Google has advocated for years.
Practical impact and recommendations
What Should You Do If You Are Already Serving Different Pages to Bots?
First step: audit your server configuration to identify every bot-detection rule. Look for conditions on the user-agent, CSS or font variations that depend on the visitor, and CDN caches keyed on user-agent. Document each point where the bot version diverges from the user version.
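As an illustration, the kind of rule to hunt down might look like this in an nginx configuration (the pattern and path here are hypothetical, not taken from the video):

    # Hypothetical bot-detection rule: anything claiming to be Googlebot
    # gets rewritten to a stripped-down template. This is the logic to remove.
    if ($http_user_agent ~* "googlebot") {
        rewrite ^/(.*)$ /bot-version/$1 last;
    }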
Next, test the real impact of this optimization: measure response time before and after removing webfonts for bots. If the gain is under 100 ms per page, you are within measurement noise and better off simplifying your stack. If the gain exceeds 200 ms, ask yourself why your webfonts are so heavy in the first place.
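A low-tech way to compare, sketched with curl (the URL is a placeholder; note that spoofing the user-agent only shows what a fake Googlebot receives, since verifying the real Googlebot requires a reverse DNS check):

    # Baseline: TTFB and total time as a regular client
    curl -s -o /dev/null -w "TTFB %{time_starttransfer}s, total %{time_total}s\n" https://example.com/some-page

    # Same request with a Googlebot user-agent, to hit any bot-specific code path
    curl -s -o /dev/null \
      -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
      -w "TTFB %{time_starttransfer}s, total %{time_total}s\n" https://example.com/some-page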
How to Shift to a Compliant Approach Without Losing Performance?
Option 1: add your webfont files to robots.txt with Disallow. Googlebot will no longer download them but will continue to crawl your HTML/CSS pages normally. No risk of cloaking, no additional code. Test in Search Console (URL Inspection tool) to verify that rendering remains correct.
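A minimal robots.txt sketch, assuming the fonts are hosted on your own domain (fonts served from a third-party CDN are governed by that host's robots.txt, not yours):

    # Block common webfont formats for all compliant crawlers.
    # Googlebot supports the * and $ wildcards used here.
    User-agent: *
    Disallow: /*.woff$
    Disallow: /*.woff2$
    Disallow: /*.ttf$
    Disallow: /*.eot$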
Option 2: implement font-display:swap in your @font-face declarations. Text displays instantly with a system font, the webfont loads in the background and swaps as soon as available. You improve LCP and CLS at the same time, and Googlebot gets a quick rendering without downloading the font. This is the solution recommended by Google, and it also benefits users on slow connections.
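A minimal @font-face sketch (the font name and file path are placeholders):

    @font-face {
      font-family: "BrandFont";                          /* hypothetical font name */
      src: url("/fonts/brandfont.woff2") format("woff2");
      font-display: swap; /* show fallback text immediately, swap in the webfont once loaded */
    }

    body {
      /* the system fallback is what renders during the swap period */
      font-family: "BrandFont", Arial, sans-serif;
    }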
What Mistakes to Avoid During This Transition?
Do not block your entire CSS via robots.txt thinking you’ll save time—Google needs styles to understand layout and detect main content. Only block .woff, .woff2, .ttf, .eot files if you opt for this approach.
Also avoid keeping dormant bot-detection code in your codebase, even if disabled. If a developer mistakenly reactivates it during a deployment, the problem silently returns. Remove the conditional logic entirely to eliminate the risk at its source.
- Audit all bot detection rules (server, CDN, application)
- Measure the real performance gain before deciding
- Block webfonts via robots.txt OR implement font-display:swap
- Test bot rendering with the URL Inspection tool in Search Console
- Completely eliminate bot detection logic from the source code
- Document the new configuration to avoid future regressions
❓ Frequently Asked Questions
Does blocking webfonts via robots.txt affect rendering on Google's side?
Does font-display:swap create a layout shift that hurts Core Web Vitals?
Can you serve an AMP version to bots and the standard version to users?
Is detecting Googlebot via user-agent reliable?
Are there other resources worth blocking via robots.txt to save crawl budget?