Official statement
Martin Splitt reminds us that CDNs are designed to serve static resources (CSS, JS, images, fonts) and not dynamic API calls, which may be slowed down by the caching layer. To optimize speed and avoid unnecessary latencies, APIs should be hosted on a dedicated domain like api.example.com. This architectural separation becomes critical when aiming to improve Core Web Vitals without sacrificing backend performance.
What you need to understand
Why is this distinction between static and dynamic resources important?
A CDN (Content Delivery Network) deploys your files across geographically distributed servers to bring them closer to the end user. The result: an image, a CSS file, or a JavaScript file loads faster from a local node than from your main data center.
The problem arises when you send dynamic API calls through the same CDN. These requests often require real-time validation, database access, and authentication. However, a CDN introduces an intermediate layer that, instead of speeding things up, adds latency: the CDN node must query your origin server, then send back the response. This results in two round-trips instead of just one.
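To make the two-round-trip cost concrete, here is a toy calculation; the latency figures are illustrative assumptions, not measurements:

```javascript
// Made-up latencies (ms) for illustration only.
const rtt = { userToEdge: 20, userToOrigin: 90, edgeToOrigin: 90 };
const backendProcessing = 80;

// Direct API call: one round-trip to the origin, plus backend work.
const direct = rtt.userToOrigin + backendProcessing; // 170 ms

// CDN cache miss on a dynamic endpoint: the edge node must relay the
// request to the origin, so two round-trips are paid instead of one.
const viaCdnMiss = rtt.userToEdge + rtt.edgeToOrigin + backendProcessing; // 190 ms

console.log(viaCdnMiss - direct); // 20 ms of pure overhead per call
```

For a cacheable static file the comparison flips, because a warm edge node answers without touching the origin at all; the penalty only exists for responses the edge cannot cache.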
How does this affect crawling and indexing?
Googlebot consumes a lot of static resources during modern JavaScript rendering. If your JS bundles or CSS take 500 ms to load, the bot wastes time — and your crawl budget diminishes. An effective CDN directly enhances rendering speed and content discovery on Google's side.
However, if your API endpoints also go through the same CDN and accumulate 200-300 ms of additional latency, you degrade the user experience without providing any benefit to the bot: it doesn't crawl your internal APIs, it crawls the final rendered HTML/JSON. Separating static and dynamic domains clarifies architecture and avoids invisible bottlenecks in waterfalls.
What is the recommended best practice?
Splitt advocates using a dedicated domain for API calls, typically api.example.com, which points directly to your backend infrastructure without going through the CDN layer. Static files remain served via cdn.example.com or your main CDN (Cloudflare, Fastly, CloudFront, Bunny, etc.).
This separation prevents the CDN configuration (cache headers, edge rules, TTL) from interfering with the sensitive cache headers of the APIs: you don’t want a CDN caching an authentication response or an e-commerce cart for an hour. By isolating traffic flows, you maintain fine control over each type of traffic.
- CDN for static resources: JavaScript, CSS, images, videos, fonts, SVG — everything that does not change often and can support long caching.
- Direct domain for APIs: REST/GraphQL endpoints, webhooks, authentication, real-time user data.
- Separate monitoring: Distinct APM and RUM to diagnose slowdowns on the API side without polluting frontend metrics.
- CORS and security: The API domain can implement strict CORS headers without affecting the security policy of the static CDN.
- Scalability: You can scale your API infrastructure (backend autoscaling) and your CDN (image/video bandwidth) independently.
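As a minimal sketch of that split, a Node.js backend could assign Cache-Control policies by resource type. The routes, extensions, and TTL values below are assumptions to adapt to your stack:

```javascript
// Hypothetical cache policy: aggressive edge caching for static assets,
// no shared caching at all for API responses.
function cacheHeadersFor(pathname) {
  if (pathname.startsWith('/api/')) {
    // Never let an intermediary cache authenticated or per-user data.
    return { 'Cache-Control': 'no-store' };
  }
  if (/\.(js|css|png|jpg|webp|woff2|svg)$/.test(pathname)) {
    // Fingerprinted assets tolerate a one-year immutable cache.
    return { 'Cache-Control': 'public, max-age=31536000, immutable' };
  }
  // HTML and everything else: short TTL so updates propagate quickly.
  return { 'Cache-Control': 'public, max-age=300' };
}

console.log(cacheHeadersFor('/api/cart'));      // { 'Cache-Control': 'no-store' }
console.log(cacheHeadersFor('/assets/app.js')); // long-lived immutable cache
```

Serving this policy from the origin lets the CDN honor it for static paths, while the api.example.com domain bypasses the CDN entirely and the `no-store` acts as a safety net.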
SEO Expert opinion
Does this statement truly reflect modern architectures?
Yes and no. Splitt states a good foundational practice, but the real-world scenario is more nuanced. Many modern CDNs (Cloudflare Workers, Fastly Compute@Edge, AWS CloudFront Functions) allow you to deploy serverless logic directly on edge nodes. In this case, the API doesn't necessarily add latency: it runs as close to the user as possible.
Let’s be honest: if you're using a basic CDN with just passive HTTP caching, then yes, routing your API calls through this CDN is counterproductive. But if you're utilizing edge functions or intelligent conditional caching, you can serve certain API responses from the edge without going back to the backend — and there, the latency gain is real. [To be verified]: Google does not specify if this rule also applies to JAMstack architectures with edge APIs.
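To illustrate that conditional-caching idea, an edge function (Cloudflare Worker, CloudFront Function, and similar) might gate which API responses are safe to serve from the edge. The decision logic below is a hypothetical sketch with a made-up route allow-list:

```javascript
// Hypothetical allow-list of API routes whose responses are identical
// for every user and can therefore be cached at the edge.
const EDGE_CACHEABLE_API = [/^\/api\/products\//, /^\/api\/categories$/];

function isEdgeCacheable(method, pathname, headers = {}) {
  if (method !== 'GET') return false; // never cache mutations
  if (headers['authorization'] || headers['cookie']) return false; // per-user data
  return EDGE_CACHEABLE_API.some((re) => re.test(pathname));
}

console.log(isEdgeCacheable('GET', '/api/products/42'));                    // true
console.log(isEdgeCacheable('GET', '/api/cart', { cookie: 'session=x' })); // false
```

Anything that fails the check falls through to the origin untouched, which keeps authenticated and mutating traffic out of the edge cache by construction.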
When does this separation become critical?
The separation between static and API domains makes sense when you have a high volume of user API requests: real-time dashboards, SaaS, mobile applications backed by a web backend, e-commerce with dynamic stock. If each visit triggers 10-20 API calls and each faces 150 ms of CDN latency, the cumulative impact degrades your Core Web Vitals (FID, INP).
On the other hand, for a WordPress showcase site with 2-3 occasional AJAX calls, the impact remains marginal. The risk? Implementing this recommendation without measuring the existing situation — complicating infrastructure for a gain of 50 ms on 3 requests per session. Prioritize based on your actual traffic profile, not according to an abstract doctrine.
What common mistakes are seen in the field?
The classic mistake: routing all traffic through a CDN configured for aggressive caching, including endpoints that return personalized JSON. Result: users see outdated data, the CDN serves stale responses, and developers spend hours debugging misconfigured Cache-Control headers.
Another frequent trap: using a single CDN subdomain (cdn.example.com) to serve both assets AND APIs, assuming a Vary header will suffice to differentiate them. Spoiler: it never does. Edge nodes apply their own purge rules, and you end up with inconsistencies across regions. Separating domains eliminates this class of bugs in one fell swoop.
Make sure CORS headers are correctly configured; otherwise your AJAX calls will fail silently in production. Googlebot won't care, but your users definitely will.
Practical impact and recommendations
How to audit your current architecture?
Start with a waterfall analysis under real conditions (Chrome DevTools, WebPageTest, SpeedCurve). Filter requests by type: identify which pass through the CDN and which go directly to the backend. If you see API calls returning X-Cache: HIT or CF-Cache-Status: HIT, that's a red flag, unless you explicitly configured conditional caching on those endpoints.
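If you script that audit, a small helper can flag API responses served from a CDN cache. The header names below follow the conventions used by Cloudflare (CF-Cache-Status) and CloudFront/Fastly-style setups (X-Cache); extend the list for your vendor:

```javascript
// Flags responses that were served from a CDN cache, based on common
// vendor cache-status headers. Header lookup is case-insensitive.
function servedFromCdnCache(headers) {
  const h = {};
  for (const [k, v] of Object.entries(headers)) h[k.toLowerCase()] = String(v);
  const status = h['cf-cache-status'] || h['x-cache'] || '';
  return /\bhit\b/i.test(status);
}

// An API endpoint showing a HIT is the red flag described above.
console.log(servedFromCdnCache({ 'CF-Cache-Status': 'HIT' }));          // true
console.log(servedFromCdnCache({ 'X-Cache': 'Miss from cloudfront' })); // false
```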
Next, measure the TTFB (Time To First Byte) of your APIs from several locations (Pingdom, GTmetrix multi-region). If the API TTFB exceeds 300 ms and the CDN adds 100-150 ms, separating domains is a quick win. Compare before/after migration to validate the actual gain, not just the theoretical one.
What configuration should be set up concretely?
Create a dedicated subdomain api.example.com pointing directly to your backend cluster (without going through the CDN reverse proxy). Set up a separate SSL/TLS certificate if necessary — many CDNs offer wildcard certificates, but for production APIs, a dedicated certificate with expiration monitoring is safer.
On the frontend side, replace all your fetch('/api/...') with fetch('https://api.example.com/...'). Adjust the CORS headers on the backend to allow https://www.example.com and your other legitimate origins. Test in staging with tools like curl -I to ensure that the Access-Control-Allow-Origin headers are present and correct.
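A minimal sketch of that frontend change, assuming the API base URL is injected through build-time configuration (API_BASE here is a hypothetical constant):

```javascript
// Hypothetical base URL; in practice read it from build-time config so
// staging and production can point at different API hosts.
const API_BASE = 'https://api.example.com';

function apiUrl(path) {
  // new URL() normalizes slashes and rejects malformed paths early.
  return new URL(path, API_BASE).toString();
}

// Before: fetch('/api/cart')
// After:  fetch(apiUrl('/api/cart'), { credentials: 'include' })
console.log(apiUrl('/api/cart')); // https://api.example.com/api/cart
```

Note that once the call becomes cross-origin, requests that carry cookies need `credentials: 'include'` on the frontend and `Access-Control-Allow-Credentials: true` on the backend, with an explicit (non-wildcard) Access-Control-Allow-Origin value.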
What metrics should be monitored after migration?
Monitor the evolution of your INP (Interaction to Next Paint) and FID (First Input Delay) in the Search Console and in your RUM (Real User Monitoring) tools. A reduction in API latency of 100-200 ms can shift your pages from “Needs Improvement” to “Good” in the Core Web Vitals — which has a direct ranking impact on mobile.
Additionally, monitor the 5xx error rate on the API side: if the CDN masked server incidents by serving stale cache, you'll now see them clearly. It's a necessary evil — better to fix backend bugs than to hide them under opportunistic caching. Set up alerts on API latency thresholds (p95, p99) to detect regressions before they impact organic traffic.
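For the p95/p99 alerting mentioned above, a nearest-rank percentile over a window of latency samples is a simple starting point; the 300 ms threshold is an assumed example, not a standard:

```javascript
// Nearest-rank percentile over a window of latency samples (ms).
function percentile(samples, p) {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Hypothetical alert rule; tune the threshold to your measured baseline.
function latencyAlerts(samples) {
  return {
    p95: percentile(samples, 95),
    p99: percentile(samples, 99),
    alert: percentile(samples, 95) > 300, // ms
  };
}

const window1 = [120, 95, 310, 140, 105, 98, 250, 130, 115, 100];
console.log(latencyAlerts(window1)); // { p95: 310, p99: 310, alert: true }
```

Tail percentiles matter here precisely because the CDN-masking problem described above hides incidents from averages: a healthy median can coexist with a pathological p99.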
- Audit waterfalls to identify API calls currently routed through CDN
- Create a dedicated subdomain api.example.com with direct DNS resolution to the backend
- Configure server-side CORS headers to allow legitimate frontend origins
- Migrate all API endpoints in frontend code to the new domain
- Test in staging with multiple geolocations to validate latency reduction
- Monitor INP, FID, and API TTFB in Search Console and your RUM tools post-migration
Other SEO insights extracted from this same Google Search Central video · duration 36 min · published on 30/10/2020