
Official statement

For sites sensitive to crawl budget, client-side JavaScript with multiple API requests (e.g., 5 requests per page on 10 million pages) counts against the crawl budget and can accumulate quickly.
23:32
🎥 Source video

Extracted from a Google Search Central video

⏱ 31:53 💬 EN 📅 09/12/2020 ✂ 16 statements
Watch on YouTube (23:32) →
Other statements from this video (15)
  1. 2:49 Why does Google almost always render your pages before indexing them?
  2. 3:52 Should the two-wave indexing model be abandoned?
  3. 7:35 Does Google use a sandbox or a honeymoon period for new sites?
  4. 8:02 Does Google really guess where to rank a new site before it even has any data?
  5. 9:07 Why do new sites go through roller coasters in the SERPs?
  6. 13:59 Do you really need to worry about crawl budget for your site?
  7. 15:37 Should you really worry about crawl budget under one million URLs?
  8. 16:09 Does crawl budget really exist, or is it just an SEO myth?
  9. 17:42 Does Google deliberately throttle its crawling to spare your servers?
  10. 18:51 Can Googlebot really stop crawling your site because of server error codes?
  11. 20:24 How can you detect a real crawl budget problem on your site?
  12. 21:57 Does pruning thin content really improve crawl budget?
  13. 22:28 Should you sacrifice server speed to save crawl budget?
  14. 24:36 Crawl budget: do all your URLs really count as much as Google claims?
  15. 25:39 Should you really worry about Googlebot's aggressive caching of your static resources?
📅 Official statement from 09/12/2020 (5 years ago)
TL;DR

Google confirms that every client-side API request generated by JavaScript counts against the crawl budget. On a site with 10 million pages and 5 API calls per page, that’s 50 million requests for Googlebot to handle. The impact is direct: crawling slows down, new pages are indexed later, and content updates are delayed. The solution? Switch to server-side rendering or consolidate API calls.

What you need to understand

What is crawl budget and why do APIs consume it?

The crawl budget is the amount of resources Googlebot is willing to spend exploring your site within a given time frame. Google does not crawl everything all the time — it allocates a quota based on the site's popularity, content freshness, and the load its bot puts on your server.

When you use client-side JavaScript, each script often triggers multiple API requests to your backend or to third-party services. Googlebot doesn’t just fetch the initial HTML — it renders the full page, executes the JS, and follows those API calls. The result? One page becomes six resources to crawl instead of one.
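A minimal sketch of that pattern, assuming a React product page rendered entirely client-side (the component and the /api/* endpoints are hypothetical): the initial HTML is an empty shell, and the content only appears after five separate fetches, i.e. six resources for Googlebot to process for a single URL.

```tsx
// Hypothetical client-side product page (React + TypeScript).
// The server returns an empty shell; the content only exists after
// five separate API calls, i.e. six crawlable resources for one page.
import { useEffect, useState } from "react";

export function ProductPage({ id }: { id: string }) {
  const [data, setData] = useState<Record<string, unknown> | null>(null);

  useEffect(() => {
    Promise.all([
      fetch(`/api/product/${id}`).then(r => r.json()),
      fetch(`/api/price/${id}`).then(r => r.json()),
      fetch(`/api/stock/${id}`).then(r => r.json()),
      fetch(`/api/reviews/${id}`).then(r => r.json()),
      fetch(`/api/recommendations/${id}`).then(r => r.json()),
    ]).then(([product, price, stock, reviews, recs]) =>
      setData({ product, price, stock, reviews, recs })
    );
  }, [id]);

  // What Googlebot sees until rendering and all five calls complete:
  if (!data) return <p>Loading…</p>;
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
}
```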

What’s the difference between server-side rendering and client-side rendering for Googlebot?

In server-side rendering (SSR), the complete HTML arrives pre-assembled. Googlebot reads it directly without executing complex JavaScript. It’s fast, it’s clean, and only one HTTP request is needed.

In client-side rendering (CSR), the initial HTML is an empty skeleton. The browser downloads the JS, executes it, and then makes 3, 5, sometimes 10 API requests to assemble the content. Googlebot does exactly the same — and each API call counts against your quota. If your React or Vue framework loads content product by product via separate endpoints, you mechanically multiply the number of hits.
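For contrast, here is a server-side sketch of the same page in the Next.js Pages Router style (the route and the API URL are assumptions): the data is assembled on the server, and Googlebot receives complete HTML in a single request.

```tsx
// Hypothetical SSR version of the same page (Next.js Pages Router).
// The data is fetched on the server, so Googlebot gets finished HTML
// in one request and never has to follow the API calls itself.
import type { GetServerSideProps } from "next";

type Props = { name: string; description: string; price: number };

export const getServerSideProps: GetServerSideProps<Props> = async ({ params }) => {
  // One server-side call (or a direct database query) replaces the five client-side fetches.
  const res = await fetch(`https://api.example.com/product-full/${params?.id}`);
  return { props: await res.json() };
};

export default function ProductPage({ name, description, price }: Props) {
  // Already HTML when it reaches the crawler; no JS execution is required to read it.
  return (
    <article>
      <h1>{name}</h1>
      <p>{description}</p>
      <p>{price} €</p>
    </article>
  );
}
```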

At what scale does this become problematic?

Martin Splitt gives a quantified example: 10 million pages with 5 API calls each = 50 million additional requests. That's colossal. But the problem can arise with just a few hundred thousand pages if your site attracts few inbound links or if Google detects server slowdowns.

The crawl budget is not fixed — it adjusts dynamically. If your server responds slowly or returns 5xx errors, Google reduces its rate. Adding tens of thousands of API calls to process amplifies this phenomenon.

  • Every API request counts as a resource to crawl, just like an HTML page.
  • Sites with several million pages are the most exposed, but the problem can appear from around 100k URLs if the JS architecture is heavy.
  • Server-side rendering (SSR) or static generation (SSG) eliminates this overhead.
  • Google does not index a CSR site faster — on the contrary, the delay between publication and indexing increases.
  • Monitoring crawl budget via Search Console becomes crucial to identify this bottleneck.

SEO Expert opinion

Is this statement consistent with real-world observations?

Yes, and it confirms what we've observed for years with large e-commerce and listings sites. Sites that have migrated from full-JS frameworks to Next.js in SSR or to Nuxt with static generation have consistently seen faster crawling and fresher indexing.

The paradox is that many sites still assume that “Googlebot understands JavaScript” means “it's fine to use JS everywhere.” Understanding it is not the same as processing it efficiently: JS rendering consumes machine time and crawl budget, and it adds latency. High-volume sites should treat every millisecond of rendering as a real cost.

When does this rule not pose any problems?

If your site has fewer than 10,000 pages, a good link profile, and a fast server, crawl budget is probably not your priority. Google will crawl everything without difficulty, even if you make 10 API calls per page. The real issue is scale.

“One-page” sites, such as a SaaS with a landing page plus a blog of a few dozen articles, can stay in full React without worry. Conversely, a pure e-commerce player with 500k product listings that loads every description, price, and customer review via separate endpoints? That’s a textbook case of poor SEO architecture.

What nuances should be added to this statement?

Martin Splitt talks about sites that are “sensitive to crawl budget,” but he doesn’t precisely define the threshold. [To verify]: at what point does a site become “sensitive”? 100k? 500k? 1M? It also depends on internal PageRank, update frequency, and the number of backlinks.

Another point: not all API calls are equal. A call to your own CDN with a response in 20ms has less impact than a call to a third-party service that takes 800ms to respond. Google also measures the total response time — if your APIs are slow, the bot will wait, and this mechanically reduces the number of pages it can crawl in the same time frame.

Note: some modern JS frameworks (Next.js, Nuxt, SvelteKit) offer hybrid rendering — SSR for critical pages, CSR for interactions. It’s a balanced approach, but it requires fine-tuning to prevent Google from consistently encountering CSR routes.
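As a sketch of what that fine-tuning can look like in Next.js (the file layout and component names are hypothetical): indexable routes are pre-rendered on the server, while purely interactive widgets are loaded client-side only and kept out of the crawlable HTML.

```tsx
// Hypothetical hybrid page in Next.js: the indexable content is statically
// generated (SSG with revalidation), while a purely interactive widget is
// loaded client-side only and never rendered for the crawler.
import dynamic from "next/dynamic";
import type { GetStaticProps } from "next";

// Client-only widget: excluded from the server-rendered HTML, so it adds
// no rendering work or API calls to what Googlebot has to process.
const LiveStockChecker = dynamic(() => import("../components/LiveStockChecker"), {
  ssr: false,
});

type Props = { title: string; body: string };

export const getStaticProps: GetStaticProps<Props> = async () => ({
  props: { title: "Product name", body: "Indexable description, rendered at build time." },
  revalidate: 3600, // regenerate at most once an hour
});

export default function Product({ title, body }: Props) {
  return (
    <main>
      <h1>{title}</h1>
      <p>{body}</p>
      <LiveStockChecker /> {/* hydrates in the browser only */}
    </main>
  );
}
```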

Practical impact and recommendations

How to audit the impact of JavaScript on my crawl budget?

Start with Search Console, under the “Crawl Stats” section. Look at the total number of requests crawled per day, the average download time, and server errors. If the number of pages crawled per day stagnates while you publish content regularly, that's a signal.

Next, use server logs. Extract all Googlebot requests and filter by resource type: HTML, JS, API endpoints. If you see Googlebot heavily hitting your /api/* routes, it indicates that client rendering is active. Compare the volume of hits on /api/* vs the volume of hits on HTML pages — a ratio above 3:1 indicates a structural issue.
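A rough Node.js/TypeScript sketch of that log check (the log path and the regexes assume a standard combined access-log format; adapt them to your server): it counts Googlebot hits on /api/* versus HTML pages and prints the ratio.

```typescript
// Rough sketch: ratio of Googlebot hits on /api/* vs HTML pages.
// Assumes a combined-format access log at ./access.log; adjust the
// path and the regexes to your own log format.
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

async function main() {
  const lines = createInterface({ input: createReadStream("./access.log") });
  let apiHits = 0;
  let htmlHits = 0;

  for await (const line of lines) {
    if (!/Googlebot/i.test(line)) continue;            // keep only Googlebot requests
    const match = line.match(/"(?:GET|POST) (\S+)/);   // extract the request path
    if (!match) continue;
    const path = match[1];
    if (path.startsWith("/api/")) apiHits++;
    else if (!/\.(js|css|png|jpe?g|gif|svg|woff2?)(\?|$)/i.test(path)) htmlHits++;
  }

  const ratio = htmlHits > 0 ? (apiHits / htmlHits).toFixed(1) : "n/a";
  console.log(`Googlebot hits: API ${apiHits}, HTML ${htmlHits}, ratio ${ratio}:1`);
  // A ratio well above 3:1 suggests client-side rendering is driving the crawl.
}

main();
```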

What technical solutions can be implemented quickly?

If you are on plain client-side React, Vue, or Angular, migrating to SSR or SSG is the only truly sustainable solution. Next.js for React, Nuxt for Vue, Angular Universal for Angular. It’s a big project, but it’s essential for long-term SEO.

In the short term, you can consolidate API calls. Instead of making 5 separate requests (product, price, stock, reviews, recommendations), create a single endpoint /api/product-full/{id} that returns everything at once. This divides the number of hits Googlebot has to handle by 5.
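A minimal sketch of such a consolidated endpoint (Express-style; the route and the service functions are hypothetical stand-ins for your existing back-end calls):

```typescript
// Minimal sketch of a consolidated endpoint (Express). The service
// functions are hypothetical placeholders for your existing back-end calls.
import express from "express";
import { getProduct, getPrice, getStock, getReviews, getRecommendations } from "./services";

const app = express();

// One endpoint instead of five: Googlebot (and the browser) make a single
// request per product page instead of five separate ones.
app.get("/api/product-full/:id", async (req, res) => {
  const { id } = req.params;
  const [product, price, stock, reviews, recommendations] = await Promise.all([
    getProduct(id),
    getPrice(id),
    getStock(id),
    getReviews(id),
    getRecommendations(id),
  ]);
  res.json({ product, price, stock, reviews, recommendations });
});

app.listen(3000);
```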

How to prioritize pages to migrate to SSR?

Don't migrate everything at once — start with high search volume pages: bestseller product listings, main categories, blog articles ranking in the top 10. Use Search Console data to identify URLs that generate the most impressions but have a low CTR or a stagnant average position.

User account pages, cart, and checkout can remain in CSR — Google doesn't need to crawl them. Focus your efforts on indexable, high SEO value content. An e-commerce site with 2M products often has only 200k references generating 80% of organic traffic — start there.
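As a quick sketch of that prioritization step, assuming a CSV export of the Search Console performance report (the file name and column order are assumptions): it surfaces indexable URLs with high impressions but a weak position as the first SSR migration candidates.

```typescript
// Quick sketch: rank Search Console URLs by impressions to pick SSR
// migration candidates. The file name and the column order
// (page, clicks, impressions, ctr, position) are assumptions; match
// them to your actual export before running.
import { readFileSync } from "node:fs";

type Row = { page: string; impressions: number; position: number };

const rows: Row[] = readFileSync("./search-console-export.csv", "utf8")
  .trim()
  .split("\n")
  .slice(1) // skip the header row
  .map(line => {
    const [page, , impressions, , position] = line.split(",");
    return { page, impressions: Number(impressions), position: Number(position) };
  });

// High impressions but outside the top 10: the templates to migrate first.
const candidates = rows
  .filter(r => r.impressions > 1000 && r.position > 10)
  .sort((a, b) => b.impressions - a.impressions)
  .slice(0, 200);

console.table(candidates);
```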

  • Audit server logs to quantify API calls crawled by Googlebot
  • Check the API requests / HTML pages ratio in Search Console
  • Migrate critical templates (product listings, categories) to SSR or SSG
  • Consolidate API endpoints to reduce the number of calls per page
  • Monitor the evolution of crawl budget after each technical change
  • Prioritize pages with high SEO value for migration, not the entire site at once
Optimizing crawl budget on a high-volume site requires an architectural redesign of JavaScript rendering. It’s a complex technical project that touches the core of the front-end stack and may involve trade-offs between user experience and SEO performance. For sites with several hundred thousand pages, it may be wise to work with an SEO agency specialized in web architecture and crawl budget issues — a thorough audit and a tailored migration plan will avoid costly mistakes in time and lost traffic.

❓ Frequently Asked Questions

Does switching to SSR guarantee faster indexing?
Yes, in the majority of cases. By eliminating client-side JavaScript rendering and the multiple API calls, Googlebot accesses the content in a single HTTP request. Sites that have migrated generally see the delay between publication and indexing drop by 30 to 70%.
Do API calls to third-party CDNs (analytics, advertising) also count?
Google does not normally crawl analytics or advertising scripts blocked by robots.txt. However, if your client-side JavaScript calls content APIs hosted on third-party domains, Googlebot may follow them and count them against the budget.
Can pre-rendering be used to work around the problem?
Pre-rendering (serving static HTML to bots) works, but it is a fragile workaround. Google may detect cloaking if the content differs too much between users and bots. Native SSR remains the safest and most durable approach.
How do I know if my site is really limited by crawl budget?
Check in Search Console whether the number of pages crawled per day is stagnating while you publish regularly. Also compare the volume of pages discovered vs crawled — a large gap indicates that Google knows your URLs but does not visit them often enough.
Do Progressive Web Apps (PWAs) have the same problem?
Yes, if they use client-side rendering with multiple API calls. A PWA in app shell + dynamic fetch mode consumes as much budget as a classic React site. Switch to SSR or SSG for indexable content and keep app mode for private areas.
🏷 Related Topics
Domain Age & History · Crawl & Indexing · JavaScript & Technical SEO · Links & Backlinks

