Official statement

Google cannot cache POST requests, leading to greater crawl budget consumption. For rendering APIs, use GET requests. GraphQL can be employed to reduce the number of requests, but only in GET mode.
🎥 Source video

Statement at 12:05 of a Google Search Central video (duration 18:56, in English, published 14/07/2020; 7 statements extracted).

Watch on YouTube (12:05) →
Other statements from this video (6)
  1. 1:37 Does crawl budget really boil down to the sum of two simple variables?
  2. 3:42 How does Google actually detect content changes on your site?
  3. 4:45 Does crawl budget really only concern very large sites?
  4. 10:30 Does crawl budget really impact the rendering phase of your JavaScript pages?
  5. 12:05 Why does content hashing in URLs really boost your crawl budget?
  6. 17:54 Can you really force Google to crawl your site more?
TL;DR

Google cannot cache POST requests, which leads to unnecessary crawl budget consumption with every bot visit. For any API needed for rendering indexable content, Martin Splitt recommends using GET requests, even for GraphQL. Specifically, each POST endpoint requested during rendering forces Googlebot to repeat the request on every crawl, while a GET request would allow for effective caching.

What you need to understand

Why can’t Google cache POST requests?

POST requests are inherently non-idempotent. In other words, sending the same POST request twice could theoretically yield two different results or trigger two separate actions on the server side.

This is why browsers and caching systems, Googlebot included, cannot store responses to POST requests the way they do for GET requests. A POST could create a resource, change server state, or trigger a side effect; caching it would mean ignoring the semantics of the HTTP protocol.
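To make the contrast concrete, here is a minimal Express-style sketch; the /orders endpoint and its handlers are hypothetical. The GET handler only reads state and can safely be cached, while the POST handler creates a new resource on every call, which is exactly why caches refuse to store its response.

```ts
import express from "express";

const app = express();
app.use(express.json());

const orders: { id: number; item: string }[] = [];

// Safe and idempotent: calling it twice returns the same data,
// so a cache (or Googlebot) can reuse a stored response.
app.get("/orders", (_req, res) => {
  res.set("Cache-Control", "public, max-age=600");
  res.json(orders);
});

// Not idempotent: every call creates a new order on the server.
// Caching the response would hide that side effect, so caches never store it.
app.post("/orders", (req, res) => {
  const order = { id: orders.length + 1, item: req.body.item };
  orders.push(order);
  res.status(201).json(order);
});

app.listen(3000);
```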

What does it change for crawl budget?

Crawl budget refers to the number of requests Googlebot can allocate to your site within a given time frame. Each time a bot visits a page needing JavaScript rendering, it executes the code, triggers API requests, and waits for responses.

If these APIs operate in POST, Googlebot must repeat the call on every visit, even if the content hasn’t changed. On a site with thousands of dynamic pages, this represents a significant server load and a waste of crawl budget. With GET, the response can be cached (with proper HTTP headers), which avoids unnecessary server requests and speeds up rendering.
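As an illustration of that switch on the front end (the /api/products endpoint and its parameters are made up), the same data fetch can move from a POST body to GET query parameters, which makes the response cacheable as soon as the server sends the right headers:

```ts
// Hypothetical product API called while rendering a category page.

// Before: POST with a JSON body, so Googlebot must repeat the call on every crawl.
async function loadProductsWithPost() {
  const res = await fetch("/api/products", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ category: "shoes", page: 1 }),
  });
  return res.json();
}

// After: the same parameters in the URL of a GET request; the response can be
// cached once the server answers with Cache-Control / ETag headers.
async function loadProductsWithGet() {
  const params = new URLSearchParams({ category: "shoes", page: "1" });
  const res = await fetch(`/api/products?${params}`);
  return res.json();
}
```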

Can GraphQL work in GET mode?

Yes, and that’s precisely what Martin Splitt recommends. GraphQL is generally used with POST because queries can be lengthy and complex, which poses a problem with the URL size limit in GET.

However, technically, nothing prevents you from passing a GraphQL query as a URL parameter via GET, as long as the URL stays below the roughly 2000-character limit. For common, repetitive queries needed for rendering, this is entirely feasible. The advantages are twofold: fewer requests thanks to GraphQL's flexibility, and cacheable responses thanks to GET.
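A sketch of what that can look like on the client; the /graphql endpoint and the ProductCard query are assumptions, and many GraphQL servers accept queries over GET, but it is worth confirming that yours does:

```ts
// Hypothetical GraphQL endpoint and query used during rendering.
const PRODUCT_CARD_QUERY = `
  query ProductCard($id: ID!) {
    product(id: $id) { name price description }
  }
`;

async function fetchProductCard(id: string) {
  // The query and its variables travel as URL parameters instead of a JSON body.
  const params = new URLSearchParams({
    query: PRODUCT_CARD_QUERY,
    variables: JSON.stringify({ id }),
  });

  const res = await fetch(`/graphql?${params}`); // GET by default, so cacheable
  return res.json();
}
```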

  • POST cannot be cached by Googlebot, resulting in unnecessary crawl budget consumption on every visit.
  • GET allows for caching via standard HTTP headers (Cache-Control, ETag, etc.).
  • GraphQL in GET combines the benefits of query flexibility and caching.
  • Each POST endpoint requested during rendering generates a systematic server request, even if the content is identical.
  • On a site with a high volume of dynamic pages, switching from POST to GET can drastically reduce server load and free up crawl budget.

SEO Expert opinion

Is this statement consistent with real-world observations?

Yes, and it’s even one of Martin Splitt's clearest recommendations. It’s regularly observed that JavaScript-heavy sites with POST APIs experience slow indexing problems, especially as page volume increases.

Server logs show that Googlebot consistently re-issues POST calls during every crawl, even on stable content. Conversely, when these same endpoints are switched to GET with appropriate cache headers (max-age, ETag), there’s a significant drop in server hits and faster indexing progress. This is not surprising; it’s basic HTTP, but many modern developers overlook it.

What are the practical limits of this recommendation?

The main limitation is the URL size. GET requests pass their parameters in the URL, and most servers accept up to 2000-8000 characters depending on the configuration. For GraphQL, this can quickly become problematic if your queries are complex with many nested fields.

Another point is that sensitive data should never be transmitted via GET, because the complete URL appears in server logs, referrer headers, and browser history. If your API handles tokens, user IDs, or any data that should not be logged in plain text, POST remains the better choice — but then that content should probably not be exposed to Googlebot anyway.

[To verify] Martin Splitt does not specify whether Googlebot respects cache directives for GET during rendering. It’s assumed that it does, but no official documentation details Googlebot's exact internal cache behavior during JavaScript rendering. Field tests suggest it respects Cache-Control, but further validations across different types of content would be necessary.

In what cases does this rule not apply?

If the content returned by the API is not necessary for indexing (member areas, admin interfaces, content behind login), then it doesn’t matter whether it’s POST or GET. Googlebot will not log in or attempt to crawl those areas.

Similarly, if your site primarily operates in server-side rendering (SSR) and APIs are only called client-side for post-load interactions, it doesn’t impact crawling. The bot retrieves the already rendered HTML without needing to execute API calls. This is, in fact, one of the reasons SSR remains the most robust solution for SEO.

Practical impact and recommendations

What should you concretely do to migrate from POST to GET?

First, identify which APIs are called during the rendering of indexable pages. Use Chrome DevTools in "Disable cache" mode and inspect the Network tab during loading. Filter by XHR/Fetch and note all POST requests that occur before the main content is visible.
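If you prefer to script that audit rather than do it by hand, a Puppeteer sketch along these lines can list the POST fetch/XHR calls fired while a page renders; the URL is a placeholder, and this approximates rather than reproduces Googlebot's exact behavior:

```ts
import puppeteer from "puppeteer";

// Sketch: list every POST fetch/XHR issued while an indexable page renders.
async function auditPostCalls(url: string) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  const postCalls: string[] = [];
  page.on("request", (request) => {
    const isApiCall = ["xhr", "fetch"].includes(request.resourceType());
    if (isApiCall && request.method() === "POST") {
      postCalls.push(request.url());
    }
  });

  await page.goto(url, { waitUntil: "networkidle0" });
  await browser.close();
  return postCalls;
}

// Placeholder URL: point this at one of your indexable, JS-rendered pages.
auditPostCalls("https://www.example.com/category/shoes").then((calls) =>
  console.log("POST calls during rendering:", calls)
);
```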

Next, for each identified POST endpoint, assess whether it can be converted to GET. Most classic REST APIs (listing products, retrieving an article, loading metadata) can easily switch to GET. For GraphQL, transform your POST queries into GET with the query as a URL parameter, provided it stays under 2000 characters.

On the server side, set up the appropriate HTTP cache headers: Cache-Control with a reasonable max-age (300-3600 seconds depending on update frequency), ETag for conditional validation, and Vary if content changes based on certain headers (Accept-Language, etc.). Without these headers, even with GET, caching won’t be effective.
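On the server side, that could look like the following Express sketch; the endpoint name, cache lifetime, and the loadProducts helper are all assumptions to adapt to your stack:

```ts
import express from "express";
import { createHash } from "node:crypto";

const app = express();

// Placeholder for whatever actually loads the data.
async function loadProducts(category: string): Promise<unknown[]> {
  return [];
}

app.get("/api/products", async (req, res) => {
  const body = JSON.stringify(await loadProducts(String(req.query.category ?? "")));

  // Weak ETag derived from the payload enables conditional revalidation.
  const etag = `W/"${createHash("sha1").update(body).digest("hex")}"`;

  res.set("Cache-Control", "public, max-age=600"); // tune to how often the data changes
  res.set("Vary", "Accept-Language"); // only if the content depends on it
  res.set("ETag", etag);

  if (req.headers["if-none-match"] === etag) {
    res.status(304).end(); // content unchanged since the last crawl
    return;
  }
  res.type("application/json").send(body);
});

app.listen(3000);
```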

What mistakes to avoid during the migration?

Do not convert endpoints that modify data (creation, updating, deletion) into GET. This violates HTTP standards and can create serious security issues. GET must remain idempotent and safe.

Also, avoid caching content that changes frequently with a max-age that is too long. If you cache a product stock API for 1 hour while stock changes every 5 minutes, Googlebot (and your users) will see stale data. Adjust the cache duration based on the actual volatility of your data.

Finally, beware of too lengthy GraphQL queries. If you exceed the server's URL limit, the request will fail silently or return a 414 (URI Too Long). Systematically test your GET endpoints across various environments (dev, staging, prod) before deployment.
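A small guard like the sketch below (the 2000-character threshold is the conservative figure cited earlier) keeps short GraphQL queries on the cacheable GET path while letting oversized ones fall back to POST instead of hitting a 414:

```ts
// Send GraphQL over GET when the URL stays short enough; otherwise fall back to POST.
const MAX_GET_URL_LENGTH = 2000; // conservative threshold, adjust to your servers

async function graphqlRequest(query: string, variables: Record<string, unknown>) {
  const params = new URLSearchParams({ query, variables: JSON.stringify(variables) });
  const url = `/graphql?${params}`;

  if (url.length < MAX_GET_URL_LENGTH) {
    return fetch(url); // cacheable path
  }

  // Too long for GET: POST avoids a 414 (URI Too Long) but will not be cached.
  return fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  });
}
```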

How to ensure that the optimization is effective?

Compare the server logs before/after migration. You should notice a decrease in hits for endpoints converted to GET, especially for repeated Googlebot visits. Also, use Search Console to monitor crawl budget evolution: if indexing speeds up or the number of crawled pages per day increases, that's a good sign.
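A quick way to run that before/after comparison is to count Googlebot hits per endpoint in your access logs. The sketch below assumes a combined-format log at access.log and a naive user-agent check by substring; adapt both to your setup:

```ts
import { readFileSync } from "node:fs";

// Count Googlebot hits per method + endpoint in a combined-format access log.
const lines = readFileSync("access.log", "utf8").split("\n");
const hits = new Map<string, number>();

for (const line of lines) {
  if (!line.includes("Googlebot")) continue; // naive UA check, enough for a first pass
  const match = line.match(/"(GET|POST) ([^ ?]+)[^"]* HTTP/);
  if (!match) continue;
  const key = `${match[1]} ${match[2]}`;
  hits.set(key, (hits.get(key) ?? 0) + 1);
}

// After the migration, the POST lines should shrink and the GET lines should
// settle at a much lower level thanks to caching and 304 responses.
for (const [endpoint, count] of [...hits.entries()].sort((a, b) => b[1] - a[1])) {
  console.log(`${count}\t${endpoint}`);
}
```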

Test with the URL Inspection tool in Search Console and compare rendering before/after. The content should appear identical, but rendering time may decrease if caching works well. You can also use tools like Screaming Frog in JavaScript rendering mode to simulate Googlebot behavior.

  • Audit API calls triggered during the JavaScript rendering of indexable pages.
  • Convert POST endpoints to GET for any API needed for displaying crawlable content.
  • Configure Cache-Control, ETag, and Vary on the server side to enable caching.
  • Test the length of GraphQL URLs in GET and ensure they remain under 2000 characters.
  • Validate that GET endpoints never modify data (compliance with HTTP standards).
  • Compare server logs and crawl metrics before/after migration to measure impact.
Migrating your crawlable APIs from POST to GET is a technical optimization that involves the frontend, the backend, and the server configuration. If you manage a site with thousands of dynamic pages or a complex JavaScript architecture, this type of intervention can quickly become tricky without solid expertise in technical SEO and web architecture. For custom support and a thorough audit of your technical stack, engaging an SEO agency specialized in JavaScript rendering can save you valuable time and avoid costly mistakes.

❓ Frequently Asked Questions

Why can't Googlebot cache POST requests?
POST requests are not idempotent, meaning they can produce different results or trigger server-side actions on each call. Caching systems, including Googlebot, therefore cannot reliably store their responses the way they do for GET requests.
Should all APIs be converted to GET?
No, only those needed to render indexable content. APIs that modify data (creation, update, deletion) must remain POST. Private APIs or APIs behind authentication are not affected.
Does GraphQL really work in GET mode?
Yes, technically GraphQL can pass the query as a URL parameter via GET. The main limit is URL size (roughly 2000 characters), which can be a problem for complex queries with many nested fields.
Which HTTP headers should be configured for caching to work?
Cache-Control with a max-age suited to how often your data changes, ETag to allow conditional validation, and Vary if the content varies with certain headers (language, device, etc.). Without these headers, even with GET, caching will not be effective.
How can you measure the impact of this optimization on crawl budget?
Compare server logs before and after to observe the drop in hits on the converted endpoints. Also monitor crawl metrics in Search Console: pages crawled per day, indexing speed of new pages, and rendering time in the URL Inspection tool.