
Official statement

Google caches resources loaded via GET requests during JavaScript rendering but does not cache POST request responses. This can affect rendering performance and indexing consistency.
🎥 Source video

Extracted from a Google Search Central video

⏱ 46:02 💬 EN 📅 25/11/2020 ✂ 29 statements
Watch on YouTube (31:36) →
Other statements from this video (28)
  1. 1:02 Does Google really render all JavaScript pages, whatever their architecture?
  2. 1:02 Does Google really render ALL JavaScript, even without initial server-side content?
  3. 2:05 How can you verify that Googlebot is really crawling your site?
  4. 2:05 How can you verify that Googlebot is really Googlebot and not an impostor?
  5. 2:36 Does Google really limit CPU time during JavaScript rendering?
  6. 2:36 Does Google really limit CPU time during JavaScript rendering?
  7. 3:09 Should you stop optimizing for bots and focus solely on the user?
  8. 5:17 Does the CSS content-visibility property affect rendering in Google?
  9. 8:53 How can you measure Core Web Vitals on Firefox and Safari without a native API?
  10. 11:00 How long does Google really wait before giving up on JavaScript rendering?
  11. 11:00 How long does Googlebot really wait for JavaScript rendering?
  12. 20:07 Why does Google show empty pages even though your JavaScript site works perfectly?
  13. 20:07 AJAX works for SEO, but should you really use it?
  14. 21:10 Can blocking JavaScript really prevent Google from indexing all the content of your pages?
  15. 24:48 Has dynamic prerendering become a trap for indexing?
  16. 26:25 Why can your deleted resources destroy your indexing with prerendering?
  17. 26:47 What does Google really do with your initial HTML before JavaScript rendering?
  18. 27:28 Does Google really analyze everything in the initial HTML before rendering?
  19. 27:59 Why does Google skip JavaScript rendering if your noindex tag appears in the initial HTML?
  20. 27:59 Why can a JavaScript 404 page get your whole site deindexed?
  21. 28:30 Why does Google refuse to render JavaScript if the initial HTML contains a meta noindex?
  22. 30:00 Does Google really compare the initial AND rendered HTML for canonicalization?
  23. 30:01 Does Google really detect duplicate content after JavaScript rendering?
  24. 31:36 Are GET APIs really cached by Google like other resources?
  25. 34:47 Does Google really index all pages after JavaScript rendering?
  26. 35:19 Does Google really render 100% of JavaScript pages before indexing?
  27. 36:51 Why do your failing APIs sabotage your Google indexing?
  28. 37:12 Is structured data on noindex pages really lost to Google?
TL;DR

Google caches resources loaded via GET requests during JavaScript rendering but never caches POST responses. This distinction can create discrepancies between what Googlebot sees on successive renders and what the browser displays, with a direct impact on indexing consistency. For sites that load critical content via POST, the result can be partial or inconsistent indexing.

What you need to understand

Why does Google differentiate between GET and POST in its caching mechanism?

The behavior of Googlebot during JavaScript rendering is based on a logic of performance and consistency. GET requests, by nature idempotent, can be replayed without risk: retrieving the same resource twice produces the same result. Thus, Google can cache these responses and reuse them during successive renders, saving processing time and server resources.

POST requests, on the other hand, are designed to change server-side state: submitting a form, creating a resource, triggering an action. Caching them would be technically unsound, because a POST is not guaranteed to produce the same result twice. Google cannot assume that a POST response would be identical on a second render.

What is the real difference between the initial render and subsequent renders?

During the first render of a page, Googlebot executes JavaScript and performs all requests — both GET and POST. The browser receives the responses, the DOM is constructed, and the content is displayed. If a critical part of the page (text blocks, products, reviews) is loaded via POST, it will be visible during this initial render.

However, during a later render — for example, if Google re-evaluates the page a few days later — POST requests will not be replayed. Google will reuse cached GET resources, but the content loaded via POST will be absent. The result: two different versions of the same URL in the index, with a risk of inconsistency between the indexed content and the actual content.
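The behavior described above can be sketched as a toy model. This is purely illustrative (Googlebot's real pipeline is not public): on the first render every request runs; on a later render, GET responses come from the cache and POSTs are skipped entirely.

```typescript
// Toy model of the render behavior described above. Illustrative only.
type RenderRequest = { url: string; method: "GET" | "POST"; response: string };

// Returns the response bodies that end up visible in the rendered DOM.
function render(
  requests: RenderRequest[],
  cache: Map<string, string>, // persists between renders, holds GET responses only
  firstRender: boolean
): string[] {
  const visible: string[] = [];
  for (const r of requests) {
    if (r.method === "GET") {
      if (!cache.has(r.url)) cache.set(r.url, r.response); // fetch once, then reuse
      visible.push(cache.get(r.url)!);
    } else if (firstRender) {
      visible.push(r.response); // the POST executes on the first render only
    }
    // On later renders the POST is skipped and its content never appears.
  }
  return visible;
}
```

Run against a page whose main content arrives via GET and whose reviews arrive via POST, the model shows the reviews on the first render and drops them on the next one: two versions of the same URL.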

What impact does this have on indexing and the performance perceived by Googlebot?

If your architecture relies on POST calls to load critical content, you create a dependency on the freshness of the render. Googlebot might index an incomplete state of the page if the last render did not replay the POSTs. Worse still: rendering performance can vary depending on whether POSTs are executed or not, skewing the evaluation of Core Web Vitals and response times.

Specifically, an e-commerce site that loads prices or inventory via POST risks having Googlebot index product pages without prices. A blog that loads comments or related articles via POST may end up with pages considered lacking in content during subsequent renders.

  • GET requests are cached by Googlebot during JavaScript rendering, ensuring consistency between successive renders.
  • POST requests are never cached, which can create indexing discrepancies if critical content relies on them.
  • An initial render may display complete content via POST, but subsequent renders will ignore this data.
  • Rendering performance can vary depending on whether POSTs are executed, since POST responses are never served from cache.
  • SPA or headless sites that abuse POST to retrieve content expose themselves to partial or inconsistent indexing.

SEO Expert opinion

Is this statement consistent with field observations?

Yes, and it clarifies a commonly misunderstood point. Many SEOs believe that all HTTP requests made during rendering will be systematically replayed. This is false. Google optimizes its process by reusing what it can — namely GET requests. POST requests, however, are treated as one-time actions.

In practice, we indeed observe cases where JavaScript-rendered pages display variable content depending on the timing of the render. A site that loads its FAQs via POST may see Google index sometimes the complete version, sometimes a version without answers. [To be verified]: Google rarely communicates about the frequency of JavaScript re-execution for a given URL, making it difficult to predict when a POST will be replayed or not.

What nuances should be applied to this rule?

Googlebot's cache is not permanent. GET resources stay cached for a limited time, likely tied to Cache-Control headers and the crawl frequency of the URL. If a cached GET resource expires, the request is replayed. For POSTs, though, there is no caching mechanism at all: the initial render executes them, and every subsequent render skips them.

Another nuance: this rule applies to JavaScript rendering, not static HTML crawling. If your page serves complete HTML server-side and then JavaScript subsequently adds content via POST, the static content will always be indexed. The issue only concerns sites that rely on JavaScript to display critical content.

In what cases does this rule become a real problem?

Let’s be honest: most well-designed sites do not use POST to load indexable content. Modern REST APIs favor GET for data retrieval. However, certain frameworks or legacy architectures, especially headless CMS or poorly configured React/Vue applications, may send POSTs by default.

The real risk concerns sites that load critical SEO elements via POST: titles, descriptions, body text, structured data. If these elements are only visible during the first render, Google may index a stripped-down version of the page during subsequent renders. And that’s where it gets problematic: indexing inconsistency harms rankings.

Caution: If you are using a JavaScript framework that sends POST requests to retrieve content (and not to submit data), you are potentially exposed to this issue. Check your network calls during rendering.

Practical impact and recommendations

What concrete steps should be taken to avoid this trap?

The first step: audit the network calls made during the rendering of your critical pages. Open the developer console, load a page, and filter requests by POST method. If you see POSTs loading indexable content — text blocks, products, articles — it’s an immediate red flag.
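That audit can also be automated by wrapping fetch before the page's scripts run. This is a sketch under assumptions: `auditPosts` and the `flagged` array are illustrative names, and XHR-based calls would need a similar wrapper.

```typescript
// Minimal fetch wrapper that records every POST fired during rendering.
// Hypothetical helper; adapt to your app's fetch layer.
type FetchInit = { method?: string; body?: string };
type FetchLike = (input: string, init?: FetchInit) => Promise<unknown>;

function auditPosts(fetchImpl: FetchLike, flagged: string[]): FetchLike {
  return (input, init) => {
    if ((init?.method ?? "GET").toUpperCase() === "POST") {
      flagged.push(input); // any POST during render deserves review
    }
    return fetchImpl(input, init);
  };
}

// In a browser console, before reloading the page:
//   const flagged: string[] = [];
//   globalThis.fetch = auditPosts(globalThis.fetch.bind(globalThis), flagged) as typeof fetch;
```

After the page settles, any URL left in `flagged` that serves indexable content is a candidate for migration.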

The second step: migrate these calls to GET. If your API or framework is sending POSTs by default, reconfigure them to use GET when retrieving data. It’s a good HTTP practice anyway: GET for reading, POST for writing. Most modern frameworks allow specifying the HTTP method in the API call configuration.
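A minimal sketch of such a migration (endpoint and parameter names are hypothetical): the JSON body of the old POST becomes query-string parameters on a GET.

```typescript
// Before (executed on the first render only, response never cached):
//   fetch("/api/products", { method: "POST", body: JSON.stringify({ category: "shoes", page: 2 }) })

// After: the same parameters move into the query string of a cacheable GET.
function buildGetUrl(endpoint: string, params: Record<string, string | number>): string {
  const qs = new URLSearchParams(
    Object.entries(params).map(([k, v]) => [k, String(v)])
  );
  return `${endpoint}?${qs.toString()}`;
}

// Then: fetch(buildGetUrl("/api/products", { category: "shoes", page: 2 }))
```

`URLSearchParams` handles the percent-encoding, which avoids hand-built query strings breaking on special characters.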

What errors should be avoided during migration?

Don't just replace POST with GET without checking the Cache-Control headers. If your GET responses are marked as non-cacheable (Cache-Control: no-store), Google won't cache them either, and you'll lose the benefits of the migration. Ensure your GET responses include appropriate caching directives.
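A rough self-check for that pitfall (heuristic only: real caches implement RFC 9111 in full, and Google does not document its exact rules) is to flag responses whose Cache-Control directives forbid shared caching.

```typescript
// Heuristic: can a shared cache plausibly store a response with this Cache-Control header?
// Only the directives discussed above are checked; this is not a full RFC 9111 parser.
function isCacheable(cacheControl: string | null): boolean {
  if (!cacheControl) return true; // no header: often cacheable, subject to cache defaults
  const directives = cacheControl
    .toLowerCase()
    .split(",")
    .map((d) => d.trim());
  return !directives.some(
    (d) => d === "no-store" || d === "no-cache" || d === "private"
  );
}
```

Running this against the headers of your migrated GET endpoints quickly surfaces the ones that would defeat the purpose of the migration.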

Another common mistake: sending complex payloads in the URL via GET. It is technically possible, but overly long URLs cause problems of their own: server limits, truncation in logs, poor UX. If your POST was sending a 2 KB JSON payload, rethinking the API design is probably better than pushing everything through the query string.
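One way to catch that mistake early (the 2000-character budget is a common rule of thumb, not a spec limit, and the helper name is illustrative):

```typescript
// Refuse to build GET URLs past a conservative length budget.
const MAX_URL_LENGTH = 2000; // rule of thumb, not a standard

function safeGetUrl(endpoint: string, params: Record<string, string>): string {
  const url = `${endpoint}?${new URLSearchParams(params).toString()}`;
  if (url.length > MAX_URL_LENGTH) {
    throw new Error(`URL is ${url.length} chars: rethink the API rather than the query string`);
  }
  return url;
}
```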

How can I verify that my site is compliant after the fix?

Use Google Search Console and the URL inspection tool. Request a live render, then compare the rendered version with what you see in production. If critical content appears in both cases, that’s a good sign. Otherwise, dig into the network tab to identify missing requests.

Also, test with tools like Screaming Frog in JavaScript rendering mode or Oncrawl. These tools simulate Googlebot’s behavior and can reveal inconsistencies between static HTML and rendered content. If you find discrepancies, it’s likely related to un-replayed POSTs.

  • Audit all POST calls made during the JavaScript rendering of critical pages.
  • Migrate to GET any calls that load indexable content (text, products, structured data).
  • Ensure that GET responses include appropriate Cache-Control directives to be cacheable.
  • Test rendering through Google Search Console and compare it with the production version.
  • Monitor server logs for any POSTs still present after migration.
  • Document API calls and their HTTP methods to prevent regressions during future updates.
The distinction between GET and POST in Googlebot’s cache is not trivial. If your architecture relies on POST to load critical content, you risk inconsistent indexing and visibility loss. The solution is technical — migrating to GET and configuring the cache correctly — but it can be complex depending on your tech stack. If you identify this type of issue on your site and lack internal resources to address it, consulting a specialized technical SEO agency could save you time and prevent costly mistakes during migration.

❓ Frequently Asked Questions

Does Google index content loaded via POST during the first render?
Yes. During the first render, Googlebot executes every request, including POSTs. Content loaded via POST will be visible and potentially indexed at that point. On subsequent renders, however, those POSTs are not replayed.
Why doesn't Google cache POST responses?
POST requests are designed to change server-side state and are not idempotent. Caching them would be technically incorrect, since each POST may produce a different result. Google follows this HTTP convention.
How can I tell whether my site uses POST to load indexable content?
Open your browser's developer console, load a critical page, and filter requests by the POST method in the Network tab. If POSTs are loading text, products, or structured data, that is a potential problem.
Can Google be forced to replay POSTs on every render?
No. There is no mechanism that forces Google to replay POSTs. The only fix is to migrate those calls to GET so the responses are cached and reused on subsequent renders.
Do Cache-Control headers influence Googlebot's cache for GET requests?
Probably, although Google does not detail exactly how it handles cache headers. If your GET responses are marked no-store or no-cache, Googlebot is unlikely to reuse them between renders.
