
Official statement

Googlebot does not retain cookies between sessions. If your pages heavily depend on cookies to display content, make sure they are rendered correctly on the first visit, as no session state is maintained.
🎥 Source video

Extracted from a Google Search Central video

⏱ 1h00 💬 EN 📅 17/03/2020 ✂ 10 statements
Watch on YouTube (47:08) →
Other statements from this video (9)
  1. 4:50 Why does your content disappear from search results despite flawless technical SEO?
  2. 10:32 Why does Google provide no Discover data in Analytics?
  3. 17:28 Do you still need to optimize your AMP pages under mobile-first indexing?
  4. 25:53 Can you migrate a multilingual site without implementing hreflang immediately?
  5. 29:05 How do you regain control of your Search Console after parting ways with your SEO agency?
  6. 35:15 Should you really multiply or reduce your product pages for SEO?
  7. 35:20 Should you really create one page per product variant, or bet on consolidated pages?
  8. 39:06 Should you really set all category pages to noindex except one?
  9. 44:07 Is loading speed really a decisive ranking factor?
Official statement (6 years ago)
TL;DR

Googlebot does not maintain any session state between its visits: each visit is a blank slate, with no persisted cookies. If your content relies on cookies to display, it may be invisible to the bot. Essentially, your pages must be fully rendered on the first request, without relying on any prior session context.

What you need to understand

Why does Googlebot refuse to play along with persistent cookies?

Googlebot operates fundamentally differently from a regular browser. A human user browses, clicks, and goes back, and their browser stores cookies that preserve session state from visit to visit. Googlebot does not work that way.

Every crawled page is treated as an isolated and ephemeral session. The bot arrives, loads the page, executes JavaScript if necessary, renders the DOM, and then leaves without retaining any cookie trace. The next URL crawled — even on the same domain, even two seconds later — starts from scratch.

What does this change for server-side or client-side rendering?

If your site serves conditional content based on cookies — for example, a consent wall that hides content until validated, or a geolocation system that displays variants based on a stored preference — Googlebot will see the "first visit, no cookies" version.

On the JavaScript rendering side, it’s even more critical. Many SPA frameworks (React, Vue, Angular) use cookies or localStorage to manage application state. If your rendering logic expects a session token or a stored flag, the bot will see an empty or incomplete page. Therefore, the server must deliver complete and self-sufficient HTML on the first request, without depending on prior state.
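To make the failure mode concrete, here is a minimal sketch contrasting a shell that depends on a session token with a server that returns complete HTML on the first request. All function and product names are hypothetical illustrations, not an actual framework API:

```python
def render_shell_only(session_token=None):
    """Anti-pattern: the server returns an empty shell unless the client
    already holds a session token. Googlebot never has one, so it sees
    an empty page."""
    if session_token is None:
        return '<div id="app"></div>'  # what the bot would index
    return '<div id="app"><h1>Blue Widget</h1><p>A sturdy widget.</p></div>'

def render_full_page():
    """Fix: the first, cookie-less response already contains all
    indexable content, with no dependence on prior session state."""
    return (
        "<!doctype html><html><body>"
        "<h1>Blue Widget</h1><p>A sturdy widget.</p>"
        "</body></html>"
    )

bot_view_broken = render_shell_only()  # no cookies, no token
bot_view_fixed = render_full_page()
```

In the broken variant, the product content never reaches the bot; in the fixed variant, it is present in the initial HTML regardless of client state.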

Does this limitation apply to other Google bots as well?

Yes. All Google crawlers — Googlebot Desktop, Googlebot Smartphone, Google-InspectionTool (used by Search Console), Storebot for Google Shopping — share this behavior. None persist cookies between requests.

This means that even if you test with the URL inspection tool from Search Console, you are simulating a visit without a session. If the rendering is broken, that's what Google actually sees in its index.

  • Googlebot never stores cookies between two requests, even consecutive ones on the same domain.
  • Each crawled page must be self-contained and complete from the first visit, without any prior session state.
  • Sites that use cookies to display content (consent, geolocation, preferences) risk incomplete rendering for the bot.
  • This rule applies to all Google crawlers, including the URL inspection tool.
  • JavaScript rendering must be totally independent of localStorage or cookies to be indexed correctly.

SEO Expert opinion

Is this statement consistent with field observations?

Absolutely. Across thousands of SEO audits, the same pattern appears: sites that hide content behind cookie consent, driven by GDPR requirements, end up with empty or partial pages in the index. Google sees the banner, but not the content beneath it.

Server logs confirm this: each hit from Googlebot arrives without a Cookie header, even when the bot crawls 50 URLs from the same domain in quick succession. No persistence. It’s documented behavior, stable for years, and Mueller states it clearly here. So no surprise — but a classic pitfall for misconfigured sites.

What nuances should be added to this rule?

Be careful: Googlebot accepts and reads cookies during a single session. If your server sends a Set-Cookie in the initial response, and your JavaScript then makes an AJAX request to load content on the same page, the bot will send that cookie back in the same session. This is documented in the rendering specs.

The problem lies in the persistence between distinct crawl sessions. If Googlebot crawls your homepage today, then returns in 3 days, it will have no memory of the cookie received the first time. Each visit is a clean slate. [To be verified] however: the exact behavior of Googlebot regarding SameSite=None or Secure cookies in a multidomain HTTPS context is still poorly documented by Google.
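The within-session versus between-session distinction described above can be sketched as a toy model. `CrawlSession` is a hypothetical class, not Googlebot's actual implementation; it only illustrates the documented behavior:

```python
class CrawlSession:
    """Toy model of a Googlebot-like session: cookies live only for the
    duration of one page render and are never carried across sessions."""

    def __init__(self):
        self.cookies = {}  # every session starts empty: no persistence

    def initial_request(self, set_cookie):
        # A Set-Cookie on the first response is accepted...
        self.cookies.update(set_cookie)

    def subresource_request(self):
        # ...and echoed on AJAX/subresource requests in the SAME session.
        return dict(self.cookies)

# Visit 1: the cookie is set and reused within the session.
visit1 = CrawlSession()
visit1.initial_request({"lang": "fr"})
assert visit1.subresource_request() == {"lang": "fr"}

# Visit 2 (say, three days later): a clean slate, nothing persisted.
visit2 = CrawlSession()
assert visit2.subresource_request() == {}
```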

When does this rule cause real problems?

Poorly implemented GDPR consent walls are the number one issue. If your site blocks all content until the user clicks "Accept" (and that click writes a cookie), Googlebot will see nothing. The same goes for paywalls that store a count of free articles in a cookie.

E-commerce sites with currency or language selection via cookies also face issues. If the content varies based on a stored preference, the bot will always see the default version — sometimes the wrong language, sometimes an empty page if the server expects a cookie to know what to display. You should use distinct URLs or the Accept-Language header, not cookies.

Attention: Sites that use cookies to manage the display of critical content (products, articles, regional variants) risk partial or incorrect indexing. Always prefer distinct URLs or header-based routing for content variations.
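As a sketch of header-based routing instead of cookie-based variants, here is a minimal `Accept-Language` parser that always falls back to a complete default page. The function name and supported-language list are assumptions for illustration:

```python
def pick_language(accept_language, supported=("en", "fr", "de"), default="en"):
    """Choose a content language from the Accept-Language request header,
    never from a cookie, and always fall back to a complete default page."""
    if not accept_language:
        return default
    # Parse e.g. "fr-FR,fr;q=0.9,en;q=0.8" into (quality, language) pairs.
    prefs = []
    for part in accept_language.split(","):
        pieces = part.strip().split(";")
        lang = pieces[0].split("-")[0].lower()
        quality = 1.0
        for p in pieces[1:]:
            p = p.strip()
            if p.startswith("q="):
                try:
                    quality = float(p[2:])
                except ValueError:
                    quality = 0.0
        prefs.append((quality, lang))
    # Highest-quality supported language wins; otherwise serve the default.
    for _, lang in sorted(prefs, reverse=True):
        if lang in supported:
            return lang
    return default
```

Googlebot typically sends no `Accept-Language` preference, so the bot always lands on the complete default version instead of an empty cookie-gated page.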

Practical impact and recommendations

What concrete steps should be taken to ensure optimal rendering?

First, audit your consent banners. If they hide content via CSS or JavaScript until validation, that’s a red flag. The best practice: display the complete content in HTML, and overlay the banner without blocking the underlying rendering. Googlebot ignores visual overlays but crawls the full DOM.

Next, systematically test with the URL inspection tool in Search Console. It simulates Googlebot's behavior exactly: no cookies, no state. If the live rendering differs from the raw HTML, you have a problem. Also compare with a browser in incognito mode with cookies disabled; it's a good proxy for bot behavior.

What errors must absolutely be avoided?

Never store critical rendering logic in localStorage or sessionStorage on the client side. These APIs persist in a regular browser, but Googlebot does not maintain them between requests. The same applies to cookies: if your React app expects a JWT token in a cookie to display products, the bot will see an empty page.

Avoid redirects or content variants solely based on cookies. If you must geolocate, use the Accept-Language header or IP, but always serve complete HTML by default. Never assume that the bot "will remember" a stored preference from a previous crawl — it will not.

How can I check if my site complies with this constraint?

Set up server log monitoring filtered on the Googlebot user agent. Check that each request arrives without a Cookie header, or only with cookies you yourself sent in the initial response. No cookie should persist from one session to the next.
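A sketch of that log check, assuming a custom access-log format that also records the incoming `Cookie` header (standard combined logs do not; you would need to configure this). The log format and field names here are hypothetical:

```python
import re

# Hypothetical log format: <ip> "<user-agent>" cookie="<cookie-header>"
LINE_RE = re.compile(r'^(?P<ip>\S+) "(?P<ua>[^"]*)" cookie="(?P<cookie>[^"]*)"')

def googlebot_hits_with_cookies(lines):
    """Return log lines where a Googlebot user agent sent a non-empty
    Cookie header, which would contradict the expected stateless crawl."""
    flagged = []
    for line in lines:
        m = LINE_RE.match(line)
        if m and "Googlebot" in m.group("ua") and m.group("cookie"):
            flagged.append(line)
    return flagged

logs = [
    '66.249.66.1 "Mozilla/5.0 (compatible; Googlebot/2.1)" cookie=""',
    '66.249.66.1 "Mozilla/5.0 (compatible; Googlebot/2.1)" cookie="sid=abc"',
    '10.0.0.5 "Mozilla/5.0 (Windows NT 10.0)" cookie="sid=def"',
]
suspicious = googlebot_hits_with_cookies(logs)
```

Any flagged line is worth investigating: either your log pipeline mislabels traffic, or something is spoofing the Googlebot user agent.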

Then, crawl your site with a bot simulating Googlebot (Screaming Frog, Oncrawl, Botify) while disabling cookie management. Compare the rendering obtained with that of a regular crawl. Any difference indicates a problematic dependency. Automate this test in your CI/CD to detect regressions before going live.
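The CI/CD regression test can be sketched as a check that rendering is identical with and without cookies. The `render(url, cookies)` signature is a stand-in for your own test harness (e.g. a headless-browser wrapper), not a real library API:

```python
def assert_cookie_independent(render, url, cookie_variants):
    """CI-style check: the page must render the same content with no
    cookies as with every cookie variant a real user might carry."""
    baseline = render(url, cookies={})
    for cookies in cookie_variants:
        if render(url, cookies=cookies) != baseline:
            raise AssertionError(f"rendering depends on cookies: {cookies}")

# Toy render functions standing in for a headless-browser harness:
def good_render(url, cookies):
    return f"<h1>Page {url}</h1>"  # identical with or without cookies

def bad_render(url, cookies):
    # Consent-gated content: only appears once a consent cookie is set.
    return "<h1>Full page</h1>" if cookies.get("consent") else "<div>banner</div>"

assert_cookie_independent(good_render, "/home", [{"consent": "yes"}])  # passes
try:
    assert_cookie_independent(bad_render, "/home", [{"consent": "yes"}])
except AssertionError:
    pass  # the cookie dependency is detected and would fail the build
```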

  • Audit your consent banners: content should remain visible in HTML even without cookie validation.
  • Test each critical page with the URL inspection tool from Search Console to verify rendering without state.
  • Never store display logic in localStorage, sessionStorage, or cookies on the client side.
  • Avoid redirects or content variants solely based on cookies — prefer distinct URLs.
  • Monitor your server logs to verify that Googlebot always arrives without persisted cookies.
  • Automate a crawl without cookies in your testing pipeline to detect regressions.
These optimizations span the front end, the back end, and the rendering infrastructure, which means they are rarely trivial to fix alone, especially on complex stacks or legacy CMSs. If your team lacks the resources or expertise on these topics, consulting a specialized SEO agency can save you months of trial and error and ensure optimal rendering for search engines from the first iteration.

❓ Frequently Asked Questions

Can Googlebot read cookies sent by the server during a single session?
Yes. If your server sends a Set-Cookie in the initial response, Googlebot will return it in AJAX or subresource requests within that same session. But it will not keep it for the next visit.
Do GDPR consent walls really block indexing of my content?
Yes, if the banner hides the content via CSS or JavaScript until validation. Googlebot will see the banner but not the content underneath. Use an overlay that does not mask the underlying DOM.
Are localStorage or sessionStorage persisted by Googlebot?
No. Googlebot maintains no client-side state between requests. Any content that depends on localStorage will be invisible or incomplete for the bot.
How can I test how my pages render as Googlebot sees them?
Use the URL inspection tool in Search Console, which simulates Googlebot exactly, with no cookies or state. Compare with a browser in private mode with cookies disabled.
Do other search engines behave the same way as Googlebot?
Bingbot and most modern crawlers also do not persist cookies between sessions. It is a de facto standard, both to avoid tracking abuse and to simplify crawling.

