
Official statement

Googlebot does not handle cookies. Do not rely on cookies to redirect users to a specific language version of a page.
🎥 Source video (statement at 6:35)

Extracted from a Google Search Central video

⏱ 57:05 💬 EN 📅 07/09/2017 ✂ 29 statements
Other statements from this video (28)
  1. 1:05 Do image redirects to HTML pages pass PageRank?
  2. 1:05 Why does redirecting your images to third-party pages destroy their SEO value?
  3. 2:12 Should you really worry about the TLD for an international site?
  4. 2:37 Can .eu domains really target multiple countries without an SEO penalty?
  5. 4:15 Should you really automate language redirects on a multilingual site?
  6. 7:38 Do you really need to host your domain in the target country to rank locally?
  7. 9:00 Should you avoid multiple H1 tags when the logo is text?
  8. 9:01 Should you really limit the number of H1 tags on a page for SEO?
  9. 11:28 Do GSC impressions really reflect what your users see?
  10. 12:00 What counts as a real impression in Search Console, and why does the viewport change everything?
  11. 14:03 Does image lazy loading really block Googlebot?
  12. 14:08 Can lazy-loading images compromise their indexing by Google?
  13. 17:21 Should you really avoid changing the content of a recent page?
  14. 19:30 Can bad backlinks really sink your Google rankings?
  15. 19:47 Does changing your internal link anchors really trigger a Google recrawl?
  16. 21:34 Can Google really ignore your unnatural backlinks without penalizing you?
  17. 24:05 Why do partial site migrations cause longer SEO fluctuations than full migrations?
  18. 27:00 Is site structure really enough to improve indexing?
  19. 30:41 Why use a 301 rather than a 307 for an HTTPS migration?
  20. 33:35 Why does the 'site:' command take up to two months to reflect your actual changes?
  21. 34:54 Can the unavailable_after tag really control how long your content stays in Google's index?
  22. 35:56 Why does Googlebot crawl your CSS and JS so much?
  23. 39:19 Does the 'Unavailable After' tag really let you schedule a page's removal from Google's index?
  24. 50:12 Do you really need to reindex the whole site after a URL change?
  25. 50:34 Should you really avoid changing your URL structure?
  26. 53:00 Should you retranslate your backlink anchors when changing your site's main language?
  27. 53:00 Changing a site's main language: should you fear losing backlinks?
  28. 54:12 Will the new Search Console really change your SEO diagnostics?
TL;DR

Googlebot neither records nor transmits cookies during crawling, which prevents any language redirection logic based on this mechanism. If your site redirects users to localized versions via session cookies, the bot will always see the same URL, creating issues with indexing and international visibility. The solution: use detectable technical signals server-side, such as Accept-Language headers or URL parameters, and implement hreflang correctly.

What you need to understand

What does it really mean when we say 'Googlebot does not handle cookies'?

When Googlebot crawls a page, it acts like a very basic browser. It makes a standard HTTP request, retrieves the HTML, but does not store any cookies returned by the server in the Set-Cookie header.

During its subsequent requests, even on the same domain, no cookie is transmitted in the Cookie header. Therefore, the bot always arrives like a completely new visitor, with no history, no preferences, and no session.

Why does this limitation pose a problem for multilingual sites?

Many sites detect the visitor's language via IP or browser, and then store this preference in a cookie for future visits. A French user arriving for the first time sees the FR version, and a cookie 'lang=fr' is set.

But if your redirection logic relies on this cookie to always display /fr/ instead of /en/, Googlebot will never see this redirection. It will always crawl the default version (often /en/ or the root), creating incomplete or incorrect indexing of your localized content.
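This failure mode can be reproduced locally. The sketch below stands up a toy server (the cookie name 'lang', the paths, and the redirect rule are all hypothetical) that remembers the language in a cookie and redirects '/' to '/fr/' when the cookie is present. A client with a cookie jar ends up on /fr/; a cookie-less client, like Googlebot, always lands on the default version.

```python
# Toy demonstration: cookie-based language redirects are invisible to a
# cookie-less client. Server, cookie name, and paths are hypothetical.
import http.server
import threading
import urllib.request
from http import HTTPStatus

class LangHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/" and "lang=fr" in self.headers.get("Cookie", ""):
            # Returning visitor with a stored preference: redirect to /fr/.
            self.send_response(HTTPStatus.FOUND)
            self.send_header("Location", "/fr/")
            self.end_headers()
            return
        self.send_response(HTTPStatus.OK)
        self.send_header("Set-Cookie", "lang=fr")  # remember the preference
        self.end_headers()
        self.wfile.write(b"FR page" if self.path == "/fr/" else b"default EN page")

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), LangHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_port}"

# A browser keeps cookies: its second visit to "/" is redirected to /fr/.
browser = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())
browser.open(base + "/")
browser_final = browser.open(base + "/").geturl()

# A Googlebot-like client has no cookie jar: every visit looks like the
# first one, and /fr/ is never reached through this logic.
urllib.request.urlopen(base + "/")
bot_final = urllib.request.urlopen(base + "/").geturl()
server.shutdown()
```

The asymmetry between `browser_final` (ending in /fr/) and `bot_final` (still the root URL) is exactly the crawl gap described above.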

What are the risks of language redirection based on cookies?

The first risk is the complete invisibility of your language variants. If Googlebot always accesses /en/, the /fr/, /de/, and /es/ versions will never be crawled unless you manually submit them or they are linked from other pages.

Furthermore, even with correctly implemented hreflang tags, Google may encounter inconsistencies between crawled content and hreflang declarations. If hreflang points to /fr/ but Googlebot never accesses it because it is redirected to /en/ in the absence of a cookie, the signal is diluted.

  • Googlebot does not record any cookie returned by the server in Set-Cookie.
  • No cookie is transmitted in subsequent requests, even on the same domain.
  • Cookie-based redirections are invisible to the bot and create multilingual indexing issues.
  • Uncrawled language variants disappear from the index or are never discovered.
  • Using server-detectable signals (Accept-Language, URL parameters, subdomains) is the only reliable approach.

SEO Expert opinion

Is this statement consistent with observed practices in the field?

Absolutely. Technical tests have confirmed for years that Googlebot arrives without cookies and does not store any. Server logs show requests devoid of any Cookie header, even after multiple successive visits to the same site.

Some developers have attempted 302 redirections based on the presence of a session cookie, thinking the bot would follow the user journey. The result: indexing of the wrong language version, confusion in the SERPs, and English URLs surfacing for French queries.

What nuances should be added to this absolute rule?

Mueller says 'do not rely on cookies,' but he does not claim that any use of cookies is toxic. You can definitely use cookies to enhance the UX for human visitors, as long as your crawling logic does not depend on them.

The ideal: a dual mechanism. Detect the language via Accept-Language or IP for Googlebot (with locale-adaptive crawling, Googlebot can send an Accept-Language header and crawl from non-US IPs), and store the user preference in a cookie for humans. This way, the bot sees the correct version on the first hit, and the user enjoys smooth navigation.

Note: JavaScript redirects that read a cookie client-side pose the same problem. Googlebot executes JS, but the cookie never exists in its context, so the redirect never triggers. Prefer server-side logic or rel="alternate" hreflang tags.

In what cases does this rule not apply or become secondary?

If your site is monolingual or uses strictly separate subdomains (fr.example.com, de.example.com) with geolocated DNS, the cookie issue does not arise. Each regional bot crawls its own subdomain.

Similarly, if you implement proper HTTP content negotiation (Vary: Accept-Language, returning different content on the same URL depending on the header), Googlebot can adapt. But this approach is rare and technically demanding.
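For concreteness, a minimal sketch of that negotiation on a single URL: the content table and `negotiate` helper are hypothetical, but the two headers shown (Content-Language on the response, Vary: Accept-Language so caches keep the variants apart) are the essential parts.

```python
# Content-negotiation sketch: one URL, body chosen from Accept-Language.
# The content dict and the helper name are illustrative assumptions.
CONTENT = {"en": "<h1>Welcome</h1>", "fr": "<h1>Bienvenue</h1>"}

def negotiate(request_headers: dict) -> tuple[dict, str]:
    accept = request_headers.get("Accept-Language", "").lower()
    lang = "fr" if accept.startswith("fr") else "en"  # crude match for the demo
    response_headers = {
        "Content-Language": lang,
        # Tells caches (and crawlers) the body varies with this header.
        "Vary": "Accept-Language",
    }
    return response_headers, CONTENT[lang]
```

Without the Vary header, an intermediary cache could serve the French body to English-speaking visitors (or to Googlebot), which is one reason this setup is delicate in practice.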

Practical impact and recommendations

What concrete actions should be taken to avoid pitfalls related to cookies?

The first action: audit your language redirection logic. Test manually by disabling cookies in your browser, or use a tool like cURL without passing a cookie. If you are consistently redirected to a single language, Googlebot sees the same thing.

Next, move the detection logic server-side. Analyze the Accept-Language header (sent by Googlebot based on the originating data center), the IP, or implement a visible and crawlable language selector in HTML. Links to /fr/, /de/, /es/ must be accessible without JavaScript or cookies.

What mistakes should absolutely be avoided in a multilingual architecture?

Never automatically redirect a user to a language version without giving them a choice. Google dislikes forced redirections that trap the bot in a loop or block it on an irrelevant version.

Also avoid blanket IP-based redirects (301 or 302) with no alternative. If Googlebot crawls your site from a US data center, it will always be redirected to /en-us/, making /fr/ or /de/ invisible. Offer a visible, crawlable language selector in the footer or header, and declare hreflang properly.

How can I check that my site complies and that Googlebot sees the correct versions?

Use Google Search Console and inspect the URL of each language variant. The 'URL Inspection' tool shows you exactly what Googlebot has crawled, including the redirects followed. If /fr/ consistently redirects to /en/, you will see the redirection in the report.

Complement this with a cURL test without cookies: curl -I https://example.com/fr/. If you get a 200 with the correct content, you're good. If you see a 302 to /en/, you have an issue. Finally, validate your hreflang tags with an external validator or the GSC internationalization report.

  • Audit the language redirection logic by disabling cookies in the browser.
  • Move language detection server-side (Accept-Language, IP, URL parameters).
  • Implement a crawlable HTML language selector, without mandatory JavaScript.
  • Correctly declare hreflang tags on each language variant.
  • Test each URL with the URL Inspection tool in Google Search Console.
  • Validate redirects with cURL without passing cookies.
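To illustrate the hreflang point from the checklist above, here is a small helper that emits the reciprocal tag set each variant should carry, including x-default. The /lang/ URL pattern and the function name are hypothetical conventions; every language version must output the same complete set.

```python
# Illustrative hreflang generator. The subdirectory URL pattern and the
# helper name are assumptions, not a required structure.
def hreflang_tags(base: str, langs: list[str], default: str = "en") -> list[str]:
    """Build the <link rel="alternate"> set that every language
    variant should include in its <head>."""
    tags = [
        f'<link rel="alternate" hreflang="{lang}" href="{base}/{lang}/" />'
        for lang in langs
    ]
    # x-default covers visitors whose language matches no variant.
    tags.append(
        f'<link rel="alternate" hreflang="x-default" href="{base}/{default}/" />'
    )
    return tags
```

Emitting these tags server-side in the static HTML keeps them visible even to a crawler that never triggers your cookie or JS logic.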
Managing cookies and multilingual architectures can quickly become a technical headache, especially on complex CMS setups or hybrid stacks (SPA, SSR). If you lack internal resources or face recurring indexing issues, engaging an SEO agency specializing in international SEO can save you months and prevent costly visibility mistakes.

❓ Frequently Asked Questions

Does Googlebot execute JavaScript, and can it therefore read client-side cookies?
Googlebot executes JavaScript, but it neither stores nor sends cookies even in that context. A JS redirect based on document.cookie will not work, because the cookie never exists in the bot's environment.
Can cookies be used for UX without hurting SEO?
Yes, provided your crawl logic does not depend on them. Detect the language server-side for Googlebot, and use a cookie to remember human visitors' preference on subsequent visits.
Are hreflang tags enough if Googlebot only sees one language version?
No. Hreflang declares relationships between URLs, but if Googlebot never crawls /fr/ because of a cookie-based redirect, it can neither index that variant nor validate the hreflang signal. Both must be accessible.
How can I test whether my site redirects Googlebot correctly without cookies?
Use cURL without sending cookies: curl -I https://example.com/fr/. Or test with Google Search Console's URL Inspection tool to see exactly what the bot crawled and which redirects it followed.
Can a CDN handle language detection without cookies?
Yes. Some CDNs (Cloudflare, Fastly, Akamai) can route requests based on the Accept-Language header or the geographic IP, without relying on cookies. This is a robust solution for high-traffic international sites.
🏷 Related Topics
Domain Age & History · Crawl & Indexing


