What does Google say about SEO?

Official statement

If the site's language is managed solely by JavaScript/cookies (same URL for all languages), Google can only index one language version because Googlebot does not follow language switchers or use cookies. To index multiple languages, each version must have a distinct URL.
🎥 Source video

Extracted from a Google Search Central video

⏱ 55:06 💬 EN 📅 14/08/2020 ✂ 17 statements
Watch on YouTube (47:48) →
Other statements from this video (16)
  1. 1:33 Does a hierarchical structure really boost SEO compared to a flat architecture?
  2. 2:38 Does changing your navigation really impact your rankings?
  3. 3:44 Why does Google keep 404 URLs in Search Console for years?
  4. 4:24 Can you inject video tags via JavaScript without facing SEO penalties?
  5. 4:44 Is Google really cropping your recipe images if you fail to provide the right formats?
  6. 5:42 How does Google adapt AMP display based on the browser's technical capabilities?
  7. 5:45 Should you really include modification dates in your XML sitemaps?
  8. 8:42 Are iframes really neutral for SEO, or should you be cautious about them?
  9. 9:03 Can Google redirect your competitors' backlinks to your PDF?
  10. 12:26 Is cross-domain duplicate content really harmless for your SEO?
  11. 17:20 Is it really necessary to remove your old content to boost your SEO?
  12. 42:28 Should you limit the number of outbound links to the same domain to avoid a Google penalty?
  13. 43:33 Why does Google take longer to index a simple title change?
  14. 45:35 How does Google truly calculate the crawl budget for your site?
  15. 50:53 Should you worry when the number of indexed pages fluctuates by 50% in just a few days?
  16. 53:32 Does using nofollow really stop Google from crawling your links?
📅 Official statement (5 years ago)
TL;DR

Google cannot follow JavaScript language switchers or use cookies to discover the different language versions of the same URL. The result: only the default language is indexed, while the other versions remain invisible. To be indexed in multiple languages, each version must have its own distinct URL — that's non-negotiable.

What you need to understand

Why does Googlebot ignore JavaScript language switchers?

Googlebot crawls the web fundamentally differently than a human user. When a visitor clicks on a language selector, the site dynamically loads the new version via JavaScript or stores this preference in a cookie. The bot does not click, does not change language preference, and does not retain cookies between crawl sessions.

Specifically, if your site displays French by default and English, Spanish, or German are only accessible by changing a local setting in the browser, Googlebot only sees the French version. There is no technical way for it to discover that other languages exist on the same URL. This is not a bug — it is a deliberate architectural limitation to prevent unpredictable behavior and inflation of the crawl budget.
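The asymmetry can be sketched as a minimal simulation (the `render` function and its translations are hypothetical, for illustration only): the server returns the default language unless a language cookie is present, and Googlebot never carries one.

```python
# Minimal sketch (hypothetical server logic, not a real framework):
# the same URL serves different content depending on a language cookie
# that Googlebot never sends, so the crawler only sees the default.

TRANSLATIONS = {
    "fr": "<h1>Bienvenue</h1>",  # server-side default
    "en": "<h1>Welcome</h1>",
    "de": "<h1>Willkommen</h1>",
}

def render(url, cookies=None):
    """Return the HTML served for `url`, given the request cookies."""
    lang = (cookies or {}).get("lang", "fr")  # no cookie -> French
    return TRANSLATIONS.get(lang, TRANSLATIONS["fr"])

# A user who clicked the EN switcher carries a cookie...
user_view = render("https://example.com/", cookies={"lang": "en"})
# ...Googlebot carries none, so it only ever sees the French default.
bot_view = render("https://example.com/")
```

Whatever the JavaScript does after load, the crawler's first (and only) request looks like `bot_view`: one URL, one indexed version.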

What does a distinct URL for each language mean?

A distinct URL means that each language version has its own unique identifier in Google’s index. This can take three main forms: subdomains (en.example.com, fr.example.com), subdirectories (example.com/en/, example.com/fr/), or country-code top-level domains (ccTLDs: example.fr, example.de).

This approach allows Google to crawl, index, and serve each version independently based on the user’s geolocation and language preferences. Hreflang tags can then signal the relationships between these versions to avoid duplicate content issues and optimize display in local SERPs.
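As a sketch of what those cross-references look like in practice (the URLs and helper are hypothetical), each page lists every alternate language version plus an `x-default` fallback:

```python
# Sketch: generate the reciprocal hreflang <link> tags for one page
# available under language subdirectories (hypothetical URLs).

def hreflang_tags(base, path, langs, default):
    """Build the <link rel="alternate"> set for a single page."""
    tags = [
        f'<link rel="alternate" hreflang="{lang}" '
        f'href="{base}/{lang}{path}" />'
        for lang in langs
    ]
    # x-default tells search engines which version is the fallback
    # for users matching none of the listed languages.
    tags.append(
        f'<link rel="alternate" hreflang="x-default" '
        f'href="{base}/{default}{path}" />'
    )
    return tags

tags = hreflang_tags("https://example.com", "/pricing/",
                     ["en", "fr", "de"], "en")
```

Note that the set must be reciprocal: the `/fr/pricing/` page has to emit the same group of tags, including the one pointing back at `/en/pricing/`.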

Does this limitation only affect multilingual sites?

No. The same principle applies to any client-side content customization without distinct URLs. If your site displays different products based on geolocation detected via JavaScript, or if you customize prices via cookies without changing the URL, Google will only index the default version served server-side.

This is particularly problematic for international e-commerce sites trying to manage multiple currencies, regional inventories, or local catalogs on a single URL. Without URL differentiation, you lose control over what gets indexed and for which market.

  • Googlebot does not execute user actions like clicking on language switchers
  • Cookies are not retained between crawl sessions
  • A unique URL = one indexed version, regardless of the JavaScript logic behind it
  • Subdomains, subdirectories, or ccTLDs are the three structures validated by Google
  • Hreflang tags require distinct URLs to function

SEO Expert opinion

Is this statement consistent with observed practices in the field?

Yes, and it has actually been one of Google’s most stable assertions for years. Tests consistently show that sites using a JavaScript switcher without a URL change see only one language indexed — usually the one set server-side for the initial load. There’s no ambiguity here.

Where it gets interesting is that some modern frameworks (Next.js, Nuxt) allow for hybrid rendering with server-side language detection but client-side navigation. If the implementation is clean and SSR serves distinct URLs to Googlebot, it works. But as soon as everything relies on client-side routing with a single URL, you fall back into the trap described by John Mueller.

What nuances should be added to this rule?

The statement is binary, but the technical reality has some gray areas. Google can execute modern JavaScript and index dynamically loaded content — but only if that content is discoverable via the initial crawl. A language switcher that alters the local DOM state without changing the URL remains invisible.

Also, be cautious about timing: even if you implement distinct URLs, if your JavaScript automatically redirects users based on their detected browser language without allowing Googlebot to access all versions, you recreate the problem. 302 redirects based on the Accept-Language header are acceptable if they include a crawlable fallback mechanism for the bot.

In what cases might this rule seem not to apply?

Some sites appear to bypass this limitation by serving multilingual content on a single URL while appearing in several local SERPs. In the vast majority of cases, this is a coincidence or a misunderstanding. Google still indexes only one version, but that version may rank in multiple countries if the content is language-neutral enough or if backlinks originate from different geographic areas.

The other apparent exception concerns sites with automatic translation enabled by Google (via Google Translate in SERPs). It is not your multilingual site that is indexed — it’s Google providing a translation of your unique version. You have no control over quality or precise targeting. This is not a viable SEO strategy for a serious site.

Practical impact and recommendations

What should you concretely do if your site switches through JavaScript?

First, audit the current architecture. Test using the URL inspection tool in Search Console: submit your homepage and check what language Google sees in the HTML rendering. If you have 5 languages but only one appears, you’re in the scenario described by Mueller.

Next, plan the migration to distinct URLs. This is a non-trivial technical project: you need to choose between subdomains, subdirectories, or ccTLDs (each has its SEO and budget implications), refactor routing, implement hreflang tags correctly, manage redirects from the old structure, and avoid duplication errors. Timing and method matter greatly to avoid losing traffic during the transition.

What mistakes should be avoided when implementing multilingual URLs?

The classic mistake: using URL parameters (example.com?lang=fr) instead of clean structures. Technically, this is a distinct URL, but Google explicitly recommends not using this method as it complicates hreflang management, dilutes ranking signals, and poses canonicalization issues.

Second pitfall: implementing distinct URLs but allowing JavaScript to automatically redirect based on browser language without offering a static version to Googlebot. The bot must be able to freely access all versions without being forced into just one. Use 302 redirects with a visible link to other languages, or better, serve the appropriate version server-side via the Accept-Language header while keeping all URLs crawlable.
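The decision logic above can be sketched as follows (a hypothetical helper, not a real framework API — user-agent sniffing and the supported-language set are illustrative assumptions):

```python
# Sketch: decide whether to 302-redirect a request to a language
# subdirectory. Known crawlers are never redirected, so every
# language URL stays independently crawlable.

SUPPORTED = {"en", "fr", "de"}
BOT_MARKERS = ("Googlebot", "bingbot")  # illustrative, not exhaustive

def preferred_language(accept_language):
    """Pick the first supported language from an Accept-Language header."""
    for part in accept_language.split(","):
        code = part.split(";")[0].strip().lower()[:2]
        if code in SUPPORTED:
            return code
    return "en"  # site default

def language_redirect(path, user_agent, accept_language):
    """Return (status, location); (None, None) means serve the page as-is."""
    if any(marker in user_agent for marker in BOT_MARKERS):
        return None, None  # bots must reach every version directly
    if any(path.startswith(f"/{lang}/") for lang in SUPPORTED):
        return None, None  # already on an explicit language URL
    lang = preferred_language(accept_language)
    return 302, f"/{lang}{path}"
```

The two early returns are the point: an explicit language URL is never redirected away from, and a crawler is never redirected at all.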

How to verify that the implementation works correctly?

After migration, use Search Console for each language version (set up a property for each subdomain or subdirectory). Check that each version is indexed independently, that hreflang tags appear correctly in URL inspection, and that organic traffic comes from the right geolocations.

Also, monitor Core Web Vitals by language: one version may perform very differently from another if resources (fonts, images) are not uniformly optimized. Coverage reports should show an indexing volume proportional to the actual content of each language — if English has 500 pages but only 50 are indexed, investigate.
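That proportionality check can be sketched as a small helper (the counts are hypothetical — in practice they come from the coverage report of each language property):

```python
# Sketch: flag language versions whose indexed-page ratio falls below
# a threshold. Page counts here are hypothetical placeholders for
# Search Console coverage data.

def underindexed(pages, threshold=0.8):
    """Return languages where indexed/published falls below `threshold`.

    `pages` maps a language code to a (published, indexed) tuple.
    """
    flagged = []
    for lang, (published, indexed) in sorted(pages.items()):
        if published and indexed / published < threshold:
            flagged.append(lang)
    return flagged

coverage = {"fr": (520, 510), "en": (500, 50), "de": (480, 470)}
flagged = underindexed(coverage)  # the English version needs investigating
```

The 0.8 threshold is an arbitrary starting point; the useful signal is a ratio that differs sharply between otherwise comparable language versions.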

  • Audit the current architecture using the URL inspection tool in Search Console
  • Choose an appropriate URL structure (subdirectories recommended for most cases)
  • Implement hreflang tags correctly between all versions
  • Avoid JavaScript automatic redirects that block bot access
  • Set up a separate Search Console property for each language version
  • Verify indexing and performance of each version independently
Migrating a monolingual site managed by JavaScript to a multilingual architecture with distinct URLs is a structural project that touches on development, infrastructure, and SEO strategy. If your organization lacks internal expertise on these aspects — particularly to avoid critical hreflang, canonicalization, or traffic loss errors during the transition — working with an SEO agency specialized in international architectures can secure the operation and accelerate results.

❓ Frequently Asked Questions

Can Google index multiple languages if I only use URL parameters like ?lang=fr?
Technically yes, since these are distinct URLs, but Google strongly advises against this method. It complicates hreflang management, dilutes ranking signals, and creates canonicalization problems. Prefer subdirectories or subdomains.
Do hreflang tags work if all my languages share the same URL?
No. Hreflang requires distinct URLs for each language version. If you only have a single URL with a JavaScript switcher, hreflang tags are meaningless and will be ignored by Google.
My Next.js site uses client-side routing — am I affected by this problem?
It depends on your implementation. If you use SSR (Server-Side Rendering) with distinct per-language routes served server-side, Googlebot will see different URLs and can index each version. If everything relies on client-side routing with a single URL, you are in the problematic case described by Mueller.
Does Googlebot respect cookies to detect the user's preferred language?
No. Googlebot does not retain cookies between crawl sessions and has no 'user preference'. It sees only what is served server-side on the first load of the URL.
Can I use automatic redirection based on the Accept-Language header without harming indexing?
Yes, provided you use a 302 (temporary) redirect and let Googlebot freely access all language versions. Never force the bot toward a single language — all URLs must remain independently crawlable.

