
Official statement

It is acceptable to include Googlebot in a temporary A/B test (e.g., menu change) or to exclude it by treating it as a special category (based on geolocation, language, capabilities). If separate URLs are used for the variants, rel=canonical must point to the preferred version. Googlebot does not use cookies, so be cautious with cookie-based tests.
🎥 Source video

Extracted from a Google Search Central video (statement at 14:04)

⏱ 58:01 💬 EN 📅 14/09/2020 ✂ 20 statements
TL;DR

Google officially allows including Googlebot in a temporary A/B test, or excluding it by treating it as a special category of visitor. Using rel=canonical becomes mandatory if you expose distinct URLs for each tested variant. One caution, however: Googlebot ignores cookies, so cookie-based variant assignment will always show it the default version.

What you need to understand

Why is Google clarifying its position on A/B testing?

A/B testing is a common practice for optimizing conversion rates, but its technical implementation regularly generates legitimate SEO concerns. Showing different versions of the same page to distinct users could theoretically be interpreted as cloaking — a prohibited technique that serves different content to search engines.

Mueller's statement aims to clear up this confusion. It establishes a clear framework: A/B tests do not constitute cloaking if their purpose is to optimize user experience and not manipulate rankings. The word "temporary" in his formulation is, however, deliberately vague—how long can a test last before Google considers it permanent?

What’s the difference between including and excluding Googlebot from a test?

Including Googlebot means the bot will see one of the tested variants, just like a real user. This approach is suitable for light tests (button color changes, CTA text changes, wording) where all variants remain semantically equivalent and do not affect the engine's understanding of the content.

Conversely, excluding Googlebot involves treating it as a separate category of agent—typically by serving it the control version. This strategy is necessary for deep structural tests (complete redesigns, changes in content architecture) where exposing an unstable variant to the bot might disrupt indexing. Google specifies that this exclusion can be based on geolocation, language, or detected technical capabilities.

What does the mention of cookies and distinct URLs really mean?

Googlebot does not save or transmit cookies during crawling. If your A/B testing framework relies solely on cookies to assign the user to a variant, the bot will always see the default version—often the A variant or control. This can skew your observations if you thought Google was also indexing variants B, C, or D.

When variants use distinct URLs (example.com/page?variant=b or example.com/page-b), Mueller insists on using rel=canonical. Without this directive, Google might view each URL as a unique page and dilute relevance signals. The canonical must point to the preferred version—generally the control variant or the final URL you want to rank.
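The canonical logic above can be sketched as follows. This is a minimal illustration, not a production implementation; the URLs and variant names are hypothetical.

```python
# Every A/B variant URL -- including the control itself -- should declare
# the same canonical, pointing at the version you want to rank.

CANONICAL = "https://example.com/page"  # preferred version (illustrative)

VARIANT_URLS = [
    "https://example.com/page",            # control
    "https://example.com/page?variant=b",  # query-string variant
    "https://example.com/page-b",          # path-based variant
]

def canonical_tag(preferred_url: str) -> str:
    """Return the <link rel="canonical"> element to place in <head>."""
    return f'<link rel="canonical" href="{preferred_url}">'

for url in VARIANT_URLS:
    # All three URLs emit an identical canonical tag.
    print(url, "->", canonical_tag(CANONICAL))
```

Declaring the canonical on the control URL as well (a self-referencing canonical) keeps the signal consistent across the whole variant set.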

  • A/B tests are not cloaking if their aim is optimizing UX, not manipulating search results.
  • Including Googlebot works for light changes; excluding it is suitable for deep structural overhauls.
  • rel=canonical is mandatory if variants have distinct URLs, to avoid signal dilution.
  • Googlebot ignores cookies—if your test relies solely on cookies, the bot will only see one variant.
  • The "temporary" duration of a test remains vague—Google has never communicated a precise numerical threshold.

SEO Expert opinion

Is this guideline consistent with on-the-ground observations?

Yes, in most cases. Agencies that have included Googlebot in lightweight A/B tests (button color changes, headline rephrasing) have not observed any algorithmic penalties or drops in visibility. In contrast, deep structural tests—those altering content hierarchy, presence of entire blocks, or main navigation—have generated temporary ranking fluctuations, even with a correctly implemented canonical.

The ambiguity persists around the notion of "temporary." I've observed tests maintained for several months without visible consequences, while others, halted after three weeks, had already triggered a partial reassessment of relevance signals by Google. [To be verified]: is there a crawl number or calendar duration threshold beyond which Google reconsiders the "test" status?

What are the grey areas not covered by Mueller?

The statement does not address multi-step tests where the user goes through several variants during their journey (e.g., variant A on the homepage, variant B on the product page). In this context, does Googlebot follow consistency, or does it switch between variants depending on the crawled pages? No official documentation clarifies this.

It also sidesteps the matter of server-side vs client-side testing. A test deployed in JavaScript—where the initial content sent to the bot differs from the final rendered version—can be misinterpreted if the rendering budget is insufficient. Mueller talks about "capabilities" as an exclusion criterion, but does not elaborate on whether this covers JavaScript, media queries, or other front-end technologies. [To be verified]: is a full-JS A/B test acceptable if the semantic content remains equivalent after rendering?

In what cases does this recommendation become risky?

If your test alters critical ranking elements—title, H1, presence of main keywords in the body text—excluding Googlebot exposes you to ranking volatility. The bot may randomly or semi-randomly index one of the variants, which can temporarily degrade your positions if the crawled variant is less optimized than the control version.

Sites with a high volume of simultaneously tested pages (marketplaces, aggregators) must also remain vigilant. If 30% of the URLs in a category show a radically different variant B and Googlebot heavily crawls this category, you risk confusing Google's overall thematic understanding of the site. In this context, excluding Googlebot becomes more prudent, even if it reduces the representativeness of the test.

Point of caution: mobile A/B tests carry increased risk if you use distinct URLs without alternate annotations or a clear correspondence between the desktop and mobile versions. Google could index both versions separately and create unintended duplicates.

Practical impact and recommendations

How to implement an A/B test without negatively impacting SEO?

Start by defining whether your test alters structural elements (navigation, major content blocks, H1) or cosmetic elements (color, font size, CTA wording). In the latter case, including Googlebot generally poses no problem—the bot will see one variant, index it normally, and the semantic content remains identical.

If you use distinct URLs for each variant (query string or different paths), you should consistently implement a rel=canonical pointing to the reference version. Check in Search Console that Google respects this canonical and does not index the variants as standalone pages. Regular auditing through the "Coverage" tab can help detect test URLs that may have been incorrectly indexed.

What technical errors should you absolutely avoid?

Never serve Googlebot a blank page or radically different content under the pretext of excluding it from the test. This would be interpreted as outright cloaking. If you exclude the bot, it must see the stable control version, the one you wish to see indexed. Use user-agent or IP detection to route Googlebot to this version, but document this logic to avoid any ambiguity during a manual audit.
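The routing logic described above can be sketched in a few lines. This is a simplified illustration with hypothetical names; user-agent substring matching alone is naive, and production setups should also verify Googlebot via reverse DNS lookup.

```python
# Route excluded bots to the stable control version; real users enter the test.

def pick_variant(user_agent: str, assign_variant=lambda: "b") -> str:
    """Return which variant to serve for this request."""
    if "Googlebot" in user_agent:
        # Excluded bots always get the control version -- the one you want
        # indexed. Never serve them a blank or radically different page.
        return "control"
    return assign_variant()

googlebot_ua = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
print(pick_variant(googlebot_ua))   # -> control
print(pick_variant("Mozilla/5.0"))  # -> b
```

Keeping this branch in one documented function makes the logic easy to show during a manual review.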

Avoid tests based solely on cookies without a fallback. If your A/B tool detects no cookie, it must have an explicit default behavior (typically: display the control variant). Otherwise, Googlebot and users in private browsing will see inconsistent versions, fragmenting the experience and behavioral signals.
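A minimal sketch of that explicit fallback, with an illustrative cookie name: when no assignment cookie is present (Googlebot, private browsing, first visit), the request deterministically falls back to the control variant.

```python
def variant_from_cookies(cookies: dict) -> str:
    """Read the assigned variant from cookies, defaulting to the control."""
    return cookies.get("ab_variant", "control")

# Googlebot sends no cookies, so it always lands on the control version.
print(variant_from_cookies({}))                   # -> control
print(variant_from_cookies({"ab_variant": "b"}))  # -> b
```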

How to verify that your implementation is compliant?

Use the URL Inspection tool in Search Console to test a URL under A/B testing. Compare the HTML rendering that Google sees with what an average user receives. If you have included Googlebot in the test, the bot will see one variant—note which one and ensure it is semantically equivalent to the control version.

Run a Screaming Frog or Oncrawl crawl simulating the Googlebot user agent. List all tested URLs and check that the canonicals are correctly set. Cross-reference this data with server logs to confirm that Googlebot accesses the right variants and does not generate soft 404s or redirect chains.
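The canonical check in such an audit can be reproduced with the standard library alone. This is a hedged sketch: it parses an illustrative HTML string, whereas a real audit would feed it the HTML fetched for each tested URL with a Googlebot user agent.

```python
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Extract the rel=canonical href from an HTML document, if any."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "link" and attributes.get("rel") == "canonical":
            self.canonical = attributes.get("href")

# Illustrative variant page declaring its canonical.
html_doc = '<head><link rel="canonical" href="https://example.com/page"></head>'
parser = CanonicalParser()
parser.feed(html_doc)
print(parser.canonical)  # -> https://example.com/page
```

Flag any tested URL where the extracted canonical is missing or does not match the preferred version.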

  • Determine if the test affects structural or cosmetic elements before deciding to include/exclude Googlebot.
  • Implement rel=canonical to the preferred version if you use distinct URLs for each variant.
  • Check in Search Console that the tested variants are not indexed as standalone pages.
  • Never serve a blank page or radically different content to Googlebot under the pretext of excluding it from the test.
  • Ensure an explicit fallback (control version) for agents that do not register cookies.
  • Use URL Inspection and a crawl simulating Googlebot to validate the technical implementation before large-scale deployment.
Well-implemented A/B tests do not penalize SEO, but their technical complexity—management of canonicals, user-agent detection, semantic consistency between variants—requires sharp expertise. If your organization lacks internal resources to audit each test, engaging a specialized SEO agency ensures a risk-free implementation and rigorous monitoring of impacts on organic visibility.

❓ Frequently Asked Questions

Does Google penalize sites that include Googlebot in an A/B test?
No. Google explicitly allows including Googlebot in an A/B test if the goal is UX optimization rather than ranking manipulation. The test must remain temporary and the variants semantically equivalent.
How long can an A/B test run before Google considers it permanent?
Google has never communicated a numerical threshold; "temporary" remains vague. In practice, tests lasting from several weeks to a few months are tolerated, but keeping a test running beyond six months may raise questions.
What happens if Googlebot sees a variant less optimized than the control version?
The bot will index that variant, which can temporarily hurt your rankings if it contains fewer relevant keywords or a less optimized H1/title structure. Hence the importance of testing cosmetic rather than structural elements.
Can I use query strings for my variants without hurting SEO?
Yes, provided you implement a rel=canonical pointing to the preferred URL. Without it, Google may index each query string as a distinct page, diluting relevance signals.
How do I cleanly exclude Googlebot from an A/B test without risking a cloaking accusation?
Serve Googlebot the stable control version, the one you want indexed. Use user-agent or IP detection, and document this logic to avoid any ambiguity during a manual review.

