Official statement
Other statements from this video
- 11:53 Does HTTP/2 really boost your Google rankings?
- 18:04 301 vs 404 vs 410 redirects during a site relaunch: which one should you choose to preserve your SEO?
- 18:12 Does Google really speed up its crawl after large-scale redirects?
- 18:29 Should you really deindex your internal search pages?
- 23:36 Should you really duplicate all of your content in AMP pages?
- 24:31 Are AMP pages really a mobile ranking lever for SEO?
- 37:06 How does Search Console actually refresh your performance data?
- 40:42 Do meta descriptions really improve CTR if Google rewrites them?
- 50:05 Can a slow server really hold back Google's crawl of your site?
- 55:05 Should you really create a separate sitemap for each subdomain?
Google discourages the use of noindex on URLs shared between two versions of an A/B test, as this can lead to accidental deindexing. For an SEO practitioner, this means rethinking the structure of the tests: either segmenting the URLs by variation or using server-side mechanisms that do not affect indexing directives. This issue is significant, as an error can make critical pages disappear from the index for several weeks.
What you need to understand
Why does Google caution against using noindex in A/B testing?
A/B testing often involves showing two different versions of the same page to distinct groups of users. Some developers think they can protect version B by adding a temporary noindex, so that Google does not index it and treat it as duplicate content. The issue? If the URL remains the same for both versions and the noindex is applied inconsistently, Google may interpret this directive as permanent and deindex the entire page.
This confusion typically arises when the server decides which version to serve based on cookies or HTTP headers, without changing the URL. Google crawls the URL, encounters version B with noindex, and triggers deindexing. The result: the page disappears from the index, even if version A never carried a noindex. Recovery can take a long time, while the impact on organic traffic is immediate.
What are the risky configurations in a typical A/B test?
The typical scenario: a product page example.com/product-x serves two different layouts depending on whether the user belongs to group A or B. The developer adds a conditional noindex to version B to "avoid duplicates". But Googlebot does not have a persistent cookie and may randomly hit A or B with each crawl.
If Googlebot crawls version B with noindex three times in a row, it interprets this directive as intentional and removes the page from the index. Even if version A without noindex still exists, it becomes invisible to Google. This is not a bug; it’s the normal operation of the bot in response to a clear directive.
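To make the failure mode concrete, here is a minimal sketch of the risky pattern described above, assuming a hypothetical Express server; the route, markup, and bucketing logic are illustrative, not taken from the video: one shared URL, a per-request variant when no sticky cookie exists, and a conditional noindex on variant B.
```typescript
// Hypothetical sketch of the RISKY pattern: one URL, per-request variant,
// conditional noindex on variant B. Route and markup are illustrative only.
import express from "express";

const app = express();

app.get("/product-x", (_req, res) => {
  // Googlebot keeps no sticky cookie, so the bucket is effectively random on each crawl.
  const bucket = Math.random() < 0.5 ? "A" : "B";

  // The dangerous part: an indexing directive that depends on which variant is served.
  const robots = bucket === "B" ? '<meta name="robots" content="noindex">' : "";

  res.send(`<!doctype html>
<html>
  <head>${robots}<title>Product X</title></head>
  <body><h1>Product X (layout ${bucket})</h1></body>
</html>`);
});

app.listen(3000);
```
Whenever Googlebot happens to land on the B branch, the only URL it knows about is served with a noindex, which is exactly the inconsistency Google warns about.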
How does Google distinguish between a test variation and duplicate content?
Google does not make that distinction. To the engine, two versions served on the same URL with distinct content look like cloaking or technical instability. If you add noindex to "clarify" things, you make the problem worse by explicitly signaling that part of the content should not be indexed.
The solution involves using distinct URLs for each variation (e.g., ?variant=a and ?variant=b), then canonicalizing to version A if necessary. Alternatively, don't alter any indexing directives at all and allow Google to crawl the main version naturally. Client-side tests (JavaScript) avoid this pitfall as the served HTML remains identical.
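By contrast, here is a minimal sketch of the safer server-side setup described in the previous paragraph, again assuming a hypothetical Express server and the example.com/product-x page used earlier: the variation is selected through an explicit URL parameter, nothing carries noindex, and the parameterised URL canonicalises to version A.
```typescript
// Hypothetical sketch of the safer pattern: variant selected via an explicit
// URL parameter, no noindex anywhere, canonical pointing to version A.
import express from "express";

const app = express();

app.get("/product-x", (req, res) => {
  const variant = req.query.variant === "b" ? "B" : "A";

  res.send(`<!doctype html>
<html>
  <head>
    <link rel="canonical" href="https://example.com/product-x">
    <title>Product X</title>
  </head>
  <body><h1>Product X (layout ${variant})</h1></body>
</html>`);
});

app.listen(3000);
```
Googlebot can crawl /product-x?variant=b freely; the canonical simply consolidates signals on version A instead of removing anything from the index.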
- Never apply a conditional noindex on a URL shared by multiple test variations
- Prefer client-side tests (JavaScript) that do not modify the initial HTML crawled by Google
- If server-side tests are necessary, segment the URLs by variation and use canonicals
- Monitor indexing via Google Search Console throughout the test duration to detect any abnormal deindexing
- Document precisely which version is served to Googlebot to avoid surprises
SEO Expert opinion
Is this recommendation consistent with real-world observations?
Yes, and it's even a classic in SEO incidents. I've seen e-commerce sites lose 40% of their organic traffic in a week because a poorly configured A/B test triggered a global noindex on major categories. The worst part is that the product team couldn't understand why: "the noindex was only on version B, not A." But Google sees only one URL, and if it fluctuates between index and noindex, it ultimately makes a decision.
What’s missing from Google's statement is the critical exposure duration. How many crawls with noindex before deindexing occurs? No public data. [To verify] We typically observe 3 to 5 successive crawls, but this depends on the site's crawl frequency. A news site can be deindexed in 24 hours, while a low-authority blog may take several weeks.
What nuances should be added to this directive?
Google refers to "shared URLs," but the reality is murkier. If you use URL parameters (?test=b) and apply noindex only to the version with the parameter, you should in theory be safe. However, Google sometimes treats parameters as distinct URLs and sometimes does not, depending on the settings in Search Console. This gray area is exactly what makes the recommendation so important.
Another nuance: client-side tests (Google Optimize, VWO, AB Tasty) pose no indexing issues, as the initial HTML remains unchanged. JavaScript modifies the page after the first render, but Googlebot typically indexes the source HTML. If your test is purely visual (button color, block positioning), go with JavaScript and forget about noindex.
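For that purely visual case, a client-side variation can be a few lines of browser JavaScript applied after the page loads. The snippet below is a generic sketch with a hypothetical element id and random bucketing; it is not the API of any particular testing tool.
```typescript
// Generic client-side variation: the HTML Google fetches is identical for everyone;
// only the rendered page changes for users in bucket B. The element id is hypothetical.
document.addEventListener("DOMContentLoaded", () => {
  const bucket = Math.random() < 0.5 ? "a" : "b";

  if (bucket === "b") {
    const cta = document.querySelector<HTMLButtonElement>("#add-to-cart");
    if (cta) {
      cta.style.backgroundColor = "#e63946"; // variant B: alternative button colour
      cta.textContent = "Add to basket";     // variant B: alternative wording
    }
  }
});
```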
In what cases does this rule not apply?
If you are testing a complete redesign with a totally different URL structure (e.g., a migration to a new structure), you can use separate environments (staging, a subdomain) with a blanket noindex. Google will never see these environments if you block crawling or never link to them from the main site.
The same goes for closed internal tests behind authentication or IP whitelisting. If Googlebot cannot physically access version B, you can put all the noindex you want there; it will never impact the indexing of the public version A. But once a URL is publicly accessible with variations served randomly, the rule applies.
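For that separate-environment case, one common approach, sketched below with a hypothetical staging hostname and placeholder credentials, is to send an X-Robots-Tag header and require authentication on the staging host, so Googlebot either never gets in or is told not to index anything it sees.
```typescript
// Hypothetical middleware protecting a staging environment: blanket noindex header
// plus a basic access gate. Hostname and credentials are placeholders.
import express from "express";

const app = express();

app.use((req, res, next) => {
  if (req.hostname === "staging.example.com") {
    // A blanket noindex is safe here because this host never serves the public site.
    res.set("X-Robots-Tag", "noindex, nofollow");

    const auth = req.headers.authorization ?? "";
    const expected = "Basic " + Buffer.from("team:secret").toString("base64");
    if (auth !== expected) {
      res.set("WWW-Authenticate", 'Basic realm="staging"');
      return res.status(401).send("Authentication required");
    }
  }
  next();
});
```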
Practical impact and recommendations
What should you do concretely before launching an A/B test?
The first step: precisely map out which testing method you are using. If it's client-side JavaScript (Google Optimize, Optimizely), you can rest easy. If it's server-side with URL variation (?variant=a vs ?variant=b), configure canonicals all pointing to version A. If it's server-side without URL variation, stop everything and rethink your architecture.
Next, check in the HTML source code what Googlebot will actually crawl. Use the URL Inspection tool in Search Console or simulate a crawl with curl while pretending to be Googlebot (User-Agent: Googlebot). If you see a noindex, it's a red flag. If the content changes with each request without any clear logic, that's also a red flag.
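The curl equivalent is simply passing -A "Googlebot" and inspecting the response. The same check can also be scripted; the sketch below (Node 18+ global fetch, a simplified User-Agent string, and a hypothetical URL) fetches a page as Googlebot would identify itself and flags any noindex found in the meta robots tag or the X-Robots-Tag header.
```typescript
// Fetch a page with a Googlebot User-Agent and flag noindex directives.
// The URL and the UA string are simplified placeholders.
async function checkUrl(url: string): Promise<void> {
  const res = await fetch(url, {
    headers: { "User-Agent": "Googlebot" },
    redirect: "follow",
  });

  const html = await res.text();
  const header = res.headers.get("x-robots-tag") ?? "";
  const metaRobots = html.match(/<meta[^>]+name=["']robots["'][^>]*>/i)?.[0] ?? "";

  const noindex = /noindex/i.test(header) || /noindex/i.test(metaRobots);
  console.log(`${url} -> status ${res.status}, noindex: ${noindex ? "YES (red flag)" : "no"}`);
}

checkUrl("https://example.com/product-x");
```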
What mistakes should you absolutely avoid during an A/B test?
First mistake: adding a temporary noindex "just for the test." There is no temporary noindex for Google, there’s just noindex. Once the directive is read, the deindexation process starts, and reindexing later can take weeks. If you want to protect a variation, use URL parameters and configure Search Console to ignore those parameters.
Second mistake: launching a test without indexing monitoring. Set up a Search Console alert that notifies you if the number of indexed pages drops dramatically. Implement a script that checks daily the indexing of your critical pages (using the query site:example.com/exact-url). You need to detect deindexing within 48 hours, not three weeks later when traffic has already dropped by 30%.
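Scripting site: queries against Google is fragile, so a common proxy, sketched below with hypothetical URLs and a placeholder alert, is a daily job that re-fetches each critical page and raises a warning as soon as one of them starts serving a noindex; Search Console remains the authoritative source for actual index status.
```typescript
// Daily watchdog sketch: re-check critical URLs and report any that start
// serving a noindex directive. The URL list and alerting channel are placeholders.
const criticalUrls = [
  "https://example.com/product-x",
  "https://example.com/category/shoes",
];

async function servesNoindex(url: string): Promise<boolean> {
  const res = await fetch(url, { headers: { "User-Agent": "Googlebot" } });
  const html = await res.text();
  const header = res.headers.get("x-robots-tag") ?? "";
  return /noindex/i.test(header) ||
    /<meta[^>]+name=["']robots["'][^>]+noindex/i.test(html);
}

async function main(): Promise<void> {
  for (const url of criticalUrls) {
    if (await servesNoindex(url)) {
      // Replace with your real alerting channel (email, Slack, pager...).
      console.error(`ALERT: ${url} is serving noindex`);
    }
  }
}

main();
```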
How can I verify that my A/B test is SEO-safe?
Technical checklist before launch: crawl your site with Screaming Frog or Oncrawl simulating Googlebot. Ensure that all tested pages return the same HTTP status (200), the same base content, and no noindex directives. If you use canonicals, make sure they all point to the same reference URL, not to variations.
During the test, monitor three metrics in Search Console: the number of indexed pages (Coverage tab), crawling errors (Settings tab > Crawling), and performance (impressions and clicks). A drop in impressions of more than 15% on the tested pages within a week is a warning sign. If you see "Excluded by noindex tag" in Coverage, cut the test immediately.
- Ensure that the source HTML served to Googlebot never contains noindex on the tested URLs
- Set up canonicals if multiple URLs serve variations (all pointing to version A)
- Implement automated indexing monitoring before the test launch
- Test with curl or Screaming Frog simulating Googlebot to detect inconsistencies
- Prefer client-side JavaScript tests for visual or usability changes
- Document in a dashboard which version is served to which type of user-agent (real users vs bots)
❓ Frequently Asked Questions
Can I use a noindex on an A/B test URL if I also block it in robots.txt?
Do client-side (JavaScript) A/B tests pose an indexing risk?
How long does it take for a page to be reindexed after an accidental noindex is removed?
Can you use URL parameters (?variant=b) with a noindex without risk?
How does Google detect that a page serves different content on each crawl?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 54 min · published on 08/03/2018
🎥 Watch the full video on YouTube →