
Official statement

To prevent syndicated versions of your content from appearing in Google Discover, use the meta robots noindex tag in addition to the canonical link. Canonical alone is an insufficient indicative signal to completely block the appearance of syndicated content.
🎥 Source video

Extracted from a Google Search Central video

💬 EN 📅 07/06/2023 ✂ 19 statements
Watch on YouTube →
Other statements from this video (18)
  1. Two domains for the same country: where does manipulation really begin?
  2. Do JavaScript vulnerabilities in your libraries hurt your Google rankings?
  3. Can you really prevent Google from crawling certain parts of an HTML page?
  4. Is it still worth the time to submit your XML sitemap?
  5. Why isn't Schema.org structured data always enough to get Google rich results?
  6. Do HSTS headers really have an impact on your SEO?
  7. Does Google really reprocess your sitemap on every crawl?
  8. HTML vs XML sitemaps: why does Google insist on their different functions?
  9. Is structured data containing errors really ignored by Google?
  10. Do numbers in your URLs really hurt your rankings?
  11. Does index bloat really exist at Google?
  12. How do you permanently block Googlebot from your site?
  13. Does Google really issue official SEO certifications?
  14. Do multiple navigation menus really hurt SEO?
  15. Do host groups really indicate cannibalization that needs fixing?
  16. Can you disavow toxic backlinks by targeting their IP address?
  17. Should you remove the NOODP meta tag from your Blogger sites?
  18. How do you get a video thumbnail in the SERPs: what does Google mean by "main content"?
TL;DR

Canonical alone doesn't guarantee the exclusion of syndicated content from Google Discover. Google recommends adding a meta robots noindex tag on syndicated versions to completely block their appearance. This recommendation challenges the traditional use of canonical as a deduplication signal.

What you need to understand

Why isn't canonical enough for Discover anymore?

The canonical link has always been presented as the preferred tool to tell Google which version of duplicate content to prioritize. In traditional SEO logic, a properly implemented canonical should be enough to consolidate signals toward the original version.

Except that Google Discover works differently. The feed doesn't just crawl and index: it actively selects and pushes content to users. In this context, canonical remains an indicative signal — Google can choose to ignore it if other criteria (perceived freshness, the syndicator's trusted domain, anticipated engagement) point to the syndicated version.

What does Mueller's statement actually mean in practice?

Mueller recommends adding a meta robots noindex tag on syndicated versions if you want to guarantee their exclusion from Discover. The noindex transforms a weak signal (canonical) into a strict directive: the page should not appear in results, including Discover.

This is a paradigm shift. Until now, you used canonical to say "this page exists elsewhere in better form." Now, for Discover, you need to say "this page should not be shown at all." The distinction is important.

What are the risks if you ignore this recommendation?

Without noindex, your syndicated versions can appear in Discover instead of the original. Result: you lose qualified traffic, engagement signals (clicks, time spent), and potentially the visibility you should have captured on your own domain.

Even worse, if the syndicator is a large news site with strong authority, Google may prioritize its version — even if the canonical points to you. This is exactly the scenario this directive aims to prevent.

  • Canonical alone is an indicative signal, not an absolute directive for Discover
  • Noindex completely blocks syndicated content from appearing in Discover
  • Without noindex, syndicated versions can overshadow the original in the feed
  • This recommendation applies specifically to Discover, not necessarily to traditional search

SEO Expert opinion

Is this guidance consistent with real-world observations?

Yes and no. We do observe cases where syndicated versions appear in Discover despite properly implemented canonical. This confirms that Google treats Discover with its own rules — and that canonical doesn't carry the absolute weight we attribute to it elsewhere.

On the other hand, recommending noindex for syndicated content poses an obvious problem: strict noindex prevents all indexation, not just Discover appearance. If the syndicator wants their content to remain in traditional search (with canonical pointing to the original), this directive becomes inapplicable. [To verify]: Mueller doesn't clarify whether Google is considering a more granular mechanism (like data-nosnippet or conditional X-Robots-Tag) to target only Discover.

In which cases doesn't this rule apply?

If you're the original publisher and want your syndication partners to index the content while pointing to you via canonical, noindex isn't an option. You'd lose link distribution, brand signals, and the expanded reach that syndication can provide.

Mueller's directive is better suited to situations where you control both versions (original + syndicated) or have an agreement with the syndicator to completely block their version from Discover. In other cases, you face a trade-off between Discover visibility and traditional SEO benefits of syndication.

Warning: A noindex mistakenly applied to the original instead of the syndicated version can cost you all Discover visibility — not just the duplicate version. Verify implementation carefully before deployment.

What's the alternative if you want to keep syndicated content indexable?

Frankly, Google doesn't offer a clean solution for this scenario. Data-nosnippet prevents rich snippets but doesn't affect Discover. X-Robots-Tag with server-side conditions could theoretically target Discover, but nothing is officially documented.

In practice, you're stuck: either you accept that syndicated content sometimes appears in Discover (and lose traffic), or you block it entirely with noindex (and lose syndication's SEO benefits). [To verify]: it would be helpful if Google clarified whether there's a way to target Discover without impacting traditional indexation.
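Google's documented rule for combining robots signals is that when multiple directives apply to the same URL, the most restrictive one wins, which is why a noindex aimed at Discover inevitably removes the page from regular search too. A minimal sketch of that combination logic (the function name and return structure are illustrative, not a Google API):

```python
def effective_robots(directives):
    """Combine robots rules from <meta name="robots"> tags and
    X-Robots-Tag HTTP headers for one URL.

    Google documents that the most restrictive rule applies: a single
    noindex anywhere blocks indexing, regardless of other signals.
    """
    tokens = set()
    for directive in directives:
        tokens.update(t.strip().lower() for t in directive.split(","))
    return {
        "indexable": "noindex" not in tokens and "none" not in tokens,
        "followable": "nofollow" not in tokens and "none" not in tokens,
    }
```

For example, `effective_robots(["index, follow", "noindex"])` reports the page as not indexable: the permissive meta tag cannot override the restrictive header. There is no token in this model that blocks Discover alone.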

Practical impact and recommendations

What do you need to do concretely if you syndicate your content?

If you control syndicated versions (for example, republishing on a partner site or third-party platform), add meta name="robots" content="noindex, follow" in the head of the syndicated version. The follow directive lets the page continue passing link signals through its outgoing links, even though the page itself isn't indexed.
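For the syndicated version, the resulting head section would look like this (the URL is a hypothetical placeholder):

```html
<head>
  <!-- Strict directive: keep this version out of Google's index, Discover included -->
  <meta name="robots" content="noindex, follow" />
  <!-- Indicative signal: consolidation still points to the original -->
  <link rel="canonical" href="https://original-site.example/article" />
</head>
```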

If you don't directly control the syndicator, integrate this clause into your syndication contracts: require that the partner apply noindex on their version to preserve your Discover visibility. This is a negotiation to conduct upfront.

How do you verify your implementation is correct?

Use the URL Inspection tool in Search Console on syndicated versions. Verify that Google detects the noindex and that the page isn't indexed. In parallel, check that the canonical points to the original.

Monitor your appearances in Discover via the Discover report in Search Console. If syndicated URLs continue to appear despite noindex, there's an implementation problem or processing delay on Google's end.
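To complement Search Console, you can spot-check fetched HTML locally. A minimal sketch using only the Python standard library (the class and function names are illustrative helpers, not an existing tool):

```python
from html.parser import HTMLParser

class RobotsCanonicalChecker(HTMLParser):
    """Extract the robots meta tag and canonical link from an HTML page."""

    def __init__(self):
        super().__init__()
        self.robots = None
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)  # attribute names are lowercased by HTMLParser
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.robots = a.get("content", "")
        elif tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

def check_syndicated_page(html):
    """Report whether the page carries noindex and where its canonical points."""
    checker = RobotsCanonicalChecker()
    checker.feed(html)
    robots = (checker.robots or "").lower()
    return {"noindex": "noindex" in robots, "canonical": checker.canonical}
```

Feed it the syndicated page's HTML and confirm that noindex is present and that the canonical points at the original. This only checks the markup you serve; whether Google has processed it is still what the URL Inspection tool tells you.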

What mistakes should you absolutely avoid?

Never put noindex on the original — this seems obvious, but deployment errors happen more often than you'd think. A misconfigured template or CMS applying the directive at the wrong level can cost you all visibility.

Also avoid combining noindex and canonical in contradictory ways. If a page has noindex, canonical no longer really makes sense from an indexation standpoint — even though technically Google can still follow the link. Clarify your intention: either deduplication (canonical alone) or blocking (noindex).

  • Add meta robots noindex on all syndicated versions meant to be excluded from Discover
  • Keep the canonical pointing to the original even with noindex, to preserve consolidation signals
  • Verify implementation using the URL Inspection tool in Search Console
  • Integrate this clause into syndication contracts if you don't directly control the partner
  • Monitor the Discover report to detect any undesired appearances
  • Clearly document which version should be indexed and which should be blocked
Content syndication, especially in contexts where Discover plays a strategic role, requires fine orchestration between canonical, noindex, and contractual agreements. Technical trade-offs can quickly become complex, particularly when multiple partners are involved or when CMSes don't allow granular management of directives per version. If you face difficulties securing your Discover visibility while preserving syndication benefits, working with a specialized SEO agency can provide tailored support — from implementation audits to technical clause negotiations with your partners.

❓ Frequently Asked Questions

Does noindex on a syndicated version also prevent its indexation in regular search?
Yes, noindex blocks all indexation, Discover included. If you want the syndicated content to remain in regular search, this solution isn't applicable.
Can you use robots.txt to block only Discover?
No, robots.txt prevents crawling, not selective appearance in Discover. There is currently no official directive for targeting Discover alone without impacting global indexation.
If the syndicator refuses to add noindex, what options remain?
You can strengthen the signals pointing to your original version (internal linking, social promotion, content quality) to maximize your chances of being prioritized, but with no absolute guarantee. Canonical alone remains a weak signal for Discover.
Does noindex affect the transfer of link equity via the canonical?
Technically, a noindexed page can still pass PageRank through its outgoing links, but Google doesn't explicitly guarantee this behavior. The follow in the noindex directive is supposed to preserve this transfer, but field tests show mixed results.
Does this recommendation also apply to RSS feed aggregators?
Yes, if an aggregator republishes your content and you want to prevent it from appearing in Discover in your place, noindex remains the safest solution, provided the aggregator honors the directive.
🏷 Related Topics
Content · Crawl & Indexing · Discover & News · Links & Backlinks

