
Official statement

Sometimes articles that replicate original news content rank better than the original content. Google is interested in specific examples of this issue to try to improve its systems.
34:57
🎥 Source video

Extracted from a Google Search Central video

⏱ 1h05 💬 EN 📅 26/09/2018 ✂ 11 statements
Watch on YouTube (34:57) →
Other statements from this video (10)
  1. 2:22 Why does Google roll out its search features in the United States first?
  2. 9:08 Does mobile-first indexing really cause temporary ranking drops?
  3. 16:26 Why doesn't Google index all sites mobile-first simultaneously?
  4. 18:25 Can text hidden for accessibility hurt your SEO?
  5. 21:31 Should you really keep your URLs during a site migration?
  6. 26:16 Is dynamic rendering really the miracle solution for indexing your React applications?
  7. 28:09 Why is Googlebot stuck on Chrome 41 for rendering your JavaScript?
  8. 32:45 Are your ranking fluctuations really caused by your site?
  9. 34:16 Do ARIA attributes really influence Google rankings?
  10. 49:40 Is lazy loading killing the indexing of your images in Google?
📅 Official statement from 26/09/2018 (7 years ago)
TL;DR

Google acknowledges that articles replicating original news content can rank higher than the primary source. The company is actively seeking concrete examples to improve how its systems identify and reward original content. This statement confirms a structural issue observed for years and opens the door to publishers directly influencing the algorithm.

What you need to understand

Is Google finally admitting to a chronic dysfunction in its algorithm?

John Mueller's statement is notable for its frankness. Google explicitly acknowledges that its engine can favor derivative content at the expense of original journalistic sources. This is not a one-time anomaly but a recurring pattern that particularly affects news sites.

The issue arises when a media outlet publishes an exclusive investigation or verified information, and an aggregator, curation site, or competitor takes that content with minimal rephrasing. The result: the copy ranks better than the original, capturing traffic and advertising revenue intended for the creator.

What does this request for concrete examples mean?

Google is asking for documented real cases, which indicates two things. First, the current algorithm lacks reliable signals to systematically detect the temporal originality of news content. Second, the company is likely seeking to compile a dataset to improve its machine learning systems.

This approach suggests that the problem does not have a simple solution on the algorithm side. Traditional signals — publication freshness, domain authority, inbound links — are evidently insufficient to reliably identify the primary source. Google needs examples to refine its understanding of abusive republishing patterns.

Which sites are affected by this issue?

Pure news media are on the front lines: regional press, investigative sites, specialized media. These publishers invest in on-the-ground journalists but sometimes lack SEO power against generalist giants who aggregate without creating.

The phenomenon also affects expert blogs, financial analysts, and scientific sites. Whenever content provides verifiable new information, it becomes a target for sites that recycle the info with an optimized title and an SEO-friendly structure, without any sourcing work.

  • Local and regional media losing traffic to national aggregators that pick up their scoops
  • Specialized technical sites seeing their analyses copied by generalist platforms with higher domain authority
  • Expert blogs whose in-depth content is rephrased into listicles by high-traffic sites
  • Agency dispatches republished by hundreds of sites without added value, diluting the original source
  • Content under embargo where third-party sites publish a few minutes after the original and capture the initial search peak

SEO Expert opinion

Is this statement consistent with on-the-ground observations?

Absolutely. Press publishers have been complaining about this issue for at least ten years. Studies have shown that aggregators like MSN, Yahoo News, or certain pure SEO players capture a disproportionate share of traffic on topics where they add no journalistic value. Google is perfectly aware of this.

What’s new is the public admission and the request for help. This likely means that automated systems are reaching their limits. The algorithm struggles to distinguish between legitimate rephrasing and parasitic copying without human intervention or an enriched dataset. [To be verified]: it’s hard to know if Google genuinely needs examples or if this is a communication strategy to appease publishers.

What are the root causes of this dysfunction?

The problem is multi-faceted. First point: freshness alone is not enough. A site publishing 2 minutes after the original but with better on-page optimization, more internal links, and dense topical interlinking can easily outrank the source. Google prioritizes the perceived quality of content, not necessarily timeliness.

The second factor: domain authority plays a massive role. An established site with millions of backlinks and a history of trust receives an initial boost even for derived content. Local or specialized media, even with exclusive info, start with a structural handicap against big aggregators.

The third element: the detection of originality remains probabilistic. Google uses signals like publication dates, citations, and early inbound links. But when 50 sites simultaneously republish an AFP dispatch, which one is the source? The algorithm can make mistakes, especially if the original site has less frequent crawling.

When does this rule not apply?

Evergreen or educational content does not face the same issue. For a query like "how to change a tire", temporal originality is irrelevant. Google ranks based on perceived quality, depth, and user experience. The copy does not necessarily take precedence.

Topics with strong established editorial authority are also more protected. If The New York Times publishes an exclusive investigation, aggregators are less likely to surpass it because brand and trust signals compensate. The problem mainly affects medium-sized media without this algorithmic recognition.

Note: Submitting examples to Google does not guarantee immediate correction. It is an algorithmic R&D process that will take months or even years to significantly impact SERPs. Do not count on quick improvements.

Practical impact and recommendations

What should you do if your original content is outperformed by aggregators?

First action: accurately document cases with URLs, exact publication dates (with visible timestamps), and screenshots of SERP positions. Google is asking for examples, but they must be irrefutable: your article published Monday morning, the aggregator picking it up Monday afternoon, and the latter ranking above you the following day.

Second lever: speed up your indexing. Use the Indexing API (officially reserved for job postings and livestreams, but tolerated for urgent breaking news), submit the URL via Search Console immediately, and promote the piece on your social networks to generate early social signals. Every minute counts for establishing precedence.
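The Indexing API notification itself is a single authenticated POST. Here is a minimal sketch of building that request with the standard library — the endpoint and body shape follow Google's Indexing API reference, but the article URL is a placeholder, and authentication (an OAuth2 service-account bearer token) is omitted for brevity:

```python
import json

# Official publish endpoint of Google's Indexing API.
ENDPOINT = "https://indexing.googleapis.com/v3/urlNotifications:publish"

def build_indexing_notification(url: str, deleted: bool = False) -> str:
    """Return the JSON body telling Google that `url` was added/updated."""
    return json.dumps({
        "url": url,
        # URL_UPDATED: page is new or freshly updated; URL_DELETED: removed.
        "type": "URL_DELETED" if deleted else "URL_UPDATED",
    })

payload = build_indexing_notification("https://example.com/breaking-news")
print("POST", ENDPOINT, payload)
```

In practice the POST must carry an Authorization: Bearer token scoped to https://www.googleapis.com/auth/indexing, and Google enforces a daily quota — one more reason to reserve it for genuine exclusives.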

How can you strengthen originality signals on your exclusive content?

Integrate NewsArticle structured data with a datePublished accurate to the second, a declared author, and, if possible, a backstory or correction field to document the editorial process. Google has never confirmed this helps, but it cannot hurt and it facilitates algorithmic analysis.
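As a concrete illustration, here is a minimal sketch of generating such a block as JSON-LD. All values — headline, author, dates, publisher — are invented placeholders; backstory is a schema.org Article property, and as noted above Google has not confirmed any ranking effect:

```python
import json

# Hypothetical article values -- replace with data from your own CMS.
news_article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Exclusive: example investigation",
    "datePublished": "2018-09-26T09:12:03+02:00",  # accurate to the second
    "dateModified": "2018-09-26T09:12:03+02:00",
    "author": {"@type": "Person", "name": "Jane Reporter"},
    "publisher": {"@type": "Organization", "name": "Example Media"},
    "backstory": "Based on three weeks of on-the-ground reporting.",
}

# The <script> tag to embed in the page's <head>.
json_ld = json.dumps(news_article, indent=2, ensure_ascii=False)
snippet = f'<script type="application/ld+json">\n{json_ld}\n</script>'
print(snippet)
```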

Enhance your editorial signature: proprietary visuals, named expert quotes, exclusive data, and custom infographics. The more recognizable and harder to recycle your content is without losing value, the less attractive it is to parasites. Aggregators look for easy-to-rephrase content, not dense investigations.

Should you send your examples to Google and how should you proceed?

If you have clear, documented cases, yes. Use official channels: the Search Console Help Forum (mentioning John Mueller), a public tweet tagging @JohnMu, or Google News feedback forms if you are eligible. Avoid standard support tickets, which will get buried.

Be factual and precise. No diatribes against competitors, just objective data: "URL A published at 09:12, indexed at 09:45. URL B published at 14:30 picking up 80% of the text, indexed at 15:00, ranked position 3 while A is at position 12 on [exact query]." Google is looking for patterns; give it usable material.
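To keep such submissions consistent, the evidence for each case can be captured in a small record that always renders in this same factual format. A sketch, with invented field names and example values:

```python
from dataclasses import dataclass

@dataclass
class RepublishingCase:
    """One documented case of a copy outranking the original source."""
    original_url: str
    original_published: str   # timestamps, ideally ISO 8601 to the second
    original_indexed: str
    copy_url: str
    copy_published: str
    copy_indexed: str
    overlap_pct: int          # estimated share of reused text
    query: str
    original_rank: int
    copy_rank: int

    def summary(self) -> str:
        # Render the case in the neutral, factual style Google can act on.
        return (
            f"{self.original_url} published at {self.original_published}, "
            f"indexed at {self.original_indexed}. "
            f"{self.copy_url} published at {self.copy_published} "
            f"picking up {self.overlap_pct}% of the text, "
            f"indexed at {self.copy_indexed}, ranked position {self.copy_rank} "
            f"while the original is at position {self.original_rank} "
            f"on [{self.query}]."
        )

case = RepublishingCase(
    "https://example.com/scoop", "09:12", "09:45",
    "https://aggregator.example/copy", "14:30", "15:00",
    80, "exact query", 12, 3,
)
print(case.summary())
```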

  • Timestamp your publications with a precise, visible timestamp (for example, the HTML <time> element with a datetime attribute)
  • Submit immediately via Search Console and Indexing API when relevant
  • Integrate strong differentiation elements (exclusive data, proprietary visuals, expert quotes)
  • Monitor your content with SERP tracking tools to detect quickly when a copy outranks you
  • Systematically document cases of abusive republishing with timestamped proof
  • Use schema.org NewsArticle with complete and precise metadata

The public acknowledgment of the problem by Google is a step forward, but do not expect miracles in the short term. Strengthen your originality signals, speed up your indexing, and document abuses. If this issue significantly impacts your visibility, support from an SEO agency specialized in media can help you implement an effective defensive strategy and structure your interactions with Google optimally.

❓ Frequently Asked Questions

Will Google actually fix this aggregator ranking problem?
Google acknowledges the problem and is asking for examples, which suggests a willingness to improve. But no timeline or guarantee is given. Algorithmic corrections of this scale generally take several months or even years.
How can you prove to Google that a piece of content is truly original?
A precise, visible timestamp, immediate submission via Search Console, complete NewsArticle metadata, and timestamped screenshots. Ideally, combine several concordant temporal and editorial signals.
Do small news sites stand a chance against the big aggregators?
Structurally, no: domain authority weighs heavily. But by optimizing fast indexing, originality signals, and editorial differentiation, they can limit the damage on their exclusive scoops.
Can the Indexing API be used for all news content?
Officially, the Indexing API is reserved for job postings and livestreams. For breaking news, Google generally tolerates its use, but with no guarantee. Use it sparingly and only on your genuine exclusives.
What should you do if a competitor systematically copies your exclusive content?
Document each case with timestamped proof, send them to Google through official channels, and consider a DMCA takedown if the copy is verbatim. In parallel, strengthen your originality signals and your indexing speed.
🏷 Related topics: Content · Discover & News
