
Official statement

BERT is not a ranking algorithm update but a system to better understand the text of long queries and pages. If a site loses traffic after BERT, it's because Google has a better understanding of what the pages are about, not because it penalizes them.
🎥 Source video

Extracted from a Google Search Central video

⏱ 57:01 💬 EN 📅 13/05/2020 ✂ 22 statements
Watch on YouTube (30:10) →
Other statements from this video (21)
  1. 1:43 Does Google really rewrite your meta descriptions if they contain too many keywords?
  2. 4:20 Why does modifying the Analytics code block Search Console verification?
  3. 5:58 Why does your hreflang markup still not work despite your efforts?
  4. 5:58 Should you prefer language-only or language+country hreflang for your international versions?
  5. 9:09 Hreflang does not influence indexing: why does Google index a single version but display multiple URLs?
  6. 12:32 Why does your site disappear completely from Google's index, and how do you recover it?
  7. 15:51 Does the URL parameters tool really consolidate all signals as Google claims?
  8. 19:03 Do core updates really not sanction any technical errors?
  9. 23:00 Does the outdated content tool really remove indexing, or just the snippet?
  10. 23:56 Why is the site: command useless for diagnosing indexing?
  11. 23:56 Does the URL removal tool really deindex your pages?
  12. 26:59 The 50,000-URL sitemap limit: why doesn't it apply to what you think it does?
  13. 32:07 Does Google Images really choose the right image for your pages?
  14. 33:50 Should you really pack your anchor texts with prices, reviews, and ratings?
  15. 35:26 Why does your site remain partially invisible if your internal linking is not bidirectional?
  16. 38:03 Why does Google refuse to index all your pages, and how do you fix it?
  17. 40:12 Is repetitive internal anchor text really a problem for Google?
  18. 42:48 Do UTM parameters really create duplicate content indexed by Google?
  19. 45:27 Does HTTPS/HTTP mixed content really impact Google rankings?
  20. 47:16 Does hreflang in HTML really bloat your pages, or is that a myth?
  21. 53:53 Why do old URLs remain in the index after a 301 redirect?
TL;DR

BERT is not a ranking filter but a linguistic understanding model for long queries and content. If a site loses traffic post-BERT, it's because Google is finally grasping what your pages truly mean—and that they don’t match the intended search queries. The focus isn’t on correcting a penalty, but on reworking the semantic alignment between your content and user expectations.

What you need to understand

Is BERT really a system for understanding and not a ranking algorithm?

Let’s be clear: BERT (Bidirectional Encoder Representations from Transformers) is a pre-trained natural language processing (NLP) model that improves contextual understanding of words in a sentence. Google uses it to better grasp the meaning of user queries and the content of pages.

It is not a ranking filter in the sense of Panda or Penguin. BERT does not assign quality scores or penalize. It acts upstream of ranking, at the understanding stage—which then influences the matching between intent and results. The nuance is crucial: if your content loses traffic, it’s not because BERT has degraded it, but because Google finally understands that it does not meet the actual intent.

Why do some pages lose traffic after the deployment of BERT?

Before BERT, Google largely relied on lexical matching between queries and content. A page could rank for a long query simply by the presence of keywords, even if the overall meaning was off. With BERT, the engine picks up on the contextual relationships between words — it understands that “airport parking without reservation” does not mean the same thing as “book airport parking”.
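To make the contrast concrete, here is a toy sketch (not Google's implementation) of purely lexical matching. A bag-of-words similarity scores the two example queries as closely related because they share the head terms "airport parking", even though "without reservation" and "book" invert the intent:

```python
def bow_similarity(a: str, b: str) -> float:
    """Jaccard similarity over lowercase word sets (purely lexical)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

q1 = "airport parking without reservation"
q2 = "book airport parking"

# Shares 2 of 5 distinct words -> 0.4, despite the opposite intents.
print(bow_similarity(q1, q2))
```

A contextual model like BERT encodes "without reservation" as negating the booking intent, a relationship no word-set overlap can capture.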

If your page previously ranked for queries where it only partially responded or through semantic misunderstanding, BERT reveals this mismatch. Traffic decreases not due to penalty, but due to rebalancing: Google is now serving more relevant results. It’s harsh for your analytics but makes sense from a user perspective.

In what cases does BERT have a significant impact on performance?

BERT primarily targets long-tail and conversational queries, where context and linguistic nuances are critical. Short and generic queries (1-2 words) remain largely handled by traditional systems. If your traffic mainly comes from short transactional queries, the impact will be marginal.

In contrast, if you rank for complex questions or informational queries with prepositions, negations, or subtle phrasing, BERT redefines the rules of the game. Sites producing vague or generic content to cast a wide net will find themselves overtaken by content that is precise and contextually aligned.

  • BERT does not rank — it improves understanding of queries and content before ranking signals come into play.
  • A post-BERT traffic loss indicates a mismatch between your content and the actual intent of the queries you ranked for.
  • The impact is maximal on long, conversational, or ambiguous queries where linguistic context is decisive.
  • BERT does not penalize — it reveals mismatches that previous systems did not detect.

SEO Expert opinion

Does this statement align with field observations?

In principle, yes. Post-deployment analyses of BERT indeed show redistributions of traffic on long queries, with no correlation to classic quality signals (backlinks, authority, speed). The affected sites did not present any obvious technical or qualitative flaws — just content poorly aligned with the fine-grained intent.

But — and this is where it gets tricky — the boundary between "understanding better" and "re-ranking" is porous in practice. If BERT changes the understanding of a query, it de facto alters the results served, and thus the rankings observed. Saying "this is not a ranking change" is technically accurate but semantically misleading for a practitioner watching their positions fall. The distinction holds in engineering, less so in operational SEO.

What nuances should be added to this official position?

Google presents BERT as a qualitative advancement — which is true — but downplays the business impact for affected sites. The reality: a traffic drop is a traffic drop, regardless of the technical cause. Saying “we don’t penalize” doesn’t comfort an editor who loses 30% of their SEO traffic overnight.

Moreover, Mueller claims that BERT reveals what pages are “really” about. [To be verified] — this assertion assumes BERT systematically captures intent better than the user themselves, which remains debatable. NLP models, however advanced, are still probabilistic approximations. There are documented cases where BERT interprets a nuance in a questionable way, especially on very specific business queries or technical jargon.

In what cases does this rule not apply or become problematic?

BERT performs well in English and in languages with abundant training data. In less well-resourced languages, or on highly technical content (legal, medical, financial), the model's performance is less certain. A specialized site may see its traffic fluctuate not because of a real mismatch, but because of the model's limitations on that specific corpus.

Another case: ambiguous queries where multiple intents coexist. BERT statistically favors the major intent. If your content serves a minor but legitimate intent, you risk being sidelined in favor of a “mainstream” intent that’s less relevant for your niche. This is coherent from a global perspective but unfair locally.

Warning: If you notice a drop in traffic on long queries after a BERT deployment, don't rush into technical fixes. First, analyze the real intent behind the lost queries via Search Console. Your content may simply no longer be relevant for those intents — in which case you need to rethink the substance, not just the form.

Practical impact and recommendations

What should I do if my site loses traffic after BERT?

First step: identify the lost queries. In Search Console, filter for queries of four words or more that have lost impressions and clicks. Analyze their real intent — via Google Search, forums, and People Also Ask boxes in the SERPs — and compare it with what your content actually addresses. If the gap is glaring, BERT has simply corrected an anomaly.
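This first step can be sketched in a few lines of Python. The example below assumes two CSV performance exports from Search Console (before and after the update) with columns named `Query` and `Clicks`; the exact column names may differ in your export, so adjust accordingly:

```python
import csv
from io import StringIO

def lost_long_tail(before_csv: str, after_csv: str, min_words: int = 4):
    """Return queries of >= min_words words whose clicks dropped between
    two Search Console performance exports (columns: Query, Clicks)."""
    def load(text):
        return {row["Query"]: int(row["Clicks"]) for row in csv.DictReader(StringIO(text))}

    before, after = load(before_csv), load(after_csv)
    losses = [
        (query, clicks, after.get(query, 0))
        for query, clicks in before.items()
        if len(query.split()) >= min_words and after.get(query, 0) < clicks
    ]
    # Biggest absolute losses first: these are the intents to re-analyze.
    return sorted(losses, key=lambda row: row[2] - row[1])

before = "Query,Clicks\nairport parking,120\nairport parking without reservation,80\ncheap flights,60\n"
after = "Query,Clicks\nairport parking,115\nairport parking without reservation,12\ncheap flights,70\n"

# Only the 4-word query that lost clicks is flagged.
print(lost_long_tail(before, after))
```

The output is the list of long-tail queries to manually re-check for intent mismatch; short queries and queries that gained traffic are ignored by design.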

Next, rework the semantic alignment. This doesn’t mean stuffing keywords, but reformulating your content so that it explicitly meets the intent captured by BERT. Use natural phrasing, contextual synonyms, and structure your responses logically — clear titles, short paragraphs, argumentative progression. BERT detects overall coherence, not just lexical occurrences.

What mistakes should be avoided in post-BERT optimization?

A classic mistake: thinking that adding long query variants in the content will be enough. BERT detects semantic stuffing just as well as lexical stuffing. If your text sounds artificial or repetitive, the model will pick up on it. Prioritize fluidity and actual relevance.

Another trap: wanting to "optimize for BERT" the way we did for Hummingbird. BERT is not an algorithm with actionable levers — it's a comprehension model. The only viable optimization is creating content that is intrinsically relevant to the targeted intent. If you have to force or twist your text to "match," you're targeting the wrong intent.

How can I check if my content aligns with BERT-compatible intents?

Test your content with natural queries in Google. If your page appears for long queries that match your topic exactly, it's a good sign. If it appears for tangential queries or by lexical accident, BERT will eventually correct this — better to anticipate it.

Use question-research tools (AnswerThePublic, AlsoAsked) or public NLP APIs to map the real intents behind your target queries. Compare them with the structure and substance of your content. The idea is to think "question → direct answer," not "keyword → density."

  • Filter in Search Console for long queries (4+ words) that have lost traffic and analyze their real intent.
  • Reformulate your content to explicitly address the intent, using natural and contextual vocabulary.
  • Avoid stuffing variants of queries — BERT detects semantic manipulations.
  • Test your pages with natural queries to ensure they appear for the right intents.
  • Map intents using NLP tools and align your editorial structure accordingly.
  • Prioritize clarity and logical progression — BERT captures overall coherence, not just keywords.
BERT redefines the matching between query and content by replacing lexical matching with contextual understanding. For an SEO site, this means producing content that precisely meets user intent, using natural language and a logical structure. Post-BERT optimizations are not purely technical but relate to editorial alignment — a complex exercise that may require thorough semantic auditing and specialized support to deeply rework content architecture.

❓ Frequently Asked Questions

Does BERT directly modify page positions in the SERPs?
No. BERT improves the understanding of queries and content, which then influences the matching between intent and pages. Ranking is still determined by the other classic signals.
Can a site be penalized by BERT?
No. BERT is not a punitive filter. A traffic drop means Google better understands the mismatch between your content and the query's real intent.
Should you optimize specifically for BERT?
No. Optimization consists of producing clear, contextual content aligned with user intent. BERT rewards semantic relevance, not keyword stuffing.
Does BERT affect all queries the same way?
No. BERT mainly targets long, conversational queries where context and linguistic nuance are critical to understanding intent.
How do I know if my traffic drop comes from BERT?
Analyze the lost queries: if they are long or ambiguous queries your content answered only partially or by accident, BERT is probably the cause.

