Official statement
Google states that duplicating metadata — such as through a static HTML file AND a JS framework like React Helmet — causes issues for indexing. The recommendation: choose a single source of truth, either server-side or client-side, to ensure consistency between what Google crawls and what your users see. This directive mainly concerns JavaScript framework sites that dynamically inject tags that are already present in the initial HTML.
What you need to understand
Why does Google consider metadata duplication problematic?
The logic is simple: when a meta title or meta description tag exists both in static HTML (index.html) and in JavaScript code (React Helmet, Vue Meta, Next.js Head), Google must choose which one to consider. This conflict creates a potential inconsistency between what the bot sees at the first load and what the browser shows after executing the JS.
In practice, if your HTML file contains <title>Homepage</title> and React Helmet then injects <title>Welcome - My Site</title>, Google may index the first version while your users see the second. Or vice versa. The result? Your SERP snippets don’t align with your editorial strategy.
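For illustration, here is a minimal sketch of how this duplication typically arises in a client-side React application. The file and component names are assumptions; the tag values reuse the example above.

```jsx
// Static shell shipped with a CSR build (e.g. public/index.html) — assumed content:
//   <head>
//     <title>Homepage</title>
//     <meta name="description" content="Generic description left in the template">
//   </head>

// Home.jsx — the same tag types defined again client-side via React Helmet
import React from "react";
import { Helmet } from "react-helmet";

export default function Home() {
  return (
    <>
      <Helmet>
        {/* Conflicts with the static values above: two sources of truth */}
        <title>Welcome - My Site</title>
        <meta name="description" content="Page-specific description injected by JS" />
      </Helmet>
      <h1>Welcome</h1>
    </>
  );
}
```

Depending on when Googlebot renders the page, it may index the static values or the injected ones — exactly the inconsistency described above.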
What is the difference between server-side and client-side rendering in this context?
Server-Side Rendering (SSR) generates the complete HTML server-side before sending it to the browser. In this scenario, the metadata is already present in the initial HTML — no duplication risk if you inject nothing client-side.
Client-Side Rendering (CSR) sends minimal HTML (often an almost empty index.html) and then builds the page via JavaScript. If this index.html already contains generic meta tags, and your JS framework then injects the real dynamic metadata, you create exactly the problem that Google warns against. The bot can crawl the static HTML before the JS executes, or after, depending on its available resources — hence the unpredictability.
Does Google consistently prioritize the initial HTML or the post-JS content?
This is where it gets blurry. Officially, Google indexes the rendered content, i.e. the post-JS version. But in practice, the timing of crawling and rendering varies greatly depending on the crawl budget, the load on the rendering service, and the complexity of the JS.
If your JavaScript takes 5 seconds to execute and Googlebot is in a hurry, it may well index the initial HTML. Hence the importance of having a single source of truth. Martin Splitt does not explicitly state which version Google favors in case of conflict — he mostly says: don’t create this conflict.
- One meta tag per type: title, description, robots, canonical — a single occurrence in the final DOM.
- Choose a clear strategy: either SSR/SSG that generates everything server-side, or CSR with full JS injection (and an index.html stripped of all SEO metadata).
- Test with the URL inspection tool from Search Console to see what Google actually renders.
- Avoid poorly configured hybrid frameworks where static HTML and JS clash.
- Document the source of truth in your technical stack to prevent developers from adding meta tags "just in case".
SEO Expert opinion
Is this directive really new or just a reminder of the fundamentals?
Let's be honest: the rule "one meta tag per type" is nothing revolutionary. Having two <title> tags on a page has been technically invalid per the W3C specs since the early days of HTML. What Martin Splitt points out is how this issue resurfaces with modern JavaScript frameworks that abstract away metadata management.
React Helmet, Next.js Head, Nuxt, Gatsby — all these tools allow dynamic manipulation of the <head>. The trap? Developers often forget to clean up the initial index.html file, thereby creating this unintentional duplication. Google is thus restating an old rule in a new technical context — and rightly so, because the problem is real in the field.
What nuances should be added to this recommendation?
First nuance: not all frameworks are created equal. Next.js in SSR mode or Gatsby in SSG mode generate the complete HTML server-side — if you use their API to manage metadata, there is no duplication. The issue mainly concerns pure CSR SPAs (Create React App by default, for example) where index.html is static and JS injects everything afterwards.
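As a point of comparison, here is a hedged sketch of the "no duplication" setup in Next.js with the pages router, where next/head is the only place metadata is defined (URL and copy are placeholders):

```jsx
// pages/index.js — Next.js pages router, SSR/SSG
import Head from "next/head";

export default function Home() {
  return (
    <>
      <Head>
        {/* Single source of truth: rendered server-side, so the crawled HTML
            and the post-JS DOM carry the same values */}
        <title>Welcome - My Site</title>
        <meta name="description" content="Page-specific description" />
        <link rel="canonical" href="https://www.example.com/" />
      </Head>
      <main>
        <h1>Welcome</h1>
      </main>
    </>
  );
}
```

Because the framework serializes these tags into the initial HTML response, there is nothing left for the client to contradict afterwards.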
Second nuance: Google does not explicitly state what happens in case of conflict. Does it apply "last wins" (the last tag overwrites the first)? Does it keep the first one found? Does it weigh them by how much it trusts the JS? [To be verified] — Martin Splitt remains vague on the exact conflict-resolution mechanism. Field tests show variable behavior depending on crawl budget and rendering delay.
Third nuance: some SEO tools (Screaming Frog, OnCrawl) crawl the raw HTML without executing JS. If you rely solely on these reports and your metadata is injected via JS, you’ll receive false positives reporting missing titles when they exist post-rendering. You need to cross-check with Search Console and tools that render JS (OnCrawl in JS mode, Screaming Frog in rendering mode).
Practical impact and recommendations
What action should be taken to eliminate duplicate metadata?
First step: audit your source code. Open your index.html file (or equivalent) and list all <meta>, <title>, <link rel="canonical"> tags present. Then, inspect your JS code (React Helmet, Next.js Head, etc.) to see what tags are being injected dynamically.
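If the template is long, a small script can speed up the inventory. A minimal sketch (the script name and file path are assumptions to adjust to your project) that counts and prints the SEO tags hard-coded in the static shell:

```js
// audit-meta.js — lists SEO tags hard-coded in the static HTML shell
const fs = require("fs");

const html = fs.readFileSync("public/index.html", "utf8"); // adjust the path to your build

const patterns = {
  title: /<title[^>]*>[\s\S]*?<\/title>/gi,
  description: /<meta[^>]+name=["']description["'][^>]*>/gi,
  robots: /<meta[^>]+name=["']robots["'][^>]*>/gi,
  canonical: /<link[^>]+rel=["']canonical["'][^>]*>/gi,
};

for (const [name, regex] of Object.entries(patterns)) {
  const matches = html.match(regex) || [];
  console.log(`${name}: ${matches.length} occurrence(s)`);
  matches.forEach((tag) => console.log(`  ${tag.trim()}`));
}
```

Anything this script finds is a candidate for removal if the same tag type is also injected by your JS layer.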
Second step: choose a single source of truth. If you are on SSR/SSG (Next.js, Nuxt, SvelteKit), let the framework generate everything server-side and strip the base HTML template of any hard-coded SEO tags. If you are on pure CSR, remove the meta tags from the static HTML and manage everything through your JS library. The worst-case scenario? Keeping default values in the HTML "just in case the JS fails" — this creates exactly the conflict that Google warns against.
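For a Next.js project, "stripping the base template" typically means keeping the custom _document free of any SEO tags. A sketch of what that looks like (the lang attribute is a placeholder):

```jsx
// pages/_document.js — shared HTML shell, deliberately free of SEO metadata
import { Html, Head, Main, NextScript } from "next/document";

export default function Document() {
  return (
    <Html lang="en">
      <Head>
        {/* No <title>, description, or canonical here:
            each page owns its metadata via next/head */}
      </Head>
      <body>
        <Main />
        <NextScript />
      </body>
    </Html>
  );
}
```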
How to check if your site is compliant after correction?
Use the URL inspection tool from Search Console: paste a URL, click on "Test live URL", then "View crawled page" > "HTML". You will see the HTML as Google renders it. Look for your meta tags — you should only find one of each type.
Follow up with a crawl using Screaming Frog in JavaScript mode (Settings > Spider > Rendering > JavaScript). Export the titles and meta descriptions: no line should contain two different values for the same URL. If it does, you still have duplication somewhere.
- Audit the base HTML file (index.html, _document.js, etc.) and remove any SEO meta tags.
- Centralize metadata management in a single library (React Helmet, Next.js Head, etc.).
- Test with the URL inspection tool from Search Console to validate the final rendering.
- Crawl the site with Screaming Frog in JS mode and check for the uniqueness of titles/descriptions.
- Document in the technical stack which layer manages the metadata (server, build, client).
- Implement automated tests (Playwright, Puppeteer) to detect regressions during deployments (a sketch follows this list).
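As a sketch of that last item, a Playwright test that fails whenever the rendered DOM contains more than one title or meta description (the URL is a placeholder):

```js
// meta-uniqueness.spec.js — run with `npx playwright test`
const { test, expect } = require("@playwright/test");

test("rendered DOM has exactly one title and one meta description", async ({ page }) => {
  await page.goto("https://www.example.com/"); // replace with a real URL of your site

  // Counts are taken after JavaScript has executed, i.e. on the final DOM
  expect(await page.locator("head title").count()).toBe(1);
  expect(await page.locator('head meta[name="description"]').count()).toBe(1);
});
```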
What mistakes should be avoided when redesigning a site in JavaScript?
Classic error: leaving generic metadata in the base template "for engines that don't execute JS". In 2025, Google executes JavaScript — and if you create a conflict, you lose control over what gets indexed. Better a temporarily missing title (Google will generate one) than a contradictory one.
Another trap: using multiple head-management libraries at the same time (React Helmet plus a custom Next.js plugin, for example). Each layer can inject its own tags, creating duplicates that are invisible in the source code but present in the final DOM. Standardize on a single approach.
Google's recommendation is clear: one meta tag per type, one unique source of truth. For JavaScript sites, this implies explicitly choosing between server-side generation (SSR/SSG) or client-side (CSR), and ruthlessly cleaning the base HTML template of any SEO metadata.
These technical optimizations may seem simple in theory, but their implementation in a complex production environment — with multiple teams, various frameworks, and deployment constraints — often requires deep expertise. If your technical stack makes these adjustments tricky, or if you want a comprehensive audit of your metadata management, the support of an SEO agency specializing in JavaScript architectures can greatly accelerate the process and avoid costly visibility errors.
❓ Frequently Asked Questions
Does Google take into account the last meta tag found in the HTML, or the first one?
If I use Next.js with SSR, do I still need to remove the meta tags from the _document.js file?
Does React Helmet automatically remove the meta tags already present in index.html?
Should duplicates also be avoided for Open Graph and Twitter Cards tags?
How should metadata be handled for a multilingual site built with a JavaScript framework?