Official statement
Martin Splitt reminds us that technical basics — title tags, meta descriptions, canonical URLs, real links with href attributes, heading structure, and sitemaps — remain non-negotiable. Their absence or poor implementation directly harms discoverability by Google. There's no magic without solid foundations.
What you need to understand
Why is Google still emphasizing these elements in 2025?
You might think that with generative AI and increasingly sophisticated algorithms, Google wouldn't need these basic structural markers anymore. Yet Splitt makes it clear: these technical elements remain the foundation on which a site's discoverability rests.
Why? Because crawling and indexation are processes that rely on explicit signals. Title tags and meta descriptions guide the algorithm's understanding of pages. Canonical URLs prevent duplication. Links with href attributes allow the bot to navigate. Without these, Google stumbles — or moves on.
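The explicit signals listed above all live in the HTML head and markup, so they can be checked programmatically. A minimal sketch, using only Python's standard-library `html.parser` (the class and function names here are illustrative, not from any particular SEO tool):

```python
from html.parser import HTMLParser

class SignalParser(HTMLParser):
    """Collects the basic discoverability signals: <title>,
    meta description, and the declared canonical URL."""
    def __init__(self):
        super().__init__()
        self.title = None
        self.meta_description = None
        self.canonical = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.meta_description = attrs.get("content")
        elif tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

def extract_signals(html):
    """Returns the three signals found in an HTML document, or None for each missing one."""
    parser = SignalParser()
    parser.feed(html)
    return {
        "title": parser.title,
        "meta_description": parser.meta_description,
        "canonical": parser.canonical,
    }
```

Any `None` in the result is exactly the kind of missing explicit signal that makes Google stumble or move on.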
What exactly does Splitt mean by "discoverability"?
He's not talking about ranking, but about Google's ability to find, understand, and index your pages. A page invisible in the index can't rank, no matter how good its content is. It's the difference between "being seen" and "being well-ranked".
Sitemaps facilitate exploration. Headings structure content for the algorithm. Internal links with href create a navigable graph. These elements don't guarantee good positioning, but their absence takes you out of the game.
Is this statement a revelation or a reminder?
Let's be honest: nothing new here. Splitt is simply reiterating what every SEO has known for fifteen years. But the fact that he's reasserting it says a lot about the persistence of real-world mistakes.
Many sites, especially those built with JavaScript or misconfigured CMS platforms, still neglect these basics. Google's reminder isn't trivial — it likely targets a reality they observe at scale across their crawls.
- Title tags and meta descriptions remain essential signals for understanding and display
- Canonical URLs prevent duplication and clarify the preferred version
- Links with href attributes allow the bot to navigate efficiently
- Heading structure helps Google understand content hierarchy
- Sitemaps accelerate discovery, especially for large sites
- The absence or poor implementation of these elements directly harms indexation
SEO Expert opinion
Is this statement consistent with real-world practices?
Yes, absolutely. SEO audits regularly reveal sites with duplicate titles, misconfigured canonicals, or worse: JavaScript links without href attributes. These sites struggle to be crawled properly, even when their content is solid.
What's surprising is that Google still needs to hammer this home. But in reality, many developers — especially those working on front-end frameworks — ignore these constraints. They think SEO is about content and backlinks. Wrong. Without technical structure, you're invisible.
What nuances should we add to this statement?
Splitt is talking about "discoverability," not ranking. This is crucial. Having perfect title tags won't push you up the SERPs if your content is mediocre or your authority is lacking. These elements are necessary, not sufficient.
Another nuance: the meta description. Google often rewrites it, as we know. But its absence or inconsistency signals editorial negligence. It's less a ranking factor than a marker of overall site quality. [Requires verification]: the direct impact of meta descriptions on CTR remains difficult to isolate from other SERP factors.
Are there cases where these rules can be relaxed?
Rarely, to be frank. Even sites with generous crawl budgets must respect these fundamentals. The only edge case involves ultra-authoritative sites (like Amazon or Wikipedia), where Google compensates for technical gaps through sheer crawling power and authority.
But for 99% of sites, trying to work around these basics is shooting yourself in the foot. Sitemaps, for example, are only "optional" for very small sites with perfect internal linking — which practically never happens.
Practical impact and recommendations
What should you check first on your site?
Start with a complete crawl using Screaming Frog or similar tool. Identify pages without titles, with duplicate titles, or missing meta descriptions. These errors are common and easy to fix — but only if you detect them first.
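Once a crawl has produced a URL-to-title mapping, missing and duplicate titles fall out of a few lines of Python. A hedged sketch (the `find_title_issues` helper is hypothetical, assuming you already exported `url -> title` pairs from your crawler):

```python
from collections import defaultdict

def find_title_issues(pages):
    """pages: dict mapping URL -> <title> text (None if absent).
    Returns URLs missing a title, and titles shared by several URLs."""
    missing = [url for url, title in pages.items()
               if not title or not title.strip()]
    by_title = defaultdict(list)
    for url, title in pages.items():
        if title and title.strip():
            by_title[title.strip()].append(url)
    duplicates = {t: urls for t, urls in by_title.items() if len(urls) > 1}
    return missing, duplicates
```

Tools like Screaming Frog surface the same report, but a script like this is handy for wiring the check into CI.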
Next, verify your canonical URLs. Many sites have canonicals pointing to 404 pages, or canonicalization loops. Google Search Console flags these issues in the "Pages" report (formerly "Coverage").
Finally, test your internal links. If you're using JavaScript to generate links, make sure they have a real href attribute. The test: disable JavaScript in your browser and click your links. If they don't work anymore, Googlebot probably won't follow them either.
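The same href check can be automated. A minimal sketch with the standard-library parser (`LinkAudit` is an illustrative name, not an existing tool): it flags anchors with no `href`, or with a `javascript:` or fragment-only pseudo-href, which Googlebot generally won't follow.

```python
from html.parser import HTMLParser

class LinkAudit(HTMLParser):
    """Separates crawlable <a> links from ones Googlebot
    likely cannot follow (no href, javascript:, or bare #)."""
    def __init__(self):
        super().__init__()
        self.crawlable = []
        self.uncrawlable = 0

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if href and not href.startswith(("javascript:", "#")):
            self.crawlable.append(href)
        else:
            self.uncrawlable += 1
```

Feed it the *rendered* HTML (after your framework has run) to see what the bot actually gets.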
What errors should you absolutely avoid?
Never leave important pages without a title tag. It's the most basic signal Google uses to understand what a page is about. A missing or generic title ("Page with no title," "Home") is SEO suicide.
Also avoid cascading canonicals: page A → canonical to B → canonical to C. Google can follow it, but it's inefficient and error-prone. Always point directly to the final canonical version.
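Detecting those cascades is straightforward once you've collected each page's declared canonical. A sketch under that assumption (the dict of `URL -> canonical` is presumed to come from a prior crawl; the function name is hypothetical):

```python
def canonical_chain(start, canonicals, max_hops=10):
    """Follows declared canonicals from `start`.
    Returns (chain, loop): the URLs visited in order, and
    whether the chain loops back on itself."""
    chain = [start]
    seen = {start}
    url = start
    for _ in range(max_hops):
        target = canonicals.get(url)
        if target is None or target == url:
            return chain, False  # chain resolved cleanly
        if target in seen:
            chain.append(target)
            return chain, True   # canonicalization loop
        chain.append(target)
        seen.add(target)
        url = target
    return chain, False
```

A chain longer than two entries is exactly the A → B → C cascade to flatten; a `loop` result is the error case Google Search Console would also report.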
And above all, don't neglect sitemaps on large sites. Many think good internal linking is enough. Wrong. Sitemaps accelerate discovery and allow Google to prioritize crawling important pages.
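A sitemap sanity check can also be scripted: pull out every `<loc>` entry and verify each URL against your crawl data. A minimal sketch using the standard sitemap namespace (the helper name is illustrative):

```python
import xml.etree.ElementTree as ET

# Standard namespace defined by the sitemaps.org protocol
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Extracts all <loc> URLs from an XML sitemap document."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall(".//sm:loc", NS)]
```

From there, diffing this list against the URLs your crawler actually found reveals both orphan pages and stale sitemap entries.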
How do you ensure everything is compliant?
- Complete site crawl with Screaming Frog or Sitebulb to identify missing or duplicate titles/metas
- Audit canonical URLs via Google Search Console's "Pages" report (formerly "Coverage")
- Test internal links: disable JavaScript and verify navigation
- Validate heading structure (unique H1 per page, logical H2-H3 hierarchy)
- Verify XML sitemaps: presence, validity, submission in GSC
- Regular monitoring of 404 errors and redirects to canonicals
- Check server-side rendering for JavaScript frameworks (SSR or pre-rendering)
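The heading-structure item in the checklist above can be automated too. A sketch assuming you have already extracted the page's heading tags in document order (the function name is hypothetical):

```python
def heading_issues(headings):
    """headings: ordered tag names, e.g. ["h1", "h2", "h3", "h2"].
    Flags a missing H1, multiple H1s, and skipped levels."""
    issues = []
    levels = [int(h[1]) for h in headings]
    if levels.count(1) == 0:
        issues.append("no H1")
    elif levels.count(1) > 1:
        issues.append("multiple H1s")
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:  # e.g. h1 followed directly by h3
            issues.append(f"jump from h{prev} to h{cur}")
    return issues
```

An empty result means the page follows the unique-H1, no-skipped-levels convention recommended in the checklist.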
❓ Frequently Asked Questions
Are meta descriptions still useful if Google often rewrites them?
Can a pure-JavaScript site rank without SSR or pre-rendering?
Is an XML sitemap absolutely necessary, even for a small site?
Can cross-domain canonicals be used safely?
Do H2-H6 headings still carry significant SEO weight?
Other SEO insights were extracted from the same Google Search Central video, published on 09/02/2022.