Official statement
Google reminds us that links should use the HTML <a> tag instead of elements like <span> or <div> that are turned into links via JavaScript. This recommendation aims to ensure that Googlebot can properly detect and follow all links on a site. In practice, many modern frameworks generate non-standard links that can block crawling and fragment internal linking, directly impacting indexing.
What you need to understand
What distinguishes a classic HTML link from a fake link?
A classic HTML link uses the <a> tag with an href attribute. This has been the web standard for decades, and it is what Googlebot expects by default. When the crawler encounters this tag, it instantly extracts the destination URL and adds it to its crawl queue.
A fake link is an element like a <span>, a <div>, or a <button> that is made clickable with JavaScript (typically an onclick handler or a router call). Visually it behaves like a link, but there is no <a> tag and no href attribute, so Googlebot has no URL to extract and follow.
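To make the difference concrete, here is a minimal sketch (the URL and class name are placeholders): the first element exposes a URL Googlebot can extract, the second looks identical to users but offers the crawler nothing to follow.

```html
<!-- Crawlable: the destination URL sits in the href attribute -->
<a href="/products/running-shoes">Running shoes</a>

<!-- Fake link: no <a>, no href — the navigation exists only in JavaScript,
     so a crawler that ignores or misinterprets the script sees no link at all -->
<span class="nav-item" onclick="window.location.href='/products/running-shoes'">
  Running shoes
</span>
```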
What has led to the rise of these practices?
The explosion of Single Page Applications (SPA) and JavaScript frameworks like React, Vue, or Angular has popularized patterns where everything, including navigation, relies on client-side code. Developers create clickable components without always adhering to standard HTML semantics, and some modern design patterns also use <div> or <button> elements with click handlers in place of real links.
What is the concrete risk to indexing?
If Googlebot does not detect your internal links, it cannot discover deep pages on your site. You may end up with entire orphaned sections, accessible only via the XML sitemap but never crawled naturally. Internal linking loses its effectiveness entirely: internal PageRank does not circulate correctly, and strategic pages remain unindexed or poorly positioned simply because the crawler never found them through valid link paths.
SEO Expert opinion
Is this recommendation really new?
No. Google has been repeating this advice since at least 2015, and it has always been a fundamental rule of technical SEO. Martin Splitt emphasizes it again because the problem persists and worsens with the widespread use of modern JavaScript frameworks. What has changed is that Googlebot handles JavaScript better than before, which leads some developers to conclude, incorrectly, that they can take liberties with HTML standards. The result is that we still see major e-commerce sites with entire navigation menus built from clickable <div>s.
Do all <a> tags hold the same value for Google?
Not quite. An <a> tag without an href attribute is not considered a link by Googlebot. Some frameworks generate <a> tags that use onclick without href, which raises the same issue as a clickable <div>. Similarly, an href="#" or href="javascript:void(0)" leads nowhere: Google may attempt to execute the associated JavaScript, but there is no guarantee it will understand the navigation. [To be verified]: Google has never published precise data on its success rate for interpreting these complex patterns.
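As an illustration of the variants mentioned above (the URL is a placeholder and openArticle() is a hypothetical client-side function):

```html
<!-- Counted as a real link: a crawlable destination in href -->
<a href="/blog/javascript-seo">JavaScript SEO guide</a>

<!-- Not treated as links by Googlebot -->
<a onclick="openArticle('javascript-seo')">No href at all</a>
<a href="#">href that leads nowhere</a>
<a href="javascript:void(0)" onclick="openArticle('javascript-seo')">JavaScript pseudo-URL</a>
```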
When can we tolerate exceptions?
For pure UI interactions that do not correspond to navigation (modal openings, accordions, tabs), using a <button> with a JavaScript handler is acceptable: there is no destination URL to discover, so nothing is lost for crawling.
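A short sketch of that division of labor (the handler name toggleSizeGuide() is illustrative):

```html
<!-- Pure UI interaction: no URL involved, a <button> is the right element -->
<button type="button" onclick="toggleSizeGuide()">Size guide</button>

<!-- Navigation to another page: keep a real link -->
<a href="/checkout">Proceed to checkout</a>
```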
Practical impact and recommendations
How can I check if my site uses fake links?
First step: disable JavaScript in your browser and navigate your site. If links stop working or sections become inaccessible, you have a problem; this is exactly what a crawler sees when it does not execute JavaScript. Then use tools like Screaming Frog in "no JavaScript" mode or Oncrawl to compare the link graph with and without JS rendering. If you notice significant discrepancies in the number of detected links, your internal linking relies too much on JavaScript.
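For a quick spot check on a single page (a complement to a full crawl, not a replacement for it), you can compare the number of real links in the rendered DOM with the number in the raw HTML sent by the server. This is a rough sketch to paste into the browser console on the page being audited:

```js
// Links present after JavaScript has run (the rendered DOM)
console.log('rendered DOM:', document.querySelectorAll('a[href]').length);

// Links present in the raw server HTML (what a crawler sees without executing JS)
fetch(location.href)
  .then(response => response.text())
  .then(html => {
    const raw = new DOMParser().parseFromString(html, 'text/html');
    console.log('raw HTML:', raw.querySelectorAll('a[href]').length);
  });
// A large gap between the two counts points to links that only exist through JavaScript.
```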
What should be prioritized for correction?
Focus first on the main navigation: menu, breadcrumb, pagination. These elements must absolutely use <a> tags with valid hrefs, because it is through them that Googlebot discovers the architecture of your site. Next, examine internal links in editorial content, product cards, and CTAs: any element that should pass PageRank or allow page discovery must be a classic HTML link. Secondary action buttons (social sharing, filters) can remain in JavaScript if necessary.
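For instance, breadcrumb and pagination blocks stay crawlable as long as every step is a plain <a href> (a minimal sketch with placeholder URLs):

```html
<!-- Breadcrumb built on real links -->
<nav class="breadcrumb">
  <a href="/">Home</a> › <a href="/running">Running</a> › <span>Shoes</span>
</nav>

<!-- Pagination built on real links -->
<nav class="pagination">
  <a href="/running/shoes?page=1">1</a>
  <a href="/running/shoes?page=2">2</a>
  <a href="/running/shoes?page=3">3</a>
</nav>
```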
How to implement this with a modern framework?
Most frameworks offer declarative routing components that generate valid <a> tags. In React, use the <Link> component from React Router or Next.js instead of handling navigation with an onClick and a programmatic router call. Then check the final HTML generated on the server side (SSR) or during hydration: the principle is that even without JavaScript, links should be present and functional in the source code. If your framework generates content only on the client side, switch to SSR or SSG (Static Site Generation) to ensure indexability.
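As a hedged sketch of that pattern (assuming a recent version of Next.js; the route and component names are illustrative), the first component emits a real <a href> in the HTML, while the second navigates only through JavaScript and leaves nothing for the crawler to follow:

```jsx
import Link from 'next/link';
import { useRouter } from 'next/router';

// Good: renders <a href="/products/running-shoes"> in the HTML output
export function ProductLink() {
  return <Link href="/products/running-shoes">Running shoes</Link>;
}

// Problematic: no <a> and no href are emitted, navigation lives only in JavaScript
export function ProductFakeLink() {
  const router = useRouter();
  return (
    <div className="nav-item" onClick={() => router.push('/products/running-shoes')}>
      Running shoes
    </div>
  );
}
```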
❓ Frequently Asked Questions
Does Googlebot follow links created in JavaScript?
Can a link built as a <button> with an onclick pass PageRank?
Do frameworks like React or Vue cause problems for links?
How can I check whether my links are correctly detected by Google?
Should this rule also be applied to external links?