
Official statement

To ensure indexing by Google, it is advised to use 'a' tags for links instead of other elements like 'span' or 'div' which can cause crawling issues.
🎥 Source video

Extracted from a Google Search Central video (statement at 41:42)

⏱ 1h06 💬 EN 📅 31/10/2018 ✂ 10 statements
Watch on YouTube (41:42) →
Other statements from this video (9)
  1. 2:37 Does client-side rendering really pose a problem for SEO?
  2. 3:53 Does client-side rendering really ruin your mobile experience without impacting SEO?
  3. 6:24 Is dynamic rendering really the solution for large sites with frequently changing content?
  4. 9:09 Why do scroll events break your lazy loading?
  5. 15:00 Should you really ban critical JavaScript from the head for SEO?
  6. 27:45 Does Google really ignore third-party JavaScript's impact on loading speed?
  7. 45:51 Does merging your similar pages really boost your Google ranking?
  8. 50:24 Should you really archive old product versions rather than delete them?
  9. 61:51 Should you really delete content to improve your SEO?
TL;DR

Google reminds us that links should use the HTML <a> tag instead of elements like <span> or <div> that are turned into links via JavaScript. This recommendation aims to ensure that Googlebot can properly detect and follow all links on a site. In practice, many modern frameworks generate non-standard links that can block crawling and fragment internal linking, directly impacting indexing.

What you need to understand

What distinguishes a classic HTML link from a fake link?

A classic HTML link uses the <a> tag with an href attribute. This has been the web standard for decades, and it is what Googlebot expects by default. When the crawler encounters this tag, it instantly extracts the destination URL and adds it to its crawl queue.

A fake link is an element like <span> or <div> made clickable via JavaScript with an onclick event listener. Visually, users see no difference. Technically, Googlebot must first execute the JavaScript, interpret the event, and attempt to reconstruct the target URL. This sometimes works, but it is fragile and consumes crawl resources.
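To make the contrast concrete, here is a minimal illustrative sketch (the URL and class name are invented):

```html
<!-- Classic HTML link: Googlebot extracts /pricing immediately -->
<a href="/pricing">See our pricing</a>

<!-- Fake link: no URL in the markup; Googlebot must execute the JS,
     interpret the handler, and try to reconstruct the destination -->
<span class="link" onclick="window.location.href='/pricing'">See our pricing</span>
```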

What has led to the rise of these practices?

The explosion of Single Page Applications (SPA) and JavaScript frameworks like React, Vue, or Angular has popularized patterns where everything, including navigation, relies on client-side code. Developers create clickable components without always adhering to standard HTML semantics.

Some modern design patterns also use <div> wrappers to create clickable cards or elaborate navigation buttons. The problem is that without an <a> tag, these elements are invisible to a crawler that does not execute JavaScript or that sticks strictly to its allocated crawl budget.
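A common fix for the clickable-card pattern is to let a real anchor carry the navigation. A hedged sketch, with illustrative structure and class names:

```html
<!-- Problematic: the whole card is a div with a JS click handler -->
<div class="card" onclick="goTo('/product/42')">
  <h3>Product 42</h3>
  <p>Short description</p>
</div>

<!-- Crawlable alternative: a real anchor wraps the card content -->
<a href="/product/42" class="card">
  <h3>Product 42</h3>
  <p>Short description</p>
</a>
```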

What is the concrete risk to indexing?

If Googlebot does not detect your internal links, it cannot discover deep pages on your site. You may end up with entire orphaned sections, accessible only via XML sitemap but never crawled naturally.

Internal linking loses its effectiveness: internal PageRank no longer flows correctly, and strategic pages remain unindexed or poorly ranked simply because the crawler never reached them through valid link paths.

SEO Expert opinion

Is this recommendation really new?

No. Google has been repeating this advice since at least 2015, and it has always been a fundamental rule of technical SEO. Martin Splitt emphasizes it again because the problem persists and worsens with the widespread use of modern JavaScript frameworks.

What has changed is that Googlebot now handles JavaScript better than before. Some developers incorrectly conclude that they can take liberties with HTML standards. The result is that we still see major e-commerce sites with entire navigation menus built from clickable <div> elements.

Do all <a> tags hold the same value for Google?

Not quite. An <a> tag without an href attribute is not considered a link by Googlebot. Some frameworks generate <a> tags that use onclick without href, which raises the same issue as a clickable <div>.

Similarly, an href="#" or href="javascript:void(0)" leads nowhere. Google may attempt to execute the associated JavaScript, but there is no guarantee it will understand the navigation. [To be verified]: Google has never published precise data on its success rate for interpreting these complex patterns.
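These degraded patterns look like this in practice (illustrative markup and URL):

```html
<!-- No href: not a link for Googlebot -->
<a onclick="openPage('/offers')">Our offers</a>

<!-- href leads nowhere; the real destination is hidden in the JS handler -->
<a href="javascript:void(0)" onclick="openPage('/offers')">Our offers</a>

<!-- What the crawler actually needs -->
<a href="/offers">Our offers</a>
```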

When can we tolerate exceptions?

For pure UI interactions that do not correspond to navigation (modal openings, accordions, tabs), using a <button> or another non-link element is acceptable: there is no destination URL to crawl, so nothing is lost for SEO.
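In other words, reserve <a href> for navigation and <button> for on-page behavior. A quick illustrative sketch (handler names are invented):

```html
<!-- UI interaction, no destination URL: a button is the right element -->
<button type="button" aria-expanded="false" onclick="toggleAccordion(this)">
  Show details
</button>

<!-- Navigation to another page: always a real link -->
<a href="/details">Read the full details</a>
```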

Practical impact and recommendations

How can I check if my site uses fake links?

First step: disable JavaScript in your browser and navigate your site. If links stop working or sections become inaccessible, you have a problem. This is exactly what a crawler sees when it does not execute JavaScript.

Use tools like Screaming Frog in "no JavaScript" mode or Oncrawl to compare the link graph with and without JS rendering. If you notice significant discrepancies in the number of detected links, your internal linking relies too much on JavaScript.
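The comparison those tools perform can be approximated in a few lines. A minimal sketch (not a real crawler API; the sample markup and function name are illustrative): extract <a href> targets from the raw, non-rendered HTML, which is roughly what a crawler sees before executing any JavaScript.

```javascript
// Extract crawlable link targets from static HTML.
// Skips missing hrefs, href="#", and javascript: pseudo-URLs.
function extractStaticLinks(html) {
  const links = [];
  const re = /<a\b[^>]*\bhref\s*=\s*["']([^"'#][^"']*)["']/gi;
  let m;
  while ((m = re.exec(html)) !== null) {
    if (!m[1].startsWith('javascript:')) links.push(m[1]);
  }
  return links;
}

const sample = `
  <a href="/products">Products</a>
  <span onclick="go('/hidden')">Hidden section</span>
  <a href="javascript:void(0)" onclick="go('/also-hidden')">Also hidden</a>
`;

// Only /products is discoverable without executing JavaScript
console.log(extractStaticLinks(sample));
```

If the same page crawled with JS rendering yields many more links than this static pass, your internal linking depends on JavaScript.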

What should be prioritized for correction?

Focus first on the main navigation: menu, breadcrumb, pagination. These elements must absolutely use <a> tags with valid href attributes. It is through them that Googlebot discovers the architecture of your site.

Next, examine internal links in editorial content, product cards, and CTAs. Any element that should pass PageRank or allow page discovery must be a classic HTML link. Secondary action buttons (social sharing, filters) can remain in JavaScript if necessary.

How to implement this with a modern framework?

Most frameworks offer declarative routing components that generate valid <a> tags. In React, use the Link component from React Router or Next.js instead of a <div> with onClick. In Vue, <router-link> does the job. In Angular, routerLink generates the correct tags.

Check the final HTML generated on the server side (SSR) or during hydration. The principle is that even without JavaScript, links should be present and functional in the source code. If your framework generates content only on the client side, switch to SSR or SSG (Static Site Generation) to ensure indexability.
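As a rough illustration of what to look for in the server-rendered source (the route is invented), a properly configured routing component should leave a plain anchor behind:

```html
<!-- What a Next.js <Link href="/about"> or Vue <router-link to="/about">
     should render to in the SSR/SSG output -->
<a href="/about">About us</a>

<!-- What a custom onClick-only component typically leaves: nothing to crawl -->
<div class="nav-item">About us</div>
```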

❓ Frequently Asked Questions

Does Googlebot follow links created in JavaScript?
Yes, Googlebot executes JavaScript and can follow some dynamically generated links. But it is slower, less reliable, and consumes crawl budget. Classic <a> tags remain the recommended method to guarantee optimal indexing.
Can a link in a <button> with an onclick pass PageRank?
No. Only <a> tags with an href attribute pass PageRank. A clickable <button> or <div> has no SEO value for internal linking, even if Google manages to follow it.
Do frameworks like React or Vue pose a problem for links?
Not if they are configured properly. Modern routing components (React Router, Next.js Link, Vue Router) generate valid <a> tags. The problem arises when developers create custom components that bypass these standards.
How can I check whether my links are correctly detected by Google?
Use an SEO crawler in no-JavaScript mode to compare the link graph. Also check the internal links report in Google Search Console to see which links Google actually follows on your site.
Should this rule also apply to external links?
Yes. If you want Google to follow an external link, use an <a> tag. For links you do not want followed (ads, untrusted links), add rel="nofollow" or rel="sponsored" rather than hiding the link in JavaScript.
