
Official statement

Google can follow JavaScript links injected as anchor tags with URLs. However, if a link is designed as a button without an href attribute, it cannot be followed.
At 22:42 in the video
🎥 Source video

Extracted from a Google Search Central video

⏱ 55:57 💬 EN 📅 03/04/2020 ✂ 23 statements
Watch on YouTube (22:42) →
Other statements from this video (22)
  1. 1:36 Does the disavow file really work link by link as Google crawls?
  2. 4:39 Do duplicated mobile/desktop menus really hurt your SEO?
  3. 8:21 Should you really nofollow links between your branch-location pages?
  4. 8:41 Should you really place your flagship products in the main navigation?
  5. 9:07 Does incorrect structured data markup really hurt your rankings?
  6. 10:20 Should you really place your strategic pages in the main navigation to rank better?
  7. 11:26 Does Google really ignore badly marked-up structured data without penalizing the page?
  8. 13:01 Is content hidden behind tabs really indexed by Google?
  9. 13:42 Is content behind tabs really indexed under mobile-first?
  10. 14:36 Does Google manually filter medical sites to guarantee the quality of results?
  11. 16:40 Should you abandon Data Highlighter in favor of JSON-LD?
  12. 20:09 Are nofollow links really ignored by Google for SEO?
  13. 20:19 Does Google really follow nofollow links to discover new sites?
  14. 23:12 Why does Google ignore your badly formatted JavaScript links?
  15. 27:47 Should you really centralize your content to rank on Google?
  16. 29:55 Is quality content really enough to attract natural links?
  17. 30:03 Is domain authority really useless for ranking in Google?
  18. 30:16 Why does Google treat links on image sites, classified-ad sites, and free platforms as spam?
  19. 38:17 How does Google really declare its user-agent when crawling?
  20. 43:06 Does Google really recognize all video embed formats for SEO?
  21. 44:12 Do blocked third-party cookies really impact your mobile traffic in Analytics?
  22. 51:11 Should you abandon the desktop version and optimize only the mobile version?
📅 Official statement from 03/04/2020 (6 years ago)
TL;DR

Google can follow JavaScript links injected as classic anchor tags with href attributes, but not clickable buttons lacking this attribute. A link designed as a button without href remains invisible to Googlebot, even if JavaScript handles the navigation. In practice, a poor technical implementation can deprive entire sections of a site of crawling and indexing.

What you need to understand

Why does Google make this technical distinction between anchors and buttons?

The difference lies in how Googlebot analyzes the DOM after JavaScript execution. When an <a> tag contains an href attribute, the bot immediately identifies a target URL, even if JavaScript modifies or enhances the link's behavior. The crawler can extract this URL and add it to its crawl queue.

On the other hand, a button without href — typically a <button> or a <div> with an onclick event — carries no structural information about the destination. Google must then execute the JavaScript, intercept the event, and guess where the link points. This operation is resource-intensive and not guaranteed: Googlebot does not systematically simulate clicks on all interactive elements on a page.

What’s the concrete difference between a valid JavaScript link and a non-crawlable button?

A valid JavaScript link looks like this: <a href="/destination" onclick="myFunction()">. The href attribute provides a fallback URL that Googlebot can follow, even if JavaScript fails or is disabled. The onclick adds client-side behavior (tracking, animation) without compromising discoverability.

In contrast, a non-crawlable button takes this form: <button onclick="navigate('/destination')"> or <div class="link" data-url="/destination">. Here, the URL is encapsulated within JavaScript code or a non-standard attribute. Googlebot cannot extract it without executing the script and simulating the interaction, which it generally does not do.
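To make the contrast concrete, here is a minimal side-by-side sketch (the URL and function name are illustrative):

    <!-- Crawlable: the target URL sits in the href, visible in the DOM -->
    <a href="/destination" onclick="myFunction()">Product page</a>

    <!-- Not crawlable: no href, the URL only exists inside JavaScript -->
    <button onclick="window.location.href = '/destination'">Product page</button>

    <!-- Not crawlable: the URL hides in a data attribute Googlebot never reads -->
    <div class="link" data-url="/destination">Product page</div>

Only the first element hands Googlebot an extractable URL; the other two work perfectly for users and remain invisible to the crawler.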

Which frameworks and architectures are particularly exposed to this risk?

Single Page Applications (SPAs) built with React, Vue, or Angular are the most vulnerable. These frameworks often use custom navigation components that generate buttons or clickable divs instead of standard HTML anchors. Developers favor this approach to control transitions, state management, and UX, but overlook the critical SEO implications.

E-commerce sites with dynamic filters and pagination, dashboard-type interfaces, and mobile burger-style menus frequently resort to pseudo-links. If these elements do not rely on <a href> tags, entire sections of the product catalog or site structure can vanish from the index.
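As an illustration with React, assuming a project using react-router-dom (component names are hypothetical): the framework's <Link> component renders a real <a href> in the final DOM, while a hand-rolled onClick handler does not.

    import { Link, useNavigate } from 'react-router-dom';

    // Crawlable: <Link> renders <a href="/products/42"> in the final DOM.
    function GoodProductLink() {
      return <Link to="/products/42">See the product</Link>;
    }

    // Not crawlable: navigation works for users, but no href ever appears
    // in the DOM, so Googlebot has no URL to extract.
    function BadProductLink() {
      const navigate = useNavigate();
      return (
        <div className="link" onClick={() => navigate('/products/42')}>
          See the product
        </div>
      );
    }

The same trap exists in Vue (router-link vs. bare @click handlers) and Angular (routerLink on an anchor vs. a (click) binding on a div).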

  • Always use <a href> for any internal link, even if JavaScript enriches the behavior.
  • Buttons <button> should be reserved for actions (form submission, modal opening), never for navigation.
  • Test discoverability with Search Console's URL inspection tool: if the link does not appear in the rendered DOM, it is invisible.
  • Audit front-end frameworks to ensure navigation components generate semantic HTML.
  • Prefer server-side rendering (SSR) or static site generation (SSG) to expose links right from the initial HTML.

SEO Expert opinion

Is this statement consistent with practices observed on the ground?

Yes, and it’s a welcome confirmation of a phenomenon documented for years. Technical audits regularly reveal sites where thousands of orphan pages are never crawled for lack of valid <a href> links. Server logs show that Googlebot does not even attempt to access these URLs, even though they are technically reachable via JavaScript.

The important nuance: Google can follow certain dynamically injected JavaScript links, as long as they adhere to the <a href> structure. But this capability remains fragile and costly in crawl budget. On a site with 100,000 pages and a limited budget, relying on JavaScript execution to discover links is a risky bet.

What nuances should be added to this rule?

Martin Splitt talks about links "injected as anchor tags"—this phrasing implies that Google can handle links generated after the initial load, provided they conform to standard HTML syntax. In practice, a link created by React or Vue that becomes <a href="/page"> in the final DOM will be crawlable. [To verify]: Does the speed of injection matter? Does Google expect a maximum delay before considering the DOM stable?

Another point: buttons with role="link" and tabindex (for accessibility) do not compensate for the lack of an href. Google does not rely on ARIA attributes to discover links — it looks for an explicit and extractable URL. Web accessibility and SEO share fundamentals (semantic HTML), but their criteria do not completely overlap.
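A hypothetical sketch of that accessibility point: the first element below is announced as a link by screen readers, yet exposes no URL Googlebot can extract; only the second is both accessible and crawlable.

    <!-- Accessible to assistive tech, invisible to Googlebot: no href -->
    <span role="link" tabindex="0" onclick="location.assign('/guide')">Read the guide</span>

    <!-- Accessible AND crawlable: a real anchor with an explicit URL -->
    <a href="/guide">Read the guide</a>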

In what cases does this rule pose specific problems?

Sites using hash URLs (#) for navigation encounter an additional challenge. Historically, Google ignored content after the #, but SPAs popularized hash routers. Since then, Google can handle certain hash architectures, but the reliability remains uneven. If a site combines buttons without href and hash routing, it accumulates two crawl obstacles.

Progressive Web Apps (PWAs) with service workers and client-side navigation are also exposed. The service worker can intercept requests and serve cached content, but if the links are not in the initial HTML or the DOM post-JavaScript, Googlebot will never discover them. An SEO-friendly PWA must ensure its internal links are exposed via <a href>, ideally within the HTML shell.

Warning: some JS frameworks dynamically change the href attribute on hover or click (for lazy loading URLs). If the initial href is empty or points to #, Googlebot may not detect the link until JavaScript executes. This UX optimization technique can sabotage crawling.
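A hypothetical sketch of that anti-pattern: the anchor ships with href="#" and the real URL is only swapped in on hover, so the raw HTML and the early-rendered DOM never expose it.

    <!-- Anti-pattern: the real URL only appears after a user hovers -->
    <a href="#" class="lazy-link" data-target="/category/shoes">Shoes</a>

    <script>
      document.querySelectorAll('.lazy-link').forEach(function (link) {
        link.addEventListener('mouseenter', function () {
          // Googlebot does not hover; it may only ever see href="#"
          link.setAttribute('href', link.dataset.target);
        });
      });
    </script>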

Practical impact and recommendations

What practical steps should be taken to secure link discoverability?

The first reflex: audit the raw source HTML of your strategic pages. Use "View page source" in the browser (Ctrl+U) and search for your internal links. If they do not appear as <a href="URL">, it’s an immediate red flag. Complete this with the URL inspection tool in Search Console: compare the raw HTML and the rendered DOM to identify links injected solely by JavaScript.
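To automate that first check, a quick sketch in Node.js (18+, saved as an .mjs file); the regex extraction is rough but enough to raise a red flag, and the URL is an assumption:

    // check-raw-links.mjs — list <a href> values in the raw HTML,
    // i.e. before any JavaScript runs. Run: node check-raw-links.mjs <url>
    const url = process.argv[2] || 'https://example.com/';

    const response = await fetch(url, { headers: { 'User-Agent': 'raw-link-audit/1.0' } });
    const html = await response.text();

    // Rough extraction: skips fragment-only hrefs; not a full HTML parser.
    const hrefs = [...html.matchAll(/<a\s[^>]*href=["']([^"'#][^"']*)["']/gi)]
      .map((match) => match[1]);

    console.log(`${hrefs.length} raw <a href> links found on ${url}:`);
    for (const href of new Set(hrefs)) console.log('  ' + href);

If a strategic internal link never shows up in this output, it only exists post-JavaScript (or not at all).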

Next, scrutinize your navigation components: main menu, pagination, category filters, product links. If your framework generates buttons or divs with onClick, refactor them into <a> tags. JavaScript can still intercept the click via event.preventDefault() to manage client-side navigation, but the href must be present from the load.
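What that refactor can look like, as a minimal sketch (clientSideNavigate stands in for whatever your router exposes; the name is hypothetical):

    <a href="/products/42" class="js-nav">See the product</a>

    <script>
      document.querySelectorAll('a.js-nav').forEach(function (link) {
        link.addEventListener('click', function (event) {
          event.preventDefault();                         // cancel the full page load
          clientSideNavigate(link.getAttribute('href'));  // hypothetical router call
        });
      });
    </script>

Users get the smooth client-side transition; Googlebot gets an extractable URL from the very first HTML parse.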

What mistakes should be avoided when migrating to a JavaScript architecture?

The classic error: delegating all navigation to the JavaScript router (React Router, Vue Router) without implementing server-side rendering (SSR). The result is an empty HTML shell that loads a JS bundle of several hundred KB. Googlebot must download and execute this bundle to discover links — a slow, fragile process that unnecessarily consumes crawl budget.

Another trap: using custom data attributes to store URLs (data-href, data-url) thinking that Google will extract them. No: Googlebot looks for href, end of story. Data attributes are invisible to the crawler, even if your JavaScript reads them perfectly. Respect the HTML standard: it is the only guarantee of compatibility.

How can I verify that my site is compliant after refactoring?

Run a crawl with Screaming Frog or Sitebulb in "JavaScript rendering" mode and compare it with a pure HTML crawl. The discovered links should be identical in both cases. If the JS crawl finds 30% more links, your site relies too much on JavaScript execution — a major risk factor.
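One way to run that comparison, as a sketch: export each crawl's discovered URLs to a plain-text file, one URL per line (file names are illustrative), and diff the two sets:

    // crawl-diff.mjs — URLs found only by the JavaScript-rendering crawl.
    import { readFileSync } from 'node:fs';

    const readUrls = (file) =>
      new Set(readFileSync(file, 'utf8').split('\n').filter(Boolean));

    const htmlCrawl = readUrls('crawl-html.txt');
    const jsCrawl = readUrls('crawl-js.txt');

    const jsOnly = [...jsCrawl].filter((url) => !htmlCrawl.has(url));

    console.log(`${jsOnly.length} URLs discoverable only with JavaScript rendering:`);
    for (const url of jsOnly) console.log('  ' + url);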

Also, check the server logs: Is Googlebot regularly accessing deep pages (level 4-5 deep)? If not, it’s often a symptom of undiscoverable links. Cross-reference with coverage data in Search Console to identify pages detected but not crawled — often victims of poorly implemented JavaScript links.
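A sketch of that log check, assuming a standard combined log format (a serious audit would also verify hits against Google's published IP ranges, which this skips):

    // googlebot-hits.mjs — count Googlebot requests per URL in an access log.
    import { readFileSync } from 'node:fs';

    const lines = readFileSync('access.log', 'utf8').split('\n');
    const hits = new Map();

    for (const line of lines) {
      if (!line.includes('Googlebot')) continue;          // UA match only, not IP-verified
      const match = line.match(/"(?:GET|HEAD) (\S+) HTTP/);
      if (match) hits.set(match[1], (hits.get(match[1]) ?? 0) + 1);
    }

    // URLs sorted by crawl frequency; any strategic deep page missing
    // from this list is a candidate orphan.
    for (const [url, count] of [...hits.entries()].sort((a, b) => b[1] - a[1])) {
      console.log(String(count).padStart(6) + '  ' + url);
    }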

  • Replace all <button> and <div> navigation elements with <a href>
  • Implement SSR or SSG to expose links in the initial HTML
  • Test discoverability with Search Console (URL inspection, tab "More info" > "Detected links")
  • Audit JS frameworks to ensure they generate semantic HTML
  • Crawl the site in both JavaScript mode AND pure HTML to compare coverage
  • Analyze server logs to detect orphaned pages that are not crawled

The discoverability of JavaScript links relies on a simple rule: always use <a href> for navigation, even in a SPA. Any other pattern (button, clickable div, data attribute) exposes the site to massive losses in crawl and indexing. These technical optimizations require advanced expertise in front-end architecture and SEO; if your team lacks the resources or specialized skills, engaging an experienced SEO agency can prevent costly mistakes and secure the visibility of your entire page hierarchy.

❓ Frequently Asked Questions

Is a JavaScript link with href crawled as well as a classic HTML link?
Google can follow an <a href> link injected by JavaScript, but it must execute the script first, which consumes crawl budget. A pure HTML link remains faster and more reliable.
Can data-href or data-url attributes replace a standard href?
No. Googlebot does not read custom data attributes to discover links. Only the standard href attribute is recognized and crawled.
My React site uses React Router's Link components; is that a problem?
Not if those components generate <a href> tags in the final DOM. Check with the URL inspection tool that the links actually appear in the rendered HTML.
Is a button with role='link' and tabindex treated as a link by Google?
No. ARIA attributes improve accessibility but do not compensate for a missing href when it comes to crawling. Google looks for an explicit URL in the href attribute.
Should you systematically implement SSR for an SEO-friendly JavaScript site?
Not systematically, but it is strongly recommended. SSR guarantees that links are present in the initial HTML, reducing dependence on JavaScript execution and the risk of crawl failures.
