Official statement
Other statements from this video (19) · Google Search Central · 57 min · published on 29/04/2020
- 2:38 Should you really multiply sitemaps when you have a lot of URLs?
- 2:38 Is it really necessary to split your sitemap into multiple files to index a large site?
- 5:15 Why does replacing HTML with JavaScript canvas hurt SEO?
- 5:18 Should you ditch HTML5 canvas to ensure your content gets indexed?
- 10:56 Should you ditch the noscript attribute for SEO?
- 12:26 Should you really ditch noscript for rendering your content?
- 15:13 What happens when your HTML metadata contradicts the JavaScript ones?
- 16:19 Do complex JavaScript menus really block the indexing of your navigation?
- 19:28 Do full-page hero images really harm Google indexing?
- 19:35 Do full-screen hero images really block the indexing of your pages?
- 20:04 Why does Google keep crawling your old URLs after a redesign?
- 22:25 Is it true that Google really respects the canonical tag?
- 25:48 How does the initial load of a SPA potentially ruin your SEO?
- 26:20 Does the initial load time of SPAs hurt your organic traffic?
- 28:13 Do Service Workers really enhance the crawling and indexing of your site?
- 36:00 Will Server-Side Rendering Become Essential for the SEO of JavaScript Applications?
- 36:17 Should you go all in on server-side rendering to excel in JavaScript?
- 41:29 Does JavaScript really represent the future of web development for SEO?
- 52:01 Are Third-Party Scripts Really Hurting Your Core Web Vitals?
Google states that Googlebot can follow links generated by JavaScript, but only if they use standard <a> tags. Non-semantic elements such as spans with onclick handlers will simply not be crawled. In practice, a poor technical choice by developers can render part of your site invisible to the search engine, even though everything works perfectly for the user.
What you need to understand
Why this distinction between anchor tags and onclick elements?
Googlebot analyzes the DOM after JavaScript execution — this has been known for years. But the interpretation of navigation signals remains strictly tied to HTML semantics. An <a href="..."> tag is a universal link signal, natively recognized by all bots and browsers.
A <span onclick="navigateTo('/page')">, even if it triggers navigation for the user, does not exist as a link in the DOM. Googlebot does not simulate user clicks — it extracts URLs from elements structurally identified as links. No anchor tag, no crawl.
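To make the contrast concrete, here is a minimal illustration (navigateTo stands in for any client-side routing function):

```html
<!-- Crawlable: a real anchor with a real URL. Googlebot extracts /products from the href. -->
<a href="/products">Products</a>

<!-- Not crawlable: no <a> tag means no URL discovery, even though the click works for users. -->
<span onclick="navigateTo('/products')">Products</span>

<!-- Anchor tag present, but no destination URL is exposed: technically a link, functionally uncrawlable. -->
<a href="javascript:void(0)" onclick="navigateTo('/products')">Products</a>
```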
Are modern JavaScript frameworks affected?
Most modern frameworks (React, Vue, Angular, Svelte) generate correct <a> tags when you use their standard navigation components (<Link>, <router-link>, etc.). The issue arises when developers create custom components with event handlers on divs or spans.
Typical case: a mobile burger menu built from clickable <div> elements instead of actual links. For the user, it works. For Googlebot, those sections of the site simply do not exist, and that blind spot can sometimes swallow entire sites.
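A sketch of the safe version of that menu, assuming illustrative IDs and URLs: only the open/close behavior depends on JavaScript, while the items stay real anchors present in the DOM from the first render.

```html
<button id="burger" aria-expanded="false" aria-controls="main-nav">Menu</button>

<nav id="main-nav" hidden>
  <!-- Real anchors: extractable by the bot even while the menu is collapsed -->
  <a href="/shoes">Shoes</a>
  <a href="/bags">Bags</a>
</nav>

<script>
  // JavaScript only toggles visibility; it never creates or destroys the links.
  const burger = document.getElementById('burger');
  const nav = document.getElementById('main-nav');
  burger.addEventListener('click', () => {
    const willOpen = nav.hidden;
    nav.hidden = !willOpen;
    burger.setAttribute('aria-expanded', String(willOpen));
  });
</script>
```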
What about empty href attributes or javascript:void(0) links?
A link with href="javascript:void(0)" or href="#" remains an anchor tag, but the destination URL is unexploitable. If routing is done entirely in JavaScript without updating the href, Googlebot has no URLs to crawl.
The best practice: use real hrefs that point to canonical URLs, then intercept the click in JavaScript for client-side navigation (progressive enhancement). This way, even if JS fails or the bot does not simulate the event, the link remains crawlable.
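A minimal sketch of that pattern, assuming a hypothetical renderRoute() function exposed by your client-side router:

```html
<a href="/products/shoes" data-spa-link>Shoes</a>

<script>
  // Intercept clicks on enhanced links; everything else falls back to normal navigation.
  document.addEventListener('click', (event) => {
    const link = event.target.closest('a[data-spa-link]');
    if (!link) return;
    // Let modified clicks (new tab, etc.) behave natively.
    if (event.metaKey || event.ctrlKey || event.shiftKey || event.button !== 0) return;
    event.preventDefault();
    history.pushState({}, '', link.getAttribute('href'));
    renderRoute(location.pathname); // hypothetical: your router's rendering entry point
  });
</script>
```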
- Googlebot extracts links from the DOM post-JavaScript, but only from valid <a> tags
- onclick events on non-semantic elements (span, div) generate no URL discovery
- Modern frameworks generally produce compliant code, except for risky custom implementations
- An empty href or a javascript:void(0) makes the link technically present but unusable for crawl
- Progressive enhancement: always start from a functional HTML link, then enrich with JavaScript
SEO Expert opinion
Is this statement consistent with field observations?
Yes, and it has even been documented for years. Crawl tests on JavaScript sites consistently show that Googlebot does not trigger onclick, onmouseover, or other event handlers. It parses the DOM, extracts the href attributes from <a> tags, and that’s it.
Where it gets interesting: some e-commerce sites lost 30-40% of their indexed pages after migrating to a poorly configured JS framework. Common diagnosis? Links generated as clickable spans, perfectly functional in user navigation, entirely invisible to the bot. The dev team did not see it coming; everything worked in staging.
What nuances should we add to this rule?
Google says "appropriate anchor tags", but what exactly does that mean? Is a link with href="#" considered "appropriate"? This remains to be verified in tests, but in practice an empty or fragment-only href provides no URL to crawl. Technically compliant, functionally useless.
Another nuance: some sites store URLs in data attributes (<a data-url="/page">) and inject them into the href dynamically via JS. If the script runs before Googlebot snapshots the DOM, the links may be picked up. But that is playing with fire: there is no guarantee of execution timing. Better to set the href directly from the start.
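Schematically, the risky pattern versus the safe one (URLs illustrative):

```html
<!-- Risky: the crawlable URL only exists if this script runs before the DOM snapshot is taken. -->
<a href="#" data-url="/promo">Promo</a>
<script>
  document.querySelectorAll('a[data-url]').forEach((a) => {
    a.setAttribute('href', a.dataset.url);
  });
</script>

<!-- Safe: the real URL sits in the initial markup; JS can still read data attributes for routing. -->
<a href="/promo" data-route="promo">Promo</a>
```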
In which cases is this rule insufficient?
Having correct <a> tags does not guarantee crawlability. It is also necessary for these links to be present in the initial HTML or in the DOM after the first JS render. If your links only appear after an infinite scroll, aggressive lazy-loading, or user interaction (clicking on "See more"), Googlebot may never see them.
Classic example: e-commerce faceted filters that reload listings via AJAX without updating URLs or creating actual links. Filtered pages = nonexistent pages for Google, even if they receive traffic from the site's internal search. This is a frequent SEO black hole.
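One way to keep facets crawlable, sketched with illustrative URLs and a hypothetical fetchAndRender() helper: every filter is a real link to a real URL, and the AJAX behavior is layered on top.

```html
<!-- Each facet is a real, crawlable URL -->
<a href="/sneakers?color=red" class="facet">Red</a>
<a href="/sneakers?color=blue" class="facet">Blue</a>

<script>
  document.querySelectorAll('a.facet').forEach((link) => {
    link.addEventListener('click', (event) => {
      event.preventDefault();
      const url = link.getAttribute('href');
      history.pushState({}, '', url); // the address bar reflects a real, shareable URL
      fetchAndRender(url);            // hypothetical: fetches and injects the filtered listing
    });
  });
</script>
```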
Practical impact and recommendations
What should be prioritized in auditing a JavaScript site?
First step: check the HTML structure post-render. Use the URL Inspection tool in Search Console, open the crawled-page view, and examine the rendered source code. All your critical navigation links should appear as <a href="REAL_URL">. If you see divs or spans with class names suggesting navigation, you have a problem.
Second check: crawl the site with a tool like Screaming Frog with JavaScript rendering enabled. Compare the number of pages discovered with JS enabled versus disabled. A large gap means your internal linking depends on JS rendering; and if some sections are still not crawled even with JS enabled, that is a signal your links are not standard <a> tags.
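For a quick spot-check on a single URL, here is a rough sketch of that JS-off versus JS-on comparison, assuming Node 18+ (for the built-in fetch) and Puppeteer; the crude regex is for diagnostics only, and a crawler like Screaming Frog does this at site scale:

```js
// npm install puppeteer
const puppeteer = require('puppeteer');

// Crude href extraction from static HTML; good enough for a diagnostic, not a real parser.
function extractHrefs(html) {
  return [...html.matchAll(/<a\s[^>]*href=["']([^"'#][^"']*)["']/gi)].map((m) => m[1]);
}

async function compareLinks(url) {
  // 1) Links present in the raw HTML: what a non-rendering crawler sees.
  const rawHtml = await (await fetch(url)).text();
  const rawLinks = new Set(extractHrefs(rawHtml));

  // 2) Links present in the DOM after JavaScript execution: what the renderer sees.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const renderedLinks = await page.$$eval('a[href]', (anchors) =>
    anchors.map((a) => a.getAttribute('href'))
  );
  await browser.close();

  const jsOnly = [...new Set(renderedLinks.filter((href) => href && !rawLinks.has(href)))];
  console.log(`${rawLinks.size} links in raw HTML, ${renderedLinks.length} after rendering`);
  console.log('Discovered only after JS rendering:', jsOnly);
}

compareLinks('https://example.com/');
```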
How to correct JavaScript navigation errors?
Case #1: you are using custom components that generate non-semantic elements. Refactor using the native components of the framework (<Link> in React, <router-link> in Vue). They produce <a> tags by default and handle client-side routing properly.
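For instance, Vue Router's <router-link> compiles down to a standard anchor (shown schematically, active-state classes omitted; React Router's <Link> behaves the same way):

```html
<!-- What you write in the Vue template -->
<router-link to="/products">Products</router-link>

<!-- Roughly what ends up in the DOM: a real, crawlable anchor -->
<a href="/products">Products</a>
```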
Case #2: your legacy site uses onclick on spans for historical reasons. Pragmatic solution: replace them with <a> tags carrying real hrefs, then intercept the click in JavaScript for SPA navigation. The link works without JS (progressive enhancement), and Google can crawl it. It's work, but it's the only structural fix.
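Schematically, the before/after (navigateTo is a placeholder, and data-spa-link reuses the interception hook sketched earlier):

```html
<!-- Before: works for users, invisible to Googlebot -->
<span class="nav-item" onclick="navigateTo('/contact')">Contact</span>

<!-- After: crawlable without JS, still handled client-side when JS is available -->
<a class="nav-item" href="/contact" data-spa-link>Contact</a>
```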
What errors to avoid during a JavaScript migration?
Classic error #1: migrating to a JS framework without conducting a prior crawlability audit. You discover the problem 3 months later, when rankings have dropped. Always test rendering and crawling in staging.
Error #2: assuming that "if it works in the browser, it works for Google." Googlebot does not simulate user interactions; it parses the DOM. A dropdown menu that opens on mouse hover? If the submenu links are only injected into the DOM on hover, the bot never sees them; if they are present in the initial markup and merely hidden with CSS, they remain extractable.
- Inspect the rendered DOM via Search Console (URL inspection tool) to check for valid <a> tags
- Crawl the site with Screaming Frog in JavaScript mode enabled and compare with a crawl without JS
- Audit custom navigation components: replace clickable divs/spans with real anchor tags
- Ensure that all hrefs contain real URLs, not #, javascript:void(0), or data-attributes
- Test lazy-loading and infinite scroll: ensure critical links are present from the first render
- Implement continuous monitoring of the number of crawled/indexed pages after any major JS changes
<a> tags with real hrefs are the foundation. But between theory and practice, there are often dozens of micro-technical decisions (custom components, routing management, lazy-loading) that can sabotage crawlability without anyone noticing immediately. These optimizations require close coordination between dev and SEO teams and can quickly become complex to manage internally; enlisting a specialized SEO agency provides a complete technical audit and tailored support to avoid the pitfalls of a JavaScript migration.

Frequently Asked Questions
- Is a link with href="#" considered an appropriate anchor tag by Googlebot?
- Can Single Page Applications (SPAs) be crawled correctly by Google?
- Do you absolutely need to disable JavaScript to test your site's crawlability?
- Does the rel="nofollow" attribute prevent Googlebot from following a JavaScript link?
- Are links generated dynamically after infinite scroll crawled by Google?