Official statement
Other statements from this video (19) · duration 57 min · published on 29/04/2020
- 2:38 Should you really multiply sitemaps when you have a lot of URLs?
- 2:38 Is it really necessary to split your sitemap into multiple files to index a large site?
- 5:15 Why does replacing HTML with JavaScript canvas hurt SEO?
- 5:18 Should you ditch HTML5 canvas to ensure your content gets indexed?
- 10:56 Should you ditch the noscript attribute for SEO?
- 12:26 Should you really ditch noscript for rendering your content?
- 15:13 What happens when your HTML metadata contradicts the JavaScript ones?
- 18:47 Does Googlebot really follow all the JavaScript links on your site?
- 19:28 Do full-page hero images really harm Google indexing?
- 19:35 Do full-screen hero images really block the indexing of your pages?
- 20:04 Why does Google keep crawling your old URLs after a redesign?
- 22:25 Is it true that Google really respects the canonical tag?
- 25:48 How does the initial load of a SPA potentially ruin your SEO?
- 26:20 Does the initial load time of SPAs hurt your organic traffic?
- 28:13 Do Service Workers really enhance the crawling and indexing of your site?
- 36:00 Will Server-Side Rendering Become Essential for the SEO of JavaScript Applications?
- 36:17 Should you go all in on server-side rendering to excel in JavaScript?
- 41:29 Does JavaScript really represent the future of web development for SEO?
- 52:01 Are Third-Party Scripts Really Hurting Your Core Web Vitals?
Google correctly indexes JavaScript navigation as long as it relies on standard links with <a> tags and href attributes. Dropdowns or complex interactions that do not generate real HTML links may not be followed. For an SEO practitioner, this means auditing the navigation structure and prioritizing standard links even in a modern JavaScript context.
What you need to understand
Why does Google emphasize anchor and href tags?
Google crawls the web by following traditional HTML links. Even though Googlebot now executes JavaScript, its discovery mechanism relies on <a> tags with an href attribute pointing to a URL.
When navigation is built with complex JavaScript events (onclick, onmouseover) without generating a real <a> tag, the bot does not see a link to follow. It will not guess that a click on a <div> or a button triggers a navigation.
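To make the difference concrete, here is a minimal sketch (the URLs and the goTo() router call are hypothetical) contrasting markup Googlebot can follow with markup it cannot:

```html
<!-- Crawlable: a real <a> tag with a real URL. Even if a client-side
     router intercepts the click for users, the crawler sees the href. -->
<a href="/products/shoes">Shoes</a>

<!-- Not crawlable: the URL only exists inside a JavaScript handler,
     so there is no link for the bot to follow. -->
<span onclick="window.location.href='/products/shoes'">Shoes</span>

<!-- Not crawlable either: an <a> tag without a usable URL. -->
<a href="javascript:void(0)" onclick="goTo('/products/shoes')">Shoes</a>
```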
What constitutes a complex interaction in this context?
Martin Splitt targets dropdown menus that activate only on hover or click, without exposing direct links in the source code. For example: a mega-menu that loads its links via an AJAX call on hover, or worse, a navigation system that dynamically injects URLs after authentication. The trap is that these navigations work perfectly for the user, but Googlebot sees only a button or an empty <div>.
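As a sketch of that mega-menu trap (the endpoint and payload shape are hypothetical): the sub-links only enter the DOM after a hover event, so the DOM Googlebot renders contains an empty <div>.

```html
<nav>
  <button id="menu-trigger">Products</button>
  <div id="submenu"></div> <!-- empty until the user hovers -->
</nav>
<script>
  // The links are fetched on mouseenter. Googlebot does not hover,
  // so it never triggers this handler and never sees the URLs.
  document.getElementById('menu-trigger').addEventListener('mouseenter', async () => {
    const response = await fetch('/api/menu/products'); // hypothetical endpoint
    const links = await response.json();                // e.g. [{ url, label }, ...]
    document.getElementById('submenu').innerHTML = links
      .map(link => `<a href="${link.url}">${link.label}</a>`)
      .join('');
  });
</script>
```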
What is the concrete risk for indexing?
If your main navigation does not generate traditional links, Google will have to rely on other signals to discover your pages: XML sitemaps, internal links from the content, external backlinks. This means you lose control over crawl and indexing depth. Pages located more than 3-4 clicks away from a real link may never be crawled regularly. And if they are crawled, it will be with a considerable delay, which is problematic for time-sensitive content.
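For completeness, the sitemap fallback mentioned above looks like this (the URLs are placeholders); it restores discovery, but not the internal-linking signals a real navigation provides:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- sitemap.xml: lets Google discover pages your JS navigation hides,
     but does not replace the signals carried by internal links. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/products/shoes</loc></url>
  <url><loc>https://example.com/products/bags</loc></url>
</urlset>
```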
SEO Expert opinion
Is this statement consistent with field observations?
Yes, and it's even a welcome reminder. In 2018-2019, Google communicated extensively about its ability to execute modern JavaScript, which created the impression that everything had become magic. The reality is more nuanced: Googlebot can execute JS, but it remains fundamentally a bot that follows links. Audits of Angular, React, or Vue sites often reveal crawl issues related to navigation that is too reliant on application state. Conditional links, menus that load after a delay, or empty hrefs with client-side routers create black holes for the bot. [To be verified]: Google has never published quantitative data on the crawl failure rate associated with complex JavaScript navigations, so it's difficult to precisely quantify the impact.
What nuances should be added to this advice?
Martin Splitt mentions "appropriate links," but remains vague on certain edge cases. For instance, are links dynamically generated on the first client-side render problematic? If your framework injects <a> tags into the DOM before Googlebot starts indexing, it should technically be fine. The real issue lies with interactions that require a user event to reveal links: hover, click, scroll. In such cases, Google will not simulate these interactions exhaustively. It crawls the DOM as it finds it after the first render, period.
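A minimal sketch of that distinction (the URLs are hypothetical): the first link is injected during the initial render and ends up in the DOM Googlebot crawls; the second only appears after a click, which the bot will not simulate.

```html
<div id="nav"></div>
<script>
  const nav = document.getElementById('nav');

  // Fine: injected during the initial render, before any user event,
  // so the rendered DOM already contains a followable href.
  nav.innerHTML = '<a href="/docs/getting-started">Getting started</a>';

  // Risky: this link only enters the DOM after a click. Googlebot does
  // not simulate the interaction, so the URL is never discovered.
  nav.addEventListener('click', () => {
    nav.insertAdjacentHTML('beforeend', '<a href="/docs/advanced">Advanced</a>');
  });
</script>
```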
In what cases does this rule not fully apply?
If your site benefits from a very high crawl budget and dense internal linking elsewhere (content, sidebar, footer), you may get away with complex JS navigation. Google will eventually discover your pages through other paths. But relying on this is playing with fire. Let's be honest: prioritizing traditional links in the main navigation remains the most robust strategy, one that does not depend on the goodwill of the bot or your crawl budget.
Practical impact and recommendations
What concrete steps should be taken to make navigation crawlable?
First, audit your current navigation. Disable JavaScript in your browser (or use a non-JS crawler like Screaming Frog in standard mode) and check if your main menus still display clickable links. If the answer is no, you have a problem. Next, ensure that each menu item generates an <a> tag with an absolute or relative href. No href="#" or href="javascript:void(0)". Even if your framework handles client-side routing, the href must point to a real URL that Googlebot can follow. Do not confuse progressive enhancement with total degradation. You can have a rich JavaScript navigation (animations, mega-menus, lazy loading), as long as the basic structure remains traditional links. JS should enhance the experience, not replace it.
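As a sketch of that progressive-enhancement principle (renderRoute() stands in for whatever your SPA uses to render a view): the href is a real URL that works without JavaScript, and the router only intercepts the click when JS is available.

```html
<a href="/blog/javascript-seo" class="js-route">JavaScript SEO guide</a>
<script>
  // Enhance, don't replace: without JS the link still navigates normally;
  // with JS the SPA takes over, but the crawlable href stays in place.
  document.querySelectorAll('a.js-route').forEach(link => {
    link.addEventListener('click', event => {
      event.preventDefault();
      history.pushState({}, '', link.getAttribute('href'));
      renderRoute(location.pathname); // hypothetical SPA view renderer
    });
  });
</script>
```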
What mistakes should absolutely be avoided?
Avoid dropdown menus triggered only on hover without an alternative click option. Google does not simulate hover. If your submenus are only accessible this way, they will not be crawled. Plan for a click to open the menu, or better, display level 2 links in the footer or a hub page.
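One way to keep a dropdown both usable and crawlable, assuming nothing about your framework: ship the sub-links in the initial HTML and let JavaScript only toggle their visibility (a sketch with hypothetical URLs).

```html
<nav>
  <button id="products-toggle" aria-expanded="false">Products</button>
  <ul id="products-menu" hidden>
    <li><a href="/products/shoes">Shoes</a></li>
    <li><a href="/products/bags">Bags</a></li>
  </ul>
</nav>
<script>
  // The links are already in the HTML, so they sit in the crawled DOM
  // whether or not the menu is ever opened; the click only toggles display.
  const toggle = document.getElementById('products-toggle');
  const menu = document.getElementById('products-menu');
  toggle.addEventListener('click', () => {
    menu.hidden = !menu.hidden;
    toggle.setAttribute('aria-expanded', String(!menu.hidden));
  });
</script>
```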
How can I check if my site is compliant?
Use the URL Inspection tool in Search Console and look at the rendered screenshot. Check that your menus are visible and that links are present in the rendered HTML. Complement this with a Lighthouse or PageSpeed Insights test to see the DOM as Google sees it. Then cross-reference with server logs: if Googlebot does not crawl certain sections linked from your navigation, it's a warning sign. Analyze orphaned URLs in Search Console (indexed pages without detected internal links): often, this is a sign of faulty JS navigation.
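To complement those tools, a quick homemade check (a sketch, not a full HTML parser; save it as check-links.mjs and run it with Node 18+): fetch the page as served, without executing JavaScript, and list the hrefs present. If your menu URLs are missing from the output, their discovery depends entirely on rendering.

```js
// check-links.mjs: list the <a href> values present in the raw HTML,
// i.e. what a non-rendering crawler sees before any JavaScript runs.
const url = process.argv[2] ?? 'https://example.com/'; // hypothetical default
const html = await (await fetch(url)).text();
const hrefs = [...html.matchAll(/<a\b[^>]*\bhref="([^"#][^"]*)"/gi)]
  .map(match => match[1]);
console.log(hrefs.length ? hrefs.join('\n') : 'No <a href> links found in the raw HTML.');
```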
❓ Frequently Asked Questions
Does Google crawl links generated dynamically by JavaScript?
Is a mega-menu that loads its links via AJAX on hover a problem for SEO?
Do frameworks like React or Vue cause problems for navigation?
Is server-side rendering mandatory for JavaScript navigation?
How can I test whether my navigation is crawlable by Google?