Official statement
Google only crawls links present in the rendered HTML without user interaction. If your links only appear on hovering over a menu or are loaded via JSON, they become invisible to the bot. Essentially, any navigation structure that requires a click or hover to reveal internal links negatively impacts your crawl budget and the discovery of your strategic pages.
What you need to understand
What is rendered HTML and why does Google limit itself to it?
Rendered HTML refers to the final DOM after JavaScript execution, as shown in testing tools like Google Search Console's URL Inspection or the mobile rendering test. Google does not evaluate the raw source code alone, but what is actually displayed in the browser once all JavaScript manipulations have run.
The important nuance: Google distinguishes what is present in the DOM from what requires an action to appear. A dropdown menu that loads links only on hover or click is never considered crawlable, even if the JavaScript works perfectly on the client side. The bot does not simulate any user interactions — no clicks, no hovers, no infinite scroll triggered by events.
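As a minimal sketch of that pattern (the markup, element ids and URLs below are hypothetical), the submenu links only enter the DOM when a mouseover fires, an event Googlebot never sends, so the <a href> elements stay invisible to it:

```html
<nav>
  <button id="categories-toggle">Categories</button>
  <ul id="categories-submenu"></ul>
</nav>
<script>
  // Injected only on hover: this handler never runs during Google's rendering.
  document.getElementById('categories-toggle').addEventListener('mouseover', () => {
    document.getElementById('categories-submenu').innerHTML =
      '<li><a href="/category/shoes">Shoes</a></li>' +
      '<li><a href="/category/bags">Bags</a></li>';
  });
</script>
```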
Why does this technical limitation still persist?
Let’s be honest: Google could technically simulate interactions, the way an automated testing tool such as Selenium does. However, crawl budget and server load impose strict economic constraints. Crawling billions of pages while simulating clicks on every interactive element would multiply the required resources by a prohibitive factor.
In practice, this means that any lazy-loaded navigation conditional on a user event (onclick, onmouseover, scroll with intersection observer triggered manually) is a dead end for the crawler. Google assumes that if a link is important, it should be accessible without friction in the initial rendered HTML.
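The same logic applies to scroll-conditioned injection. A hedged sketch (ids and URLs are placeholders): the links are appended only when a sentinel element intersects the viewport, and since the bot performs no scrolling, whether this ever fires during rendering depends entirely on where the sentinel sits relative to the renderer's viewport.

```html
<div id="footer-sentinel"></div>
<ul id="footer-links"></ul>
<script>
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      // Links only reach the DOM once the sentinel becomes visible.
      document.getElementById('footer-links').innerHTML =
        '<li><a href="/about">About</a></li>' +
        '<li><a href="/stores">Stores</a></li>';
      observer.disconnect();
    }
  });
  observer.observe(document.getElementById('footer-sentinel'));
</script>
```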
What happens to links loaded only via JSON?
A common case involves modern JavaScript applications (React, Vue, Angular) where routes are managed client-side and links are dynamically built from a JSON API. If your navigation component fetches an endpoint /api/menu.json and creates <a href> links in JavaScript, these links will only be crawled if they actually end up in the rendered DOM.
The confusion often comes from developers who see the links displayed in their browser and assume that Google sees them too. But if the JSON loads after a user event or if the framework does not perform server-side rendering (SSR) or static pre-rendering, Google will only see a blank page or an HTML skeleton without usable links.
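A hedged sketch of that pattern, reusing the /api/menu.json endpoint named above (the payload shape and ids are assumptions): if this fetch runs unconditionally at load time, the links usually end up in the rendered DOM; moved inside a click or hover handler, they never would.

```html
<ul id="main-menu"></ul>
<script>
  // Hypothetical payload: [{ "url": "/jackets", "label": "Jackets" }, ...]
  fetch('/api/menu.json')
    .then((response) => response.json())
    .then((items) => {
      const menu = document.getElementById('main-menu');
      for (const item of items) {
        const li = document.createElement('li');
        const link = document.createElement('a');
        link.href = item.url;          // real <a href> injected into the rendered DOM
        link.textContent = item.label;
        li.appendChild(link);
        menu.appendChild(li);
      }
    });
</script>
```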
- Only rendered HTML counts: any link missing from the DOM after the initial JavaScript execution is invisible to Google
- User interactions are never simulated: hover, click, and conditional scrolling are insurmountable barriers for the bot
- JSON alone is not enough: links must be injected into the rendered DOM, ideally via SSR or static pre-rendering
- Use Google’s testing tools to check what the bot really sees, not what your browser shows after interaction
- The crawl budget is a limited resource: Google cannot afford to simulate every interaction scenario on billions of pages
SEO Expert opinion
Is this statement consistent with real-world observations?
Yes, and it’s even one of the few points where Google is perfectly transparent. Tests conducted over the years with conditional dropdown menus confirm that pages hidden behind a hover or click are never indexed, unless they benefit from direct links elsewhere on the site. Crawl audits on heavily JavaScript-based e-commerce sites consistently show collapsed discovery rates for categories buried in manually triggered mega-menus.
The consistency stops where Google does not clarify exactly what it means by "rendered HTML." Some frameworks like Next.js or Nuxt generate SSR but with partial hydration that can leave links uncrawlable if misconfigured. [To verify]: Does Google crawl links injected by client-side hydration after the first paint, or only those present in the initial HTML sent by the server? The official documentation remains unclear on this point.
What implementation errors go unnoticed?
The classic trap: a site that correctly displays its links in Chrome's DevTools inspector but loads them via a DOMContentLoaded or window.onload event listener. If this loading depends on an external resource (slow CDN, blocking script), the bot may time out before the links appear in the DOM. The result: an incomplete crawl, and no one understands why certain sections of the site are never indexed.
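A minimal sketch of that timing trap (the CDN URL and markup are placeholders): window.onload waits for every resource on the page, so a slow third-party script delays the injection of the links, possibly beyond the renderer's patience.

```html
<script src="https://slow-cdn.example.com/widget.js"></script>
<ul id="nav"></ul>
<script>
  // Fires only after ALL resources above have finished loading,
  // including the slow third-party script.
  window.addEventListener('load', () => {
    document.getElementById('nav').innerHTML =
      '<li><a href="/collections/sale">Sale</a></li>';
  });
</script>
```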
Another frequent error: mobile hamburger menus that require a click to deploy the navigation. On desktop, links are in the DOM; on mobile, they are loaded on demand. If Google is crawling in mobile-first mode with a smartphone viewport, those links are invisible. Many sites lose entire sections of their structure due to this untested desktop/mobile asymmetry.
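A hedged sketch of that desktop/mobile asymmetry (hypothetical breakpoint, ids and URLs): the desktop branch puts the links straight into the DOM, while the mobile branch waits for a tap that the mobile-first crawler never performs.

```html
<button id="hamburger">Menu</button>
<ul id="main-nav"></ul>
<script>
  const links =
    '<li><a href="/women">Women</a></li>' +
    '<li><a href="/men">Men</a></li>';

  if (window.matchMedia('(max-width: 768px)').matches) {
    // Mobile: links exist only after a user tap on the hamburger.
    document.getElementById('hamburger').addEventListener('click', () => {
      document.getElementById('main-nav').innerHTML = links;
    });
  } else {
    // Desktop: links present without any interaction.
    document.getElementById('main-nav').innerHTML = links;
  }
</script>
```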
In what cases does this rule not apply?
It does not apply if the links are discovered through other crawl paths: XML sitemap, incoming external links, breadcrumb navigation present elsewhere, internal links within the editorial content. A link absent from a dropdown menu but present in a blog post will be perfectly crawled. The issue arises only when the menu is the sole entry point to certain pages.
And here’s where it gets tricky: on large-scale sites with thousands of products or categories, the menu is often the main navigation. If this menu is inaccessible to the bot, the crawl is limited to the few pages linked in the footer or in sporadic editorial content. The rest? Invisible. [To verify]: Does Google actively favor sites with static navigation in its rankings, or does it merely passively penalize those that make crawling difficult?
Practical impact and recommendations
What concrete actions should be taken to ensure link crawlability?
The first action is to audit the rendered HTML via the URL Inspection tool in Google Search Console. Compare the raw source HTML with the rendered HTML after JavaScript: if links are missing from the rendered version because they require an interaction to load, they are lost to the bot. Also use tools like Screaming Frog in JavaScript rendering mode to simulate rendering and identify the gaps.
Next, refactor your navigation to be static in the initial DOM. There’s no need to sacrifice UX: a mega-menu can be present in the rendered HTML but visually hidden in CSS (display:none or visibility:hidden), then revealed on hover via pure CSS or JavaScript. The key is that <a href> tags should already be in the DOM upon loading, not injected on demand.
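A minimal sketch of that crawl-friendly approach (class names and URLs are illustrative): the links sit in the initial DOM and only their visibility is toggled by CSS on hover.

```html
<style>
  .mega-menu .submenu { display: none; }
  .mega-menu li:hover .submenu { display: block; }
</style>
<nav class="mega-menu">
  <ul>
    <li>
      <a href="/category/shoes">Shoes</a>
      <!-- Already in the DOM at load time, merely hidden by CSS -->
      <ul class="submenu">
        <li><a href="/category/shoes/running">Running</a></li>
        <li><a href="/category/shoes/hiking">Hiking</a></li>
      </ul>
    </li>
  </ul>
</nav>
```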
What mistakes should absolutely be avoided in the implementation?
Do not confuse client-side rendering and server-side rendering. A framework like React in pure SPA (Single Page Application) mode often ships an HTML shell containing little more than a <div id="root"></div> and builds all content in client-side JavaScript. Until that JavaScript has been rendered, Google sees a blank page. The solution: switch to SSR (Server-Side Rendering) with Next.js, or use static pre-rendering (Static Site Generation) for key pages.
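As a hedged sketch only (Next.js pages router, hypothetical /api/menu.json endpoint and field names), static pre-rendering can look like this: the menu is fetched at build time and the <a href> links ship in the initial HTML rather than being injected client-side.

```jsx
// pages/index.js (hypothetical example, not the only possible setup)
export async function getStaticProps() {
  const res = await fetch('https://www.example.com/api/menu.json');
  const menu = await res.json(); // assumed shape: [{ url, label }, ...]
  return { props: { menu } };
}

export default function Home({ menu }) {
  return (
    <nav>
      <ul>
        {menu.map((item) => (
          <li key={item.url}>
            {/* Rendered at build time: present in the HTML Google receives */}
            <a href={item.url}>{item.label}</a>
          </li>
        ))}
      </ul>
    </nav>
  );
}
```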
Avoid conditional links based on cookies or client-side geolocation. If your menu displays different links depending on whether the user is in France or Belgium, and this detection occurs in JavaScript after loading, Google (which crawls from American IPs most of the time) will only see a partial version of your menu. Prefer universal navigation with linguistic or regional variations accessible via static links.
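A sketch of that anti-pattern (the /api/geo endpoint and response shape are assumptions): the regional links only exist after a client-side lookup, so a crawler coming from a US IP sees at most one variant of the menu.

```html
<ul id="regional-menu"></ul>
<script>
  // Hypothetical response: { "country": "FR" } or { "country": "BE" }
  fetch('/api/geo')
    .then((response) => response.json())
    .then(({ country }) => {
      document.getElementById('regional-menu').innerHTML = country === 'BE'
        ? '<li><a href="/be/livraison">Livraison Belgique</a></li>'
        : '<li><a href="/fr/livraison">Livraison France</a></li>';
    });
</script>
```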
How can I check if my site respects these constraints?
Systematically test with JavaScript disabled in your browser. If your navigation links disappear, it means they are not in the initial HTML and Google likely won’t see them either. Complement this with a Screaming Frog crawl in "JavaScript rendering" mode and compare the number of discovered links versus a classic crawl without JS: any significant discrepancy indicates a crawlability issue.
Also, use the Search Console coverage report to identify pages discovered but not indexed. If entire sections of your hierarchy never appear in this report, it is often linked to non-crawlable conditional navigation. Cross-reference with server logs to confirm that Googlebot is not even requesting those URLs: total absence of crawl = discovery issue, not indexing.
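One hedged way to script that cross-check with Node.js, assuming an nginx access log in combined format (the log path will differ on your server): count Googlebot requests per URL, then compare the result against your sitemap; URLs that never appear point to a discovery problem rather than an indexing problem.

```javascript
const fs = require('fs');

// Hypothetical log location; adjust to your server setup.
const lines = fs.readFileSync('/var/log/nginx/access.log', 'utf8').split('\n');
const hits = new Map();

for (const line of lines) {
  if (!line.includes('Googlebot')) continue;           // keep only Googlebot requests
  const match = line.match(/"(?:GET|POST) ([^ ]+) HTTP/); // extract the requested path
  if (!match) continue;
  hits.set(match[1], (hits.get(match[1]) || 0) + 1);
}

console.log(hits); // URLs absent from this map were never requested by Googlebot
```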
- Audit the rendered HTML via Search Console and compare it with the raw source
- Redesign navigation to inject all critical links into the initial DOM, without user interaction conditions
- Switch to SSR or static pre-rendering for modern JavaScript frameworks
- Test with JavaScript disabled: navigation should remain functional (or at least links should be present)
- Screaming Frog crawl with/without JS to measure link discovery discrepancies
- Monitor server logs to identify pages never crawled despite their presence in the sitemap
❓ Frequently Asked Questions
Is a link loaded by JavaScript after DOMContentLoaded crawled by Google?
Are mega-menus that unfold on hover a problem for SEO?
Does Google crawl links injected by frameworks like React or Vue without SSR?
Does an XML sitemap make up for the absence of crawlable links in the rendered HTML?
How can I test what Google actually sees in my rendered HTML?