
Official statement

Links that require user interaction (like hovering over a menu with the mouse) or that are loaded solely via JSON without being present in the rendered HTML will not be discovered or crawled by Google. Only links present in the rendered HTML as visible in testing tools are considered.
🎥 Source video

Extracted from a Google Search Central video

⏱ 20:04 💬 EN 📅 23/06/2020 ✂ 7 statements
Watch on YouTube (4:02) →
Other statements from this video (6)
  1. 2:02 Should you really abandon third-party tools for testing your pages' rendered HTML?
  2. 2:02 Should you really avoid duplicate meta tags between the HTML and the JavaScript?
  3. 7:56 Should you unblock JavaScript and CSS in robots.txt for SEO?
  4. 9:01 Why does Google crawl your JS/CSS files but never index them?
  5. 13:43 Can blocking JavaScript and CSS really hurt your SEO?
  6. 18:32 Should you give up onclick to avoid being penalized for cloaking?
📅 Official statement from 23/06/2020
TL;DR

Google only crawls links present in the rendered HTML without user interaction. If your links only appear on hovering over a menu, or are loaded via JSON after an event, they are invisible to the bot. Essentially, any navigation structure that requires a click or hover to reveal internal links wastes crawl opportunities and blocks the discovery of your strategic pages.

What you need to understand

What is rendered HTML and why does Google limit itself to it?

Rendered HTML refers to the final DOM after JavaScript execution, as shown in testing tools like Google Search Console's URL Inspection or the Mobile-Friendly Test. For link discovery, Google does not rely on the raw source code alone, but on what is actually present in the page after all JavaScript manipulations have run.

The important nuance: Google distinguishes what is present in the DOM from what requires an action to appear. A dropdown menu that loads links only on hover or click is never considered crawlable, even if the JavaScript works perfectly on the client side. The bot does not simulate any user interactions — no clicks, no hovers, no infinite scroll triggered by events.
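This distinction can be sketched in code. The following is a hypothetical illustration (category names, URLs, and class names are invented): the first function emits anchors in the initial markup, so the crawler sees them even if CSS later hides them; the second ships an empty container whose links would only be injected by a mouseover handler the bot never fires.

```javascript
// Illustrative data — not a real site structure.
const categories = [
  { label: "Shoes", url: "/shoes" },
  { label: "Bags", url: "/bags" },
];

// Crawlable pattern: the <a href> tags are part of the markup from the
// start (they can still be hidden with CSS and revealed on hover).
function crawlableMenu(items) {
  const links = items
    .map((i) => `<li><a href="${i.url}">${i.label}</a></li>`)
    .join("");
  return `<ul class="submenu">${links}</ul>`;
}

// Non-crawlable pattern: the server ships an empty shell; links would be
// injected client-side inside a mouseover handler. Since Googlebot never
// triggers that event, the rendered DOM it sees contains no anchors.
function emptyMenuShell() {
  return `<ul class="submenu" data-load-on="mouseover"></ul>`;
}
```

The JavaScript behind the second pattern may work flawlessly for human visitors; the crawler simply never runs it.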

Why does this technical limitation still persist?

Let’s be honest: Google could technically simulate interactions like an automated testing tool such as Selenium. However, the crawl budget and server load impose strict economic constraints. Crawling billions of pages while simulating clicks on every interactive element would multiply the required resources by a prohibitive factor.

In practice, this means that any lazy-loaded navigation conditional on a user event (onclick, onmouseover, scroll with intersection observer triggered manually) is a dead end for the crawler. Google assumes that if a link is important, it should be accessible without friction in the initial rendered HTML.

What happens to links loaded only via JSON?

A common case involves modern JavaScript applications (React, Vue, Angular) where routes are managed client-side and links are dynamically built from a JSON API. If your navigation component fetches an endpoint /api/menu.json and creates <a href> links in JavaScript, these links will only be crawled if they actually end up in the rendered DOM.

The confusion often comes from developers who see the links displayed in their browser and assume that Google sees them too. But if the JSON loads after a user event or if the framework does not perform server-side rendering (SSR) or static pre-rendering, Google will only see a blank page or an HTML skeleton without usable links.
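A minimal sketch of the difference, assuming a hypothetical menu JSON (the `/api/menu.json` shape below is illustrative): with server-side rendering the anchors are serialized into the HTML Google receives, while a pure client-side shell contains no links until JavaScript fetches and builds them.

```javascript
// Hypothetical payload a navigation component might fetch from
// /api/menu.json — the shape is invented for illustration.
const menuJson = {
  items: [
    { label: "Men", href: "/men" },
    { label: "Women", href: "/women" },
  ],
};

// SSR / pre-rendering: the nav is serialized into the initial HTML,
// so the anchors exist before any client-side JavaScript runs.
function renderPage(menu) {
  const nav = menu.items
    .map((i) => `<a href="${i.href}">${i.label}</a>`)
    .join("");
  return `<!doctype html><body><nav>${nav}</nav><div id="root"></div></body>`;
}

// Pure client-side rendering ships only an empty shell. If the JSON is
// fetched after a user event, the bot never sees the links at all.
const csrShell = `<!doctype html><body><div id="root"></div></body>`;
```

The browser and the bot receive the same bytes; what differs is whether the links exist in those bytes or only after code the bot may never execute.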

  • Only rendered HTML counts: any link missing from the DOM after the initial JavaScript execution is invisible to Google
  • User interactions are never simulated: hover, click, and conditional scrolling are insurmountable barriers for the bot
  • JSON alone is not enough: links must be injected into the rendered DOM, ideally via SSR or static hydration
  • Use Google’s testing tools to check what the bot really sees, not what your browser shows after interaction
  • The crawl budget is a limited resource: Google cannot afford to simulate every interaction scenario on billions of pages

SEO Expert opinion

Is this statement consistent with real-world observations?

Yes, and it’s even one of the few points where Google is perfectly transparent. Tests conducted over the years with conditional dropdown menus confirm that pages hidden behind a hover or click are never indexed, unless they benefit from direct links elsewhere on the site. Crawl audits on heavily JavaScript-based e-commerce sites consistently show collapsed discovery rates for categories buried in manually triggered mega-menus.

The consistency stops where Google does not clarify exactly what it means by "rendered HTML." Some frameworks like Next.js or Nuxt generate SSR but with partial hydrations that can leave links un-crawlable if misconfigured. [To verify]: Does Google crawl links injected by client-side hydration after the first paint, or only those present in the initial HTML sent by the server? The official documentation remains unclear on this point.

What implementation errors go unnoticed?

The classic trap: a site that displays its links correctly in Chrome's DevTools but loads them via a DOMContentLoaded or window.onload event listener. If that loading depends on an external resource (slow CDN, blocking script), the bot may time out before the links appear in the DOM. The result: an incomplete crawl, and nobody understands why certain sections of the site are never indexed.

Another frequent error: mobile hamburger menus that require a click to deploy the navigation. On desktop, links are in the DOM; on mobile, they are loaded on demand. If Google is crawling in mobile-first mode with a smartphone viewport, those links are invisible. Many sites lose entire sections of their structure due to this untested desktop/mobile asymmetry.

In what cases does this rule not apply?

It does not apply if the links are discovered through other crawl paths: XML sitemap, incoming external links, breadcrumb navigation present elsewhere, internal links within the editorial content. A link absent from a dropdown menu but present in a blog post will be perfectly crawled. The issue arises only when the menu is the sole entry point to certain pages.

And here’s where it gets tricky: on large-scale sites with thousands of products or categories, the menu is often the main navigation. If this menu is inaccessible to the bot, the crawl is limited to the few pages linked in the footer or in sporadic editorial content. The rest? Invisible. [To verify]: Does Google actively favor sites with static navigation in its rankings, or does it merely passively penalize those that make crawling difficult?

Note: Google’s testing tools (Search Console, Mobile-Friendly Test) do not always simulate the bot's behavior in production exactly. A successful test does not guarantee optimal crawling under real-world conditions, especially for sites with a high volume of pages or with slow external resources.

Practical impact and recommendations

What concrete actions should be taken to ensure link crawlability?

The first action is to conduct an audit of the rendered HTML via Google Search Console, in the "URL Inspection" tab. Compare the raw source HTML with the rendered HTML after JavaScript. If links appear only in the rendered version but require interaction to load, they are lost to the bot. Also, use tools like Screaming Frog in JavaScript mode to simulate rendering and identify gaps.
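The comparison step can be semi-automated. Here is a minimal audit sketch under stated assumptions: you have copied both the raw source HTML and the rendered HTML (e.g. from URL Inspection) as strings, and a naive regex is enough for a quick diff — a real audit would use a proper HTML parser.

```javascript
// Extract href values from anchor tags with a deliberately naive regex;
// good enough for a quick diff, not for production parsing.
function extractHrefs(html) {
  return [...html.matchAll(/<a\s[^>]*href="([^"]+)"/g)].map((m) => m[1]);
}

// Links present only after JavaScript ran: each one deserves a closer
// look — is it injected on load (fine) or on interaction (invisible)?
function jsOnlyLinks(rawHtml, renderedHtml) {
  const raw = new Set(extractHrefs(rawHtml));
  return extractHrefs(renderedHtml).filter((href) => !raw.has(href));
}

// Illustrative inputs: "/sale" exists only in the rendered version,
// so it depends on JavaScript and must be verified.
const raw = `<nav><a href="/home">Home</a></nav>`;
const rendered = `<nav><a href="/home">Home</a><a href="/sale">Sale</a></nav>`;
```

Any URL this diff surfaces is a candidate for the timeout and interaction problems described above.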

Next, refactor your navigation to be static in the initial DOM. There’s no need to sacrifice UX: a mega-menu can be present in the rendered HTML but visually hidden in CSS (display:none or visibility:hidden), then revealed on hover via pure CSS or JavaScript. The key is that <a href> tags should already be in the DOM upon loading, not injected on demand.
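The pattern described above can look like this (a sketch with illustrative class names and URLs): the anchors ship in the initial HTML, and only their visibility is toggled, here with pure CSS on hover.

```html
<!-- Sketch: the anchors are in the DOM from the first byte;
     only their visibility changes on hover. -->
<nav>
  <ul class="menu">
    <li class="has-submenu">
      Categories
      <ul class="submenu">
        <li><a href="/category/shoes">Shoes</a></li>
        <li><a href="/category/bags">Bags</a></li>
      </ul>
    </li>
  </ul>
</nav>

<style>
  /* Hidden by default, revealed with pure CSS: no JavaScript is needed,
     and the links stay crawlable because they never leave the rendered HTML. */
  .has-submenu .submenu { display: none; }
  .has-submenu:hover .submenu { display: block; }
</style>
```

The same visual behavior as an event-driven mega-menu, but the crawler sees every link without simulating a single hover.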

What mistakes should absolutely be avoided in the implementation?

Do not confuse client-side rendering with server-side rendering. A framework like React in pure SPA (Single Page Application) mode often ships an HTML document that is empty apart from a <div id="root"></div> and builds all content in client-side JavaScript. For Google, that is a blank page until the JavaScript executes successfully. The solution: switch to SSR (Server-Side Rendering) with a framework like Next.js, or use static pre-rendering (Static Site Generation) for key pages.

Avoid conditional links based on cookies or client-side geolocation. If your menu displays different links depending on whether the user is in France or Belgium, and this detection occurs in JavaScript after loading, Google (which crawls from American IPs most of the time) will only see a partial version of your menu. Prefer universal navigation with linguistic or regional variations accessible via static links.

How can I check if my site respects these constraints?

Systematically test with JavaScript disabled in your browser. If your navigation links disappear, it means they are not in the initial HTML and Google likely won’t see them either. Complement this with a Screaming Frog crawl in "JavaScript rendering" mode and compare the number of discovered links versus a classic crawl without JS: any significant discrepancy indicates a crawlability issue.

Also, use the Search Console coverage report to identify pages discovered but not indexed. If entire sections of your hierarchy never appear in this report, it is often linked to non-crawlable conditional navigation. Cross-reference with server logs to confirm that Googlebot is not even requesting those URLs: total absence of crawl = discovery issue, not indexing.

  • Audit the rendered HTML via Search Console and compare it with the raw source
  • Redesign navigation to inject all critical links into the initial DOM, without user interaction conditions
  • Switch to SSR or static pre-rendering for modern JavaScript frameworks
  • Test with JavaScript disabled: navigation should remain functional (or at least links should be present)
  • Screaming Frog crawl with/without JS to measure link discovery discrepancies
  • Monitor server logs to identify pages never crawled despite their presence in the sitemap
Optimizing the crawlability of internal links can quickly become a complex technical challenge, especially on sites with a heavy JavaScript component or sophisticated navigation architectures. If these adjustments exceed your internal resources or if you want to ensure implementation without the risk of regression, the support of an SEO agency specializing in technical architecture can be crucial for finely auditing your HTML rendering and implementing crawlable navigation at scale.

❓ Frequently Asked Questions

Is a link loaded in JavaScript after DOMContentLoaded crawled by Google?
Yes, as long as the link ends up in the rendered DOM without requiring user interaction. Google waits a few seconds for JavaScript to execute, but never simulates a click or hover to trigger loading.
Are hover-triggered mega-menus a problem for SEO?
Only if the links do not exist in the initial DOM and are loaded on hover. A mega-menu whose links are present in the HTML but hidden with CSS and revealed on hover poses no crawl problem.
Does Google crawl links injected by frameworks like React or Vue without SSR?
Only if the framework generates the HTML client-side before Google times out. In practice, a pure SPA without SSR or static pre-rendering is very likely to see its internal links poorly crawled or ignored.
Does an XML sitemap compensate for the absence of crawlable links in the rendered HTML?
Partially. The sitemap lets Google discover URLs, but the lack of crawlable internal links hurts internal PageRank and crawl depth. Google always favors natural HTML links for distributing authority.
How can I test what Google actually sees in my rendered HTML?
Use the "URL Inspection" tool in Google Search Console, "Rendered HTML" tab. Compare it with the raw source code. Complement this with Screaming Frog in JavaScript rendering mode and a browser test with JS disabled.
