What does Google say about SEO?

Official statement

Links that only appear after a user action (click) will not be seen by Googlebot, as it does not interact with pages. However, if the links are present in the source code but simply hidden (then revealed), Google will see them because they are extracted from the HTML before and after rendering.
🎥 Source video

Extracted from a Google Search Central video

⏱ 48:50 💬 EN 📅 27/01/2021 ✂ 15 statements
Watch on YouTube (44:44) →
Other statements from this video (14)
  1. 1:01 Does Googlebot crawl and render JavaScript at the same frequency?
  2. 4:17 Does Googlebot truly execute JavaScript like a real browser?
  3. 4:50 Is it true that Googlebot really ignores all content loaded after user interaction?
  4. 6:53 Is rendered HTML really the only reference for Google indexing?
  5. 7:23 Can you really rely on Google's cache to check JavaScript indexing?
  6. 7:54 Does JavaScript really affect your crawl budget?
  7. 9:00 Does Google really index the entirety of your pages or just strategic fragments?
  8. 12:08 Do CSS classes labeled 'SEO' really harm your SEO rankings?
  9. 16:36 Can Google's cache really skew the rendering of your JavaScript pages?
  10. 20:27 Could removing JavaScript links make your pages invisible to Google?
  11. 23:54 Why do live tests in Search Console produce conflicting results?
  12. 26:00 How can you manage URL parameters to prevent indexing issues?
  13. 30:47 Why does Google discover your pages but refuse to index them?
  14. 35:39 Can an XML sitemap really trigger a targeted recrawl of your pages?
TL;DR

Google distinguishes between two types of hidden links: those absent from the initial source code and revealed by JavaScript after interaction (invisible to Googlebot), and those present in the HTML but hidden by CSS (crawlable). This technical nuance directly impacts the discoverability of your strategic pages. Specifically: if your internal linking relies on accordions, tabs, or dropdown menus generated post-click, you're sabotaging your crawl budget.

What you need to understand

What’s the difference between 'visually hidden' and 'non-existent in the DOM'?

The confusion often arises from a mix-up between CSS visibility and presence in the HTML. A link can be invisible on the screen (display:none, opacity:0, absolute positioning off-screen) while still being present in the initial source code. In this case, Googlebot extracts it without any issues during HTML parsing.

Conversely, a link dynamically generated by JavaScript after a user event (onClick, onHover, infinite scroll) does not exist in the DOM before interaction. Googlebot does not simulate any user actions — it doesn't click, scroll, or hover. These links remain invisible to the crawl, even if the final JavaScript rendering displays them correctly.
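The distinction can be illustrated with a minimal link extractor built on Python's standard-library HTML parser. The markup and URLs below are invented for illustration; the point is that a crawler parsing raw HTML sees a CSS-hidden link but cannot see a link that only a click handler would create:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects every href found in <a> tags, visible or not."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def extract(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

# Link hidden with CSS but present in the initial HTML: extractable.
hidden_by_css = '<nav style="display:none"><a href="/category/shoes">Shoes</a></nav>'

# Link created only by a click handler: the initial HTML contains
# nothing but the trigger button, so there is no <a> to extract.
added_on_click = '<button onclick="loadMenu()">Menu</button><div id="menu"></div>'

print(extract(hidden_by_css))   # ['/category/shoes'] — crawlable
print(extract(added_on_click))  # [] — invisible to the crawler
```

This mirrors what happens during Google's first pass: parsing is purely structural, so CSS visibility is irrelevant, while anything behind an event handler simply does not exist yet.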

How does Googlebot handle JavaScript during crawling?

Google processes pages in two stages: it first fetches the raw HTML, then queues the URL for deferred JavaScript rendering. This second phase is resource-intensive on Google's side and is not guaranteed for every URL. Rendering lets Google see elements that JavaScript adds automatically on load, but never those that require an interaction.

Specifically, if your framework (React, Vue, Angular) loads a complete menu via an onClick event listener, Googlebot will only see the trigger button. The menu content? Invisible. It's a black hole in your internal linking.

Why does this technical limitation pose a practical problem?

Modern sites abuse interactive UX patterns: FAQ accordions, category tabs, conditional mega menus. These components enhance the user experience but fragment discoverability for bots. If your category page loads 50 additional products on clicking 'See more', these URLs will never be crawled via this page.

Result: you create structural orphans. Strategic pages that only exist in your XML sitemap or via external links, never in your crawlable internal structure. Your internal PageRank does not circulate properly, and your crawl depth skyrockets.

  • Links present in the initial HTML but hidden by CSS are crawlable — use display:none without worry for mobile accessibility
  • Links generated post-interaction (click, hover, scroll) are invisible to Googlebot which simulates no user action
  • Google's JavaScript rendering doesn't compensate for this limitation: it displays what loads automatically, not what requires action
  • This rule applies even to modern full JavaScript sites — client-side hydration isn't enough if conditioned on an event
  • The solution involves Server-Side Rendering or initial HTML inclusion of strategic links, even if they are visually hidden afterwards

SEO Expert opinion

Is this statement consistent with field observations?

Yes, and it’s actually one of Google’s most empirically verifiable claims. Tests with Google Search Console (URL Inspection tool) consistently confirm: links added by onClick event listeners disappear from the captured rendering. Crawling tools like Screaming Frog with JavaScript enabled easily spot these black holes.

I’ve observed dozens of cases where e-commerce sites lost 30-40% of their internal linking due to product filters or infinite pagination initiated by clicks. The correlation with patchy indexing of deep pages is clear — Google crawls less, indexes less, and the organic traffic for long-tail categories collapses.

What nuances should be added to this rule?

First nuance: Google speaks of 'links' but the principle extends to any content conditioned on interaction. A text block revealed by clicking a 'Read more' button will not be indexed. A data table loaded after selecting a filter? Same. You lose semantic potential and ranking opportunities on long-tail queries.

Second nuance: some modern frameworks (Next.js, Nuxt) implement static pre-rendering that sidesteps the problem. If your links are generated server-side before being sent to the client, they exist in the initial HTML even if the user interaction hides/shows them later. This is the winning strategy — but it requires a real technical overhaul. [To be verified]: Google communicates little about its ability to crawl links present in the Shadow DOM or custom Web Components.

Under what circumstances does this limitation really impact SEO?

Three critical situations. First, the conditional mega menus of e-commerce sites: if your level 2-3 categories only show on hover (and are loaded via AJAX), Googlebot doesn't see them. Your thematic siloing collapses. Next, poorly implemented FAQ accordions: if each answer is in a separate div loaded on click, you lose the SEO benefit of long-form content.

Finally, infinite 'scroll to load more' pagination. If the 'Load 20 more products' button generates URLs dynamically on click, those pages remain orphaned. You need to either implement classic numbered pagination alongside it (often visually hidden) or expose every paginated URL as a plain server-side <a href> link. Don't count on rel="next"/"prev" alone: Google announced in 2019 that it no longer uses these attributes as an indexing signal.
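A minimal sketch of the server-side fallback: render every page of the series as a plain numbered link in the initial HTML, so each URL is crawlable even if the interactive 'Load more' button sits on top. The function name and the /products?page=N URL scheme are assumptions for illustration:

```python
def pagination_html(total_items, per_page, base="/products"):
    """Render plain numbered pagination links (hypothetical URL scheme).

    Every page of the series exists as a crawlable <a href> in the
    initial HTML; CSS may hide the list visually without removing it
    from the DOM, so the interactive button and the bot coexist."""
    pages = -(-total_items // per_page)  # ceiling division
    links = [f'<a href="{base}?page={n}">{n}</a>' for n in range(1, pages + 1)]
    return '<nav class="pagination">' + " ".join(links) + "</nav>"

print(pagination_html(50, 20))
# three crawlable links: page=1, page=2, page=3
```

Because the links are emitted server-side, Googlebot discovers page 2 and page 3 on the very first HTML fetch, with no rendering phase needed.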

Be careful: don't confuse this limitation with cloaking. Hiding links from users with CSS while leaving them in the HTML for Googlebot is legitimate; it's even recommended for mobile accessibility. Cloaking means serving different HTML depending on the user-agent, not using display:none.

Practical impact and recommendations

How to audit your hidden links and detect black holes?

First step: crawl your site twice with Screaming Frog, once with JavaScript disabled and once with it enabled. Compare the two exports of internal links. Any link present only in the JS crawl is suspect: check whether it requires interaction or loads automatically. Gaps of 15-20% are common; beyond 30%, you have a structural issue.
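The two-crawl comparison boils down to a set difference. A minimal sketch, assuming each export has been reduced to a list of internal URLs (the URLs and function names below are placeholders, not a Screaming Frog API):

```python
def js_only_links(html_crawl, js_crawl):
    """Links found only in the JavaScript-enabled crawl: each one either
    loads automatically during rendering (fine) or needs an interaction
    (a crawl black hole worth investigating)."""
    return sorted(set(js_crawl) - set(html_crawl))

def gap_ratio(html_crawl, js_crawl):
    """Share of internal links invisible to the raw-HTML crawl."""
    js = set(js_crawl)
    return len(js - set(html_crawl)) / len(js) if js else 0.0

html_crawl = ["/", "/shoes", "/bags"]
js_crawl = ["/", "/shoes", "/bags", "/shoes/page-2", "/sale"]

print(js_only_links(html_crawl, js_crawl))        # ['/sale', '/shoes/page-2']
print(f"{gap_ratio(html_crawl, js_crawl):.0%}")   # 40% — above the 30% alert threshold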

Second step: use the URL Inspection tool from Google Search Console on your strategic pages. Check the 'Rendered HTML' tab and compare it with your live page. If entire sections are missing (FAQs, product grids, sub-menus), it’s because they are conditioned on interaction. Also test with curl in the command line to see the raw HTML — that’s what Googlebot receives on the first pass.

What technical corrections to apply concretely?

Immediate solution: inject all strategic links into the initial HTML, even if you mask them visually with display:none or aria-hidden. For a FAQ accordion, the complete content must be present on load, and the JavaScript merely toggles visibility. It’s compatible with accessibility (screen readers) and the crawl.
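The crawl-safe accordion pattern can be sketched as a tiny server-side renderer. This is an illustrative sketch, not a framework API: every answer ships in the initial HTML, hidden with the standard `hidden` attribute, and the only job left for client-side JavaScript is toggling that attribute on click:

```python
def accordion_html(items):
    """Render a FAQ accordion with every answer in the initial HTML.

    The `hidden` attribute hides each answer visually; a click handler
    only needs to toggle it — nothing is fetched or created post-click,
    so a raw-HTML crawler sees the full content."""
    parts = []
    for i, (question, answer) in enumerate(items):
        parts.append(
            f'<button aria-expanded="false" aria-controls="a{i}">{question}</button>'
        )
        parts.append(f'<div id="a{i}" hidden>{answer}</div>')
    return "\n".join(parts)

faq = [
    ("Is display:none cloaking?", "No, the HTML is identical for everyone."),
    ("Are hover menus crawlable?", "Yes, if the links are in the initial HTML."),
]

html = accordion_html(faq)
# Every answer is present in the source before any interaction:
assert all(answer in html for _, answer in faq)
print(html)
```

The native <details>/<summary> element achieves the same result with no JavaScript at all, since its collapsed content is also part of the initial HTML.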

Sustainable solution: switch to Server-Side Rendering (SSR) or Static Site Generation (SSG). Frameworks like Next.js (React), Nuxt (Vue), or SvelteKit handle this natively. Your pages are pre-rendered server-side with all links, then interactivity is added client-side (hydration). Googlebot receives complete HTML on the first request, without waiting for JS rendering.

For complex cases (product filters, infinite pagination), implement a crawlable alternative navigation: a classic HTML sitemap, numbered pagination at the bottom of the page, complete breadcrumbs. This dual navigation (interactive UX + crawlable fallback) is the norm on large e-commerce sites. It's more code, but it’s the price paid to reconcile modern UX with SEO discoverability.

  • Crawl your site with JavaScript disabled to identify links invisible to the initial crawl
  • Check via Google Search Console (URL Inspection) that your strategic links appear in the rendered HTML
  • Refactor interactive components (accordions, tabs) to include content in the initial DOM, masked in CSS
  • Implement SSR/SSG if your tech stack allows, otherwise plan a crawlable alternative navigation
  • Systematically test each deployment with a crawl bot to detect internal link regressions
  • Document in your dev guidelines the rule 'no strategic links conditioned on an onClick/onHover'
The trade-off between interactive UX and crawlability is delicate — each JavaScript pattern must be evaluated from an SEO perspective before implementation. Dev teams rarely think 'bot' when coding a feature, highlighting the importance of a regular technical audit. If your stack is complex (SPA, micro-frontends, headless architecture), these optimizations often require cross-expertise between dev and SEO. Engaging a specialized SEO agency can expedite compliance and avoid costly mistakes on technical overhauls.

❓ Frequently Asked Questions

Is a link hidden with display:none penalized by Google?
No. Hiding a link with CSS (display:none, visibility:hidden) is not cloaking as long as the HTML is identical for every user-agent. Google crawls and follows these links normally. It is even recommended for mobile accessibility.
Are hover-triggered dropdown menus crawlable?
Yes, if the links are present in the initial HTML and only CSS handles the display on hover. No, if a script dynamically loads the menu content on the first hover. Check your raw source code to decide.
How does Google handle lazy-loaded images that contain links?
If a lazy-loaded image is wrapped in a link (an <a> tag around the <img>), that link must be present in the initial HTML to be crawled. Lazy-loading should only affect the image's src attribute, not the HTML structure of the link.
Are React/Vue SPAs doomed for SEO because of this limitation?
No, not if they implement SSR (Server-Side Rendering) or pre-rendering. The problem comes from full client-side SPAs that generate all of the HTML in the browser after the initial load. Next.js, Nuxt, and the like solve this natively.
Should you abandon accordions and tabs for SEO?
Not at all: just include all the content in the initial HTML and let CSS or JavaScript handle only the show/hide. The user sees tabs; Googlebot sees all the content at once. It is both compatible and performant.

