
Official statement

Googlebot follows internal links even if they are made visible through a hover effect, provided that the links are included in the initial HTML code loaded by the browser.
🎥 Source video

Extracted from a Google Search Central video

⏱ 54:45 💬 EN 📅 24/08/2017 ✂ 33 statements
Watch on YouTube (9:54) →
Official statement from John Mueller (24/08/2017)
TL;DR

Google confirms that Googlebot follows internal links even if they only appear on hover, as long as they are present in the initial HTML. This clarification addresses a common doubt about dropdowns and mega menus built with CSS/JS. Ultimately, what matters is not immediate visibility but actual presence in the DOM when the page loads.

What you need to understand

What does "present in the initial HTML" really mean?

The distinction is crucial: Googlebot reads the HTML code returned by the server before any JavaScript is executed. If your internal links already exist in this raw HTML, even if hidden by CSS (display:none, opacity:0, transform, position:absolute off-screen), the bot can see and follow them.

Conversely, if your links are dynamically injected by JavaScript after the initial load (e.g., a click or hover event listener generating the link through DOM manipulation), Googlebot may not discover them immediately. You then have to rely on JavaScript rendering, which occurs later and is not guaranteed for all URLs.
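To make the distinction concrete, here is a minimal sketch (hypothetical markup, naive regex extraction rather than a real HTML parser) of what a first-pass crawl can and cannot see in the raw HTML:

```javascript
// Naive href extraction from raw server HTML (before any JavaScript runs).
// A demo sketch, not a production-grade HTML parser.
function extractHrefs(html) {
  const hrefs = [];
  const re = /<a\s[^>]*href=["']([^"']+)["']/gi;
  let match;
  while ((match = re.exec(html)) !== null) hrefs.push(match[1]);
  return hrefs;
}

// Case 1: the link is in the initial HTML, merely hidden by CSS.
const cssHidden = `
  <nav>
    <a href="/category/shoes" style="display:none">Shoes</a>
  </nav>`;

// Case 2: an SPA shell; the link only appears after client-side rendering.
const spaShell = `<div id="app"></div>
  <script>/* the router injects the navigation later */</script>`;

console.log(extractHrefs(cssHidden)); // [ '/category/shoes' ]
console.log(extractHrefs(spaShell)); // []
```

The CSS-hidden link is discoverable on the first pass; the SPA shell exposes nothing until JavaScript rendering happens.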

Why is there a distinction between CSS and JavaScript?

Google clearly differentiates between structural accessibility (is the link in the HTML?) and visual accessibility (is the link displayed?). A link hidden by CSS remains technically crawlable because it exists in the DOM. A link added in JS afterward requires an additional rendering step that Googlebot does not consistently perform on all pages.

This approach is explained by the resource cost of JavaScript rendering: Google cannot execute JS across the entire web in real-time. The initial crawl is based on raw HTML, with JS rendering occurring afterward in a separate queue.

In what scenarios does this statement actually apply?

The architectures concerned are primarily CSS-only dropdowns, mega menus that display on hover via :hover, or sidebars where sections unfold through CSS transitions. As long as the <a href> tags are present from the start, Googlebot will crawl them.
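As an illustration, a minimal CSS-only dropdown of this kind might look like the following (hypothetical markup; class names are illustrative). The <a href> tags are in the initial HTML, and CSS only controls their visibility:

```html
<nav class="menu">
  <span class="menu-trigger">Categories</span>
  <!-- The links exist in the raw HTML served by the server. -->
  <ul class="dropdown">
    <li><a href="/category/shoes">Shoes</a></li>
    <li><a href="/category/bags">Bags</a></li>
  </ul>
</nav>
<style>
  .dropdown { display: none; }
  /* Revealed on hover; Googlebot still sees the links in the raw HTML. */
  .menu:hover .dropdown { display: block; }
</style>
```

Because the anchors are present before any script runs, a first-pass crawl can follow them even though a user only sees them on hover.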

However, be cautious with modern frameworks (React, Vue, Angular) that generate the entire DOM client-side: if your navigation does not exist in the HTML served by the server (which you can verify by disabling JavaScript in the browser), you are entirely reliant on Google's JavaScript rendering. This is suboptimal for crawl budget and offers no guarantee of quick discovery.

  • Links in the initial HTML + hidden by CSS: normally crawled by Googlebot
  • Links injected via JavaScript after loading: deferred crawl, not immediately guaranteed
  • CSS-only dropdowns: no documented crawl problem
  • SPAs without SSR/pre-rendering: total reliance on JavaScript rendering, risk of late discovery
  • Simple check: View Source (Ctrl+U) vs Inspect Element to compare raw HTML and rendered DOM

SEO Expert opinion

Is this statement consistent with real-world observations?

Yes, it aligns with empirical tests conducted on high-traffic sites. Server logs show that Googlebot does follow links hidden with pure CSS, including those visible only on hover, with discovery delays similar to those of permanently visible links.

However, the crawl speed of links generated in client-side JS remains to be verified: Google claims to discover them through JavaScript rendering, but real-world feedback indicates variable delays (from a few hours to several weeks, depending on crawl budget). On low-authority niche sites, some JS-only links are never crawled. Mueller's statement therefore remains optimistic for sites with low crawl budgets.

What nuances should be added to this assertion?

Mueller does not specify the timing of JavaScript rendering or its prioritization. A link that is present only after JS execution may be discovered, but when? The JS rendering queue is opaque and not publicly documented. On massive sites (e-commerce, marketplaces), relying on JS rendering for critical URL discovery is a risky strategy.

Another point: the phrasing "present in the initial HTML code" remains vague. What about links loaded via an iframe or a shadow DOM? What about links in data- attributes retrieved by JS? Google does not provide technical details. When in doubt, the safe rule remains: direct link in the <body> of the HTML served by the server.

In what scenarios does this rule not apply fully?

Single-page applications (SPAs) without Server-Side Rendering or pre-rendering are edge cases. If your initial HTML contains only an empty <div id="app"></div>, all your links depend on JS rendering. Google indicates it will crawl them, but observations show slower and incomplete discovery compared to standard HTML.

Another exception: links in modals or overlays loaded via AJAX on click. If the loading trigger is a user event (clicking a non-link button), Googlebot will not simulate that click. The final link remains invisible to the bot even if it could technically be rendered in JS. This is a common blind spot on e-commerce sites with dynamic filters.

Warning: Do not confuse "Googlebot can follow the link" with "Google will actually index the target page." Crawling does not imply indexing, especially for low-value or duplicate URLs. This statement relates to discovery, not ranking.

Practical impact and recommendations

What actionable steps should be taken to optimize the crawl of internal links?

Always prioritize traditional HTML links in the initial source code. If you use dropdown menus, implement them in pure CSS (transitions, :hover) instead of JavaScript. This ensures that all navigation links are crawlable from the first pass by Googlebot, without relying on JS rendering.

For sites under modern frameworks (Next.js, Nuxt, SvelteKit), always enable Server-Side Rendering (SSR) or Static Site Generation (SSG) for strategic pages. This injects your links into the initial HTML. If SSR is too costly, targeted pre-rendering of pages with high SEO potential (categories, landing pages) is an acceptable compromise.
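The point of SSR/SSG can be reduced to one idea: the server emits real <a href> tags in the response body before any client-side JavaScript runs. A minimal, framework-agnostic sketch (the category data is hypothetical):

```javascript
// Hypothetical category data; in a real app this would come from a database.
const categories = [
  { slug: 'shoes', label: 'Shoes' },
  { slug: 'bags', label: 'Bags' },
];

// Server-side template: real <a href> tags land in the HTML response,
// so the links are crawlable on Googlebot's first pass, without JS rendering.
function renderNav(items) {
  const links = items
    .map((c) => `<li><a href="/category/${c.slug}">${c.label}</a></li>`)
    .join('');
  return `<nav><ul>${links}</ul></nav>`;
}

console.log(renderNav(categories));
```

Frameworks like Next.js or Nuxt do exactly this at a larger scale: the navigation is part of the HTML the server returns, not something the browser has to build.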

What common mistakes should absolutely be avoided?

Never hide critical links using display:none or visibility:hidden permanently. Even if Google can technically follow them, this sends a negative signal: why hide content that is supposed to be important? Google may interpret this as an attempt to manipulate if the link is never visible to a real user.

Avoid links generated solely on-click via JavaScript event listeners (e.g., addEventListener('click', () => { location.href = '...' })). These pseudo-links are not <a> tags, and Googlebot does not follow them. Always use real <a href="..."> tags, even if you override the behavior with JS.
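Side by side, the anti-pattern and the safe pattern look like this (hypothetical markup). The second version keeps a real anchor that Googlebot can follow, while JavaScript is still free to enhance the behavior:

```html
<!-- Anti-pattern: not a link; Googlebot will not follow it. -->
<div class="card" onclick="location.href='/product/42'">See product</div>

<!-- Safe pattern: a real anchor, optionally enhanced by JavaScript. -->
<a class="card" href="/product/42">See product</a>
<script>
  document.querySelector('a.card').addEventListener('click', (event) => {
    event.preventDefault(); // take over the navigation in JS if needed
    window.location.assign(event.currentTarget.href); // same destination
  });
</script>
```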

How can I check that my architecture complies with the recommendations?

Test by disabling JavaScript in Chrome (DevTools > Settings > Debugger > Disable JavaScript) and navigate your site. If your navigation links disappear, you have a problem: they are not in the initial HTML. Use Google's Mobile-Friendly Test (which shows the final rendering) and compare that with a raw View Source.

Analyze your server logs to identify the URLs Googlebot visits. If some strategic pages never receive visits despite internal links pointing to them, that is a symptom of non-crawlable links. Cross-reference with Google Search Console (Coverage > Discovered - currently not indexed) to spot URLs that were discovered but never indexed.

  • Check the raw HTML (Ctrl+U) to confirm the presence of links before any JavaScript
  • Implement dropdown menus in pure CSS rather than dynamic JS
  • Enable SSR or pre-rendering on strategic pages of SPAs
  • Use actual <a href> tags, never clickable divs with event listeners
  • Review Googlebot logs to verify the effective crawl of linked pages
  • Test the site with JavaScript disabled to simulate the initial crawl
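Part of this checklist can be automated. A rough sketch (naive regex parsing, hypothetical sample HTML): compare the hrefs found in the raw server HTML with those in the rendered DOM; anything present only after rendering depends on Google's JS rendering to be discovered:

```javascript
// Extract href values with a naive regex; a sketch, not a production crawler.
function extractHrefs(html) {
  return [...html.matchAll(/<a\s[^>]*href=["']([^"']+)["']/gi)].map((m) => m[1]);
}

// Links that appear only in the rendered DOM rely on the JS rendering queue.
function jsOnlyLinks(rawHtml, renderedHtml) {
  const raw = new Set(extractHrefs(rawHtml));
  return extractHrefs(renderedHtml).filter((href) => !raw.has(href));
}

// Hypothetical sample: the raw HTML has one link, the rendered DOM has two.
const rawHtml = '<a href="/home">Home</a>';
const renderedHtml = '<a href="/home">Home</a><a href="/promo">Promo</a>';
console.log(jsOnlyLinks(rawHtml, renderedHtml)); // [ '/promo' ]
```

In practice, the raw HTML would come from View Source (or a curl request) and the rendered DOM from the browser inspector or Google's URL Inspection tool.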
The recommendation is clear: structure your internal linking in native HTML rather than in JavaScript. Visual effects (animations, hover, transitions) can be managed in CSS without impacting crawl. For complex architectures requiring JS, server-side rendering is non-negotiable. If these optimizations seem technical or time-consuming, it may be wise to consult a specialized SEO agency that understands the balance between modern UX and crawl constraints.

❓ Frequently Asked Questions

Does Googlebot follow links hidden with display:none in CSS?
Yes, as long as the link exists in the initial HTML. Google distinguishes between structural accessibility (presence in the DOM) and visual accessibility (on-screen display). A CSS-hidden link remains crawlable.
Do pure-CSS dropdown menus cause any crawl problems?
No, none. If the links are present in the HTML and simply revealed on hover via :hover and CSS transitions, Googlebot follows them normally. This is a safe practice for SEO.
What happens if my links are generated only in client-side JavaScript?
They depend on Google's JavaScript rendering, which happens in a second pass and is not immediately guaranteed. On sites with a low crawl budget, these links may never be discovered. Favor the initial HTML.
How do I know whether my links are in the initial HTML or added by JS?
Right-click > View Page Source (Ctrl+U) and look for your links. If they do not appear there but only in the Inspector (rendered DOM), they are injected by JS and therefore not guaranteed for the initial crawl.
Are Single Page Applications (SPAs) penalized for internal link crawling?
Not penalized directly, but handicapped. Without SSR or pre-rendering, all links depend on JS rendering, which delays their discovery. On low-authority sites, some links may never be crawled. SSR is strongly recommended.