Official statement
Google strictly defines a link as an HTML <a> tag with an href attribute. An image alone, even if clickable via JavaScript or CSS, does not count as a link for the search engine. The nofollow or sponsored attributes only apply to real links, meaning any alternative navigation structure risks being ignored for crawling and PageRank transfer.
What you need to understand
Why does Google limit its definition of a link to the `<a>` tag?
The answer lies in the historical architecture of the web and in the technical constraints of crawling. Google built its engine on the analysis of traditional HTML link graphs, the ones that have existed since the 1990s. The `<a>` tag with its href attribute is the universal standard for declaring a relationship between two pages.
Modern alternatives such as JavaScript click handlers on buttons, interactive CSS areas, or event listeners do not produce a signal Googlebot can use during the initial crawl. The bot follows the links declared in the DOM at HTML parsing time, before JavaScript has fully executed. An image or a div made clickable by a script therefore remains invisible to it.
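This first-pass behavior can be mimicked with a minimal sketch using Python's standard-library `html.parser`: a parser that collects only `<a href>` values sees nothing in a clickable div or an image with an onclick handler. The markup and URLs below are invented for illustration.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags — the only navigation
    signal available before any JavaScript runs."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# Hypothetical page fragment: one real link, two JS-only "links".
html = """
<a href="/products">Products</a>
<div onclick="location.href='/hidden'">Hidden section</div>
<img src="promo.jpg" onclick="go('/promo')">
"""

parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # only the <a href> survives: ['/products']
```

The div and the image are perfectly clickable for a human, yet they never enter the link graph at parse time.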
What does this definition imply for clickable images?
An image can serve as a clickable area for a user without ever communicating that information to Google. If you use an onclick event on an `<img>` tag to trigger navigation, Googlebot detects no link. The engine sees only a static image with no declared destination.
For an image to become a link in Google's eyes, it must be nested inside an `<a>` tag. The correct structure looks like this: `<a href="/target-page"><img src="image.jpg" alt="description"></a>` (URL and filenames here are illustrative). Without this encapsulation, you lose the navigation signal, the PageRank transfer, and the bot's ability to discover the target page.
Do nofollow attributes work differently depending on the context?
No, and that's precisely what Mueller clarifies here. The rel="nofollow", rel="sponsored" or rel="ugc" attributes only apply to valid `<a>` tags. You cannot apply nofollow to an image, a button, or a div.
This restriction avoids ambiguity: if an image without an `<a>` tag is not a link, it can be neither followed nor nofollowed. The very concept of nofollow presupposes the existence of a link in the technical sense. Any attempt to bypass this rule with JavaScript or custom attributes fails from a crawling perspective.
SEO Expert opinion
Does this rule match the field observations of SEO practitioners?
Absolutely. Technical audits regularly show that pure-JavaScript navigation creates blind spots for Googlebot. Single-page applications that generate their links via frameworks without `<a>` tags run into recurring indexing issues. Google has improved its JavaScript rendering, but initial discovery remains conditioned by HTML links.
I have seen sites leave dozens of pages orphaned because their menus used clickable divs with event listeners instead of traditional links. Googlebot would visit the homepage, detect no links to internal sections, and leave. Search Console would report a successful crawl, but no child pages were discovered. The issue disappeared immediately after migrating to `<a>` tags.
Are there cases where this strict definition poses a problem?
Yes, particularly with modern interfaces that prioritize user experience over SEO compatibility. Progressive web applications, animated hamburger menus, or interactive image galleries often rely on heavy JavaScript. If the developer does not account for a fallback HTML layer, the site becomes partially invisible.
React, Vue, or Angular frameworks sometimes generate structures where links are created dynamically on click, without a prior href. Google can index these pages if they are discovered through another path (sitemap, external link), but internal linking does not function. Internal PageRank does not flow correctly, and some sections remain under-explored.
Can this rule be bypassed with prerendering or SSR?
Server-side rendering or HTML prerendering does effectively solve the problem, but only if the final HTML contains `<a>` tags. If your SSR generates clickable divs with event handlers, you have resolved nothing. The goal is to serve Googlebot a DOM containing valid links on the first parse.
Solutions like Next.js or Nuxt.js facilitate this approach by automatically generating `<a>` tags for navigation components. But I have audited poorly configured SSR sites where the server rendering emitted no href at all, only data-route attributes consumed client-side. Google followed nothing. This is something to verify systematically after any migration to a modern framework: the source HTML must contain exploitable links without JavaScript.
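The failure mode described above can be screened for mechanically. Here is a hedged sketch, again on Python's stdlib `html.parser`, that flags anchors carrying only a client-side routing attribute (the `data-route` pattern from the anecdote) and no href; the sample SSR output is invented for illustration.

```python
from html.parser import HTMLParser

class AnchorAudit(HTMLParser):
    """Separates crawlable <a href> anchors from anchors that only
    carry a client-side routing attribute (here, data-route)."""
    def __init__(self):
        super().__init__()
        self.crawlable = []
        self.broken = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        if a.get("href"):
            self.crawlable.append(a["href"])
        else:
            self.broken.append(a.get("data-route", "?"))

# Hypothetical SSR output: one valid link, one JS-router-only anchor.
ssr_html = '<a href="/blog">Blog</a><a data-route="/shop">Shop</a>'

audit = AnchorAudit()
audit.feed(ssr_html)
print(audit.crawlable)  # ['/blog']
print(audit.broken)     # ['/shop'] — invisible to the initial crawl
```

Running this kind of check against the raw server response, rather than the browser-rendered DOM, is what exposes the discrepancy.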
Practical impact and recommendations
How can you check if your images and buttons generate valid links?
The first step is to inspect the raw HTML source, the one Googlebot sees before JavaScript execution. Right-click > View Page Source, then search for `<a>` tags. If your navigation elements do not appear as `<a href>` links in this code, they are invisible to the initial crawl.
Next, use the URL inspection tool in Google Search Console to test the rendering. The “HTML” section shows you exactly what Googlebot has parsed. Compare it with the “live” version to identify discrepancies. If links appear only after JavaScript rendering, you have a discoverability problem.
What technical errors must absolutely be corrected?
Navigation menus built with div onclick handlers or JavaScript event listeners are the most common offenders: replace them with real `<a>` links styled in CSS.
Image galleries that use JavaScript lightboxes or modals pose the same problem. If the image is not wrapped in an `<a>` tag, Google does not follow the destination. Add an HTML layer with links to the product pages or high-resolution versions, even if the user interface hides those links for a richer experience.
In what contexts does the nofollow attribute remain relevant?
The nofollow attribute retains its usefulness for managing internal PageRank sculpting and signaling unreliable external links. On an e-commerce site, you might choose to nofollow links to sorting filters or login pages to concentrate the crawl budget on categories and product sheets. But this directive only works if the targeted elements are real `<a>` links.
For user-generated links (comments, forums), rel="ugc" offers a more precise alternative. For commercial partnerships, rel="sponsored" clearly indicates the nature of the relationship. These attributes protect your site from penalties related to link schemes, but remember: they only apply to `<a>` tags with an href.
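These qualifiers can be read off the same way a link extractor would. The sketch below, with invented URLs, buckets `<a href>` links by their rel token; anything without an href never even reaches the classification step, which is exactly why rel on a div or an image is meaningless.

```python
from html.parser import HTMLParser

class RelClassifier(HTMLParser):
    """Buckets outgoing <a href> links by rel qualifier."""
    def __init__(self):
        super().__init__()
        self.buckets = {"followed": [], "nofollow": [], "sponsored": [], "ugc": []}

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        a = dict(attrs)
        if not a.get("href"):
            return  # no href, no link: rel would be irrelevant anyway
        rel = (a.get("rel") or "").split()
        for kind in ("nofollow", "sponsored", "ugc"):
            if kind in rel:
                self.buckets[kind].append(a["href"])
                break
        else:
            self.buckets["followed"].append(a["href"])

html = (
    '<a href="/category">Category</a>'
    '<a href="/login" rel="nofollow">Login</a>'
    '<a href="https://partner.example" rel="sponsored">Partner</a>'
    '<a href="/forum/reply" rel="ugc">Reply</a>'
)

c = RelClassifier()
c.feed(html)
print(c.buckets["followed"])  # ['/category']
print(c.buckets["nofollow"])  # ['/login']
```

A real link can carry several rel tokens at once (e.g. `rel="sponsored nofollow"`); this sketch keeps only the first match for brevity.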
- Audit the raw HTML source to identify navigation that is not declared in `<a>` tags
- Migrate JavaScript menus and buttons to traditional links with CSS styling
- Encapsulate all clickable images in `<a>` tags to transmit PageRank
- Check Googlebot's rendering via Search Console after each technical overhaul
- Apply nofollow, sponsored, or ugc only on valid `<a>` tags
- Test discoverability of orphaned pages with a Screaming Frog or Sitebulb crawl
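The orphan-page test in the last bullet boils down to a graph traversal over `<a href>` links only. Here is a minimal sketch on an invented four-page site: a page reachable only through a JavaScript onclick never gets discovered, which is precisely what a Screaming Frog crawl would surface.

```python
from html.parser import HTMLParser

class Links(HTMLParser):
    """Collects internal href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.out.append(href)

# Hypothetical site: /legacy is reachable only via a JS onclick,
# so it has no crawlable inbound link.
site = {
    "/": '<a href="/blog">Blog</a><div onclick="go(\'/legacy\')">Old</div>',
    "/blog": '<a href="/blog/post-1">Post</a>',
    "/blog/post-1": '<a href="/">Home</a>',
    "/legacy": "Orphaned content",
}

seen, queue = {"/"}, ["/"]
while queue:
    page = queue.pop()
    p = Links()
    p.feed(site[page])
    for url in p.out:
        if url in site and url not in seen:
            seen.add(url)
            queue.append(url)

orphans = sorted(set(site) - seen)
print(orphans)  # ['/legacy']
```

Any page in the orphan set depends entirely on sitemaps or external links for discovery, with no internal PageRank flowing to it.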
❓ Frequently Asked Questions
Can an image with an onclick attribute be considered a link by Google?
Should you add rel="nofollow" to a clickable image without an `<a>` tag?
Does server-side rendering guarantee that my links are crawlable?
How can I tell whether my images transmit PageRank?
Do modern frameworks like React cause problems for internal linking?
🎥 From the same video: other SEO insights extracted from this same Google Search Central video · duration 1h00 · published on 27/07/2018