
Official statement

Links with a no-follow attribute can still be followed by Google depending on the case. JavaScript links with on-click handlers are not considered links by Google during JavaScript rendering.
🎥 Source video

Extracted from a Google Search Central video

⏱ 54:55 💬 EN 📅 31/03/2020 ✂ 10 statements
Watch on YouTube (39:49) →
Other statements from this video (9)
  1. 2:06 Does Google really adapt its algorithms in times of crisis?
  2. 4:43 Is the DMCA really enough to protect your stolen content from duplicate content?
  3. 8:30 Should you really place schema.org publisher markup on every page of your site?
  4. 10:39 Do you really need 1200px images to appear in Google Discover?
  5. 18:29 Can JavaScript turn your unique pages into duplicate content in Google's eyes?
  6. 20:44 Does Google really read the content of images to rank them?
  7. 36:11 Should you really worry about 404 errors piling up in Search Console?
  8. 39:23 Is content hidden under mobile-first really taken into account by Google for indexing?
  9. 41:52 Does structured data benefit SEO even without visible rich snippets?
📅 Official statement from 31/03/2020
TL;DR

Google can follow no-follow links in certain cases, contrary to the common misconception that they completely block crawling. JavaScript links with on-click handlers are not recognized as real links during rendering. Essentially, this means a no-follow is not an absolute guarantee against crawling, and improperly implemented JS links are invisible to Googlebot.

What you need to understand

Is no-follow still an absolute blocking signal?

Mueller's statement breaks a belief firmly held for years: that no-follow completely blocks the crawling of a URL. The reality is more nuanced. Google may decide to follow a no-follow link if other signals encourage it — for example, if the URL is already known through other sources, or if the context justifies exploratory crawling.

This flexibility is not exhaustively documented. Google uses no-follow as a recommendation, not as an absolute directive. In other words, you suggest to Googlebot not to follow the link, but it may override this based on its own judgment. This behavior has been confirmed by several practitioners who observe crawls on URLs that are exclusively linked with no-follow.

Why don't JavaScript on-click links count as links?

Links that use an on-click handler without an <a href="..."> tag are not considered links by Googlebot. Even if the bot renders the JavaScript and executes certain events, it does not detect a link if it does not exist in the DOM as an <a> element with a valid href attribute.

This point is crucial for sites that have switched to modern JavaScript frameworks. If your menus, calls-to-action, or internal links are implemented via div or button with on-click, Google simply does not see them. JavaScript rendering does not compensate for a failing link architecture. This is a common mistake on poorly configured React, Vue, or Angular sites.
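To make the difference concrete, here is a minimal markup sketch; the handler names (`goToPage`, `handleRoute`) are hypothetical:

```html
<!-- Invisible to Googlebot: no <a href>, navigation happens only in JS -->
<div onclick="goToPage('/products')">Products</div>
<button onclick="goToPage('/products')">Products</button>

<!-- Crawlable: a real <a> with a valid href; a handler can still
     intercept the click for client-side routing -->
<a href="/products" onclick="handleRoute(event)">Products</a>
```

Note that the router link components of modern frameworks (React Router's `<Link>`, Vue Router's `<router-link>`) render this kind of `<a href>` by default, which is why correctly configured setups remain crawlable.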

What are the implications for internal link architecture?

This statement challenges two widespread practices. First, the use of no-follow to control crawl budget: if Google can still follow the link, you lose a portion of your control leverage. Second, the implementation of links via JavaScript: if you rely on JS rendering for Google to discover your pages, you are taking a risk.

The issue arises especially for sites with thousands of pages. If you no-follow entire categories thinking they won't be crawled, but Google decides to follow them anyway, you end up with a dispersed crawl budget on pages you specifically wanted to avoid. On the JavaScript side, the solution is simple: every important link must exist in the initial HTML, not just after JavaScript execution.
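If the goal is to keep Googlebot out of entire sections, a robots.txt rule is a much firmer lever than no-follow. A minimal sketch, with hypothetical paths:

```
User-agent: *
# Unlike no-follow, Disallow actually prevents crawling of these sections
Disallow: /internal-search/
Disallow: /filtered-facets/
```

Keep in mind that Disallow blocks crawling, not indexing: a blocked URL can still end up indexed via external links, just without its content.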

  • No-follow is not a guarantee against crawling — Google can follow the link if other signals encourage it.
  • On-click links are not recognized as links — they do not appear in Google's link graph.
  • JavaScript rendering does not replace a valid href — even if the bot executes the JS, it does not detect a link without an <a> tag.
  • Link architecture should be considered in native HTML — JavaScript can enrich the experience, but not replace base links.
  • Crawl control via no-follow is less reliable than with robots.txt or noindex — prefer these methods for truly blocking a URL.

SEO Expert opinion

Is this statement consistent with what is observed on the ground?

Yes, and that’s what makes it interesting. For years, SEOs have reported crawls on URLs exclusively linked with no-follow. Google had never really clarified this behavior — the official line remained vague. Here, Mueller confirms what the server logs were showing: the no-follow is a hint, not an absolute directive.

The problem is that we still do not know exactly under what circumstances Google decides to follow a no-follow. Is it linked to the popularity of the URL? The depth of the site? A random exploratory crawl? [To be verified] — Google provides no specific criteria. This lack of transparency makes it difficult to base any crawl control strategy solely on no-follow.

Are JavaScript links really invisible in all cases?

Mueller's statement is clear: on-click handlers without href are not links. But it should be nuanced. If a site uses a modern framework with a client-side router and links are dynamically generated with valid <a href="..."> in the DOM after rendering, Google can see them.

The real issue arises when the link exists only as an event listener on a non-semantic element. For example, a <div onclick="goToPage('/product')"> will never be crawled. Even if Googlebot executes the JavaScript, it does not detect this type of navigation. This is often seen on e-commerce sites that have migrated to React or Vue without considering link architecture.

Should one abandon no-follow to control crawling?

No, but it should be combined with other levers. The no-follow remains useful to signal to Google that a link is not editorial — that’s actually its initial role. But if your goal is to strictly block the crawling of a URL, prefer robots.txt or a noindex tag.

Let’s be honest: relying solely on no-follow to manage your crawl budget is risky. Google can still decide to follow the link, and you have no way to predict this behavior. If a page truly should not be crawled, use a Disallow in robots.txt. No-follow remains relevant for outbound links, UGC, or sponsored links — but not as the only internal control mechanism.

Practical impact and recommendations

What should you do with internal no-follow links?

First step: audit your internal no-follow links. If you are using this attribute to control crawl budget, check your server logs to see whether Google is still crawling those URLs. On many client sites, we observe that 20 to 30% of no-followed URLs are crawled anyway. That is wasted budget if these pages have no SEO value.
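As a rough illustration of that log audit, a few lines of Python can flag Googlebot requests for URLs you link only with no-follow. The log format (Apache/Nginx "combined") and the URL list are assumptions to adapt to your stack:

```python
# Sketch: flag Googlebot hits on URLs your site links only with no-follow.
import re

# Hypothetical set of URLs linked exclusively with rel="nofollow"
NOFOLLOWED = {"/private-filter", "/tmp-campaign"}

# Matches the request, status, size, referrer, and user-agent fields
# of a "combined" access log line
LOG_RE = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*" \d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def crawled_anyway(log_lines):
    """Return the no-followed URLs that Googlebot requested anyway."""
    hits = set()
    for line in log_lines:
        m = LOG_RE.search(line)
        # NB: a serious audit should verify Googlebot via reverse DNS,
        # since the user-agent string is trivially spoofable.
        if m and "Googlebot" in m.group("ua") and m.group("path") in NOFOLLOWED:
            hits.add(m.group("path"))
    return hits

sample = [
    '66.249.66.1 - - [01/04/2020:10:00:00 +0000] "GET /private-filter HTTP/1.1" '
    '200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.5 - - [01/04/2020:10:01:00 +0000] "GET /private-filter HTTP/1.1" '
    '200 512 "-" "Mozilla/5.0"',
]
print(crawled_anyway(sample))  # → {'/private-filter'}
```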

If you truly want to block a URL, switch to robots.txt or noindex. The no-follow remains useful to signal non-editorial links — comments, widgets, outbound links — but do not rely on it as a crawl barrier. And if you are no-following links to important pages just to manage depth, rethink your architecture: it’s better to improve internal linking than to no-follow indiscriminately.
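For reference, a noindex signal lives in the page itself:

```html
<!-- In the <head>: blocks indexing. Google must still be able to crawl
     the page to see this tag, so don't combine it with a robots.txt Disallow. -->
<meta name="robots" content="noindex">
```

For non-HTML resources (PDFs, images), the equivalent is the `X-Robots-Tag: noindex` HTTP response header.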

How to ensure your JavaScript links are crawlable?

Use the Search Console and URL Inspection Tool to see the DOM rendered by Google. If your links do not appear as valid <a href="..."> in the rendered HTML, they are invisible. You can also analyze the initial source code: every critical link must exist before the execution of JavaScript.

Specifically, if you are using a JS framework, ensure that your router generates <a> tags with href in the DOM. No clickable divs, no navigation driven purely by JavaScript without href. And test with a browser that has JavaScript disabled: if your links stop working, Google probably doesn't see them either. It’s a brutal but effective test.
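As a complement to those checks, a short standard-library script can list the links present in the initial HTML, before any JavaScript runs. The sample markup below is hypothetical:

```python
# Sketch: collect the <a href> links Googlebot can see in raw HTML,
# before JavaScript execution, using only the standard library.
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:  # an <a> without href is not a crawlable link
                self.hrefs.append(href)

source = """
<nav>
  <a href="/category/shoes">Shoes</a>
  <div onclick="goToPage('/category/bags')">Bags</div>
  <a>Anchor without href</a>
</nav>
"""

collector = HrefCollector()
collector.feed(source)
print(collector.hrefs)  # → ['/category/shoes']
```

Run this against the view-source HTML, then compare with the rendered DOM from the URL Inspection Tool: any link that appears only in the rendered version depends entirely on JavaScript execution.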

What mistakes should be avoided in managing internal links?

A classic mistake: massive no-following to 'optimize' internal PageRank. PageRank Sculpting has been obsolete for over ten years: Google ignores no-follow links when calculating PageRank, so you gain nothing, and you risk blocking URLs that should be crawled. It's better to build a logical link architecture than to scatter no-follow everywhere.

Another pitfall: implementing menus or calls-to-action solely in JavaScript without a fallback HTML link. This is seen on sites that have migrated to modern frameworks without prior SEO audits. The result: orphan pages, invisible categories, a collapsed internal linking structure. If you are redesigning your site, always keep a layer of native HTML links.

  • Audit server logs to identify no-follow URLs that are still being crawled
  • Replace no-follow with robots.txt or noindex to truly block a URL
  • Check in Search Console that all important links exist as <a href> in the rendered DOM
  • Test site navigation with JavaScript disabled to spot invisible links
  • Ban clickable divs or buttons without href for critical internal links
  • Review link architecture if you are massively no-following to control crawl
No-follow is not an absolute lock, and improperly implemented JavaScript links are invisible to Google. Favor a native HTML link architecture, and reserve no-follow for non-editorial links. To truly block a URL, use robots.txt or noindex. These optimizations often touch upon deep technical aspects — redesign, JavaScript framework, large-scale crawl management. If your site has a complex architecture or recurring crawl budget issues, it may be wise to consult a specialized SEO agency for a complete diagnosis and tailored support.

❓ Frequently Asked Questions

Does a no-follow link pass PageRank?
No. Google confirmed long ago that no-follow links do not pass PageRank. PageRank Sculpting via no-follow no longer works.
Does Google systematically crawl no-follow links?
No, but it may decide to follow them in certain cases. No-follow is treated as a recommendation, not an absolute directive.
How do you make a JavaScript link crawlable?
The link must exist as an <a href="..."> tag in the DOM after rendering. A simple on-click on a div or button is not enough.
Should you no-follow pagination links to save crawl budget?
No, this is generally not recommended. Google can still crawl them, and you risk blocking the indexing of important pages. Use rel="next/prev" or an optimized pagination architecture instead.
What is the best way to block crawling of a URL?
Use robots.txt to block crawling, or a noindex tag to block indexing. No-follow alone is not reliable for this use case.
🏷 Related Topics
JavaScript & Technical SEO · Links & Backlinks


