What does Google say about SEO?

Official statement

Google can now follow nofollow links to discover new URLs and potentially index them. However, crawling and signal passing are independent processes: just because a nofollowed page is crawled does not mean it will receive PageRank or ranking signals, and their transfer through nofollow is still not guaranteed.
🎥 Source video

Extracted from a Google Search Central video

⏱ 55:02 💬 EN 📅 21/08/2020 ✂ 50 statements
Watch on YouTube (11:08) →
Other statements from this video (49)
  1. 1:38 Does Google really track HTML links that are hidden by JavaScript?
  2. 1:46 Can JavaScript really hide your links from Google without destroying them?
  3. 3:43 Is it really necessary to optimize the first link on a page for SEO?
  4. 3:43 Does Google really combine signals from multiple links pointing to the same page?
  5. 5:20 Do site-wide links in the menu and footer really dilute the PageRank of your strategic pages?
  6. 6:22 Is it really necessary to nofollow site-wide links to your legal pages to optimize PageRank?
  7. 7:24 Should you really keep nofollow on your footer links and service pages?
  8. 10:10 Why does Google make it impossible to use Search Console Insights without Analytics?
  9. 11:08 Does Nofollow still affect crawling without passing on PageRank?
  10. 13:50 Why is Google so tight-lipped about its indexing incidents?
  11. 15:58 Should you really index all paged pages to optimize your SEO?
  12. 15:59 Is it really necessary to index all pagination pages to optimize your SEO?
  13. 19:53 Are URL parameters still an obstacle for organic search?
  14. 19:53 Are URL parameters really a non-issue for SEO anymore?
  15. 21:50 Is it true that Google is blocking the indexing of new sites?
  16. 23:56 Do links in embedded tweets really affect your SEO?
  17. 25:33 Are sitemaps really essential for Google indexing?
  18. 26:03 How does Google really discover your new URLs?
  19. 27:28 Why does Google require a canonical on ALL AMP pages, including standalone ones?
  20. 27:40 Is the rel=canonical really mandatory on all AMP pages, even standalone ones?
  21. 28:09 Should you really implement hreflang across an entire multilingual site?
  22. 28:41 Should you really implement hreflang on every page of a multilingual website?
  23. 29:08 Is it true that AMP is a speed factor for Google?
  24. 29:16 Should you still invest in AMP to optimize speed and ranking?
  25. 29:50 Why does Google measure Core Web Vitals on the actual page version your visitors are really viewing?
  26. 30:20 Do Core Web Vitals really measure what your users actually see?
  27. 31:23 Should you manually deindex old pagination URLs after changing your site's architecture?
  28. 31:23 Is it really necessary to manually de-index your old pagination URLs?
  29. 32:08 Is advertising on your site harming your SEO?
  30. 32:48 Does having ads on your site really hurt your Google rankings?
  31. 34:47 Is rel=canonical in syndication really reliable for controlling indexing?
  32. 34:47 Does rel=canonical really protect your syndicated content from ranking theft?
  33. 38:14 Do security alerts in Search Console really block Google's crawling?
  34. 38:14 Can a hacked site lose its crawl budget due to Google security alerts?
  35. 39:20 Have links in guest posts really lost all SEO value?
  36. 39:20 Do guest post links really have no SEO value?
  37. 40:55 Why does Google ignore identical modification dates in your sitemaps?
  38. 40:55 Why does Google ignore the lastmod dates in your XML sitemap?
  39. 42:00 Should you really update the lastmod date of the sitemap for every minor change?
  40. 42:21 Does a poorly configured sitemap really diminish your crawl budget?
  41. 43:00 Can a misconfigured sitemap really cut down your crawl budget?
  42. 44:34 Should you really have to choose between reducing duplicate content and using canonical tags?
  43. 44:34 Is it really necessary to eliminate all duplicate content or should you rely on rel=canonical?
  44. 45:10 Should you really set a crawl limit in Search Console?
  45. 45:40 Should you really let Google decide your crawl limit?
  46. 47:08 Do internal 301 redirects really dilute PageRank?
  47. 47:48 Do cascading internal 301 redirects really drain SEO juice?
  48. 49:53 Can the JavaScript History API really force Google to change your canonical URL?
  49. 49:53 Can Google really treat URL changes made by JavaScript and the History API as redirects?
TL;DR

Google now treats nofollow as a simple discovery hint, not an absolute block. Specifically, pages linked by nofollow links can be crawled and indexed, but there is no guarantee of PageRank or ranking signals being passed. This nuance changes the game for internal linking strategies and crawl budget management on large sites.

What you need to understand

What has changed in the functioning of nofollow?

Historically, nofollow was a strict directive: Google did not follow these links, period. The pointed URLs remained invisible to the crawler unless another normal link referenced them. Today, Google can decide to follow these links to discover new pages, even if the nofollow attribute is present.

This evolution turns nofollow into a hint rather than an instruction. Google reserves the right to crawl or not, depending on its interpretation of the context. A page with only nofollow backlinks can thus appear in the index — something that was impossible before.
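For reference, this is the link markup in question. Since 2019 Google treats rel="nofollow" and its two companion values, rel="sponsored" and rel="ugc", as hints rather than directives; the URLs below are illustrative:

```html
<!-- All three values are hints, not strict directives, since Google's 2019 change -->
<a href="https://example.com/partner" rel="sponsored">Paid placement</a>
<a href="https://example.com/profile" rel="ugc">User-generated content link</a>
<a href="https://example.com/page" rel="nofollow">Generic opt-out</a>
```

Values can also be combined, e.g. rel="nofollow sponsored", and Google reads them interchangeably for discovery purposes.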

Is PageRank transmission still blocked?

This is where it gets tricky. Google states that crawling and the passing of signals are two independent processes. A URL discovered via nofollow can be indexed, but that does not mean it will receive any SEO juice. PageRank transmission remains blocked by default.

In practical terms, if your page is crawled only through nofollow links, it may appear in the index with catastrophic ranking, due to a lack of incoming signals. It exists, but Google does not attribute any authority to it. Nofollow still protects against the leakage of PageRank, but no longer against discovery.

Why did Google change this behavior?

The official reason: to improve content discovery. In practice, Google wants to avoid useful pages being invisible simply because they are protected by nofollow. Think about forums, comments, user-generated content sections — all areas where nofollow was heavily applied.

But this logic also serves Google's interests: more crawled pages, a more complete index, better overall relevance. The engine prefers to decide for itself what deserves indexing, rather than letting webmasters fully control it through nofollow. It is a subtle transfer of power.

  • Nofollow no longer prevents discovery — Google can crawl the pointed URLs
  • PageRank still does not pass through nofollow, except in undocumented exceptions
  • Indexing does not guarantee ranking — a nofollowed page can be indexed without authority
  • Crawl budget strategies must evolve — nofollow is no longer sufficient to block crawler access
  • Robots.txt and meta robots remain essential for strict blocking
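To illustrate the last point, strict blocking looks like this: a robots.txt rule prevents crawling outright, while a meta robots noindex keeps a crawlable page out of the index. The paths below are illustrative, not taken from any specific site:

```text
# robots.txt — blocks crawling of faceted/filter URLs entirely
# (paths are placeholders)
User-agent: *
Disallow: /filters/
Disallow: /search
```

For pages that should stay crawlable but unindexed, use `<meta name="robots" content="noindex">` in the page head instead; note that a page blocked by robots.txt cannot have its noindex read, so the two levers should not be stacked on the same URL.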

SEO Expert opinion

Is this statement consistent with observed practices on the ground?

Yes and no. Since Google introduced this nuance, there are indeed indexed pages with only nofollow backlinks. But the frequency and logic remain opaque. Some sites see their nofollowed URLs crawled massively, while others do not at all — with no clear pattern.

The real issue: Google provides no explicit criteria for knowing when nofollow will be respected or ignored. Is it related to domain authority? The type of link? The context of the page? Impossible to know. This lack of transparency makes SEO planning uncertain. Verify against your own crawl logs to measure the real impact.

What nuances should be added to this logic of ‘independent signal’?

Google claims that crawling and ranking are decoupled. Let's be honest: this separation is theoretical. If a page is crawled, it consumes crawl budget. If it is indexed without signals, it can dilute the perceived quality of the site as a whole.

And then there are the gray areas: some observers note that pages with nofollow still receive a minimal boost in certain contexts — especially when the link comes from an ultra-authoritative domain. Google speaks of a 'hint', not an 'absolute block'. Translation: they can do what they want. Verify with rigorous A/B testing if you manage a large inventory.

In what cases does this rule not apply?

Nofollow remains strictly respected in two key situations: paid links explicitly marked (sponsored, nofollow) and areas where Google applies a zero-tolerance policy — for example, detected link schemes. In these cases, the engine takes no risks and ignores everything.

But be careful: if your nofollow link is in a sidebar full of outgoing links or in a generic footer, Google may decide not to even consider it a hint. Context matters. A nofollow in an editorial article is more likely to be followed than a nofollow in an automated widget.

Warning: never assume that a nofollow completely blocks Google. If you really want to prevent crawling, use robots.txt or meta noindex. Nofollow alone is no longer reliable for strict crawl management.

Practical impact and recommendations

What should you do with nofollow links on your site?

First, audit your crawl logs. Identify if Google follows your internal nofollow links. If so, how frequently and on which sections. You may discover that areas meant to be protected (facets, filters, pagination pages) are still being crawled — which unnecessarily eats up budget.

Next, reevaluate your internal linking strategy. If you were using nofollow to prevent the discovery of low-value pages (thank-you pages, order confirmations), switch to a real exclusion directive: meta robots noindex or robots.txt blocking. Nofollow is no longer sufficient to guarantee invisibility.

What errors should you avoid in managing nofollow links?

Error number one: believing that nofollow = zero crawl. Google can pass through, and if the page is indexed without signals, it risks poisoning your index with weak content. You thought you were protecting your crawl budget, but you end up with thousands of ghost pages indexed.

Another trap: overusing nofollow on strategic internal linking. Some SEOs nofollow out of reflex — footer links, sidebar, filters. But if these pages have real user value, you deprive Google of useful signals. Nofollow should be reserved for zones without editorial value, not applied en masse out of laziness.

How to verify that your site complies with this new reality?

Start by extracting all your internal nofollow links using a Screaming Frog crawl or Oncrawl. Cross-reference with your server logs: how many of these URLs are still visited by Googlebot? If the rate exceeds 20-30%, you have a problem of consistency between intention and reality.
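The cross-referencing step above can be sketched in a few lines. This is a minimal illustration, assuming a set of nofollowed paths exported from a crawler and access-log lines in the common combined format; a production version should also verify Googlebot IPs via reverse DNS, since the user-agent string alone can be spoofed:

```python
import re

# Naive user-agent match; real audits should confirm with reverse DNS.
GOOGLEBOT = re.compile(r"Googlebot", re.IGNORECASE)
# Extracts the request path from a combined-format log line.
LOG_PATH = re.compile(r'"(?:GET|HEAD) (\S+)')

def crawled_nofollow_rate(nofollow_urls, log_lines):
    """Return (hit_urls, rate): which nofollowed paths Googlebot visited,
    and the share of the nofollowed set that represents."""
    hits = set()
    for line in log_lines:
        if not GOOGLEBOT.search(line):
            continue
        m = LOG_PATH.search(line)
        if m and m.group(1) in nofollow_urls:
            hits.add(m.group(1))
    rate = len(hits) / len(nofollow_urls) if nofollow_urls else 0.0
    return hits, rate
```

If the returned rate exceeds the 20-30% threshold mentioned above, your nofollow intent and Google's actual crawl behavior have diverged.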

Next, check the indexing of these pages with site: queries or via Google Search Console. If nofollowed pages appear in the index, ask yourself: do they deserve to be there? If not, move to strict exclusion. If yes, remove the nofollow and let them receive signals normally.

  • Audit your crawl logs to detect nofollowed URLs still visited by Google
  • Replace nofollow with noindex or robots.txt on pages you really want to exclude
  • Reserve nofollow for non-editorial outgoing links (UGC, comments, sponsored)
  • Remove nofollow from your strategic internal links if the target pages have real SEO value
  • Monitor involuntary indexing of nofollowed pages via Search Console
  • Document your crawl strategy to avoid inconsistencies between dev and SEO teams
Nofollow is no longer a watertight barrier. Google now decides when to follow or ignore these links. To maintain control, combine several levers — noindex, robots.txt, canonical — and monitor your logs. This multi-layered management can quickly become complex on high-volume sites. If you lack internal resources or time to thoroughly audit crawl and indexing, contact a specialized SEO agency to avoid costly mistakes and optimize your crawl budget surgically.

❓ Frequently Asked Questions

Does nofollow still prevent Google from crawling a page?
No. Google can now follow nofollow links to discover and index URLs, even though PageRank does not pass. Nofollow has become a hint, not an absolute block.
Can a page with only nofollow backlinks rank?
Technically it can be indexed, but without incoming ranking signals it will rank very poorly. Indexing does not guarantee visibility.
Should you still use nofollow on internal links?
Yes, but only for sections without editorial value (filters, facets, technical pages). To truly block crawling, prefer robots.txt or meta noindex.
Does nofollow still protect against PageRank leakage?
Yes, according to Google. PageRank passing remains blocked by default through nofollow, even if the page is crawled. But no public data allows this claim to be verified independently.
How can you tell whether Google is crawling your nofollowed pages?
Analyze your server logs and cross-reference them with the list of your internal nofollow links. If Googlebot visits these URLs, it is ignoring the attribute for discovery. Search Console can also reveal unintended indexing.

