Official statement
Other statements from this video (49)
- 1:38 Does Google really track HTML links that are hidden by JavaScript?
- 1:46 Can JavaScript really hide your links from Google without destroying them?
- 3:43 Is it really necessary to optimize the first link on a page for SEO?
- 3:43 Does Google really combine signals from multiple links pointing to the same page?
- 5:20 Do site-wide links in the menu and footer really dilute the PageRank of your strategic pages?
- 6:22 Is it really necessary to nofollow site-wide links to your legal pages to optimize PageRank?
- 7:24 Should you really keep nofollow on your footer links and service pages?
- 10:10 Why does Google make it impossible to use Search Console Insights without Analytics?
- 11:08 Does Nofollow still affect crawling without passing on PageRank?
- 13:50 Why is Google so tight-lipped about its indexing incidents?
- 15:58 Should you really index all paged pages to optimize your SEO?
- 15:59 Is it really necessary to index all pagination pages to optimize your SEO?
- 19:53 Are URL parameters still an obstacle for organic search?
- 19:53 Are URL parameters really a non-issue for SEO anymore?
- 21:50 Is it true that Google is blocking the indexing of new sites?
- 23:56 Do links in embedded tweets really affect your SEO?
- 25:33 Are sitemaps really essential for Google indexing?
- 26:03 How does Google really discover your new URLs?
- 27:28 Why does Google require a canonical on ALL AMP pages, including standalone ones?
- 27:40 Is the rel=canonical really mandatory on all AMP pages, even standalone ones?
- 28:09 Should you really implement hreflang across an entire multilingual site?
- 28:41 Should you really implement hreflang on every page of a multilingual website?
- 29:08 Is it true that AMP is a speed factor for Google?
- 29:16 Should you still invest in AMP to optimize speed and ranking?
- 29:50 Why does Google measure Core Web Vitals on the actual page version your visitors are really viewing?
- 30:20 Do Core Web Vitals really measure what your users actually see?
- 31:23 Should you manually deindex old pagination URLs after changing your site's architecture?
- 31:23 Is it really necessary to manually de-index your old pagination URLs?
- 32:08 Is advertising on your site harming your SEO?
- 32:48 Does having ads on your site really hurt your Google rankings?
- 34:47 Is rel=canonical in syndication really reliable for controlling indexing?
- 34:47 Does rel=canonical really protect your syndicated content from ranking theft?
- 38:14 Do security alerts in Search Console really block Google's crawling?
- 38:14 Can a hacked site lose its crawl budget due to Google security alerts?
- 39:20 Have links in guest posts really lost all SEO value?
- 39:20 Do guest post links really have no SEO value?
- 40:55 Why does Google ignore identical modification dates in your sitemaps?
- 40:55 Why does Google ignore the lastmod dates in your XML sitemap?
- 42:00 Should you really update the lastmod date of the sitemap for every minor change?
- 42:21 Does a poorly configured sitemap really diminish your crawl budget?
- 43:00 Can a misconfigured sitemap really cut down your crawl budget?
- 44:34 Should you really have to choose between reducing duplicate content and using canonical tags?
- 44:34 Is it really necessary to eliminate all duplicate content or should you rely on rel=canonical?
- 45:10 Should you really set a crawl limit in Search Console?
- 45:40 Should you really let Google decide your crawl limit?
- 47:08 Do internal 301 redirects really dilute PageRank?
- 47:48 Do cascading internal 301 redirects really drain SEO juice?
- 49:53 Can the JavaScript History API really force Google to change your canonical URL?
- 49:53 Can Google really treat URL changes made by JavaScript and the History API as redirects?
Google now treats nofollow as a simple discovery hint, not an absolute block. Specifically, pages linked by nofollow links can be crawled and indexed, but there is no guarantee of PageRank or ranking signals being passed. This nuance changes the game for internal linking strategies and crawl budget management on large sites.
What you need to understand
What has changed in the functioning of nofollow?
Historically, nofollow was a strict directive: Google simply did not follow these links. The target URLs remained invisible to the crawler unless another, normal link referenced them. Today, Google may decide to follow these links to discover new pages, even though the nofollow attribute is present.
This evolution turns nofollow into a hint rather than an instruction. Google reserves the right to crawl or not, depending on its interpretation of the context. A page with only nofollow backlinks can thus appear in the index — something that was impossible before.
Is PageRank transmission still blocked?
This is where it gets tricky. Google states that crawling and the passing of signals are two independent processes. A URL discovered via nofollow can be indexed, but that does not mean it will receive any SEO juice. PageRank transmission remains blocked by default.
In practical terms, if your page is crawled only through nofollow links, it may appear in the index with catastrophic ranking, due to a lack of incoming signals. It exists, but Google does not attribute any authority to it. Nofollow still protects against the leakage of PageRank, but no longer against discovery.
Why did Google change this behavior?
The official reason: to improve content discovery. In practice, Google wants to avoid useful pages being invisible simply because they are protected by nofollow. Think about forums, comments, user-generated content sections — all areas where nofollow was heavily applied.
But this logic also serves Google's interests: more crawled pages, a more complete index, better overall relevance. The engine prefers to decide for itself what deserves indexing, rather than letting webmasters fully control it through nofollow. It is a subtle transfer of power.
- Nofollow no longer prevents discovery — Google can crawl the target URLs
- PageRank still does not pass through nofollow, except in undocumented exceptions
- Indexing does not guarantee ranking — a nofollowed page can be indexed without authority
- Crawl budget strategies must evolve — nofollow is no longer sufficient to block crawler access
- Robots.txt and meta robots remain essential for strict blocking
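Before deciding which mechanism applies where, you first need an inventory of the nofollow links a page carries. A minimal sketch using Python's standard-library HTML parser (the sample markup and paths are illustrative, not from the video):

```python
from html.parser import HTMLParser

class NofollowAudit(HTMLParser):
    """Collects hrefs of <a> tags whose rel attribute contains 'nofollow'."""

    def __init__(self):
        super().__init__()
        self.nofollow_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        # rel can hold several space-separated tokens, e.g. "sponsored nofollow"
        rel = (attrs.get("rel") or "").lower().split()
        if "nofollow" in rel and "href" in attrs:
            self.nofollow_links.append(attrs["href"])

html = """
<a href="/editorial" rel="nofollow">editorial link</a>
<a href="/followed">normal link</a>
<a href="/sponsored-page" rel="sponsored nofollow">ad</a>
"""
parser = NofollowAudit()
parser.feed(html)
print(parser.nofollow_links)  # ['/editorial', '/sponsored-page']
```

Run against your own templates, this quickly shows whether nofollow sits on editorial links (where it may deprive you of signals) or on widgets and footers (where a stricter directive may be warranted).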
SEO Expert opinion
Is this statement consistent with observed practices on the ground?
Yes and no. Since Google introduced this nuance, there are indeed indexed pages with only nofollow backlinks. But the frequency and logic remain opaque. Some sites see their nofollowed URLs crawled massively, while others do not at all — with no clear pattern.
The real issue: Google provides no explicit criteria for knowing when nofollow will be respected or ignored. Is it related to domain authority? The type of link? The context of the page? Impossible to know. This lack of transparency makes SEO planning uncertain. Check your own crawl logs to measure the real impact.
What nuances should be added to this logic of ‘independent signal’?
Google claims that crawling and ranking are decoupled. Let's be honest: this separation is theoretical. If a page is crawled, it consumes crawl budget. If it is indexed without signals, it can dilute the perceived quality of the site as a whole.
And then there are the gray areas: some observers note that pages with nofollow still receive a minimal boost in certain contexts — especially when the link comes from an ultra-authoritative domain. Google speaks of a 'hint', not an 'absolute block'. Translation: they can do what they want. Validate this with rigorous A/B testing if you manage a large inventory.
In what cases does this rule not apply?
Nofollow remains strictly respected in two key situations: paid links explicitly marked (sponsored, nofollow) and areas where Google applies a zero-tolerance policy — for example, detected link schemes. In these cases, the engine takes no risks and ignores everything.
But be careful: if your nofollow link is in a sidebar full of outgoing links or in a generic footer, Google may decide not to even consider it a hint. Context matters. A nofollow in an editorial article is more likely to be followed than a nofollow in an automated widget.
Practical impact and recommendations
What should you do with nofollow links on your site?
First, audit your crawl logs. Identify if Google follows your internal nofollow links. If so, how frequently and on which sections. You may discover that areas meant to be protected (facets, filters, pagination pages) are still being crawled — which unnecessarily eats up budget.
Next, reevaluate your internal linking strategy. If you were using nofollow to prevent the discovery of low-value pages (thank-you pages, order confirmations), switch to a real exclusion directive: meta robots noindex or robots.txt blocking. Nofollow is no longer sufficient to guarantee invisibility.
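Once you move pages behind robots.txt, it is worth verifying that the rules actually block what you intend. A sketch using Python's standard-library `urllib.robotparser` (the Disallow paths are hypothetical examples, not prescriptions):

```python
from urllib import robotparser

# Hypothetical robots.txt blocking the faceted/filter and transactional
# URLs that nofollow alone no longer keeps out of the crawl.
rules = """
User-agent: *
Disallow: /filter/
Disallow: /order-confirmation/
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

for url in ("https://example.com/filter/size-m",
            "https://example.com/order-confirmation/123",
            "https://example.com/category/shoes"):
    allowed = rp.can_fetch("Googlebot", url)
    print(url, "->", "crawlable" if allowed else "blocked")
```

Here the two excluded sections come back as blocked while the category page stays crawlable; running the same check against your production robots.txt catches typos before Googlebot does.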
What errors should you avoid in managing nofollow links?
Error number one: believing that nofollow = zero crawl. Google can pass through, and if the page is indexed without signals, it risks poisoning your index with weak content. You thought you were protecting your crawl budget, but you end up with thousands of ghost pages indexed.
Another trap: overusing nofollow on strategic internal linking. Some SEOs nofollow out of reflex — footer links, sidebar, filters. But if these pages have real user value, you deprive Google of useful signals. Nofollow should be reserved for zones without editorial value, not applied en masse out of laziness.
How to verify that your site complies with this new reality?
Start by extracting all your internal nofollow links using a Screaming Frog crawl or Oncrawl. Cross-reference with your server logs: how many of these URLs are still visited by Googlebot? If the rate exceeds 20-30%, you have a problem of consistency between intention and reality.
Next, check the indexing of these pages with site: queries or via Google Search Console. If nofollowed pages appear in the index, ask yourself: do they deserve to be there? If not, move to strict exclusion. If yes, remove the nofollow and let them receive signals normally.
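The cross-referencing step above can be sketched in a few lines of Python: take the nofollowed URLs exported from your crawler, match them against Googlebot hits in the server logs, and compute the visit rate. The URL paths and log lines below are illustrative samples, not real data:

```python
import re

# Nofollowed URLs exported from a crawler (e.g. Screaming Frog) —
# these paths are hypothetical.
nofollowed = {"/filter/red", "/filter/blue", "/legal/terms", "/page/2"}

# Sample lines in Apache/Nginx combined log format (shortened).
log_lines = [
    '66.249.66.1 - - [10/May/2024:10:00:00 +0000] "GET /filter/red HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [10/May/2024:10:01:00 +0000] "GET /page/2 HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '10.0.0.5 - - [10/May/2024:10:02:00 +0000] "GET /filter/blue HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]

request_re = re.compile(r'"GET (\S+) HTTP')
crawled = {
    m.group(1)
    for line in log_lines
    if "Googlebot" in line and (m := request_re.search(line))
}

visited = nofollowed & crawled
rate = len(visited) / len(nofollowed)
print(f"{rate:.0%} of nofollowed URLs visited by Googlebot")  # 50%
if rate > 0.3:
    print("Inconsistency between crawl intention and reality")
```

In a real audit you would also verify the Googlebot IP ranges (user-agent strings can be spoofed), but even this rough ratio tells you whether your nofollow intentions match what the crawler actually does.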
- Audit your crawl logs to detect nofollowed URLs still visited by Google
- Replace nofollow with noindex or robots.txt on pages you really want to exclude
- Reserve nofollow for non-editorial outgoing links (UGC, comments, sponsored)
- Remove nofollow from your strategic internal links if the target pages have real SEO value
- Monitor involuntary indexing of nofollowed pages via Search Console
- Document your crawl strategy to avoid inconsistencies between dev and SEO teams
❓ Frequently Asked Questions
Does nofollow still prevent Google from crawling a page?
Can a page with only nofollow backlinks rank?
Should you still use nofollow on internal links?
Does nofollow still protect against PageRank leakage?
How can you tell whether Google is crawling your nofollowed pages?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 55 min · published on 21/08/2020
🎥 Watch the full video on YouTube →