Official statement
Google now treats nofollow as a simple discovery hint, not an absolute block. Concretely, pages targeted by nofollow links can be crawled and indexed, but with no guarantee that PageRank or other ranking signals will pass. This nuance changes the game for internal linking strategies and crawl budget management on large sites.
What you need to understand
What has changed in the functioning of nofollow?
Historically, nofollow was a strict directive: Google did not follow these links, period. The target URLs remained invisible to the crawler unless another followed link referenced them. Today, Google can decide to follow these links to discover new pages, even if the nofollow attribute is present.
This evolution, announced in September 2019 and applied to crawling and indexing since March 2020, turns nofollow into a hint rather than an instruction. Google reserves the right to crawl or not, depending on its interpretation of the context. A page with only nofollow backlinks can therefore appear in the index, something that was impossible before.
Is PageRank transmission still blocked?
This is where it gets tricky. Google states that crawling and the passing of signals are two independent processes. A URL discovered via nofollow can be indexed, but that does not mean it will receive any SEO juice. PageRank transmission remains blocked by default.
In practical terms, if your page is crawled only through nofollow links, it may be indexed yet rank very poorly, for lack of incoming signals. It exists, but Google attributes no authority to it. Nofollow still protects against PageRank leakage, but no longer against discovery.
Why did Google change this behavior?
The official reason: to improve content discovery. In practice, Google wants to avoid useful pages being invisible simply because they are protected by nofollow. Think about forums, comments, user-generated content sections — all areas where nofollow was heavily applied.
But this logic also serves Google's interests: more crawled pages, a more complete index, better overall relevance. The engine prefers to decide for itself what deserves indexing, rather than letting webmasters fully control it through nofollow. It is a subtle transfer of power.
- Nofollow no longer prevents discovery — Google can crawl the pointed URLs
- PageRank still does not pass through nofollow, except in undocumented exceptions
- Indexing does not guarantee ranking — a nofollowed page can be indexed without authority
- Crawl budget strategies must evolve — nofollow is no longer sufficient to block crawler access
- Robots.txt and meta robots remain essential for strict blocking
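Since nofollow no longer guarantees exclusion, the strict blocking mentioned in the last point comes from robots.txt or a meta robots tag. A minimal sketch, with hypothetical paths:

```text
# robots.txt: Disallow blocks crawling of a section
# (the URL can still be indexed if other sites link to it)
User-agent: *
Disallow: /filters/
Disallow: /order-confirmation/

# Meta robots, placed in the page's <head>: blocks indexing,
# but the page must stay crawlable so Google can read the tag
<meta name="robots" content="noindex">
```

Note that the two should not be combined on the same URL: if robots.txt blocks crawling, Google never sees the noindex tag and the page may stay indexed.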
SEO Expert opinion
Is this statement consistent with observed practices on the ground?
Yes and no. Since Google introduced this nuance, there are indeed indexed pages whose only backlinks are nofollow. But the frequency and the logic remain opaque: some sites see their nofollowed URLs crawled heavily, others not at all, with no clear pattern.
The real issue: Google provides no explicit criteria for knowing when nofollow will be respected or ignored. Is it tied to domain authority? The type of link? The context of the page? Impossible to know. This lack of transparency makes SEO planning uncertain; check your own crawl logs to measure the real impact.
What nuances should be added to this logic of ‘independent signal’?
Google claims that crawling and ranking are decoupled. Let's be honest: this separation is theoretical. If a page is crawled, it consumes crawl budget. If it is indexed without signals, it can dilute the perceived quality of the site as a whole.
And then there are the gray areas: some observers note that nofollow links still pass a minimal boost in certain contexts, especially when the link comes from an ultra-authoritative domain. Google speaks of a "hint", not an "absolute block". Translation: they can do what they want. Validate with rigorous A/B testing if you manage a large inventory.
In what cases does this rule not apply?
Nofollow remains strictly respected in two key situations: explicitly marked paid links (rel="sponsored", combined with nofollow) and areas where Google applies a zero-tolerance policy, such as detected link schemes. In these cases, the engine takes no risks and ignores everything.
But be careful: if your nofollow link is in a sidebar full of outgoing links or in a generic footer, Google may decide not to even consider it a hint. Context matters. A nofollow in an editorial article is more likely to be followed than a nofollow in an automated widget.
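These distinctions map to the link attributes Google introduced alongside the hint model in 2019. A sketch, with hypothetical URLs:

```html
<!-- Paid placement: mark as sponsored (combining with nofollow is allowed) -->
<a href="https://example.com/partner" rel="sponsored">Partner offer</a>

<!-- User-generated content: comments, forum posts -->
<a href="https://example.com/profile" rel="ugc">User profile</a>

<!-- Generic hint: do not endorse, do not pass signals -->
<a href="https://example.com/widget" rel="nofollow">Widget link</a>
```

All three are treated as hints, but rel="sponsored" on paid links is the case where Google's handling is the most predictable.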
Practical impact and recommendations
What should you do with nofollow links on your site?
First, audit your crawl logs. Identify if Google follows your internal nofollow links. If so, how frequently and on which sections. You may discover that areas meant to be protected (facets, filters, pagination pages) are still being crawled — which unnecessarily eats up budget.
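The log audit can be sketched as a quick filter over access-log lines. A minimal example in Python, assuming a standard combined log format; the URL paths and log lines are hypothetical:

```python
import re
from collections import Counter

# Hypothetical set of internal URLs that are linked only with rel="nofollow"
NOFOLLOW_TARGETS = {"/filters/color=red", "/order-confirmation/123", "/cart"}

# Minimal matcher for Apache/Nginx combined-log lines: request path + user agent
LOG_RE = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*".*"(?P<ua>[^"]*)"$')

def googlebot_hits(log_lines):
    """Count Googlebot requests landing on URLs that nofollow was meant to hide."""
    hits = Counter()
    for line in log_lines:
        m = LOG_RE.search(line.rstrip())
        if m and "Googlebot" in m.group("ua") and m.group("path") in NOFOLLOW_TARGETS:
            hits[m.group("path")] += 1
    return hits

sample = [
    '66.249.66.1 - - [21/Aug/2020:10:00:00 +0000] "GET /filters/color=red HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"',
    '203.0.113.7 - - [21/Aug/2020:10:00:01 +0000] "GET /cart HTTP/1.1" 200 512 "-" "Mozilla/5.0 (Windows NT 10.0)"',
]
print(googlebot_hits(sample))
```

For a real audit, also verify that the user agent really is Googlebot (for example via reverse DNS), since the string can be spoofed.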
Next, reevaluate your internal linking strategy. If you were using nofollow to prevent the discovery of low-value pages (thank-you pages, order confirmations), switch to a real exclusion directive: meta robots noindex or robots.txt blocking. Nofollow is no longer sufficient to guarantee invisibility.
What errors should you avoid in managing nofollow links?
Error number one: believing that nofollow = zero crawl. Google can pass through, and if the page is indexed without signals, it risks poisoning your index with weak content. You thought you were protecting your crawl budget, but you end up with thousands of ghost pages indexed.
Another trap: overusing nofollow on strategic internal linking. Some SEOs nofollow out of reflex — footer links, sidebar, filters. But if these pages have real user value, you deprive Google of useful signals. Nofollow should be reserved for zones without editorial value, not applied en masse out of laziness.
How to verify that your site complies with this new reality?
Start by extracting all your internal nofollow links using a Screaming Frog crawl or Oncrawl. Cross-reference with your server logs: how many of these URLs are still visited by Googlebot? If the rate exceeds 20-30%, you have a problem of consistency between intention and reality.
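The cross-referencing step boils down to a set intersection. A minimal sketch in Python, with hypothetical URL lists standing in for the Screaming Frog export and the log extraction:

```python
def nofollow_crawl_rate(nofollow_urls, crawled_urls):
    """Share of nofollow-only URLs that Googlebot still visited."""
    if not nofollow_urls:
        return 0.0
    return len(nofollow_urls & crawled_urls) / len(nofollow_urls)

# Hypothetical exports: nofollow targets from the crawl tool,
# and URLs Googlebot actually requested, extracted from server logs
nofollow_urls = {"/filters/a", "/filters/b", "/cart", "/thank-you"}
googlebot_urls = {"/filters/a", "/product/42", "/thank-you"}

rate = nofollow_crawl_rate(nofollow_urls, googlebot_urls)
print(f"{rate:.0%} of nofollow targets still crawled")
if rate > 0.2:  # the 20-30% threshold mentioned above
    print("Gap between intention and reality: consider noindex or robots.txt")
```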
Next, check the indexing of these pages with site: queries or via Google Search Console. If nofollowed pages appear in the index, ask yourself: do they deserve to be there? If not, move to strict exclusion. If yes, remove the nofollow and let them receive signals normally.
- Audit your crawl logs to detect nofollowed URLs still visited by Google
- Replace nofollow with noindex or robots.txt on pages you really want to exclude
- Reserve nofollow for non-editorial outgoing links (UGC, comments, sponsored)
- Remove nofollow from your strategic internal links if the target pages have real SEO value
- Monitor involuntary indexing of nofollowed pages via Search Console
- Document your crawl strategy to avoid inconsistencies between dev and SEO teams
❓ Frequently Asked Questions
Does nofollow still prevent Google from crawling a page?
Can a page with only nofollow backlinks rank?
Should you still use nofollow on internal links?
Does nofollow still protect against PageRank leakage?
How can you tell whether Google crawls your nofollowed pages?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 55 min · published on 21/08/2020