Official statement
Other statements from this video
- 1:06 Does Google My Business really improve your site's SEO?
- 8:33 Why do new sites experience uncontrollable ranking fluctuations?
- 13:18 Why does Search Console show inconsistent indexing data?
- 19:35 Does a badly defined canonical really penalize your ranking in Google?
- 31:00 Does duplicate content really harm your Google indexing?
- 33:24 Multilingual sites: can Google merge your language versions if the content is too similar?
- 36:48 Does poorly implemented structured data really hold back your site's indexing?
- 39:41 Do 404 errors really harm your site's ranking?
- 40:19 Do internal anchors really dictate the titles of your sitelinks in Google?
- 44:21 Is Search Action markup really enough to make the sitelink searchbox appear in Google?
John Mueller states that noindex pages cannot pass PageRank, even if the links are technically crawled with the follow attribute. An unindexed page does not exist in Google's link graph, so no flow occurs. Specifically: if you block the indexing of an intermediate page, you break the chain of PageRank transfer to the target pages. Forget about strategic noindexing to sculpt your links.
What you need to understand
What does noindex with follow actually mean?
The noindex directive tells Google not to include a page in its index. The follow attribute allows the bot to crawl and follow the links present on that page. In theory, this is supposed to guide Googlebot to other URLs while keeping the intermediate page out of the index.
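To make the directive concrete, here is a minimal sketch (using only Python's standard library) of how a crawler might read the robots meta tag from a page's HTML. The `RobotsMetaParser` class name and the sample page are illustrative, not part of any real crawler:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.directives += [d.strip().lower()
                                for d in a.get("content", "").split(",")]

# A page marked noindex,follow: excluded from the index,
# but its links remain crawlable.
page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
parser = RobotsMetaParser()
parser.feed(page)
print(parser.directives)  # ['noindex', 'follow']
```

The same directives can also be sent as an HTTP response header (`X-Robots-Tag: noindex, follow`), which is useful for non-HTML resources such as PDFs.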
But here's the catch: Mueller specifies that if the page is not indexed, the links cannot pass anything. No indexing = no presence in the link graph = no PageRank flow. It's harsh, but logical: a page invisible to the index is invisible to the ranking algorithm.
Why is this distinction important?
For years, some SEOs believed they could use noindex,follow to finely control the distribution of internal PageRank. The idea was to block weak or duplicate pages with noindex while allowing juice to flow to strategic pages via outgoing links.
This statement from Mueller shatters that logic. If you noindex a page that served as a relay in your internal linking, you lose the transfer. The downstream target pages receive nothing, even if Googlebot technically visits the links.
How does Google handle a noindex page?
Once the noindex directive is detected, Google removes the page from its index during the next update cycle. The page disappears from search results but may continue to be crawled sporadically to check if the directive persists.
Crawling is not enough: without indexing, the page does not enter the PageRank calculations. It becomes an algorithmic dead end. The links it carries are technically detected but treated as if they don’t exist for popularity calculations.
- Noindex blocks indexing, not crawling — but that changes nothing about the PageRank flow.
- Follow has no effect if the page is not in the index: the links are seen but ignored for ranking.
- Internal linking loses its effectiveness as soon as an intermediate page goes noindex.
- Old PageRank sculpting tactics using noindex are outdated.
- Every noindex page must be justified: if it carries strategic links, you’re breaking your SEO architecture.
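The "invisible to the link graph" point can be illustrated with a toy PageRank computation. This is a simplified sketch, not Google's actual algorithm: the node names (`home`, `hub`, `product`) are hypothetical, and dangling pages simply redistribute their score evenly. Noindexing the hub is modeled by dropping its outgoing link from the graph, exactly as Mueller describes:

```python
def pagerank(links, damping=0.85, iters=50):
    """Iterative PageRank over a dict {page: [linked pages]}."""
    nodes = set(links) | {t for ts in links.values() for t in ts}
    pr = {n: 1 / len(nodes) for n in nodes}
    for _ in range(iters):
        # Teleport baseline for every node.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        # Each page shares its score among its outgoing links.
        for src, targets in links.items():
            if targets:
                share = damping * pr[src] / len(targets)
                for t in targets:
                    new[t] += share
        # Dangling pages redistribute their score evenly (simplification).
        dangling = sum(pr[n] for n in nodes if not links.get(n))
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        pr = new
    return pr

# Full graph: the home page links to a hub, the hub links to a product page.
full = {"home": ["hub"], "hub": ["product"], "product": []}
# Hub noindexed: it leaves the link graph, so its outgoing
# link no longer passes anything to the product page.
pruned = {"home": [], "hub": [], "product": []}

assert pagerank(full)["product"] > pagerank(pruned)["product"]
```

In the full graph the product page accumulates score relayed through the hub; once the hub's links are removed, the product page falls back to the uniform baseline.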
SEO Expert opinion
Is this statement consistent with field observations?
Yes, and it is actually a brutal confirmation of what many suspected. On sites where relay pages (filtered categories, tag pages) were put in noindex to avoid duplication, there was a noticeable drop in visibility for the downstream product or article pages. Internal links were losing their weight.
Some SEO audits revealed traffic losses of 20 to 40% across entire sections after aggressive cleaning via noindex. The cause? A broken link architecture: strategic pages were no longer receiving juice from the upper levels. [To be verified]: Google has never published specific data on the extent of this loss, but field experience is unanimous.
What nuances should be considered?
Mueller talks about PageRank, but what about other signals? Crawling remains active, so Googlebot can discover new URLs through a noindex page. The indexing of target pages is not blocked, just the transfer of popularity.
Another point: this rule applies to noindex in meta robots or X-Robots-Tag. It does not relate to robots.txt, which blocks crawling (and therefore prevents Google from seeing the links, but that’s another topic). Also, watch out for mixed cases: a noindex page accessible through multiple paths may still receive PageRank from other indexed URLs.
In what cases does this rule not apply?
If a page receives external backlinks while it's in noindex, those links do nothing for the site. But if the same URL is accessible in another indexed form (different canonical, ignored parameter), the juice may flow through the indexed version.
Another theoretical exception: 301 redirects. If you redirect a noindex page to an indexed page, the PageRank is supposed to be passed through the redirect, not through the noindex. But in this case, you might as well remove the noindex.
Practical impact and recommendations
What should you do if noindex pages carry strategic links?
The first step: audit all noindex pages on your site. Identify those that contain links to priority pages (product sheets, pillar articles, landing pages). If these intermediate pages are in noindex, you are losing internal PageRank.
Two solutions: either you remove the noindex and manage the risk of duplication differently (canonical, unique content, partial disallowing), or you modify the architecture so that strategic pages are linked directly from indexed pages. No compromises: noindex kills the juice.
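The audit step above can be partially automated. The sketch below (standard library only) flags noindex pages that still carry internal links, i.e. broken PageRank relays. The `audit` function and the sample URLs are hypothetical; in practice you would feed it HTML fetched from your own crawl:

```python
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Extracts the robots directives and outgoing links of one page."""
    def __init__(self):
        super().__init__()
        self.noindex = False
        self.links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            self.noindex = "noindex" in a.get("content", "").lower()
        elif tag == "a" and "href" in a:
            self.links.append(a["href"])

def audit(pages):
    """Given {url: html}, return the noindex pages that still carry links."""
    report = {}
    for url, html in pages.items():
        scanner = PageScanner()
        scanner.feed(html)
        if scanner.noindex and scanner.links:
            report[url] = scanner.links
    return report

pages = {
    "/category?filter=red": (
        '<meta name="robots" content="noindex, follow">'
        '<a href="/product-1">P1</a><a href="/product-2">P2</a>'
    ),
    "/about": '<a href="/contact">Contact</a>',
}
print(audit(pages))
# {'/category?filter=red': ['/product-1', '/product-2']}
```

Every URL in the report is a relay whose links pass nothing: either remove its noindex or link its targets from indexed pages.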
What mistakes should you avoid in managing noindex?
The classic mistake: putting filtered category or paginated pages in noindex to avoid duplicate content, without realizing that they serve as a hub in the internal linking. Result: product sheets at the end of the chain lose their internal popularity.
Another trap: noindex on tag or taxonomy pages that group thematic content. These pages can concentrate a lot of internal and external links. Noindexing them is like throwing PageRank in the trash. Prefer editorial work to differentiate these pages rather than a blind noindex.
How to restructure your linking if noindex breaks the transfer?
If you cannot remove the noindex (technical pages, private spaces, temporary content), create alternative paths. Add direct links from menus, footers, or indexed pillar pages to priority URLs. Avoid having strategic pages depend solely on noindex pages.
Also, think about checking your XML sitemaps: never list noindex pages in them. Google crawls them, sees the directive, and wastes time. A clean sitemap = optimized crawl budget = more resources for the important pages.
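The sitemap check can be sketched as a simple cross-reference between the URLs listed in the XML sitemap and the URLs you already know to be noindex (for example, from the audit above). The function name and example URLs are illustrative:

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace, required to match <loc> elements.
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_noindex_conflicts(sitemap_xml, noindex_urls):
    """Return sitemap URLs that are also marked noindex: remove them."""
    root = ET.fromstring(sitemap_xml)
    listed = {loc.text.strip() for loc in root.findall(".//sm:loc", NS)}
    return sorted(listed & noindex_urls)

sitemap = """<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/tags/red</loc></url>
</urlset>"""

noindex = {"https://example.com/tags/red"}
print(sitemap_noindex_conflicts(sitemap, noindex))
# ['https://example.com/tags/red']
```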
- Audit all noindex pages to identify those that carry strategic internal links.
- Remove noindex from intermediate pages if they are essential for internal linking.
- Create direct link paths from indexed pages to priority URLs.
- Exclude noindex pages from XML sitemaps to avoid wasting crawl budget.
- Avoid noindexing category, tag, or taxonomy pages that structure the site.
- Favor canonicals and unique content to manage duplication rather than blind noindexing.
❓ Frequently Asked Questions
Can a noindex page still be crawled by Googlebot?
What is the difference between noindex,follow and noindex,nofollow?
Can noindex be used to sculpt internal PageRank?
Is canonical a better alternative to noindex for handling duplication?
Should noindex pages be included in the XML sitemap?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 53 min · published on 21/09/2017
🎥 Watch the full video on YouTube →