Official statement
Other statements from this video (13)
- 2:10 Could your localization pages be penalized as doorway pages?
- 5:30 Do Search Console HTTPS alerts really influence your Google ranking?
- 6:58 Why does Google add your brand name to page titles?
- 11:37 Why does Google deindex pages after an HTTPS migration?
- 13:45 Why does robots.txt also block noindex and canonical directives?
- 15:05 Should you really block faceted navigation in robots.txt?
- 16:57 Should you report competitor spam to Google to gain rankings?
- 25:19 Should you show anti-adblock banners to Googlebot?
- 28:26 Should you really optimize your sitemaps to influence Google's crawl?
- 30:01 Do long meta descriptions really generate more clicks?
- 36:49 Can you really turn an editorial site into a transactional site without an SEO penalty?
- 44:22 Should you really hide content from Googlebot to optimize the geolocated experience?
- 53:55 Does Googlebot really index all JavaScript content without user interaction?
Google confirms that a noindex page eventually disappears from the index AND the link graph. Essentially, the outbound links from that page no longer pass any PageRank, whether you use nofollow or follow. For SEOs who are heavily noindexing certain sections, this is a clear signal: you may be unknowingly fragmenting your internal linking structure.
What you need to understand
How does this statement change the game regarding noindex?
Most SEO practitioners view noindex as a simple indexing filter: the page remains crawlable, its links remain followed, and only its display in the SERPs is blocked. This is technically true at first. But Mueller highlights a rarely documented nuance here: over time, Google also removes that page's links from the graph.
The link graph is the internal mapping that Google uses to calculate authority, distribute PageRank, and understand the structure of the site. If your noindex pages disappear from this graph, their outbound links no longer count. It doesn't matter whether you set it to follow or nofollow: a page absent from the graph passes nothing.
What really happens between the moment you add noindex and the removal of links?
Google does not instantly remove a noindexed page. There is a latency period, often several weeks, during which the page is still crawled and the links are still taken into account. This time frame creates a dangerous gray area: you think you have noindexed properly, but the graph remains intact… temporarily.
Once Google considers the page definitively out of the index, it removes it from the graph. From that point on, all link signals vanish. If this page served as an internal hub, connecting several strategic sections, you have unknowingly cut a critical node. Crawlers visit less, PageRank stagnates, and some deep pages become functionally orphaned.
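The mechanics can be illustrated with a toy PageRank computation. This is a minimal, pure-Python sketch: the page names, the link graph, and the way link removal is modeled (the hub's outbound links simply stop counting) are all simplifying assumptions, not Google's actual algorithm.

```python
# Toy PageRank (power iteration) over a tiny internal-link graph.
# "/hub" stands in for a noindexed page that connects the homepage
# to two deep pages; all names and links are invented for illustration.

def pagerank(graph, damping=0.85, iters=100):
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for page, links in graph.items():
            if links:
                share = damping * rank[page] / len(links)
                for target in links:
                    new[target] += share
            else:
                # Dangling page: its weight is spread evenly.
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

graph = {
    "/home":   ["/hub"],
    "/hub":    ["/deep-a", "/deep-b"],
    "/deep-a": ["/home"],
    "/deep-b": ["/home"],
}
ranks_before = pagerank(graph)

# Simplified model of Google dropping the hub's links from the graph:
# its outbound links stop passing anything.
ranks_after = pagerank({**graph, "/hub": []})

print(ranks_before["/deep-a"], "->", ranks_after["/deep-a"])
```

Once the hub's outbound links stop counting, the deep pages fall back toward the (1 − damping)/N baseline: that drop is the fragmentation described above.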
Why does the nofollow attribute become unnecessary on a noindex page?
Many SEOs combine noindex and nofollow out of caution, thinking they double-lock the flow of link juice. Mueller rules that out: it is redundant. Once the page is removed from the graph, it can no longer pass PageRank, whether the links are follow or nofollow.
Nofollow remains relevant on indexed pages to sculpt the flow of PageRank or to avoid endorsing certain outbound links. But on a noindexed page that will eventually be removed from the graph, it's just unnecessary noise. You gain nothing by adding nofollow: Google is already ignoring the links anyway.
- Noindex removes the page from the index AND the link graph after a variable delay.
- Outbound links from a noindexed page no longer pass PageRank once the page is removed from the graph.
- The nofollow attribute has no additional impact on a page that is already noindexed.
- The removal is not instantaneous: there is a latency window, but it is unpredictable.
- Noindex pages can fragment internal linking if they served as connectors between sections.
SEO Expert opinion
Is this statement consistent with real-world observations?
Yes, and it is even confirmed by empirical tests conducted on high-volume sites. Massively noindexed pages (out-of-stock product pages, parametric filters) end up seeing their crawl frequency drop drastically. The sections linked only via these pages lose visibility, even if they remain technically accessible through other paths.
What is missing from Mueller's statement is the precise timeline. How long before Google removes links from the graph? A few days, weeks, several months? [To be verified]. Field reports vary widely: some sites see the effect in 2 weeks, others in 3 months. This likely depends on crawl budget, visit frequency, and domain authority.
What real risks do sites face when heavily noindexing?
The first risk is the silent fragmentation of internal linking. Imagine an e-commerce site that noindexes all its filter pages, thinking it is cleaning the index. If these pages served as bridges between categories, you’ve just severed internal highways. Deep categories become more difficult to crawl, and their PageRank stagnates.
The second risk: the lag effect. You noindex a section, everything seems stable for weeks, then suddenly, your strategic pages lose rankings. Why? Because Google has finally removed the links from the graph. You search for the cause for days, while the decision was made a month ago. Diagnosing this kind of issue is an analytical nightmare.
The third risk, often overlooked: poorly managed temporary redirects. Some sites noindex a page before redirecting it, thinking they are avoiding double indexing. If the redirect is delayed, the noindexed page loses its links from the graph, and when you finally redirect, the PageRank is gone. You are redirecting from empty space.
In which cases does this rule not apply?
If a page remains noindexed but crawlable for a very short time (a few days at most), Google probably doesn’t have time to remove it from the graph. This is typically the case for a page temporarily closed for maintenance and then quickly reopened. The noindex serves as a temporary filter, but the links remain active.
Another edge case: noindex + disallow pages. If you also block crawling in robots.txt, Google cannot even see the noindex. The page technically remains in the graph if external links point to it, but Google cannot crawl to check its status. This is a shaky configuration that creates inconsistencies. [To be verified] according to the latest algorithm iterations, but historically, this combination generates unpredictable behaviors.
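This blind spot can be reproduced with Python's standard `urllib.robotparser`. The robots.txt rules and URL below are invented for illustration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt that blocks the section you also noindexed.
rules = """\
User-agent: *
Disallow: /filters/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

url = "https://example.com/filters/color-red"
if not parser.can_fetch("Googlebot", url):
    # Googlebot never requests the page, so any
    # <meta name="robots" content="noindex"> in its HTML goes unseen:
    # the URL can still end up indexed from external links alone.
    print(url, "is blocked: its noindex directive cannot be seen")
```

If you want the noindex to be honored, the URL must stay crawlable; the Disallow rule defeats it.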
Practical impact and recommendations
What should you do if you are noindexing pages?
First action: audit existing noindexed pages and identify those that serve as connectors in your internal linking structure. Use Screaming Frog or Oncrawl to map the outbound links from these pages. If they point to strategic sections without another solid crawl path, you have a problem.
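As a complement to a crawler like Screaming Frog, the core of this audit can be sketched with the standard library alone. The sample HTML below is invented; in practice you would feed in pages fetched by your own crawler:

```python
from html.parser import HTMLParser

class NoindexLinkAuditor(HTMLParser):
    """Detects a robots noindex meta tag and collects outbound hrefs."""
    def __init__(self):
        super().__init__()
        self.noindex = False
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            if "noindex" in (attrs.get("content") or "").lower():
                self.noindex = True
        elif tag == "a" and attrs.get("href"):
            self.links.append(attrs["href"])

# Invented sample page standing in for HTML your crawler fetched.
html = """
<html><head><meta name="robots" content="noindex,follow"></head>
<body>
  <a href="/category/shoes">Shoes</a>
  <a href="/category/bags">Bags</a>
</body></html>
"""

auditor = NoindexLinkAuditor()
auditor.feed(html)
if auditor.noindex and auditor.links:
    # These links will eventually stop passing PageRank:
    # verify each target has another solid crawl path.
    print("noindex page links to:", auditor.links)
```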
Second action: rebuild the linking structure before noindexing. If a page needs to disappear from the index but served as an internal relay, first create alternative links from other indexed pages. Don’t let Google remove the links from the graph without preparing a safety net.
What mistakes should you absolutely avoid?
Classic error: noindexing category or tag pages to "clean" the index without measuring the impact on the link graph. These pages often connect dozens of articles. Removing them from the graph cuts off hundreds of internal links at once. Result: your articles lose juice, their crawl frequency decreases, and some disappear from the SERPs.
Another frequent error: combining noindex and nofollow out of reflex. As Mueller points out, this is unnecessary. Worse, it muddles the signals: if you end up removing the noindex later, you also need to think about removing the nofollow, otherwise you keep a stray directive. Simplify: noindex is sufficient.
Third error: not monitoring crawl after noindex. Set up alerts in Google Search Console to monitor drops in crawled pages. If you noindex 500 pages and overall crawl falls by 30% three weeks later, it is a clear signal that Google has removed links from the graph. At this point, you may have already lost ground.
How can you check that your site is compliant and optimized?
First check: server log analysis. Look to see if Googlebot continues to crawl noindexed pages at the same frequency. If the crawl collapses after a few weeks, it means the page is out of the graph. Compare with indexed control pages to measure the gap.
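A minimal version of this log check, assuming combined-format access logs (the sample lines below are invented), counts Googlebot hits per URL per month so you can compare periods before and after the noindex:

```python
import re
from collections import Counter

# Combined-log-format lines (invented samples standing in for real logs).
LOG_LINES = [
    '66.249.66.1 - - [01/Mar/2018:10:00:00 +0000] "GET /filters/red HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [08/Mar/2018:10:00:00 +0000] "GET /filters/red HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [15/Mar/2018:10:00:00 +0000] "GET /category/shoes HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '203.0.113.7 - - [15/Mar/2018:10:01:00 +0000] "GET /filters/red HTTP/1.1" 200 512 "-" "Mozilla/5.0"',
]

LINE_RE = re.compile(r'\[(\d{2})/(\w{3})/(\d{4}).*?\] "GET (\S+) HTTP')

def googlebot_hits_per_month(lines):
    """Count Googlebot requests per (month, url) to spot crawl drops."""
    hits = Counter()
    for line in lines:
        if "Googlebot" not in line:
            continue
        match = LINE_RE.search(line)
        if match:
            day, month, year, url = match.groups()
            hits[(f"{year}-{month}", url)] += 1
    return hits

print(googlebot_hits_per_month(LOG_LINES))
```

Run it over the weeks following the noindex: if hits on those URLs collapse while indexed control pages hold steady, the pages have likely left the graph.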
Second check: internal PageRank testing. Use tools such as Oncrawl or Botify to model the flow of internal PageRank. Run a before/after noindex simulation. If strategic pages significantly lose juice, it means your noindexed pages played a hub role. Adjust the linking structure before definitively validating the noindex.
- Audit current noindexed pages and map their outbound links
- Identify hub pages and rebuild the internal linking structure before noindexing
- Never combine noindex and nofollow without a precise strategic reason
- Monitor crawl through Search Console and server logs after each wave of noindex
- Simulate internal PageRank flow to anticipate juice losses
- Set up automatic alerts for drops in crawl frequency
❓ Frequently Asked Questions
How long does it take Google to remove a noindexed page from the link graph?
If I noindex a page, can I still use its internal links to distribute PageRank?
Does adding nofollow to a noindexed page provide any additional benefit?
What happens if I noindex category pages that serve as internal hubs?
How can I detect whether noindexed pages are impacting my crawl budget?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 57 min · published on 12/12/2017