Official statement
Other statements from this video (21)
- 1:43 Does Google really rewrite your meta descriptions if they contain too many keywords?
- 4:20 Why does modifying the Analytics code block Search Console verification?
- 5:58 Why does your hreflang markup still not work despite your efforts?
- 5:58 Should you prefer language-only or language+country hreflang for your international versions?
- 9:09 Hreflang doesn't influence indexing: why does Google index a single version but display several URLs?
- 12:32 Why does your site disappear completely from Google's index, and how do you recover it?
- 15:51 Does the URL parameters tool really consolidate all signals as Google claims?
- 19:03 Do core updates really not penalize any technical errors?
- 23:00 Does the outdated content tool really remove a page from the index, or just its snippet?
- 23:56 Why is the site: command useless for diagnosing indexing?
- 23:56 Does the URL removal tool really deindex your pages?
- 26:59 The 50,000-URL sitemap limit: why doesn't it apply to what you think it does?
- 30:10 Does BERT really penalize sites that lose traffic after its rollout?
- 32:07 Does Google Images really choose the right image for your pages?
- 33:50 Should you really pack your anchor texts with prices, reviews, and ratings?
- 38:03 Why does Google refuse to index all your pages, and how do you fix it?
- 40:12 Is repetitive internal anchor text really a problem for Google?
- 42:48 Do UTM parameters really create duplicate content indexed by Google?
- 45:27 Does mixed HTTPS/HTTP content really impact Google rankings?
- 47:16 Does hreflang in HTML really bloat your pages, or is that a myth?
- 53:53 Why do old URLs stay in the index after a 301 redirect?
Google states that a complete crawl requires three types of links: descending (to subcategories), ascending (to parent), and horizontal (between items in the same category). Without this bidirectional navigation, some pages remain inaccessible to the bot. Recommendation: Test with a third-party crawler to ensure all URLs are reachable from any entry point of the site.
What you need to understand
What does Google mean by bidirectional navigation?
Mueller's statement focuses on three aspects of internal linking: vertical descending, vertical ascending, and horizontal. Vertical descending is the classic hierarchy — home to category, category to subcategory, and subcategory to product page.
Vertical ascending refers to breadcrumbs and upward links: from a product page, you should be able to return to the subcategory, then to the parent category. Horizontal, often overlooked, consists of links between pages at the same level — related products, related articles, and faceted navigation within a category.
Why is unidirectional linking problematic?
Imagine an e-commerce site with 10,000 products spread across 50 categories. If your links only descend from the homepage, the bot must follow the entire hierarchical chain to reach each product. No shortcuts. If an intermediate page fails (500 error, timeout, exhausted crawl budget), the entire sub-tree becomes invisible.
With bidirectional linking, Googlebot can take multiple paths to reach the same URL: from the parent category, from a similar product, or from a filtered results page. This redundancy dramatically increases the likelihood that a page will be discovered and crawled.
How can you tell if your site is compliant?
Mueller recommends using a third-party crawler — Screaming Frog, Oncrawl, Botify — to simulate Googlebot's behavior. The test: launch a crawl from any deep page on the site (not just the homepage). If certain URLs do not appear in the report, your linking has dead zones.
Then compare with a crawl launched from the homepage. If you discover 30% more pages starting from the root, your structure is not truly bidirectional: it relies on a single entry point. That may hold up on a 200-page site, but it crumbles at scale.
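The entry-point comparison can be sketched offline on a toy in-memory link graph, without a full crawler; all URLs below are hypothetical, and a real audit would use the tools named above:

```python
from collections import deque

def reachable(link_graph, start):
    """Breadth-first traversal of an internal-link graph.

    link_graph maps each URL to the list of URLs it links to.
    Returns the set of pages discoverable from `start`.
    """
    seen = {start}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        for target in link_graph.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

# Toy site: links only descend the hierarchy; the product page
# has no ascending or horizontal links back.
site = {
    "/": ["/cat-a"],
    "/cat-a": ["/cat-a/product-1"],
    "/cat-a/product-1": [],  # dead end: unidirectional linking
}

from_home = reachable(site, "/")
from_product = reachable(site, "/cat-a/product-1")
print(sorted(from_home - from_product))  # → ['/', '/cat-a'] are invisible from the deep entry
```

The set difference between the two crawls is exactly the "dead zone" the test is meant to expose.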
- Descending Linking: links to lower levels (category → products)
- Ascending Linking: breadcrumbs, links to parent
- Horizontal Linking: links between pages at the same level (similar products, pagination, facets)
- Crawler Test: launch from multiple entry points to detect inaccessible areas
- Redundancy: multiple paths to each critical URL
SEO Expert opinion
Is this guideline really new?
Let’s be honest: bidirectional navigation has been a fundamental aspect of SEO for 15 years. What Mueller is emphasizing here is that Google isn't working magic. If your internal linking resembles a tree with dead branches, the bot won’t guess the missing URLs.
The real novelty is the emphasis on horizontal linking. Many sites still neglect links between products, between articles in the same category, or between filtered results pages. The consequence is that large parts of the catalog remain orphaned, accessible only through internal search or manually typed URLs.
What limitations does this rule have?
Mueller’s advice works well for tree-structured sites: e-commerce, media sites with sections, corporate websites. For a flat-architecture site (a single-category blog, a directory without hierarchy), however, horizontal linking becomes the only lever available. No parent, no subcategory, just contextual links between pieces of content.
Another limitation is link over-optimization. Adding 50 horizontal links on each product page to ensure bidirectionality dilutes PageRank and overwhelms the user. The ideal balance is 3-8 relevant links, not a footer stuffed with 200 anchors. [To be verified]: Google has never set a numerical threshold on the optimal number of internal links per page.
When does this approach fail?
If your crawl issue stems from crawl budget (slow server, chain redirects, indexed junk pages), multiplying internal links won’t change anything. The bot will take longer to crawl but won’t go deeper if the server times out after 3 seconds.
The same applies to sites with dynamically generated content: if your product URLs change based on selected facets, bidirectional linking isn't sufficient — you also need a comprehensive XML sitemap and strict management of URL parameters. Mueller doesn’t mention this, but it’s a prerequisite for his advice to work.
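As a sketch of that prerequisite, faceted and tracking parameters can be collapsed to one canonical URL before links and sitemaps are generated. The parameter allow-list below is a hypothetical example, not Google guidance; each site needs its own:

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

# Hypothetical list of parameters that only track or filter and
# should never spawn new crawlable URLs; tune it per site.
STRIP_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "color", "size", "sort"}

def canonical_url(url):
    """Drop facet/tracking parameters so every variant maps to one canonical URL."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in STRIP_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(canonical_url("https://example.com/p/shoe?color=red&utm_source=news&id=42"))
# → https://example.com/p/shoe?id=42
```

The canonical form is what belongs in internal links, rel="canonical" tags, and the XML sitemap.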
Practical impact and recommendations
What should you prioritize in your audit?
First step: crawl your site from 5-10 URLs spread throughout the hierarchy — deep category page, isolated product page, old blog article. If Screaming Frog or Oncrawl doesn’t discover 100% of your strategic pages from each starting point, you have a linking issue.
Second step: check the breadcrumb trail. It must be present on every non-home page, clickable, and marked up with Schema.org BreadcrumbList. This is your baseline ascending link structure. A page without a breadcrumb is potentially orphaned for the bot.
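A minimal sketch of that BreadcrumbList markup, generated here in Python with placeholder names and URLs:

```python
import json

def breadcrumb_jsonld(trail):
    """Build Schema.org BreadcrumbList markup from ordered (name, url) pairs."""
    return json.dumps({
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            {"@type": "ListItem", "position": i, "name": name, "item": url}
            for i, (name, url) in enumerate(trail, start=1)
        ],
    }, indent=2)

# Placeholder trail for a product page three levels deep.
print(breadcrumb_jsonld([
    ("Home", "https://example.com/"),
    ("Category X", "https://example.com/category-x/"),
    ("Product Y", "https://example.com/category-x/product-y/"),
]))
```

The resulting JSON goes in a `<script type="application/ld+json">` tag alongside the visible, clickable breadcrumb links.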
How can you fix a failing link structure?
For horizontal linking, integrate related product blocks on each product page (3-6 products max, not 50). On a blog, add "related articles" at the end of the content, based on tags or category. On a media site, use pagination with rel="prev"/"next" and links to previous/next articles.
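One common way to pick those related items, sketched here with invented slugs and tags, is to rank the catalog by shared tags and keep the top handful:

```python
def related_products(product, catalog, k=4):
    """Rank other products by number of shared tags; return the top-k slugs."""
    scored = [
        (len(product["tags"] & other["tags"]), other["slug"])
        for other in catalog
        if other["slug"] != product["slug"]
    ]
    scored.sort(key=lambda t: (-t[0], t[1]))  # most shared tags first, then alphabetical
    return [slug for score, slug in scored if score > 0][:k]

catalog = [
    {"slug": "trail-shoe", "tags": {"shoes", "trail", "running"}},
    {"slug": "road-shoe", "tags": {"shoes", "road", "running"}},
    {"slug": "rain-jacket", "tags": {"jackets", "trail"}},
    {"slug": "gift-card", "tags": {"misc"}},
]
print(related_products(catalog[0], catalog, k=3))  # → ['road-shoe', 'rain-jacket']
```

Capping `k` at 3-6 keeps the block useful to users instead of diluting it into a link farm.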
For ascending linking, don’t settle for breadcrumbs: add a text link to the parent category in the body of the page (e.g., "See all products in category X"). This reinforces the signal and provides a fallback if the breadcrumb is JavaScript-rendered and fails to render.
How do you verify that corrections work?
Run a third-party crawl after making modifications. If the discovery rate increases from 75% to 98% from any entry point, you’re on the right track. Also compare with server logs: if Googlebot increases the crawl frequency on previously orphaned sections, it indicates that the linking is working.
Monitor Search Console’s Coverage report: pages flagged "Discovered - currently not indexed" should decrease once they become accessible through multiple paths. Be cautious, however: accessibility does not mean automatic indexing. A page that is crawled but deemed low quality will remain excluded from the index.
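The before/after log comparison can be sketched like this; the parser assumes a combined-format access log with the user agent in the last quoted field, and both the pattern and the sample lines are fabricated for illustration:

```python
import re
from collections import Counter

# Matches the request and status fields of a combined-format log line,
# keeping only lines whose user agent mentions Googlebot.
GOOGLEBOT_LINE = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+" \d{3} .*Googlebot')

def googlebot_hits_by_section(log_lines):
    """Count Googlebot requests per top-level section of the site."""
    counts = Counter()
    for line in log_lines:
        m = GOOGLEBOT_LINE.search(line)
        if m:
            section = "/" + m.group("path").lstrip("/").split("/")[0]
            counts[section] += 1
    return counts

sample = [
    '1.2.3.4 - - [01/Jun/2020] "GET /cat-a/product-1 HTTP/1.1" 200 512 "-" "Googlebot/2.1"',
    '1.2.3.4 - - [01/Jun/2020] "GET /cat-b/ HTTP/1.1" 200 731 "-" "Googlebot/2.1"',
    '5.6.7.8 - - [01/Jun/2020] "GET /cat-a/ HTTP/1.1" 200 623 "-" "Mozilla/5.0"',
]
print(googlebot_hits_by_section(sample))  # only the two Googlebot hits are counted
```

Run it on logs from before and after the linking fix: a previously orphaned section whose counter rises from near zero is direct evidence the new paths are being crawled.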
- Crawl the site from 5-10 spread URLs (not just the homepage)
- Ensure all strategic pages are discovered from each entry point
- Audit the breadcrumbs: presence, Schema.org markup, clickable links
- Add 3-6 relevant horizontal links on each page (similar products, related articles)
- Integrate text upward links to parent categories
- Compare server logs before/after to measure the impact on crawl
❓ Frequently Asked Questions
Is an XML sitemap enough to compensate for weak internal linking?
How many internal links per page are recommended?
Does horizontal linking impact internal PageRank?
Which third-party crawler should you use to test bidirectional linking?
How do you handle bidirectional linking on a site with infinite pagination?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 57 min · published on 13/05/2020