
Official statement

For Google to crawl the entire site, it requires links that allow for descending through the hierarchy (to subcategories), ascending back up, and navigating horizontally between items in the same category. Test with a third-party crawler to confirm that all pages are reachable from any starting point.
🎥 Source video · statement at 35:26, extracted from a Google Search Central video
⏱ 57:01 💬 EN 📅 13/05/2020 ✂ 22 statements
Watch on YouTube (35:26) →
Other statements from this video (21)
  1. 1:43 Does Google really rewrite your meta descriptions if they contain too many keywords?
  2. 4:20 Why does modifying the Analytics code block Search Console verification?
  3. 5:58 Why does your hreflang markup still not work despite your efforts?
  4. 5:58 Should you prefer language-only or language+country hreflang for your international versions?
  5. 9:09 Hreflang does not influence indexing: why does Google index a single version but display several URLs?
  6. 12:32 Why does your site disappear completely from Google's index, and how do you recover it?
  7. 15:51 Does the URL parameters tool really consolidate all signals, as Google claims?
  8. 19:03 Do core updates really never penalize technical errors?
  9. 23:00 Does the outdated content tool really remove indexing, or just the snippet?
  10. 23:56 Why is the site: command useless for diagnosing indexing?
  11. 23:56 Does the URL removal tool really deindex your pages?
  12. 26:59 The 50,000-URL sitemap limit: why does it not apply to what you think?
  13. 30:10 Does BERT really penalize sites that lose traffic after its rollout?
  14. 32:07 Does Google Images really pick the right image for your pages?
  15. 33:50 Should you really pad your anchor texts with prices, reviews, and ratings?
  16. 38:03 Why does Google refuse to index all your pages, and how can you fix it?
  17. 40:12 Is repetitive internal anchor text really a problem for Google?
  18. 42:48 Do UTM parameters really create duplicate content indexed by Google?
  19. 45:27 Does HTTPS/HTTP mixed content really affect Google rankings?
  20. 47:16 Does hreflang in HTML really bloat your pages, or is that a myth?
  21. 53:53 Why do old URLs remain in the index after a 301 redirect?
📅 Official statement from 13/05/2020 (5 years ago)
TL;DR

Google states that a complete crawl requires three types of links: descending (to subcategories), ascending (to parent), and horizontal (between items in the same category). Without this bidirectional navigation, some pages remain inaccessible to the bot. Recommendation: Test with a third-party crawler to ensure all URLs are reachable from any entry point of the site.

What you need to understand

What does Google mean by bidirectional navigation?

Mueller's statement focuses on three aspects of internal linking: vertical descending, vertical ascending, and horizontal. Vertical descending is the classic hierarchy — home to category, category to subcategory, and subcategory to product page.

Vertical ascending refers to breadcrumbs and upward links: from a product page, you should be able to return to the subcategory, then to the parent category. Horizontal, often overlooked, consists of links between pages at the same level — related products, related articles, and faceted navigation within a category.

Why is unidirectional linking problematic?

Imagine an e-commerce site with 10,000 products spread across 50 categories. If your links only descend from the homepage, the bot must follow the entire hierarchical chain to reach each product. No shortcuts. If an intermediate page fails (500 error, timeout, exhausted crawl budget), the entire sub-tree becomes invisible.

With bidirectional linking, Googlebot can take multiple paths to reach the same URL: from the parent category, from a similar product, or from a filtered results page. This redundancy dramatically increases the likelihood that a page will be discovered and crawled.

How can you tell if your site is compliant?

Mueller recommends using a third-party crawler — Screaming Frog, Oncrawl, Botify — to simulate Googlebot's behavior. The test: launch a crawl from any deep page on the site (not just the homepage). If certain URLs do not appear in the report, your linking has dead zones.

Then compare with a crawl from the homepage. If you discover 30% more pages starting from the root, your structure is not bidirectional — it relies on a single entry point. Such a single-entry-point structure may hold up on a 200-page site, but it crumbles at scale.
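
To make this test concrete, here is a minimal sketch of such a reachability check, assuming Python with the requests and beautifulsoup4 packages; the start URLs and page limit are hypothetical. It performs a breadth-first crawl restricted to one host and compares what is reachable from the homepage versus from a deep page:

```python
# Minimal sketch: breadth-first crawl of internal links from an arbitrary
# start URL, to list which pages are reachable from that entry point.
# Assumes the `requests` and `beautifulsoup4` packages; the start URLs
# and the page limit are hypothetical.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

def crawl_from(start_url: str, max_pages: int = 500) -> set[str]:
    host = urlparse(start_url).netloc
    seen, queue = {start_url}, deque([start_url])
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            continue  # unreachable pages simply stay out of the result set
        if resp.status_code != 200 or "text/html" not in resp.headers.get("Content-Type", ""):
            continue
        soup = BeautifulSoup(resp.text, "html.parser")
        for a in soup.find_all("a", href=True):
            link = urljoin(url, a["href"]).split("#")[0]
            if urlparse(link).netloc == host and link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

# Compare reachability from the homepage and from a deep product page.
from_root = crawl_from("https://www.example.com/")
from_deep = crawl_from("https://www.example.com/category/sub/product-123")
print(f"Only reachable from the root: {len(from_root - from_deep)} URLs")
```

If the two sets differ significantly, the structure relies too heavily on a single entry point.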

  • Descending Linking: links to lower levels (category → products)
  • Ascending Linking: breadcrumbs, links to parent
  • Horizontal Linking: links between pages at the same level (similar products, pagination, facets)
  • Crawler Test: launch from multiple entry points to detect inaccessible areas
  • Redundancy: multiple paths to each critical URL

SEO Expert opinion

Is this guideline really new?

Let’s be honest: bidirectional navigation has been a fundamental aspect of SEO for 15 years. What Mueller is emphasizing here is that Google isn't working magic. If your internal linking resembles a tree with dead branches, the bot won’t guess the missing URLs.

The real novelty is the emphasis on horizontal linking. Many sites still neglect links between products, within articles of the same category, or between filtered results pages. The consequence is that large parts of the catalog remain orphaned, accessible only through internal search or manually typed URLs.

What limitations does this rule have?

Mueller’s advice works well for tree-structured sites — e-commerce, media sites with sections, corporate websites. For a flat-architecture site (single-category blog, directory without hierarchy), however, horizontal linking becomes the only lever. No parent, no subcategory — just contextual links between pieces of content.

Another limitation is link over-optimization. Adding 50 horizontal links on each product page to ensure bidirectionality dilutes PageRank and overwhelms the user. The ideal balance is 3-8 relevant links, not a footer stuffed with 200 anchors. [To be verified]: Google has never set a numerical threshold on the optimal number of internal links per page.

When does this approach fail?

If your crawl issue stems from crawl budget (slow server, redirect chains, indexed junk pages), multiplying internal links won’t change anything. The bot will take longer to crawl but won’t go deeper if the server times out after 3 seconds.

The same applies to sites with dynamically generated content: if your product URLs change based on selected facets, bidirectional linking isn't sufficient — you also need a comprehensive XML sitemap and strict management of URL parameters. Mueller doesn’t mention this, but it’s a prerequisite for his advice to work.
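
As a minimal sketch of that prerequisite, the snippet below generates a basic XML sitemap from a list of canonical URLs, so faceted or parameterised variants are never listed; the URL list and output file name are hypothetical:

```python
# Minimal sketch: generate a basic XML sitemap from a list of canonical
# product URLs. The URL list and file name are hypothetical.
from xml.sax.saxutils import escape

canonical_urls = [
    "https://www.example.com/category/sub/product-123",
    "https://www.example.com/category/sub/product-456",
]

entries = "\n".join(
    f"  <url><loc>{escape(u)}</loc></url>" for u in canonical_urls
)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>\n"
)

with open("sitemap.xml", "w", encoding="utf-8") as f:
    f.write(sitemap)
```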

Warning: Poorly designed bidirectional linking can generate duplicate content (category pages accessible via multiple paths) or infinite loops (cross pagination). Always verify your robots.txt rules and rel="canonical" tags before deployment.
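
One way to run that check, sketched here under the assumption that the same category page is reachable via several hypothetical paths, is to verify that every path declares the same rel="canonical" target (requires the requests and beautifulsoup4 packages):

```python
# Minimal sketch: verify that the same category page, reached via several
# paths, always declares one and the same rel="canonical" target.
# Assumes `requests` and `beautifulsoup4`; the URLs are hypothetical.
import requests
from bs4 import BeautifulSoup

paths_to_same_page = [
    "https://www.example.com/shoes/running/",
    "https://www.example.com/running/shoes/",
    "https://www.example.com/shoes/?filter=running",
]

canonicals = set()
for url in paths_to_same_page:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    tag = soup.find("link", rel="canonical")
    canonicals.add(tag.get("href") if tag else None)

# A single distinct canonical target means the duplicate paths are consolidated.
print("Consistent canonical" if len(canonicals) == 1 else f"Conflicting canonicals: {canonicals}")
```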

Practical impact and recommendations

What should you prioritize in your audit?

First step: crawl your site from 5-10 URLs spread throughout the hierarchy — deep category page, isolated product page, old blog article. If Screaming Frog or Oncrawl doesn’t discover 100% of your strategic pages from each starting point, you have a linking issue.
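
A minimal sketch of that comparison, assuming you export each crawl as a plain list of discovered URLs (one per line) and keep a separate list of strategic URLs; all file names are hypothetical:

```python
# Minimal sketch: given one crawler export per entry point (one discovered
# URL per line) and a list of strategic URLs, report which strategic pages
# each crawl failed to discover. File names are hypothetical.
from pathlib import Path

def load_urls(path: str) -> set[str]:
    return {line.strip() for line in Path(path).read_text().splitlines() if line.strip()}

strategic = load_urls("strategic_urls.txt")
crawl_exports = {
    "from_homepage": "crawl_from_homepage.txt",
    "from_deep_category": "crawl_from_deep_category.txt",
    "from_old_blog_post": "crawl_from_old_blog_post.txt",
}

for entry_point, export_file in crawl_exports.items():
    missing = strategic - load_urls(export_file)
    print(f"{entry_point}: {len(missing)} strategic URLs not discovered")
    for url in sorted(missing):
        print(f"  - {url}")
```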

Second step: check the breadcrumb. It must be present on all non-home pages, clickable, and structured with Schema.org BreadcrumbList markup. This is your basic ascending link structure. If a page lacks a breadcrumb, it is potentially orphaned for the bot.
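
For illustration, here is a minimal sketch that builds the Schema.org BreadcrumbList JSON-LD for a product page; the names and URLs are hypothetical, and the output would be embedded in a script tag of type application/ld+json:

```python
# Minimal sketch: build the Schema.org BreadcrumbList JSON-LD for a product
# page. Names and URLs are hypothetical; the output goes into a
# <script type="application/ld+json"> tag in the page <head>.
import json

breadcrumb = {
    "@context": "https://schema.org",
    "@type": "BreadcrumbList",
    "itemListElement": [
        {"@type": "ListItem", "position": 1, "name": "Home",
         "item": "https://www.example.com/"},
        {"@type": "ListItem", "position": 2, "name": "Running shoes",
         "item": "https://www.example.com/shoes/running/"},
        {"@type": "ListItem", "position": 3, "name": "Product 123",
         "item": "https://www.example.com/shoes/running/product-123"},
    ],
}

print(json.dumps(breadcrumb, indent=2))
```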

How can you fix a failing link structure?

For horizontal linking, integrate related product blocks on each product page (3-6 products max, not 50). On a blog, add "related articles" at the end of the content, based on tags or category. On a media site, use pagination with rel="prev"/"next" and links to previous/next articles.

For ascending linking, don’t settle for breadcrumbs: add a text link to the parent category in the body of the page (e.g., "See all products in category X"). This reinforces the signal and provides an alternative if the breadcrumb depends on JavaScript that renders poorly.
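
As a purely illustrative sketch (the product data, URLs, and markup structure are hypothetical), this is one way to render both the horizontal block and the ascending text link as a single HTML fragment:

```python
# Minimal sketch: render the horizontal block (3-6 related products) and an
# ascending text link to the parent category as a plain HTML fragment.
# Product data and URLs are hypothetical.
from html import escape

def render_linking_block(parent_name, parent_url, related):
    items = "\n".join(
        f'    <li><a href="{escape(url)}">{escape(name)}</a></li>'
        for name, url in related[:6]  # cap the horizontal links: 3-6 is enough
    )
    return (
        '<section class="related-products">\n'
        "  <h2>Similar products</h2>\n"
        f"  <ul>\n{items}\n  </ul>\n"
        f'  <p><a href="{escape(parent_url)}">See all products in {escape(parent_name)}</a></p>\n'
        "</section>"
    )

print(render_linking_block(
    "Running shoes", "https://www.example.com/shoes/running/",
    [("Trail shoe X", "https://www.example.com/p/trail-x"),
     ("Road shoe Y", "https://www.example.com/p/road-y"),
     ("Racing flat Z", "https://www.example.com/p/flat-z")],
))
```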

How do you verify that corrections work?

Run a third-party crawl after making modifications. If the discovery rate increases from 75% to 98% from any entry point, you’re on the right track. Also compare with server logs: if Googlebot increases the crawl frequency on previously orphaned sections, it indicates that the linking is working.
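
A minimal sketch of that log comparison, assuming a combined-format access log and matching Googlebot by user agent only (which can be spoofed, so treat the counts as indicative); the log file names are hypothetical:

```python
# Minimal sketch: count Googlebot hits per top-level section in an access
# log (combined format assumed). Run it on logs from before and after the
# linking changes and compare the counts. Log file names are hypothetical.
import re
from collections import Counter

LOG_LINE = re.compile(r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[^"]*"')

def googlebot_hits_per_section(log_path: str) -> Counter:
    hits = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as f:
        for line in f:
            if "Googlebot" not in line:
                continue
            m = LOG_LINE.search(line)
            if m:
                section = "/" + m.group("path").lstrip("/").split("/")[0]
                hits[section] += 1
    return hits

print(googlebot_hits_per_section("access.log.before").most_common(10))
print(googlebot_hits_per_section("access.log.after").most_common(10))
```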

Monitor the Coverage section of Search Console: pages flagged as "Discovered, currently not indexed" should decrease once they become accessible through multiple paths. However, be cautious — accessibility does not mean automatic indexing. A page that is crawled but deemed low quality will remain excluded from the index.

  • Crawl the site from 5-10 URLs spread across the structure (not just the homepage)
  • Ensure all strategic pages are discovered from each entry point
  • Audit the breadcrumbs: presence, Schema.org markup, clickable links
  • Add 3-6 relevant horizontal links on each page (similar products, related articles)
  • Integrate text upward links to parent categories
  • Compare server logs before/after to measure the impact on crawl
A bidirectional internal linking structure is not just about adding links haphazardly. It’s about creating multiple logical paths to each strategic page, balancing vertical hierarchy and horizontal navigation. The goal: for Googlebot to reach any URL within 3-4 clicks from any entry point.

If your architecture exceeds 10,000 pages, or if you observe persistent orphan zones despite adjustments, this complexity often justifies the intervention of a specialized SEO agency to map the existing structure, identify technical blockages, and deploy a safe reworking of the links without regression risk.

❓ Frequently Asked Questions

Is an XML sitemap enough to compensate for weak internal linking?
No. The sitemap helps Google discover URLs, but it does not replace internal linking for regular crawling and PageRank distribution. A page accessible only via the sitemap is likely to be crawled less often.
How many internal links per page are recommended?
Google has never given an official figure. In practice, 3-8 relevant contextual links work better than 50 generic footer links. The goal is relevance, not quantity.
Does horizontal linking affect internal PageRank?
Yes. Horizontal links redistribute PageRank between same-level pages, which can strengthen entire sections if they are well interlinked. But be careful not to dilute it by linking everything to everything.
Which third-party crawler should you use to test bidirectional linking?
Screaming Frog (up to 500 URLs in the free version), Oncrawl, Botify, or Sitebulb. The key is being able to launch a crawl from a custom start URL, not just the root.
How do you handle bidirectional linking on a site with infinite pagination?
Use rel="prev"/"next" or classic numbered pagination in parallel. Infinite JavaScript pagination is a problem for crawling if it is not accompanied by distinct URLs accessible in HTML.
🏷 Related Topics
Domain Age & History · Crawl & Indexing · AI & SEO · Links & Backlinks · Pagination & Structure

