
Official statement

Internal linking is one of the most important SEO actions you can take on a site to guide Google and visitors toward important pages. Structured data does not replace normal HTML links. URLs in hreflang annotations, breadcrumbs, or other structured data are not used as classic internal links. A well-thought-out internal linking strategy remains essential.
🎥 Source: extracted from a Google Search Central video (English), published 05/03/2022
TL;DR

Google confirms that URLs present in structured data (hreflang, breadcrumb, etc.) are NOT treated as classic internal links. HTML internal linking remains one of the most powerful SEO levers for guiding crawl and signaling your strategic pages. Structured data complements this approach, but never substitutes for it.

What you need to understand

Why does Google make this distinction between structured data and HTML links?

Structured data primarily serves to contextualize content for search engines — not to pass PageRank or organize site architecture. When you declare a URL in a Schema.org breadcrumb or an hreflang annotation, you're giving Google semantic information, not a navigation signal.

Crawling, on the other hand, is based on traditional HTML links (anchor tags with href attributes). It's through these links that Googlebot discovers pages, establishes their hierarchy, and evaluates their depth. If a page is reachable only through a URL mentioned in structured data, it risks being missed by the bot entirely.
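The distinction can be made concrete with a small sketch: the snippet below uses Python's standard `html.parser` to extract `<a href>` links from a page fragment, the way a crawler discovers URLs. The fragment and both URLs are invented for illustration; note that the URL declared only in the JSON-LD breadcrumb never shows up as a crawl path.

```python
from html.parser import HTMLParser

# A page fragment containing one real HTML link and one URL that appears
# only inside JSON-LD structured data (both URLs are made up).
PAGE = """
<nav><a href="/guides/internal-linking">Internal linking guide</a></nav>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "BreadcrumbList",
 "itemListElement": [{"@type": "ListItem", "position": 1,
   "name": "Orphan", "item": "https://example.com/orphan-page"}]}
</script>
"""

class LinkExtractor(HTMLParser):
    """Collect href attributes from <a> tags, mimicking link-based URL discovery."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links += [value for name, value in attrs if name == "href"]

extractor = LinkExtractor()
extractor.feed(PAGE)
print(extractor.links)  # ['/guides/internal-linking']: the JSON-LD URL is invisible
```

The JSON-LD block is script content, not markup, so no link parser treats it as a navigation edge: exactly the behavior the statement describes.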

What does "guiding Google and visitors" concretely mean?

A well-planned internal linking strategy pilots two simultaneous flows: Google's crawl budget and the user journey. Each internal link acts as a vote of confidence: the more internal links a page receives from strategic pages, the more Google understands its relative importance within your ecosystem.

Structured data, on the other hand, provides complementary context: it feeds rich snippets, clarifies a page's position in your information architecture (breadcrumbs), and signals language versions (hreflang). But it triggers no crawl on its own.

What risks if you neglect internal linking in favor of structured data?

A site that bets everything on structured annotations while neglecting HTML links shoots itself in the foot. Orphaned pages, reachable only through internal search or structured data, receive neither link juice nor regular crawling.

Result: unpredictable indexation, mediocre rankings, and a complete misreading of your editorial hierarchy by Google. Structured data won't fill this gap; it just dresses up a poorly built skeleton.

  • HTML links drive crawl, internal PageRank distribution, and page discovery
  • Structured data enriches semantic understanding and SERP display, but creates no crawl paths
  • A page without incoming HTML links risks remaining invisible or poorly indexed, even if referenced in a Schema.org breadcrumb
  • Internal linking remains one of the few directly controllable on-site levers with high SEO impact

SEO Expert opinion

Is this statement consistent with real-world observations?

Absolutely. Crawl tests have shown for years that Googlebot prioritizes anchor tags with href attributes. URLs mentioned in structured data (breadcrumb, hreflang, sameAs) don't trigger systematic crawling; they serve as metadata, not entry points.

Sites that rely solely on Schema.org breadcrumbs to "link" their pages often encounter indexation issues or pages that never rank in the SERPs. Conversely, dense and logical HTML linking quickly improves the visibility of strategic pages.

Should you neglect structured data then?

No, that's a false dilemma. Structured data and internal linking play two different but complementary roles. Schema.org breadcrumbs improve SERP display (clickable breadcrumb trails), hreflang annotations prevent cannibalization between language versions, and Article or Product markup boosts click-through rates.

The trap is believing that polishing your structured data compensates for sloppy internal linking. It doesn't work that way. Both are necessary, but HTML linking remains the load-bearing foundation. Structured data dresses up that foundation; it doesn't replace it.

In what cases could this rule pose problems?

Some sites — particularly large e-commerce with dynamic page generation — use JavaScript links or SPAs (Single Page Applications) where links aren't always classic anchor tags. In these cases, Google's JavaScript rendering can capture links, but with lower reliability and delay.

If your internal links depend on heavy JS frameworks, you're already in a gray area. Adding structured data won't change the fundamental problem: Googlebot still prefers static HTML links that are crawlable without JavaScript execution.

Warning: Client-side JavaScript sites (React, Vue, Angular without SSR) must thoroughly audit their internal linking on the rendered HTML side. Structured data will never compensate for linking that's invisible to initial crawl.
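One practical way to run that audit is to diff the links visible in the raw server HTML against those present after JavaScript execution (for example, obtained via a headless browser). The sketch below assumes you have already extracted both sets; all URLs are invented for illustration.

```python
# Hypothetical link inventories for the same site: one extracted from the raw
# server HTML, one from the DOM after JavaScript rendering.
raw_html_links = {"/", "/about", "/blog"}
rendered_links = {"/", "/about", "/blog", "/products", "/careers"}

# Links that exist only after rendering depend entirely on JS execution to be
# discovered, which is exactly the gray area the warning above describes.
js_only_links = sorted(rendered_links - raw_html_links)
print(js_only_links)  # ['/careers', '/products'] need an HTML fallback
```

Any URL in that difference is a candidate for a static `<a href>` fallback in the initial HTML.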

Practical impact and recommendations

What should you concretely do to optimize your internal linking?

First step: map your strategic pages. Identify high-value content (pillar pages, main categories, bestselling product sheets) and verify they receive links from your homepage, main menu, and other high PageRank pages.

Next, audit click depth: an important page should never be more than 3 clicks away from your homepage. Use Screaming Frog or Oncrawl to spot orphaned or too-deep pages, then create linking bridges from better-connected pages.
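The click-depth audit described above amounts to a breadth-first search over your internal link graph, which crawlers like Screaming Frog perform for you. As a minimal sketch, with an invented site structure:

```python
from collections import deque

# Hypothetical internal link graph: page -> pages it links to.
SITE = {
    "/": ["/category", "/blog"],
    "/category": ["/product-a", "/product-b"],
    "/blog": ["/blog/post-1"],
    "/blog/post-1": ["/archive"],
    "/archive": ["/very-old-post"],
    "/product-a": [], "/product-b": [], "/very-old-post": [],
}

def click_depth(graph, root="/"):
    """Breadth-first search from the homepage: minimum clicks to reach each page."""
    depth = {root: 0}
    queue = deque([root])
    while queue:
        page = queue.popleft()
        for target in graph.get(page, []):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

depths = click_depth(SITE)
too_deep = sorted(page for page, d in depths.items() if d > 3)
print(too_deep)  # ['/very-old-post'] sits 4 clicks from the homepage
```

A linking bridge (say, a contextual link from "/" or "/category" to "/very-old-post") would immediately bring that page back under the 3-click threshold.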

For structured data, implement Schema.org breadcrumbs to improve SERP display — but don't count on them for crawling. Ensure every page has at least one classic incoming HTML link, ideally several from relevant editorial contexts.
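For reference, a BreadcrumbList can be generated programmatically. The helper below builds the markup using the standard schema.org vocabulary (`BreadcrumbList`, `ListItem`, `position`, `item`); the page names and URLs are hypothetical.

```python
import json

def breadcrumb_jsonld(trail):
    """Build a Schema.org BreadcrumbList from (name, url) pairs, homepage first."""
    return {
        "@context": "https://schema.org",
        "@type": "BreadcrumbList",
        "itemListElement": [
            {"@type": "ListItem", "position": i, "name": name, "item": url}
            for i, (name, url) in enumerate(trail, start=1)
        ],
    }

markup = breadcrumb_jsonld([
    ("Home", "https://example.com/"),
    ("Guides", "https://example.com/guides/"),
    ("Internal linking", "https://example.com/guides/internal-linking/"),
])
# Embed the result in the page as: <script type="application/ld+json">…</script>
print(json.dumps(markup, indent=2))
```

Remember the statement's point: this markup improves SERP display, but the three URLs still need real `<a href>` links elsewhere on the site to be crawled.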

What mistakes to absolutely avoid?

Don't confuse semantic annotations and link architecture. A page mentioned in JSON-LD breadcrumbs but without any anchor tag link remains an orphaned page for Google. This is the most common mistake on sites migrating to JavaScript-dependent architectures.

Also avoid overloading your pages with unnecessary internal links. Good linking is about contextual quality, not blind quantity. One link from a relevant editorial paragraph beats 50 automated footer links to every site category.

  • Audit click depth for all strategic pages (goal: ≤3 clicks from homepage)
  • Spot and remove orphaned pages using Screaming Frog or Search Console crawl data
  • Create contextual HTML links from pillar pages to important subpages
  • Implement Schema.org breadcrumbs to enhance SERP display (a complement, not a replacement)
  • Verify that each page receives at least 2-3 internal links from well-crawled pages
  • Avoid links generated only in JavaScript without HTML fallback
  • Use descriptive and varied anchor text for internal links (avoid repetitive "click here")
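The orphan-page check from the list above reduces to a set difference: any known URL that no crawled page links to. The sketch below assumes you already have a URL inventory (e.g. from the XML sitemap) and the crawled link graph; all URLs are invented.

```python
# Hypothetical crawl data: every known URL and the internal links found in the HTML.
known_pages = {"/", "/category", "/product-a", "/product-b", "/old-landing"}
outgoing = {
    "/": {"/category"},
    "/category": {"/product-a", "/product-b"},
}

# A page is orphaned when no crawled page links to it. The homepage is
# excluded because it is the crawl entry point.
linked_to = set().union(*outgoing.values())
orphans = sorted(known_pages - linked_to - {"/"})
print(orphans)  # ['/old-landing'] is invisible to link-based crawling
```

In practice the same comparison is what Screaming Frog reports when you feed it a sitemap alongside a crawl.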

How do you verify your site follows these principles?

Run a full crawl with a tool like Screaming Frog in Spider mode. Analyze the link graph to identify isolated clusters and pages with low "InRank" (internal PageRank equivalent). Cross-reference with Search Console data to see whether poorly linked pages are also those struggling to be indexed or ranked.
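Metrics like Oncrawl's "InRank" are, in essence, PageRank computed over the internal link graph alone. A simplified sketch of that computation (a toy site structure, not any vendor's actual formula):

```python
def internal_pagerank(graph, damping=0.85, iterations=50):
    """Simplified PageRank over an internal link graph (page -> linked pages)."""
    pages = list(graph)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, targets in graph.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new[target] += share
            else:  # dangling page: redistribute its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
        rank = new
    return rank

SITE = {
    "/": ["/pillar", "/blog"],
    "/pillar": ["/sub-a", "/sub-b"],
    "/blog": ["/pillar"],  # contextual link back to the pillar page
    "/sub-a": [], "/sub-b": [],
}
ranks = internal_pagerank(SITE)
print(max(ranks, key=ranks.get))  # '/pillar': linked from both homepage and blog
```

This is the mechanism behind "votes of confidence": the pillar page ends up with the highest internal rank simply because more well-ranked pages link to it.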

Also verify that your structured data (breadcrumb, hreflang) is properly implemented and validates in Google's testing tools — but remember it comes after HTML linking in the SEO priority order.

Internal linking remains the most powerful and controllable on-site SEO lever. Structured data complements this strategy by enriching SERP display and clarifying semantic context, but it creates no crawl paths. Regularly audit your HTML link architecture, eliminate orphaned pages, and prioritize the contextual quality of internal links. If your site has a complex architecture or a large page volume, it may be wise to engage a specialized SEO agency to structure an optimal linking plan and avoid the technical pitfalls related to JavaScript or misconfigured structured data.

❓ Frequently Asked Questions

Do URLs in Schema.org breadcrumbs help with crawling?
No. Google uses structured breadcrumbs only to enrich SERP display (clickable breadcrumb trails). Crawling relies exclusively on classic HTML links.
Are hreflang annotations followed like internal links?
No. Hreflang declares alternate language versions but passes no PageRank and triggers no crawl. These are metadata, not navigation links.
What is the minimum number of internal links per page?
There is no magic threshold, but every strategic page should receive at least 2-3 contextual links from well-crawled pages. What matters is quality and relevance, not raw volume.
Are JavaScript links equivalent to HTML links for SEO?
No. Google can interpret them through JavaScript rendering, but with added delay and lower reliability. Static HTML links remain the recommended standard for optimal crawling.
Should you prioritize internal linking or structured data?
Internal linking, as the absolute priority: it is the foundation of crawling and PageRank distribution. Structured data comes second, as a complement for SERP display and semantic context.

