
Official statement

Allow Googlebot to crawl your pages using properly anchored internal links with the HTML <a> tag, and ensure that the descriptive text of the links is informative to facilitate user navigation and crawling by search engines.
🎥 Source: Google Search Central video (EN, 5:12, published 13/03/2019) — statement at 2:37, one of 3 statements extracted. Watch on YouTube (2:37) →
Other statements from this video (2)
  1. Do title tags and meta descriptions really influence your Google ranking?
  2. 2:07 Why does Google index JavaScript in two waves, and how does it impact your SEO?
Official statement from March 2019 (7 years ago)
TL;DR

Martin Splitt emphasizes that Googlebot only crawls standard HTML links effectively: an <a> tag with an href attribute. JavaScript links, buttons without an HTML anchor, or custom navigation systems can block crawling. For SEO, this means systematically auditing the technical architecture of internal links and favoring native HTML over JavaScript solutions, even when the latter appear to work visually.

What you need to understand

What does Splitt really mean by "properly anchored internal links"?

This phrasing may seem basic, but it conceals a precise technical reality. A "properly anchored" link, for Google, is an <a> tag with an href attribute pointing to a valid URL. Not a <span> with an onclick handler, not a <button> carrying the target in a data attribute, not a <div> styled to look like a link: none of those are followed as links on the first, HTML-only crawl pass.
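As a minimal sketch of this distinction, the snippet below uses Python's stdlib html.parser to show which link patterns an HTML-only pass can discover. It is an illustration, not Googlebot's actual parser; the URLs and markup are invented for the example.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values from <a> tags only — JS-driven navigation is invisible."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

snippet = """
<a href="/guide/internal-linking">Our guide on internal linking</a>
<span onclick="navigate('/hidden-page')">Hidden page</span>
<button data-url="/other-page">Other page</button>
<a>Anchor without href — not crawlable either</a>
"""

parser = LinkExtractor()
parser.feed(snippet)
print(parser.links)  # only the properly anchored link is found
```

Of the four navigation elements, only the first survives an HTML-only parse — exactly the behavior Splitt describes.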

SEO Expert opinion

Is this statement consistent with field observations?

Yes, but with a significant nuance: Google is improving its crawling of JavaScript, and many poorly configured SPA sites still end up indexed. Splitt's message remains valid in theory, but practice shows that Google often compensates for developers' mistakes. The real issue isn’t so much "crawled or not crawled", it’s the delay and cost in crawl budget.

A site that forces Google to execute JS to discover its internal links consumes more resources. The result: fewer pages crawled per session, slower indexing of updates, reduced priority on deep pages. For a small blog, it’s manageable. For e-commerce with 50,000 products, it’s a structural bottleneck.

What nuances should be added regarding descriptive anchors?

A "descriptive" anchor doesn’t mean over-optimized. Stuffing an internal link with "cheap men's running shoes free shipping" is counterproductive — it reeks of manipulation and degrades user experience. A natural and contextual anchor is more than sufficient.

A second point: the anchor alone doesn’t do everything. If 90% of your internal links to a page use the same exact anchor, you’re creating an artificial pattern. Vary your formulations, alternating exact anchors with semantically close anchors. Google has the means to understand that "running sneakers" and "running shoes" point to the same thematic universe.

In which cases does this rule not strictly apply?

For certain interface elements — filters, sorting, pagination — JavaScript links with client-side state management remain relevant, provided there’s a crawlable HTML fallback. A pure JS pagination link with no HTML alternative is a mistake. A hybrid system with crawlable HTML links by default and progressive enhancement in JS is the right approach.
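A minimal sketch of that hybrid approach, server-side: every pagination control is rendered as a real <a href> that works without JavaScript, and a client-side script can intercept clicks later. Function name, path, and query parameter are hypothetical.

```python
def render_pagination(current: int, total_pages: int, base_path: str = "/products") -> str:
    """Render pagination as plain <a href> links; JS may enhance them client-side."""
    parts = []
    for page in range(1, total_pages + 1):
        if page == current:
            # Current page is not a link — avoids a self-referencing anchor.
            parts.append(f'<span aria-current="page">{page}</span>')
        else:
            parts.append(f'<a href="{base_path}?page={page}">{page}</a>')
    return '<nav class="pagination">' + " ".join(parts) + "</nav>"

html = render_pagination(2, 3)
print(html)
```

The output is crawlable as-is; a progressive-enhancement script can add client-side state management on top without removing the href fallback.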

[To be verified]: Google claims to crawl JS "like a modern browser", but indexing delays remain opaque. No public metrics allow precise quantification of the crawl budget penalty related to JS-only links. Field tests show enormous variations based on sites, their authority, and update frequency.

Practical impact and recommendations

What should be prioritized in auditing your internal linking?

First step: crawl your site with JavaScript disabled. Use Screaming Frog in classic mode, or check the "text only" view in your browser. All pages you can’t see in this configuration are potentially invisible to Googlebot on the first pass — or crawled with a delay.

Next, inspect the raw source code (View Source, not Inspect) of your key pages. Internal links should appear directly in the initial HTML, not only after JavaScript execution. If you have to open DevTools to see your links, that’s a bad sign. Google Search Console can also reveal pages discovered but not crawled, an indicator of malformed or inaccessible links.

How can you fix a non-crawlable link system without a complete overhaul?

If you’re using a modern JS framework, switch to Server-Side Rendering (SSR) or Static Site Generation (SSG). Next.js for React, Nuxt for Vue, Angular Universal — all offer turnkey solutions to generate pre-rendered HTML with native links. The implementation cost is real, but the impact on crawling is immediate.

Intermediate solution: implement a dynamic rendering system (Rendertron, Prerender.io) that serves pre-rendered static HTML to bots and the JS app to users. It’s less clean conceptually, but it works if your architecture doesn’t allow for a quick overhaul. Ensure that the content served to bots and users remains identical to avoid any suspicion of cloaking.
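The routing decision at the heart of dynamic rendering can be sketched as a User-Agent check. The bot list, function names, and backend labels below are hypothetical; production setups should also verify bot identity (e.g. via reverse DNS), since User-Agent strings are trivially spoofed.

```python
BOT_TOKENS = ("googlebot", "bingbot", "duckduckbot", "yandex")  # illustrative list

def is_known_bot(user_agent: str) -> bool:
    """Crude User-Agent sniffing — sufficient only as a first filter."""
    ua = user_agent.lower()
    return any(token in ua for token in BOT_TOKENS)

def pick_backend(user_agent: str) -> str:
    """Route crawlers to the pre-rendered snapshot, humans to the JS app."""
    return "prerender-cache" if is_known_bot(user_agent) else "spa-origin"

print(pick_backend("Mozilla/5.0 (compatible; Googlebot/2.1)"))  # bot → snapshot
print(pick_backend("Mozilla/5.0 (Windows NT 10.0) Chrome/120.0"))  # human → JS app
```

Both backends must serve the same content, per the cloaking caveat above.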

What best practices should be adopted for internal link anchors?

Favor contextual and natural anchors, integrated into a complete sentence rather than isolated. "Check out our complete guide on internal linking" is better than "click here" or a keyword-stuffed anchor. The context around the anchor also matters — Google analyzes adjacent words to refine thematic understanding.

Vary your formulations: if you are linking 10 times to your page /seo-technique/, alternate "technical SEO optimization", "on-site optimization techniques", "advanced technical SEO", etc. Avoid generic anchors ("learn more", "read more") unless the surrounding context compensates. And most importantly, ensure that the anchor truly reflects the content of the target page — no surprises for the user who clicks.
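The "artificial pattern" check described above is easy to automate. A minimal sketch, assuming you have exported (anchor text, target URL) pairs from a crawl (the data, function name, and 90% threshold are illustrative):

```python
from collections import Counter

# Hypothetical export: (anchor_text, target_url) pairs from a site crawl.
internal_links = [
    ("technical SEO optimization", "/seo-technique/"),
    ("technical SEO optimization", "/seo-technique/"),
    ("on-site optimization techniques", "/seo-technique/"),
    ("advanced technical SEO", "/seo-technique/"),
]

def anchor_report(links, threshold=0.9):
    """Flag target URLs where a single anchor exceeds `threshold` of all links."""
    by_target = {}
    for anchor, target in links:
        by_target.setdefault(target, Counter())[anchor] += 1
    flagged = {}
    for target, counts in by_target.items():
        top_anchor, n = counts.most_common(1)[0]
        share = n / sum(counts.values())
        if share > threshold:
            flagged[target] = (top_anchor, share)
    return flagged

print(anchor_report(internal_links))  # empty dict: anchors are varied enough
```

A page whose links all carry one exact anchor would show up in the report; the sample data passes because its formulations vary.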

❓ Frequently Asked Questions

Are JavaScript links completely ignored by Google?
No. Google crawls and indexes JavaScript, but with a delay and a higher resource cost. JS-only links are discovered more slowly and waste crawl budget unnecessarily.
Can you use <button> elements for internal linking if you add an href attribute?
No. A <button> does not accept a valid href attribute in standard HTML. Only the <a> tag is recognized as a crawlable link by search engines. Use an <a> styled with CSS if you want the appearance of a button.
Do internal link anchors still influence the ranking of target pages?
Yes, but with a markedly reduced weight since Penguin. The anchor helps Google understand the topic of the target page and reinforces thematic relevance within the internal linking structure, without being an effective manipulation lever.
How can I check whether my links are properly crawled by Googlebot?
Use the URL Inspection tool in Google Search Console and review the "crawled" version of the page. Compare it with a Screaming Frog crawl run with JS disabled. The discrepancies reveal links invisible to the initial crawl.
Should nofollow links be avoided in internal linking?
Yes, except in specific cases (login pages, cart, filters). Internal nofollow blocks the flow of PageRank and prevents Google from discovering or prioritizing target pages. Default to dofollow for editorial links.
