Official statement
Google claims it can follow links created via JavaScript, but warns that JS should never contradict static HTML. For directives like rel='nofollow', the HTML version prevails. Specifically, a link marked nofollow in HTML but follow in JS remains nofollow in the eyes of Googlebot. This statement serves as a reminder that the two-phase indexing (HTML first, then JS) creates hierarchical priorities that must be respected.
What you need to understand
Why does Google emphasize the non-contradiction between HTML and JavaScript?
Google's crawl operates in two distinct phases. The first analyzes the raw HTML, and the second executes JavaScript to detect dynamic changes. This sequential process naturally creates a processing hierarchy.
When a link exists in HTML with a rel='nofollow' attribute, Googlebot records it immediately as such. If JavaScript later modifies or removes this attribute, Google does not reconsider the directive: the state captured in the initial HTML prevails.
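As a concrete illustration (hypothetical markup and script, not taken from the video), the sketch below ships a link as nofollow in the static HTML and then strips the attribute client-side. Per the statement above, the nofollow seen in the first phase is the one Googlebot keeps.

```typescript
// Static HTML served to Googlebot (phase 1):
//   <a id="partner-link" href="/partner" rel="nofollow">Partner offer</a>

// Hypothetical client-side script executed during rendering (phase 2):
// it removes the rel attribute, turning the link into a followed link
// for browsers only.
const link = document.getElementById('partner-link');

if (link instanceof HTMLAnchorElement) {
  link.removeAttribute('rel'); // Browsers now see a plain followed link...
}

// ...but according to Google's statement, the nofollow captured in the
// raw HTML is the state that counts: the JavaScript change is not honored.
```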
What happens when JavaScript adds links that are missing from static HTML?
Google can technically discover and follow these links, but subject to timing and crawl-budget constraints. JavaScript rendering consumes more resources than simple HTML parsing. On large sites, Googlebot may not render every page in JS.
Links added solely through JavaScript therefore enter the pipeline later: they may be discovered with a delay, or even ignored if the crawl budget is exhausted. This latency directly impacts how quickly the targeted content is discovered and indexed.
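A common anti-pattern matching this situation (sketched here with a hypothetical /api/categories endpoint and an empty container in the HTML): navigation links that only come into existence after a client-side fetch, so nothing in the raw HTML points to the target pages.

```typescript
// Anti-pattern: links that exist only after JavaScript runs.
// Raw HTML shipped to Googlebot:  <nav id="categories"></nav>
// Hypothetical API endpoint, used purely for illustration.
interface Category {
  name: string;
  url: string;
}

async function renderCategories(): Promise<void> {
  const res = await fetch('/api/categories');
  const categories: Category[] = await res.json();

  const nav = document.getElementById('categories');
  if (!nav) return;

  // These <a> elements are invisible to the HTML-only crawl phase;
  // Googlebot only sees them if and when it renders the page.
  nav.innerHTML = categories
    .map((c) => `<a href="${c.url}">${c.name}</a>`)
    .join('');
}

renderCategories();
```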
Does this recommendation extend to other robot directives as well?
Absolutely. The principle extends to all meta robots directives, canonical tags, and even hreflang tags. If your HTML states one thing and JavaScript contradicts it, Google consistently prioritizes the initial state captured in HTML.
This logic addresses an efficiency constraint: parsing HTML is far cheaper than running a full JavaScript engine. Google optimizes its resources by locking in critical directives as early as the first phase.
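The same hierarchy can be illustrated with a canonical tag (a hypothetical sketch, not from the video): the HTML declares one canonical URL and a script swaps it for another, exactly the kind of contradiction the statement says to avoid, since the HTML value is the one expected to prevail.

```typescript
// Raw HTML (phase 1):
//   <link rel="canonical" href="https://example.com/product-a">

// Hypothetical script that tries to point the canonical elsewhere (phase 2):
const canonical = document.querySelector<HTMLLinkElement>('link[rel="canonical"]');

if (canonical) {
  canonical.href = 'https://example.com/product-b';
}

// Per Google's guidance, JavaScript should not contradict the static HTML:
// the canonical captured during HTML parsing is the one expected to count.
```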
- Static HTML defines the initial state of links and directives; JavaScript cannot replace it but can only enhance it
- The two-phase crawl creates a processing hierarchy: HTML first, then JS, with priority given to the initial HTML state
- Links added solely in JS are discovered later and consume more crawl budget
- Directives nofollow, noindex, canonical stated in HTML prevail even if JavaScript attempts to modify them
- This architecture protects Google from dynamic manipulations intended to present different content to crawlers
SEO Expert opinion
Is this statement consistent with field observations?
Yes, and it is even one of the few statements from Google that is perfectly verifiable in production. Numerous tests show that changing a nofollow to a follow via JavaScript has no measurable effect on the PageRank transmitted. Google indeed freezes HTML directives.
However, Google's ability to follow links created solely in JS remains highly variable depending on context. On sites with a high crawl budget, it works well. On less prioritized sites, these links may remain undiscovered for weeks. [To be verified]: Google never specifies what crawl budget threshold guarantees systematic JS rendering.
What nuances should be added to this rule?
Google talks about a "recommendation", not a prohibition. Technically, nothing prevents you from using JavaScript to create all your navigation links. But this approach exposes you to the risk of delayed or even incomplete discovery.
The real question becomes: why accept this risk when the alternative—serving links in HTML—costs nothing? Modern frameworks (Next.js, Nuxt) allow for pre-rendering critical HTML while maintaining JavaScript interactivity. It's the best of both worlds.
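As a sketch of that approach (assuming Next.js with the pages router; routes and data are purely illustrative), the critical product links are generated at build time so they sit in the static HTML, while the page remains a normal interactive React component on the client.

```typescript
// pages/category.tsx — minimal sketch, assuming a Next.js pages-router project.
import type { GetStaticProps } from 'next';
import Link from 'next/link';

interface Product {
  slug: string;
  title: string;
}

interface Props {
  products: Product[];
}

// Runs at build time (SSG): the product links end up in the HTML payload,
// visible to the HTML-only crawl phase.
export const getStaticProps: GetStaticProps<Props> = async () => {
  // Hypothetical data source, hard-coded for illustration.
  const products: Product[] = [
    { slug: 'blue-shirt', title: 'Blue shirt' },
    { slug: 'red-shirt', title: 'Red shirt' },
  ];
  return { props: { products } };
};

export default function Category({ products }: Props) {
  return (
    <ul>
      {products.map((p) => (
        <li key={p.slug}>
          {/* Rendered server-side as a real <a href>, hydrated client-side */}
          <Link href={`/products/${p.slug}`}>{p.title}</Link>
        </li>
      ))}
    </ul>
  );
}
```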
In what scenarios does this rule cause practical issues?
E-commerce sites with dynamic filters are typically affected. When a user refines their search by price, size, or color, JavaScript often regenerates the list of products and their links. If these links do not exist in the initial HTML, Google may never discover certain product pages.
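One way to keep such filters crawlable (an illustrative pattern, not something stated in the video) is to ship each filter as a real <a href> in the HTML and let JavaScript intercept the click for the dynamic refresh, so the underlying URLs stay visible to the first crawl phase. The JSON response and rendering helper below are hypothetical.

```typescript
// Filters shipped as real links in the HTML, e.g.:
//   <a class="filter" href="/shirts?color=blue">Blue</a>
// JavaScript enhances them without removing the crawlable href.
document.querySelectorAll<HTMLAnchorElement>('a.filter').forEach((filter) => {
  filter.addEventListener('click', async (event) => {
    event.preventDefault(); // No full reload for users...

    // ...while the href stays in the raw HTML for Googlebot to follow.
    // Hypothetical: the same URL returns JSON when asked for it.
    const res = await fetch(filter.href, { headers: { Accept: 'application/json' } });
    const products: Array<{ url: string; title: string }> = await res.json();

    renderProductList(products);
    history.pushState({}, '', filter.href);
  });
});

// Hypothetical helper, included only to keep the sketch self-contained.
function renderProductList(products: Array<{ url: string; title: string }>): void {
  const list = document.getElementById('product-list');
  if (!list) return;
  list.innerHTML = products
    .map((p) => `<li><a href="${p.url}">${p.title}</a></li>`)
    .join('');
}
```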
Another tricky case involves A/B testing systems that change links in JavaScript to segment traffic. If the HTML version contains a nofollow and JavaScript removes it for a segment of users, Google will still see nofollow. The test then becomes biased since the SEO performance remains fixed on the HTML version.
Practical impact and recommendations
What practical steps should be taken to stay compliant?
First, serve a complete static HTML that contains all critical links for navigation and internal linking. JavaScript can then enhance user experience (lazy loading, interactions), but the basic link structure must exist from the start in the HTML.
Next, align the robot directives between HTML and JS. If a link should be nofollow, place the attribute directly in the HTML. Never rely on JavaScript to add or remove these directives; Google cannot be counted on to honor the change.
How can I check that my site adheres to this recommendation?
Use the URL Inspection tool in Search Console and compare the raw HTML with the rendered HTML. Critical links must appear in both. If certain links only appear after rendering, they are likely to be crawled with a delay.
Crawl your site with a tool like Screaming Frog while disabling JavaScript rendering. All strategic links should be discovered. If orphan pages only appear after enabling JS, that is a warning signal: Google may miss them.
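For a rough automated version of the same check (a sketch assuming Node 18+ and the puppeteer package; the URL is a placeholder), you can diff the links found in the raw HTML against those present after rendering:

```typescript
// check-links.ts — minimal sketch, assuming Node 18+ and `npm install puppeteer`.
import puppeteer from 'puppeteer';

const url = 'https://example.com/'; // Placeholder URL.

async function main(): Promise<void> {
  // 1. Links visible in the raw HTML (what the HTML-only crawl phase sees).
  const rawHtml = await (await fetch(url)).text();
  const rawLinks = new Set(
    [...rawHtml.matchAll(/<a\s[^>]*href="([^"]+)"/gi)].map((m) => m[1]),
  );

  // 2. Links present after JavaScript rendering (headless browser).
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url, { waitUntil: 'networkidle0' });
  const renderedLinks = await page.$$eval('a[href]', (anchors) =>
    anchors.map((a) => a.getAttribute('href') ?? ''),
  );
  await browser.close();

  // 3. Links that exist only after rendering risk delayed (or no) discovery.
  const jsOnly = renderedLinks.filter((href) => href && !rawLinks.has(href));
  console.log('Links missing from the raw HTML:', jsOnly);
}

main().catch(console.error);
```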
What mistakes should be avoided at all costs?
Never declare a directive in HTML and then contradict it in JavaScript, hoping that Google will consider the latter. It does not work. The HTML prevails, end of story.
Avoid serving an empty (or nearly empty) HTML while relying on JS rendering for everything. This approach may work on sites with very high crawl budgets, but it's a risky bet for most sites. Why take this risk when SSR solves the problem?
- Ensure that all navigation and internal linking links exist in the source HTML (view-source:)
- Place the nofollow, noindex, canonical directives directly in HTML, never added solely through JS
- Enable server-side rendering (SSR) or static site generation (SSG) for modern JS frameworks
- Regularly test with the URL Inspection tool in Search Console and compare raw vs rendered HTML
- Crawl the site with JS disabled to identify links missing from the static HTML
- Document any exceptions where JavaScript modifies links, and measure the crawl impact
❓ Frequently Asked Questions
Does Google index links created only in JavaScript?
If I change a nofollow link to follow via JavaScript, does Google pass PageRank?
Can I use an SPA (Single Page Application) without SEO risk?
How can I tell whether Google has rendered my page's JavaScript?
Do modern frameworks like Next.js or Nuxt solve this problem?
🎥 From the same video: other SEO insights from this Google Search Central video · duration 55 min · published on 29/06/2018
🎥 Watch the full video on YouTube →