
Official statement

Google can follow links generated via JavaScript, but it is recommended that JavaScript does not contradict static HTML content, especially for directives like rel='nofollow'.
🎥 Source video

Extracted from a Google Search Central video

⏱ 55:13 💬 EN 📅 29/06/2018 ✂ 10 statements
Watch on YouTube (37:49) →
Other statements from this video (9)
  1. 5:21 Should you really block indexing of your site's automatic translations?
  2. 9:59 Does Google really follow your canonical tags, or does it decide on its own?
  3. 10:31 Why does Google index the wrong version of your URLs?
  4. 13:12 Should the internal search pages of an e-commerce site be indexed?
  5. 18:50 Does CSS display:none really penalize your SEO?
  6. 20:21 Do you really need to separate multilingual content page by page to rank?
  7. 42:04 How can a new e-commerce site differentiate itself to get indexed and ranked by Google?
  8. 52:00 Do responsive images really improve your SEO?
  9. 54:09 Does HTTPS really boost ranking in Google?
Official statement from 29/06/2018 (7 years ago)
TL;DR

Google claims it can follow links created via JavaScript, but warns that JS should never contradict static HTML. For directives like rel='nofollow', the HTML version prevails. Specifically, a link marked nofollow in HTML but follow in JS remains nofollow in the eyes of Googlebot. This statement serves as a reminder that the two-phase indexing (HTML first, then JS) creates hierarchical priorities that must be respected.

What you need to understand

Why does Google emphasize the non-contradiction between HTML and JavaScript?

Google's crawl operates in two distinct phases. The first analyzes the raw HTML, and the second executes JavaScript to detect dynamic changes. This sequential process naturally creates a processing hierarchy.

When a link exists in HTML with a rel='nofollow' attribute, Googlebot records it immediately as such. If JavaScript then modifies or removes this attribute, Google does not reconsider the directive: the initially captured state prevails.
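The "initial captured state" can be illustrated with a minimal Python sketch. This is not Google's actual pipeline, just a stand-in for phase 1: directives are recorded from the raw HTML, and the later JS mutation (shown only as a comment) never changes what was recorded. The class name and sample URL are illustrative.

```python
# Sketch of phase 1: rel directives are captured from the raw HTML.
from html.parser import HTMLParser

class LinkDirectiveRecorder(HTMLParser):
    """Records each <a> href with the rel value seen in raw HTML."""
    def __init__(self):
        super().__init__()
        self.links = {}

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            attrs = dict(attrs)
            if "href" in attrs:
                # Default to 'follow' when no rel attribute is present.
                self.links[attrs["href"]] = attrs.get("rel", "follow")

raw_html = '<a href="/partner" rel="nofollow">Partner</a>'
recorder = LinkDirectiveRecorder()
recorder.feed(raw_html)
print(recorder.links)  # {'/partner': 'nofollow'}

# Even if client-side JS later runs
#   document.querySelector('a').removeAttribute('rel')
# the directive acted on is the one captured above: nofollow.
```

The point of the sketch: whatever phase 2 does to the DOM, the dictionary built in phase 1 is the record that counts for directives.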

What happens when JavaScript adds links that are missing from static HTML?

Google can technically discover and follow these links, but subject to timing and crawl-budget constraints. JavaScript rendering consumes far more resources than simple HTML parsing, so on large sites Googlebot may not render every page in JS.

Links added solely through JavaScript therefore surface later in the crawl process. They may be discovered with a delay, or ignored entirely if the crawl budget is exhausted. This latency directly slows the discovery and indexing of the targeted content.

Does this recommendation extend to other robot directives as well?

Absolutely. The principle extends to all meta robots directives, canonical tags, and even hreflang tags. If your HTML states one thing and JavaScript contradicts it, Google consistently prioritizes the initial state captured in HTML.

This logic addresses an efficiency constraint: parsing HTML is orders of magnitude cheaper than running a full JavaScript engine. Google optimizes its resources by fixing critical directives as early as the first phase.

  • Static HTML defines the initial state of links and directives; JavaScript can only enhance it, not replace it
  • The two-phase crawl creates a processing hierarchy: HTML first, then JS, with priority given to the initial HTML state
  • Links added solely in JS are discovered later and consume more crawl budget
  • nofollow, noindex, and canonical directives stated in HTML prevail even if JavaScript attempts to modify them
  • This architecture protects Google from dynamic manipulations intended to present different content to crawlers

SEO Expert opinion

Is this statement consistent with field observations?

Yes, and it is even one of the few statements from Google that is perfectly verifiable in production. Numerous tests show that changing a nofollow to a follow via JavaScript has no measurable effect on the PageRank transmitted. Google indeed freezes HTML directives.

However, Google's ability to follow links created solely in JS remains highly variable depending on context. On sites with a high crawl budget, it works well. On less prioritized sites, these links may remain undiscovered for weeks. [To be verified]: Google never specifies what crawl budget threshold guarantees systematic JS rendering.

What nuances should be added to this rule?

Google talks about a "recommendation", not a prohibition. Technically, nothing prevents you from using JavaScript to create all your navigation links. But this approach exposes you to the risk of delayed or even incomplete discovery.

The real question becomes: why accept this risk when the alternative—serving links in HTML—costs nothing? Modern frameworks (Next.js, Nuxt) allow for pre-rendering critical HTML while maintaining JavaScript interactivity. It's the best of both worlds.

Beware of poorly configured JavaScript frameworks: A pure Single Page Application (SPA) that generates 100% of its links in JS exposes you to major indexing issues. Links discovered late delay the indexing of entire sections of the site. Server-side rendering (SSR) or static site generation (SSG) then become essential.

In what scenarios does this rule cause practical issues?

E-commerce sites with dynamic filters are typically affected. When a user refines their search by price, size, or color, JavaScript often regenerates the list of products and their links. If these links do not exist in the initial HTML, Google may never discover certain product pages.

Another tricky case involves A/B testing systems that change links in JavaScript to segment traffic. If the HTML version contains a nofollow and JavaScript removes it for a segment of users, Google will still see nofollow. The test then becomes biased since the SEO performance remains fixed on the HTML version.

Practical impact and recommendations

What practical steps should be taken to stay compliant?

First, serve a complete static HTML that contains all critical links for navigation and internal linking. JavaScript can then enhance user experience (lazy loading, interactions), but the basic link structure must exist from the start in the HTML.

Next, align the robot directives between HTML and JS. If a link should be nofollow, place the attribute directly in the HTML. Never rely on JavaScript to add or remove these directives; Google will not honor the change reliably.

How can I check that my site adheres to this recommendation?

Use the URL Inspection tool in Search Console and compare the "raw HTML" and "rendered HTML" views. Critical links must appear in both. If certain links only appear after rendering, they are likely to be crawled with a delay.

Crawl your site with a tool like Screaming Frog while disabling JavaScript rendering. All strategic links should be discovered. If orphan pages only appear after enabling JS, that is a warning signal: Google may miss them.

What mistakes should be avoided at all costs?

Never declare a directive in HTML and then contradict it in JavaScript, hoping that Google will consider the latter. It does not work. The HTML prevails, end of story.

Avoid serving an empty (or nearly empty) HTML while relying on JS rendering for everything. This approach may work on sites with very high crawl budgets, but it's a risky bet for most sites. Why take this risk when SSR solves the problem?

  • Ensure that all navigation and internal-linking anchors exist in the source HTML (view-source:)
  • Place nofollow, noindex, and canonical directives directly in the HTML; never add them solely through JS
  • Enable server-side rendering (SSR) or static site generation (SSG) for modern JS frameworks
  • Regularly test with the Search Console URL Inspection tool and compare raw HTML vs rendered HTML
  • Crawl the site with JS disabled to identify links missing from the static HTML
  • Document any exceptions where JavaScript modifies links, and measure the crawl impact
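The checklist above can be sketched as a small audit script. This is an illustrative toy, not a Googlebot emulation: it assumes you have already obtained the raw HTML and a serialization of the rendered DOM (for example from a crawler's two views), and flags both JS-only links and HTML/JS directive contradictions. The parsing regexes are deliberately simplistic.

```python
# Toy audit: compare links and rel directives between the raw HTML
# (phase 1) and the rendered DOM serialized as HTML (phase 2).
import re

def parse_links(html: str) -> dict:
    """Map each href to its rel value, defaulting to 'follow' when absent."""
    links = {}
    for tag in re.finditer(r'<a\s([^>]*)>', html):
        attrs = tag.group(1)
        href = re.search(r'href="([^"]+)"', attrs)
        rel = re.search(r'rel="([^"]+)"', attrs)
        if href:
            links[href.group(1)] = rel.group(1) if rel else "follow"
    return links

def audit(raw_html: str, rendered_html: str) -> list:
    """Flag JS-only links and HTML/JS directive contradictions."""
    issues = []
    raw = parse_links(raw_html)
    rendered = parse_links(rendered_html)
    for href, rel in rendered.items():
        if href not in raw:
            issues.append(f"{href}: JS-only link, late discovery likely")
        elif raw[href] != rel:
            issues.append(f"{href}: HTML says {raw[href]!r}, JS says {rel!r}; HTML wins")
    return issues

raw = '<a href="/partner" rel="nofollow">Partner</a>'
rendered = '<a href="/partner">Partner</a><a href="/js-widget">Widget</a>'
for issue in audit(raw, rendered):
    print(issue)
```

An empty issues list means your HTML and JavaScript agree, which is exactly the state the statement recommends.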
Google can technically follow JavaScript links, but consistently prioritizes the state defined in static HTML. To maximize your chances of quick crawling and indexing, serve complete HTML with all critical links and directives. JavaScript should enhance the user experience, not replace the fundamental SEO structure.

These technical optimizations, especially implementing SSR or SSG on modern frameworks, can prove complex to get right. If your team lacks expertise in these areas, a specialized SEO agency can audit your JavaScript architecture and help ensure your code meets Google's requirements without sacrificing front-end performance.

❓ Frequently Asked Questions

Does Google index links created only in JavaScript?
Yes, Google can discover and follow them, but with a potentially significant delay. JavaScript rendering happens after HTML parsing, and not all JS links are guaranteed to be crawled, especially on sites with a low crawl budget.
If I change a nofollow link to follow via JavaScript, does Google pass PageRank?
No. Google freezes rel='nofollow' directives as soon as it reads the static HTML. Any later modification via JavaScript is ignored for PageRank calculation and the flow of link equity.
Can I use a SPA (Single Page Application) without SEO risk?
Technically yes, but with major precautions. A pure SPA exposes you to discovery and indexing delays. Server-side rendering (SSR) or static site generation (SSG) become essential to guarantee that Google can access the content in the initial HTML.
How do I know whether Google has rendered my page's JavaScript?
Use the URL Inspection tool in Search Console and check the "rendered HTML" tab. Compare it with the source HTML (view-source:). If the contents differ, Google executed the JS, but that does not guarantee the rendered version was used for indexing.
Do modern frameworks like Next.js or Nuxt solve this problem?
Yes, if they are configured correctly. Next.js and Nuxt offer SSR and SSG, which generate static HTML on the server. Google then receives complete HTML from the start, eliminating the dependency on JavaScript rendering for link discovery.

