What does Google say about SEO?

Official statement

Links added via JavaScript after the initial HTML load are discovered a few hours later than those present in the raw HTML: Google first examines the raw HTML to discover links, then looks again after rendering. This delay only affects discovery, not indexing or ranking. For sites with fewer than 10 million pages, this is generally not an issue.
🎥 Source video

Extracted from a Google Search Central video

💬 EN 📅 26/04/2021 ✂ 26 statements
Watch on YouTube →
Other statements from this video (25)
  1. Why does Google ignore your canonical tags when the raw HTML contradicts the rendered output?
  2. Does a raw HTML noindex really prevent JavaScript rendering by Google?
  3. Can you really modify title, meta, and links on the client side with JavaScript without risks?
  4. Is client-side JavaScript really holding back your SEO performance?
  5. Raw HTML vs Rendered: Does Google really not care?
  6. Does Google AdSense really penalize your site's speed like any other third-party script?
  7. Should you be worried about 'other error' issues with images in the Search Console?
  8. Should you prioritize user agent or viewport detection for your separate mobile versions?
  9. Do JavaScript navigation links really affect your site's SEO?
  10. Can you really lose control of your canonical by leaving the href attribute empty at load time?
  11. Does Google really use different crawlers for its SEO testing tools?
  12. Are the structured data from your mobile version also applicable to desktop?
  13. Should you really stop fearing JavaScript for SEO?
  14. Do JavaScript links really slow down Google's discovery process?
  15. How can a different canonical tag between raw HTML and rendered output destroy your canonicalization strategy?
  16. Can you really remove a noindex via JavaScript without risking de-indexation?
  17. Is it truly safe to modify meta tags and links with JavaScript without risking your SEO?
  18. Do Google products really get a hidden SEO advantage in search results?
  19. Should you be concerned about 'other' errors in the URL Inspection Tool?
  20. Does Google really overlook your images during web search rendering?
  21. User agent or viewport: Does Google really differentiate for mobile indexing?
  22. Do JavaScript-generated links truly pass ranking signals like traditional HTML links?
  23. Can an empty HTML canonical tag mistakenly force Google to auto-canonicalize your page?
  24. Can the Mobile-Friendly Test really substitute the URL Inspection Tool for auditing mobile crawling?
  25. Why does Google ignore your desktop structured data after switching to mobile-first indexing?
TL;DR

Google discovers links added via JavaScript a few hours later than those present in the raw HTML, because it examines the source code before rendering. This delay only affects the URL discovery phase, not their indexing or ranking once crawled. For sites with fewer than 10 million pages, Martin Splitt states that this lag remains negligible.

What you need to understand

Why does Google discover JavaScript links later?

Google's crawl process occurs in two distinct phases. First, Googlebot fetches the raw HTML returned by the server: this is the initial download phase. In this source code, it identifies all links present in classic <a href> tags.

Then, a few hours later, Google passes the HTML through its JavaScript rendering engine to execute client-side scripts. Only at this point does it discover links dynamically injected by React, Vue, Angular, or any other front-end framework. This time lag is not a penalty; it is a technical constraint of Google's crawl architecture.
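
To make the distinction concrete, here is a minimal sketch (hypothetical link targets, plain browser script, not taken from the video) contrasting a link that only exists after script execution with one hard-coded in the server response:

```typescript
// Hypothetical client-side injection: this link is absent from the raw HTML that
// Googlebot downloads first and only becomes discoverable after the rendering phase.
document.querySelector("nav")?.insertAdjacentHTML(
  "beforeend",
  '<a href="/new-arrivals">New arrivals</a>' // found only after rendering
);

// By contrast, a link hard-coded in the server response, such as
//   <a href="/category/shoes">Shoes</a>
// is picked up during the initial HTML fetch, hours earlier.
```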

Does this delay actually affect the indexing of target pages?

Martin Splitt states that the delay only concerns discovery, not indexing or ranking. Once a JavaScript link is discovered and Googlebot visits the target page, it enters the standard indexing process.

In practical terms? If page A contains a JavaScript link to page B, Google will discover this link a few hours after crawling A. But once B is discovered, its processing follows the same path as a URL found through a classic HTML link. No difference in weight, PageRank transferred, or indexing priority.

Is the 10 million page threshold relevant?

Splitt asserts that for sites with fewer than 10 million pages, this lag remains negligible. That qualifier suggests Google considers crawl budget a non-limiting factor for the vast majority of websites.

However, for massive platforms (marketplaces, media outlets, directories), the delay can be problematic. If your site publishes thousands of new URLs each day and your crawl budget is saturated, every lost hour counts. [To be checked]: Google does not provide any numerical data on the actual impact for sites exceeding this threshold.

  • Google crawls the raw HTML first, then renders JavaScript a few hours later
  • The delay only affects link discovery, not their indexing or ranking
  • For sites with fewer than 10 million pages, the impact is deemed negligible by Google
  • Massive sites with a saturated crawl budget may face significant indexing delays

SEO Expert opinion

Is this statement consistent with field observations?

On paper, yes. Crawl tests with tools like OnCrawl or Botify indeed show a time gap between Googlebot's visit and the appearance of JavaScript links in the logs. The documented delay typically varies between 2 and 48 hours depending on the site's crawl frequency.

But the nuance that Splitt omits: this lag can extend considerably on sites with low authority or technical issues. On an e-commerce site with thousands of product pages generated in React, some URLs may remain undiscovered for weeks if they do not benefit from any internal or external HTML link.

Is the 10 million page threshold credible?

Honestly? It's a vague statement. Google never discloses how many pages Googlebot can crawl per day on a given site; the crawl budget remains a black box. This figure of 10 million seems arbitrary and probably calibrated to reassure 99% of sites.

Let's be honest: if your site publishes 50,000 new URLs per month via JavaScript and your crawl budget stagnates at 10,000 pages/day, you will feel the delay, no matter that you fall below the famed 10 million threshold. [To be checked]: no official metric confirms this limit.

When does this delay become critical?

For news sites, marketplaces with limited stock, or classified ad platforms, a few hours of delay can mean lost sales. If your product pages are crawled with a 24-hour lag and your stock is depleted in 12 hours, Google indexes out-of-stock pages.

Another problematic case: JavaScript-only sites without an HTML fallback. If your architecture relies 100% on a front-end framework and your internal linking is entirely dynamic, you are entirely dependent on Google's rendering queue. And this queue can be capricious.

Warning: Google never guarantees a maximum time for JavaScript rendering. Splitt's "a few hours" remains vague; it can be 3 hours or 72 hours depending on the load on Google's side.

Practical impact and recommendations

Should you prioritize raw HTML for critical links?

Yes, without hesitation. If you want to maximize the discovery speed of your strategic pages (new categories, flagship product sheets, blog articles), make sure they are accessible via HTML links present in the initial source code.

In practical terms: your header, footer, main menu, and internal linking on high-traffic pages must be in native HTML. Keep JavaScript for secondary elements like search filters, personalized recommendations, or lazy-loaded content.

How to audit your JavaScript link architecture?

Disable JavaScript in your browser (DevTools > Settings > Debugger > Disable JavaScript) and navigate your site. Any link invisible in this setup will be discovered with a delay by Google. This is the quickest method to spot issues.

Also, use a crawler like Screaming Frog in "Text Only" mode to simulate Googlebot's behavior before rendering. Then compare it with a crawl in full rendering mode: the URLs missing from the first crawl are your risk areas.
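
As a lighter-weight complement to those tools, the sketch below is an assumption on my part rather than part of the original recommendations: it lists the <a href> targets found in the raw HTML, i.e. the links Google can discover before rendering (Node 18+, run as an ES module, hypothetical URL). Diffing its output against a rendered crawl surfaces the same risk areas:

```typescript
// Hedged sketch: extract raw-HTML link targets with a simple regex, good enough for
// a quick audit pass (not a full HTML parser). Node 18+ provides fetch globally.
const url = process.argv[2] ?? "https://www.example.com/"; // hypothetical URL

const html = await (await fetch(url)).text();
const rawLinks = new Set(
  [...html.matchAll(/<a\b[^>]*\bhref=["']([^"']+)["']/gi)].map((m) => m[1])
);

console.log(`${rawLinks.size} links found in the raw HTML of ${url}`);
for (const href of rawLinks) console.log(" -", href);

// Any URL present in your rendered crawl (Screaming Frog in JavaScript rendering
// mode, or a headless browser) but missing from this list depends on rendering
// to be discovered by Google.
```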

What to do if your site exceeds 10 million pages?

Prioritize ruthlessly. If you manage a massive site, every JavaScript link must be justified. Critical business pages (those generating revenue or traffic) must be accessible via pure HTML.

Next, optimize your crawl budget: block unnecessary URLs via robots.txt, fix redirect chains, eliminate soft 404s. And most importantly, don't count on Google to crawl everything: submit your new URLs via the Indexing API for urgent pages.
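
For reference, an Indexing API notification is a single authenticated POST request. The sketch below is an assumption about how you might wire it up (Node 18+, hypothetical URL, an ACCESS_TOKEN environment variable standing in for a service-account OAuth token with the indexing scope). Keep in mind that Google officially documents this API for job posting and livestream pages only, so treat it as a complement to sitemaps and Search Console rather than a guaranteed shortcut:

```typescript
// Hedged sketch: publish a URL_UPDATED notification to the Indexing API.
// Obtaining the OAuth 2.0 token (scope: https://www.googleapis.com/auth/indexing)
// from a service account is not shown here.
const res = await fetch(
  "https://indexing.googleapis.com/v3/urlNotifications:publish",
  {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.ACCESS_TOKEN}`, // hypothetical env variable
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      url: "https://www.example.com/new-strategic-page", // hypothetical URL
      type: "URL_UPDATED",
    }),
  }
);

console.log(res.status, await res.text());
```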

  • Ensure that the links in the header, footer, and main menu are in raw HTML
  • Audit the site with JavaScript disabled to identify invisible links
  • Compare a "Text Only" crawl with a rendered crawl to detect discrepancies
  • Submit new strategic URLs via the Indexing API or Search Console
  • Monitor server logs to measure the actual delay between initial crawl and rendering (see the sketch after this list)
  • Prioritize server-side rendering (SSR) if the crawl budget is saturated
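
On the log-monitoring point above, a rough way to approximate the discovery delay is to record the first Googlebot request per URL and compare a parent page with a page that is only reachable through a JavaScript link. The sketch below is an assumption (Node 18+, hypothetical log path, common/combined log format):

```typescript
// Hedged sketch: first Googlebot request per URL from a web server access log.
import { readFileSync } from "node:fs";

const logFile = process.argv[2] ?? "/var/log/nginx/access.log"; // hypothetical path
const firstHit = new Map<string, string>();

for (const line of readFileSync(logFile, "utf8").split("\n")) {
  if (!line.includes("Googlebot")) continue;
  // e.g. 66.249.66.1 - - [26/Apr/2021:10:15:32 +0000] "GET /page-b HTTP/1.1" 200 ...
  const match = line.match(/\[([^\]]+)\] "GET ([^ ]+) /);
  if (match && !firstHit.has(match[2])) firstHit.set(match[2], match[1]);
}

for (const [url, time] of firstHit) console.log(time, url);
// The gap between the first hit on a parent page and the first hit on a URL linked
// only via JavaScript approximates the raw-HTML-to-rendering discovery delay.
```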
The discovery delay for JavaScript links is not inevitable, but avoiding it requires a thoughtful technical architecture. For high-volume sites or urgent content, betting on native HTML remains the surest path to fast indexing. These optimizations may seem simple in theory, but their consistent implementation on a complex site often requires an in-depth technical audit and a partial front-end overhaul. If your team lacks the resources or expertise in these areas, partnering with a specialized technical SEO agency can help avoid costly mistakes and speed up the rollout of fixes.

❓ Frequently Asked Questions

Do JavaScript links pass PageRank like HTML links?
Yes. Once discovered and processed, JavaScript links carry the same weight as a classic HTML link. The delay only concerns discovery, not the transfer of authority.
Does the discovery delay affect a page's ranking?
No, according to Martin Splitt. The delay only affects when Google finds the link, not how the target page is indexed or ranked once discovered.
How can you tell if your crawl budget is saturated by JavaScript?
Analyze your server logs: if Googlebot crawls few pages per day despite a large volume of content, or if the delay between publication and indexing exceeds several days, your crawl budget is probably limited.
Does server-side rendering (SSR) eliminate this problem?
Yes, completely. With SSR, the HTML returned by the server already contains all the links, so Google discovers them immediately during the initial crawl without waiting for JavaScript rendering.
Should you avoid React, Vue, or Angular for SEO?
No. These frameworks are compatible with SEO if you use SSR or pre-rendering. The problem only arises with pure client-side rendering without an HTML fallback.

