
Official statement

Links injected via JavaScript after the initial HTML load are discovered a few hours later than those present in the raw HTML: Google first examines the raw HTML for links, then looks again after rendering. This delay affects only discovery, not indexing or ranking. For sites with fewer than 10 million pages, it is generally not an issue.
🎥 Source video

Extracted from a Google Search Central video

💬 EN 📅 26/04/2021 ✂ 26 statements
Watch on YouTube →
Other statements from this video (25)
  1. Why does Google ignore your canonical tags when the raw HTML contradicts the rendered version?
  2. Does a noindex in the raw HTML permanently prevent Google from rendering JavaScript?
  3. JavaScript and SEO: can you really modify title, meta tags, and links client-side without risk?
  4. Is client-side JavaScript really a drag on your SEO performance?
  5. Raw HTML vs rendered HTML: does Google really not care?
  6. Does Google AdSense really penalize your site's speed like any other third-party script?
  7. Should you worry about 'other error' messages on images in Search Console?
  8. User agent or viewport: which detection method should you favor for separate mobile versions?
  9. Do JavaScript navigation links really affect your site's rankings?
  10. Can you really lose control of your canonical by leaving the href attribute empty at page load?
  11. Which Google crawler do its SEO testing tools really use?
  12. Does structured data on your mobile version also apply to desktop?
  13. Should you really stop fearing JavaScript for SEO?
  14. Do JavaScript links really delay discovery by Google?
  15. Why can a canonical tag that differs between raw and rendered HTML ruin your canonicalization strategy?
  16. Can you really remove a noindex via JavaScript without risking deindexation?
  17. Can you really modify meta tags and links with JavaScript without SEO risk?
  18. Do Google products get a hidden SEO advantage in search results?
  19. Should you worry about 'other' errors in the URL Inspection tool?
  20. Does Google really ignore your images during rendering for web search?
  21. User agent or viewport: does Google really make a difference for mobile indexing?
  22. Do JavaScript-generated links really pass ranking signals like classic HTML links?
  23. Can an empty canonical tag in the HTML mistakenly force Google to auto-canonicalize your page?
  24. Can the Mobile-Friendly Test replace the URL Inspection Tool for auditing mobile crawling?
  25. Why does Google ignore your desktop structured data after mobile-first indexing?
TL;DR

Google discovers links added via JavaScript a few hours later than those present in the raw HTML, as it examines the source code first before rendering. This delay only affects the URL discovery phase, not their indexing or ranking once crawled. For sites with fewer than 10 million pages, Martin Splitt states that this lag remains negligible.

What you need to understand

Why does Google discover JavaScript links later?

Google's crawl process occurs in two distinct phases. First, Googlebot fetches the raw HTML returned by the server (the initial download phase). In this source code, it identifies all links present in classic <a href> tags.

Then, a few hours later, Google passes the HTML through its JavaScript rendering engine to execute client-side scripts. Only at this point does it discover links dynamically injected by React, Vue, Angular, or any other front-end framework. This time lag is not a penalty; it is a technical constraint of Google's crawl architecture.
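To make the first, pre-rendering phase concrete, here is a minimal sketch using Python's standard-library HTML parser. It extracts only the <a href> links present in raw HTML, which approximates what link discovery sees before any script runs (the sample HTML is an invented example):

```python
from html.parser import HTMLParser

class RawLinkExtractor(HTMLParser):
    """Collects href values from <a> tags found in raw HTML.

    Approximates phase 1 of the crawl described above: links injected
    later by JavaScript never appear here, because no script executes.
    """
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Raw HTML as the server returns it; the JS-injected link is invisible here.
raw_html = """
<nav><a href="/category/shoes">Shoes</a></nav>
<script>
  // A framework would add <a href="/new-arrivals"> at runtime --
  // a parser that does not execute JS cannot see it.
</script>
"""

parser = RawLinkExtractor()
parser.feed(raw_html)
print(parser.links)  # ['/category/shoes']
```

The link added by the script simply does not exist for this parser, just as it does not exist for Googlebot until the rendering phase runs hours later.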

Does this delay actually affect the indexing of target pages?

Martin Splitt states that the delay only concerns discovery, not indexing or ranking. Once a JavaScript link is discovered and Googlebot visits the target page, it enters the standard indexing process.

In practical terms? If page A contains a JavaScript link to page B, Google will discover this link a few hours after crawling A. But once B is discovered, its processing follows the same path as a URL found through a classic HTML link: no difference in weight, PageRank transferred, or indexing priority.

Is the 10 million page threshold relevant?

Splitt asserts that for sites with fewer than 10 million pages, this lag remains negligible. This clarification suggests that Google considers the crawl budget non-limiting for the vast majority of websites.

However, for massive platforms (marketplaces, media outlets, directories) the delay can be problematic. If your site publishes thousands of new URLs each day and your crawl budget is saturated, every lost hour counts. [To be checked]: Google does not provide any numerical data on the actual impact for sites exceeding this threshold.

  • Google crawls the raw HTML first, then renders JavaScript a few hours later
  • The delay only affects link discovery, not their indexing or ranking
  • For sites with fewer than 10 million pages, the impact is deemed negligible by Google
  • Massive sites with a saturated crawl budget may face significant indexing delays

SEO Expert opinion

Is this statement consistent with field observations?

On paper, yes. Crawl tests with tools like OnCrawl or Botify do show a time gap between Googlebot's visit and the appearance of JavaScript links in the logs. The documented delay typically ranges from 2 to 48 hours depending on the site's crawl frequency.

But here is the nuance Splitt omits: this lag can stretch considerably on sites with low authority or technical issues. On an e-commerce site with thousands of product pages generated in React, some URLs may remain undiscovered for weeks if they do not benefit from any internal or external HTML link.

Is the 10 million page threshold credible?

Honestly? It's a vague statement. Google never discloses how many pages Googlebot can crawl per day on a given site; the crawl budget remains a black box. The 10 million figure seems arbitrary, probably calibrated to reassure 99% of sites.

Let's be honest: if your site publishes 50,000 new URLs per month via JavaScript and your crawl budget stagnates at 10,000 pages per day, you will feel the delay, whether or not you fall below the famous 10 million threshold. [To be checked]: no official metric confirms this limit.

When does this delay become critical?

For news sites, marketplaces with limited stock, or classified ad platforms, a few hours of delay can mean lost sales. If your product pages are crawled with a 24-hour lag and your stock sells out in 12 hours, Google indexes out-of-stock pages.

Another problematic case: JavaScript-only sites without an HTML fallback. If your architecture relies 100% on a front-end framework and your internal linking is entirely dynamic, you are completely dependent on Google's rendering queue. And that queue can be capricious.

Warning: Google never guarantees a maximum time for JavaScript rendering. Splitt's "a few hours" remains vague: it can be 3 hours or 72 hours depending on Google's server load.

Practical impact and recommendations

Should you prioritize raw HTML for critical links?

Yes, without hesitation. If you want to maximize the discovery speed of your strategic pages (new categories, flagship product pages, blog articles), make sure they are accessible via HTML links present in the initial source code.

In practical terms: your header, footer, main menu, and internal linking on high-traffic pages must be in native HTML. Reserve JavaScript for secondary elements such as search filters, personalized recommendations, or lazy-loaded content.

How do you audit your JavaScript link architecture?

Disable JavaScript in your browser (DevTools > Settings > Debugger > Disable JavaScript) and navigate your site. Any link invisible in this setup will be discovered with a delay by Google. This is the quickest way to spot issues.

Also run a crawler like Screaming Frog in "Text Only" mode to simulate Googlebot's behavior before rendering, then compare it with a crawl in full rendering mode: the URLs missing from the first crawl are your risk areas.
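The comparison step can be sketched in a few lines: assuming you have exported the URL lists from a text-only crawl and a rendered crawl (the URL sets below are invented examples; in practice they would come from crawler exports), a set difference flags the links that only exist after rendering:

```python
# Minimal sketch: diff the links found without JS against those found
# with rendering. In practice these sets would be loaded from two
# crawler exports; here they are hard-coded placeholder data.
raw_links = {"/", "/products", "/blog"}  # visible in the raw HTML
rendered_links = {"/", "/products", "/blog",
                  "/recommended", "/filter?color=red"}  # after JS runs

# Links discoverable only after rendering: Google will find these
# hours later (or not at all, if rendering fails).
js_only = sorted(rendered_links - raw_links)
print(js_only)  # ['/filter?color=red', '/recommended']
```

Every URL in that difference is a candidate for moving into native HTML, starting with the ones that matter for revenue or traffic.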

What should you do if your site exceeds 10 million pages?

Prioritize ruthlessly. If you manage a massive site, every JavaScript link must be justified. Critical business pages, those generating revenue or traffic, must be accessible via pure HTML.

Next, optimize your crawl budget: block unnecessary URLs via robots.txt, fix redirect chains, eliminate soft 404s. And above all, don't count on Google to crawl everything: submit your new URLs via the Indexing API for urgent pages.

  • Ensure that the links in the header, footer, and main menu are in raw HTML
  • Audit the site with JavaScript disabled to identify invisible links
  • Compare a "Text Only" crawl with a rendered crawl to detect discrepancies
  • Submit new strategic URLs via the Indexing API or Search Console
  • Monitor server logs to measure the actual delay between initial crawl and rendering
  • Prioritize server-side rendering (SSR) if the crawl budget is saturated
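As a starting point for the log-monitoring step, here is a minimal sketch that counts Googlebot hits per day from access-log lines. The log lines are invented samples in combined log format; adapt the regex and the Googlebot filter to your server's actual format:

```python
import re
from collections import Counter

# Assumed combined-log-format lines; in practice, read these from
# your server's access log file.
log_lines = [
    '66.249.66.1 - - [26/Apr/2021:08:14:02 +0000] "GET /page-a HTTP/1.1" 200 5120 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [26/Apr/2021:11:02:45 +0000] "GET /page-b HTTP/1.1" 200 4301 "-" "Googlebot/2.1"',
    '203.0.113.9 - - [26/Apr/2021:11:05:00 +0000] "GET /page-b HTTP/1.1" 200 4301 "-" "Mozilla/5.0"',
    '66.249.66.1 - - [27/Apr/2021:09:30:11 +0000] "GET /page-c HTTP/1.1" 200 2210 "-" "Googlebot/2.1"',
]

hits_per_day = Counter()
for line in log_lines:
    if "Googlebot" not in line:
        continue  # ignore non-Google traffic (UA spoofing aside)
    match = re.search(r"\[(\d{2}/\w{3}/\d{4})", line)  # date inside [...]
    if match:
        hits_per_day[match.group(1)] += 1

print(dict(hits_per_day))  # {'26/Apr/2021': 2, '27/Apr/2021': 1}
```

A flat or low daily count against a high publication volume is the saturation signal described above; for stricter filtering, Google documents reverse-DNS verification of Googlebot IPs, which a production script should add.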
    The discovery delay of JavaScript links is not inevitable, but it does require a thoughtful technical architecture. For high-volume sites or time-sensitive content, betting on native HTML remains the surest path to fast indexing. These optimizations may seem simple in theory, but implementing them consistently on a complex site often requires an in-depth technical audit and a partial front-end overhaul. If your team lacks the resources or expertise, partnering with a specialized technical SEO agency can help avoid costly mistakes and speed up deployment of the fixes.

❓ Frequently Asked Questions

Do JavaScript links pass PageRank like HTML links?
Yes. Once discovered and processed, JavaScript links carry the same weight as a classic HTML link. The delay only affects discovery, not the transfer of authority.
Does the discovery delay affect a page's rankings?
No, according to Martin Splitt. The delay only affects when Google finds the link, not how the target page is indexed or ranked once discovered.
How do I know if my crawl budget is saturated by JavaScript?
Analyze your server logs: if Googlebot crawls few pages per day despite a large volume of content, or if the gap between publication and indexing exceeds several days, your crawl budget is probably limited.
Does server-side rendering (SSR) eliminate this problem?
Yes, completely. With SSR, the HTML returned by the server already contains all the links, so Google discovers them immediately during the initial crawl without waiting for JavaScript rendering.
Should you avoid React, Vue, or Angular for SEO?
No. These frameworks are SEO-compatible if you use SSR or pre-rendering. The problem only arises with pure client-side rendering without an HTML fallback.

