
Official statement

Links present only after JavaScript rendering can cause a delay of a few hours in their discovery by Google. Google first examines the raw HTML to discover links, then does so again after rendering. For sites with fewer than 1 to 10 million pages, this generally poses no issue.
🎥 Source video

Extracted from a Google Search Central video

💬 EN 📅 26/04/2021 ✂ 26 statements
Watch on YouTube →
Other statements from this video (25)
  1. Do JavaScript links really delay discovery by Google?
  2. Why does Google ignore your canonical tags when the raw HTML contradicts the rendered version?
  3. Does a noindex in the raw HTML permanently prevent Google from rendering JavaScript?
  4. JavaScript and SEO: can you really modify titles, metas, and links client-side without risk?
  5. Is client-side JavaScript really a drag on your SEO performance?
  6. Raw HTML vs rendered HTML: does Google really not care?
  7. Does Google AdSense really penalize your site's speed like any other third-party script?
  8. Should you worry about 'other error' messages on images in Search Console?
  9. User agent or viewport: which detection method should you favor for separate mobile versions?
  10. Do JavaScript navigation links really affect your site's SEO?
  11. Can you really lose control of your canonical by leaving the href attribute empty at load time?
  12. Which crawler do Google's SEO testing tools really use?
  13. Does the structured data on your mobile version also apply to desktop?
  14. Should you really stop fearing JavaScript for SEO?
  15. Why can a canonical tag that differs between raw and rendered HTML ruin your canonicalization strategy?
  16. Can you really remove a noindex via JavaScript without risking deindexation?
  17. Can you really modify meta tags and links with JavaScript without SEO risk?
  18. Do Google products get a hidden SEO advantage in search results?
  19. Should you worry about 'other' errors in the URL Inspection tool?
  20. Does Google really ignore your images during rendering for web search?
  21. User agent or viewport: does Google really tell the difference for mobile indexing?
  22. Do JavaScript-generated links really pass ranking signals like classic HTML links?
  23. Can an empty canonical tag in the HTML force Google to auto-canonicalize your page by mistake?
  24. Can the Mobile-Friendly Test replace the URL Inspection Tool for auditing mobile crawl?
  25. Why does Google ignore your desktop structured data after mobile-first indexing?
📅 Official statement from 26/04/2021 (5 years ago)
TL;DR

Google first discovers links in the raw HTML, then again after JavaScript rendering, which can create a delay of a few hours. For sites with fewer than 1 to 10 million pages, Martin Splitt asserts that this gap generally has no impact. Essentially, this statement invites us to put JS concerns into perspective, but it doesn't exempt you from a case-by-case analysis of the site's architecture.

What you need to understand

What happens exactly when Googlebot encounters JavaScript?

Googlebot operates in two phases: it first retrieves the raw HTML returned by the server and scans the links it contains, then, after a JavaScript rendering phase, scans the content again once client-side scripts have run. This second pass can occur several hours later, depending on the rendering queue.

If your strategic links only appear after JS execution (e.g., a menu generated via React, SPA navigation), Google detects them only on the second pass. Martin Splitt describes the result merely as “slightly delayed”, a vague formulation that deserves scrutiny.
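To see what that first pass actually looks like for one of your pages, here is a minimal sketch, assuming Python with the `requests` and `beautifulsoup4` packages installed; the URL and the Googlebot user-agent string are illustrative, not a reproduction of Google's crawler:

```python
# Minimal sketch: list the links visible on Googlebot's FIRST pass,
# i.e. in the raw server HTML, before any JavaScript executes.
import requests
from bs4 import BeautifulSoup

# Illustrative UA string; sending it does not make you Googlebot.
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def raw_html_links(url: str) -> set[str]:
    """Return every href present in the server-rendered HTML."""
    resp = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=10)
    soup = BeautifulSoup(resp.text, "html.parser")
    return {a["href"] for a in soup.find_all("a", href=True)}

if __name__ == "__main__":
    links = raw_html_links("https://example.com/")
    print(f"{len(links)} links discoverable before JavaScript rendering")
```

Any strategic link missing from this output is waiting for the second pass.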

Why is there a distinction between raw HTML and rendering?

Google favors quick discovery via raw HTML because it is far less resource-intensive: JavaScript rendering consumes CPU, memory, and a slot in a dedicated queue on Google's side, hence the time lag.

For an e-commerce site with product listings added daily, this delay may mean new products remain invisible for several hours. Conversely, on a nearly static showcase site, the impact is negligible: a few hours don’t change the overall SEO.

At what page threshold should we be concerned?

Splitt establishes a threshold between 1 and 10 million pages — a range broad enough to be questionable. Below this, he says, “it generally poses no problem.” Above this, silence.

This assertion remains vague: what is the true impact beyond 10M pages? Does a delay extend from a few hours to a few days? A critical crawl budget loss? [To be verified] — Google provides neither precise numbers nor public case studies.

  • Two crawl passes: raw HTML first, then JavaScript rendering a few hours later.
  • Declared delay: “a few hours”, an imprecise figure that makes precise planning difficult.
  • Risk threshold: between 1 and 10 million pages, but no numerical data on the impact beyond.
  • Determining context: frequency of new content addition, SPA or hybrid architecture, allocated crawl budget.
  • Crawl budget: a massive site with JS links may saturate its rendering queue before all links are discovered.

SEO Expert opinion

Is this statement consistent with real-world observations?

Yes and no. On medium-sized sites, JS links are indeed eventually crawled without issue. The few-hours delay matches real-world reports, provided the site has no structural problems (JS timeouts, console errors, redirect chains).

However, on platforms that publish frequently (news, flash-sale e-commerce, aggregators), this delay becomes significant. An article published at 9 AM may not be discovered until 2 PM, directly hurting visibility in Google Discover or Top Stories. [To be verified]: Google never specifies whether this delay is unavoidable or whether a well-optimized site can shorten it.

What nuances should be added to this threshold of 1 to 10 million pages?

This threshold raises more questions than it answers. First, there is a factor of 10 between the two bounds, which is hard to turn into an actionable rule. Second, page count is only one indicator: a 500,000-page site where 80% of the content is duplicated or thin will run into more trouble than a well-structured 5-million-page site.

Concretely, the true criterion is not so much the number of pages but the update frequency and the internal link density generated by JS. A media site with 200,000 articles updated every hour via React will be more impacted than a static product catalog of 2 million listings. Google does not say this — but it is what we observe.

When does this rule not apply at all?

Let’s be honest: if your main navigation, breadcrumb, or pagination links are exclusively in JS, the delay becomes a real problem. Googlebot may ultimately discover everything, but it will do so with a latency that penalizes the indexing of new pages.

Another critical case: sites with infinite scroll using Ajax to load subsequent content. Google does not scroll — if the “next page” link exists only in JS and you have no HTML fallback, part of your content remains invisible. Splitt does not mention this, which is curious for a statement intended to reassure.
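The usual fix is an HTML fallback next to the infinite scroll. Below is a minimal sketch, assuming Flask and a hypothetical /products route (neither is from the video): the Ajax widget can keep loading batches client-side, but the raw HTML always carries a real “next page” anchor that Googlebot can follow without scrolling.

```python
# Hedged sketch of a crawlable fallback for infinite scroll.
# Flask and the /products route are illustrative assumptions.
from flask import Flask, request

app = Flask(__name__)
PAGE_SIZE = 20

@app.route("/products")
def products():
    page = max(1, int(request.args.get("page", 1)))
    start = (page - 1) * PAGE_SIZE
    items = "".join(f"<li>Product {i}</li>" for i in range(start, start + PAGE_SIZE))
    # The fallback: a plain <a> in the raw HTML, not a JS scroll handler.
    next_link = f'<a href="/products?page={page + 1}">Next page</a>'
    return f"<ul>{items}</ul>{next_link}"
```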

Warning: a discovery delay, even of a few hours, can be enough to lose a ranking window on highly competitive or time-sensitive queries. Never underestimate the impact of a 100% JS architecture without a fallback.

Practical impact and recommendations

What concrete steps should be taken to limit this delay?

First rule: don’t put all your eggs in one basket. If your navigation, pagination, or strategic internal linking can exist in raw HTML, do it. Server-side rendering (SSR) or static site generation (SSG) ensures immediate discovery.

Second step: test the raw HTML returned by your server. Run a curl request or use “View Page Source” (not the inspector, which shows the final DOM). Are all your critical links present? If not, you depend on JS rendering, and therefore on the delay mentioned by Splitt.
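To automate that check, the sketch below extends the raw-HTML snippet shown earlier: it also renders the page in a headless browser and diffs the two link sets. It assumes `requests`, `beautifulsoup4`, and `playwright` are installed (plus a one-time `playwright install chromium`); any link that appears only after rendering depends on the delay Splitt mentions.

```python
# Sketch: diff the links in the raw HTML against the links in the
# rendered DOM. Assumes requests, beautifulsoup4 and playwright.
import requests
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright

def link_sets(url: str) -> tuple[set[str], set[str]]:
    # First pass: raw HTML as returned by the server.
    raw_soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    raw = {a["href"] for a in raw_soup.find_all("a", href=True)}
    # Second pass: DOM after JavaScript execution, via headless Chromium.
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url, wait_until="networkidle")
        rendered = set(
            page.eval_on_selector_all(
                "a[href]", "els => els.map(e => e.getAttribute('href'))"
            )
        )
        browser.close()
    return raw, rendered

raw, rendered = link_sets("https://example.com/")
print("Links that only exist after rendering:", rendered - raw)
```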

What mistakes should be absolutely avoided?

Don’t rely on Google’s patience. Even if Splitt says “it generally doesn’t pose a problem,” a poorly optimized site accumulates delays: slow rendering, uncaught JS errors, timeouts, client-side redirects. The result: pages take several days to be discovered or, worse, are never crawled because they fall outside the crawl budget.
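Uncaught JS errors are the easiest of those problems to catch before Googlebot does. A hedged sketch using a headless browser (Playwright is an assumption here, not a tool Splitt mentions; the URL is illustrative):

```python
# Sketch: collect uncaught exceptions and console errors during a page
# load, the kind of breakage that can derail Google's rendering pass.
from playwright.sync_api import sync_playwright

def console_errors(url: str) -> list[str]:
    errors: list[str] = []
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Uncaught exceptions thrown by the page's scripts.
        page.on("pageerror", lambda exc: errors.append(str(exc)))
        # Messages explicitly logged at error level.
        page.on("console",
                lambda msg: errors.append(msg.text) if msg.type == "error" else None)
        page.goto(url, wait_until="networkidle")
        browser.close()
    return errors

print(console_errors("https://example.com/"))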

Another classic pitfall: thinking that an XML sitemap is enough to compensate. The sitemap speeds up URL discovery but does not replace solid internal linking in HTML. If your internal links exist only in JS, internal PageRank circulates poorly, and no sitemap can correct that.

How can I check if my site is being crawled correctly despite JS?

Use Search Console: the “Coverage” section to spot discovered but non-indexed pages, and “URL Inspection” to visualize the final rendering as seen by Google. Compare the raw HTML and the rendered HTML — if the gap is huge, you are in a risk zone.

In parallel, monitor your server logs: how much time passes between Googlebot's first request (raw HTML) and its second (rendering)? If this delay regularly exceeds 24 hours, your rendering queue is likely saturated, or Google considers your site low priority. In that case, optimize: reduce JS weight, fix console errors, move to partial SSR.
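To put a number on that gap, here is a rough sketch that parses an access log in Apache combined format and reports, per URL, the time between Googlebot's first hit and its next one. The log path and regex are assumptions to adapt to your setup, and a second hit is only a proxy for the render fetch, not a guarantee:

```python
# Sketch: per-URL delay between Googlebot's first and second fetch,
# from a combined-format access log (path and regex are assumptions).
import re
from datetime import datetime

LINE = re.compile(
    r'\S+ \S+ \S+ \[(?P<ts>[^\]]+)\] "GET (?P<path>\S+)[^"]*" '
    r'\d+ \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

first_seen, gaps = {}, {}
with open("access.log") as log:
    for line in log:
        m = LINE.match(line)
        if not m or "Googlebot" not in m["ua"]:
            continue
        ts = datetime.strptime(m["ts"], "%d/%b/%Y:%H:%M:%S %z")
        if m["path"] not in first_seen:
            first_seen[m["path"]] = ts
        elif m["path"] not in gaps:
            gaps[m["path"]] = ts - first_seen[m["path"]]

# Ten slowest URLs: if these regularly exceed 24h, investigate.
for path, gap in sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print(f"{gap}  {path}")
```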

  • Audit the raw HTML returned by the server to verify the presence of strategic links
  • Favor server-side rendering (SSR) or static site generation (SSG) for critical areas
  • Test Google’s rendering via the “URL Inspection” tool in Search Console
  • Analyze server logs to measure the actual delay between HTML crawl and rendered crawl
  • Never solely rely on the XML sitemap to compensate for weak internal linking
  • Monitor JavaScript errors in production — broken JS blocks discovery
In summary: if your site has fewer than a million pages and you publish infrequently, the JS delay is minimal. Beyond that, or if you need rapid indexing (news, dynamic e-commerce), aim for a hybrid rendering with critical links in raw HTML. These optimizations can be complex to implement alone, especially if your tech stack relies on modern frameworks. Engaging a specialized SEO agency allows for precise diagnostics, tailored architectural recommendations, and technical support to ensure optimal discovery without sacrificing user experience.

❓ Frequently Asked Questions

Does the JavaScript link discovery delay directly affect ranking?
Not directly, but indirectly yes: if a page is not discovered in time, it cannot be indexed or rank. On time-sensitive content, this delay can therefore cost you dearly in visibility.
Can an XML sitemap compensate for the absence of links in the raw HTML?
It speeds up URL discovery but does not replace internal linking. Without HTML links, PageRank flows poorly, and some pages may remain uncrawled despite appearing in the sitemap.
How do I know whether my site is affected by this delay?
Compare the raw HTML (the page source) with the rendered HTML (via Search Console's URL Inspection tool). If your critical links only appear after rendering, you are affected.
Are all JavaScript frameworks equal in the face of this problem?
No. Next.js in SSR or Nuxt.js in universal mode generate HTML server-side, so there is no delay. Pure client-side React or Vue depends on Google's rendering, so a delay is guaranteed.
What happens beyond 10 million pages, according to Splitt?
He does not say; that is precisely what makes this statement vague. One can assume the delay grows, but no official data confirms it. Verify in the field with your own logs.
