Official statement
Other statements from this Google Search Central video (12) · duration 465:56 · published on 24/03/2021
- 10:15 Does Google really factor in consecutive loading times, or just the initial visit?
- 60:22 Is Server-Side Rendering really essential for SEO in 2025?
- 76:24 Does the JSON hydration at the bottom of the page harm SEO?
- 121:54 Has Googlebot really become foolproof when it comes to JavaScript?
- 152:49 How does switching to Evergreen Chrome revolutionize Google's page rendering?
- 183:08 Does Google really render ALL of your JavaScript pages?
- 196:12 Why does Google never click on your Load More buttons, and how can you avoid this?
- 226:28 Should you really hide cumulative content from infinite paginations from Google?
- 251:03 Can you really provide a different navigation experience to Google without risking a cloaking penalty?
- 271:04 Does Googlebot really click on the JavaScript buttons and links on your site?
- 303:17 Should you create a separate page for each day of a multi-day event or canonicalize to a single page?
- 402:37 Is it true that JavaScript is fully compatible with modern SEO?
Google crawls URLs from both the initial HTML and the rendered HTML after JavaScript execution. Links that are visible only in the raw source HTML and are then dynamically removed may technically still work, but they create a risky inconsistency. Martin Splitt recommends avoiding this practice to prevent future issues, without specifying what those might be.
What you need to understand
What Does This Difference Between Initial HTML and Rendered HTML Really Mean?
The initial HTML refers to the raw source code returned by the server before any JavaScript execution. This is what you see with a simple `curl` or in the browser's source view.

The rendered HTML, on the other hand, corresponds to the final DOM after executing all client-side scripts. A link present in the initial HTML can disappear if a script removes it, hides it via `display:none`, or modifies the DOM. Google crawls both states — but that doesn't mean they carry the same weight.

Why Does Google Extract URLs from Both Versions?

Googlebot performs a two-phase crawl: first the raw HTML (quick, low-cost crawl), then the JavaScript rendering (slower, resource-intensive crawl). Links in the initial HTML are discovered immediately, while those added by JS require additional time.

If a link is present in the initial HTML but disappears after rendering, Googlebot can technically detect it in the first phase. However, this inconsistency creates a semantic ambiguity: should the bot follow this link or not? The webmaster's intent is unclear.
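To make the ambiguity concrete, here is a minimal hypothetical snippet in which a link shipped in the initial HTML is deleted once the page loads, so the rendered DOM no longer contains it (the selector and URL are illustrative, not taken from the video):

```js
// Hypothetical example: this link exists in the server-sent HTML,
// but this script removes it on load, so the rendered DOM disagrees
// with the initial HTML: exactly the inconsistency discussed above.
document.addEventListener('DOMContentLoaded', () => {
  const link = document.querySelector('a[href="/legacy-category"]'); // illustrative URL
  if (link) link.remove(); // present in phase 1 (raw HTML), gone in phase 2 (rendered DOM)
});
```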
What Are the Concrete Risks of This Inconsistency?

Martin Splitt mentions "future problems" without elaborating — a typical Google communication tactic. In practice, this could lead to wasted crawl budget on URLs you don't want indexed, or conversely, hinder the quick discovery of strategic pages.

Another risk: if Google detects suspicious patterns (dynamically hidden links, unintentional cloaking), it could trigger a manipulation signal. Nothing confirmed, but the history of algorithm updates shows that HTML/JS inconsistencies are closely monitored.
SEO Expert opinion
Is This Statement Consistent with Real-World Observations?
Yes and no. On paper, Google claims to extract URLs from both versions — this is confirmed by the URL Inspection Tool and by server log audits. But stating that "links only present in the initial HTML may work" remains deliberately vague. Work how? Are they crawled with the same priority? Do they pass PageRank? No concrete data.

In practice, links present only in the initial HTML are often crawled but may be ignored in internal link calculations if Google detects that they disappear after rendering. [To be verified] — no official documentation confirms the relative weight of these links in the ranking algorithm.

What Use Cases Truly Cause Problems?

The classic scenario: a site with lazy-loaded navigation or a hamburger menu managed in JS that dynamically removes footer links present in the raw HTML. Another frequent case: pagination links generated server-side and then hidden by an infinite-scroll script.

These practices do not break indexing — Google eventually discovers the URLs — but they create discovery latency and a risk of misallocated crawl budget. On a site with 100,000 pages, this matters. Splitt recommends consistency without specifying a tolerance threshold: how many inconsistent links before demotion? We do not know.
Should You Recode Everything in Server-Side Rendering?

No, that would be an excessive interpretation. Google does not say "don't use JavaScript for links"; it says "be consistent." If your navigation is managed in JS, make sure that the final links in the rendered DOM match those in the initial HTML — or that the initial HTML only contains links you genuinely want crawled.
The real advice: audit your strategic pages using the Mobile-Friendly Test (which shows the rendered HTML) and compare the result with a `curl` of the initial HTML. If you see differences in the main navigation links or critical internal linking, correct them.
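To script the `curl` side of that check, a minimal Node sketch could list the links present in the initial HTML (assumptions: Node 18+ for the global fetch, run as an ES module; the URL is illustrative):

```js
// Minimal sketch: list the links Googlebot would see in the initial HTML,
// before any JavaScript runs. Node 18+ as an ES module; the URL is illustrative.
const url = 'https://www.example.com/';

const res = await fetch(url, { headers: { 'User-Agent': 'Googlebot' } });
const initialHtml = await res.text();

// Naive href extraction; a real audit would use an HTML parser.
const links = [...initialHtml.matchAll(/<a\s[^>]*href="([^"]+)"/gi)].map((m) => m[1]);
console.log([...new Set(links)].join('\n'));
```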
Practical impact and recommendations
How Can I Identify Inconsistencies Between the Initial and Rendered HTML on My Site?

Start with a manual audit of your critical templates: homepage, categories, product pages. Use a `curl -A "Googlebot"` command to retrieve the raw HTML, then compare it with the rendered HTML in Chrome DevTools (Elements tab after full load).

Next, automate with Screaming Frog by activating the "Render JavaScript" mode: the tool will crawl both versions and flag any link differences. Export the reports and look for URLs present only in "HTML" but absent from "Rendered HTML" — these are your problem candidates.
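If you prefer scripting the comparison yourself, here is a minimal sketch with Puppeteer (assumptions: Node 18+ run as an ES module, puppeteer installed via npm; the URL is illustrative):

```js
// Sketch: diff the links of the initial HTML against the rendered DOM.
// Assumptions: Node 18+ as an ES module, puppeteer installed; the URL is illustrative.
import puppeteer from 'puppeteer';

const url = process.argv[2] ?? 'https://www.example.com/';

// Phase 1: initial HTML, as a first-pass crawl would fetch it.
const initialHtml = await (await fetch(url, { headers: { 'User-Agent': 'Googlebot' } })).text();
const initialLinks = new Set(
  [...initialHtml.matchAll(/<a\s[^>]*href="([^"]+)"/gi)].map((m) => m[1])
);

// Phase 2: rendered DOM, after client-side JavaScript has run.
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url, { waitUntil: 'networkidle0' });
const renderedLinks = new Set(
  await page.$$eval('a[href]', (anchors) => anchors.map((a) => a.getAttribute('href')))
);
await browser.close();

// Links in one state but not the other are your inconsistency candidates.
for (const href of initialLinks) {
  if (!renderedLinks.has(href)) console.log(`initial only: ${href}`);
}
for (const href of renderedLinks) {
  if (!initialLinks.has(href)) console.log(`rendered only: ${href}`);
}
```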
What Corrections Should I Prioritize?

First rule: if a link is in the initial HTML, it must remain in the final DOM — unless you have a legitimate reason to hide it (e.g., personalized content, A/B testing). In that case, it's better not to include it in the raw HTML at all and to generate it only client-side.

Second action: harmonize your navigation menus, as sketched below. If your header is built in JS, ensure that the same links exist in the initial HTML via a `<noscript>` tag or server-side rendering. Pagination links should follow the same logic: either fully server-side or fully client-side with prerendering.
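To illustrate that harmonization, here is a minimal sketch in which the server emits the same navigation links that the client-side menu will later display (Express, the routes, and the markup are assumptions used for illustration, not the article's stack):

```js
// Sketch: emit the nav links in the initial HTML so the raw source and the
// rendered DOM stay consistent. Express, the routes, and the markup are illustrative.
import express from 'express';

const app = express();
const NAV_LINKS = ['/products', '/blog', '/contact']; // hypothetical internal links

app.get('/', (_req, res) => {
  const nav = NAV_LINKS.map((href) => `<a href="${href}">${href}</a>`).join('');
  res.type('html').send(`<!doctype html>
<html>
  <body>
    <nav id="menu">${nav}</nav>
    <!-- menu.js may enhance this menu, but it should keep the same links -->
    <script src="/menu.js"></script>
  </body>
</html>`);
});

app.listen(3000);
```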
What If My JS Architecture Makes Consistency Challenging?

This is where many React or Vue sites without SSR struggle. The technical solution: implement Server-Side Rendering (SSR) or Static Site Generation (SSG) with Next.js, Nuxt, or Gatsby. But this requires heavy refactoring.

Pragmatic alternative: use prerender.io or a similar service to serve a pre-rendered HTML version to Googlebot, as sketched below. Less elegant, but effective if you cannot touch the code. In any case, test with Search Console's URL Inspection Tool to ensure Googlebot sees the expected links.
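As an illustration of that pragmatic alternative, a dynamic-rendering middleware could look roughly like this; it is a sketch of the general pattern, not prerender.io's official middleware, and every name and endpoint here is an assumption:

```js
// Sketch of dynamic rendering: serve a pre-rendered HTML snapshot to known bots
// and the normal JS application to everyone else. All names and URLs are
// illustrative; prerender.io ships its own middleware, which production sites
// should prefer over a hand-rolled version like this one.
import express from 'express';

const BOT_UA = /googlebot|bingbot|yandexbot|duckduckbot/i;
const PRERENDER_ENDPOINT = 'https://prerender.example.com/render?url='; // hypothetical service

const app = express();

app.use(async (req, res, next) => {
  const ua = req.headers['user-agent'] ?? '';
  if (!BOT_UA.test(ua)) return next(); // humans get the client-side JS app
  try {
    // Fetch a pre-rendered snapshot of the requested URL for the bot.
    const target = PRERENDER_ENDPOINT + encodeURIComponent(`https://www.example.com${req.originalUrl}`);
    const snapshot = await fetch(target);
    res.status(snapshot.status).type('html').send(await snapshot.text());
  } catch (err) {
    next(err);
  }
});

app.listen(3000);
```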
❓ Frequently Asked Questions
Does Google really follow links that are present only in the initial HTML but absent from the rendered version?
Does a link hidden with display:none after JS loading cause a problem?
How can I check that Googlebot sees the same links I do after JS rendering?
Do modern JS frameworks (React, Next.js) automatically create this kind of inconsistency?
Should you favor server-side rendering to avoid any risk?