Official statement
Other statements from this video (Google Search Central, 36 min, published on 30/10/2020):
- 0:33 Is JavaScript pagination really an issue for Google?
- 1:36 Should you really fix all the 404 errors reported in Search Console?
- 4:04 Is server-side rendering truly the magic solution for JavaScript SEO?
- 5:16 Do JavaScript charts create duplicate content on your pages?
- 5:49 Should you really bundle your JavaScript files to preserve your crawl budget?
- 5:49 Could fixing CSS dimensions of your graphics save your Core Web Vitals?
- 7:00 Can geolocation-based JavaScript redirects really be crawled safely by Google?
- 11:30 Should you really be concerned about corrupted titles in the site: operator?
- 12:35 Should you really use server-side rendering for your metadata?
- 14:42 Should you really avoid CDNs for your API calls?
- 16:50 Should you really limit client-side API calls to boost your SEO?
- 21:01 Should you really sacrifice tracking accuracy to speed up your page loading?
- 30:33 Should we really consider Googlebot as a user with accessibility needs?
Martin Splitt positions search engine visibility as a non-negotiable technical constraint, on par with performance and accessibility. This means that development teams must integrate SEO from the design phase, not as an afterthought. The challenge is to prevent a technically impeccable site from being invisible because it wasn't designed with crawlability and indexing in mind.
What you need to understand
Why does Google compare SEO to performance or accessibility?
This analogy is not incidental. Performance and accessibility are non-functional requirements: they shape a site's architecture from the very first technical decisions. Google is urging developers to view SEO the same way, not as an optional "optimization" applied in pre-production, but as a foundational criterion that dictates architectural, routing, and rendering choices.
Let's be honest: this stance makes sense from Google's perspective, but it clashes directly with the organizational reality of many companies. Developers think in code, SEO professionals think in traffic, and the two often talk too late. Splitt is attempting to force a paradigm shift: SEO is no longer solely a marketing issue; it’s now an engineering topic.
What specifically falls under this “technical visibility”?
This covers everything that affects Google's ability to discover, crawl, index, and correctly interpret your pages: JavaScript rendering (SSR, SSG, or client-side hydration), URL structure, the robots.txt file, the XML sitemap, canonical management, server response time, pagination, redirects, and the handling of 404/410 errors.
These elements are rarely documented in classic technical specs. And that’s where the problem arises: a site can be fast, WCAG AAA accessible, and completely invisible because pages are generated client-side without SSR fallback, or because a robots.txt rule accidentally blocks entire sections.
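To see how easily that last failure mode happens, here is a hypothetical robots.txt (the paths are invented for the example) in which a rule meant to hide internal search results also blocks every URL starting with the same prefix, since robots.txt rules match by prefix:

```
# Hypothetical robots.txt: all paths are invented for this example.
User-agent: *
# Intended to block internal search result pages only...
Disallow: /search
# ...but this shorthand also blocks /products/, /pricing/, /press/ ...
Disallow: /p
Sitemap: https://www.example.com/sitemap.xml
```

One character too few in a Disallow rule and entire sections silently drop out of the crawl; this is exactly the kind of detail that belongs in the technical specs.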
Is this view shared by development teams?
No. And that's the central problem. In most organizations, SEO comes after the tech stack. The dev team has already chosen React with client-side rendering, or a proprietary framework that generates unreadable dynamic URLs. When the SEO team finally gets involved, the answer is: "This is how it works; we can't redo everything."
What Splitt is asking for is that search engine visibility be a prerequisite included in the initial specifications, on the same level as “the site must load in under 2 seconds.” This is ambitious. It assumes that technical decision-makers understand the business stakes of organic traffic — and that they are willing to sometimes sacrifice developer comfort to ensure crawlability.
- SEO visibility must be a non-negotiable constraint from the technical design phase, not a post-production adjustment.
- Stack choices (JavaScript, routing, rendering) must be evaluated based on their impact on crawling and indexing.
- A technically impeccable site (fast, accessible) can generate zero traffic if it cannot be crawled and indexed correctly.
- Google is pushing for an organizational shift: SEO is becoming an engineering topic, not just a marketing one.
- Ground reality remains far from this ideal — SEO teams often arrive after crucial technical decisions have been made.
SEO Expert opinion
Is this statement consistent with observed practices in the field?
Yes and no. Google is correct in principle: an invisible site is a useless site, regardless of its technical quality. In reality, though, this ideal vision collides with organizational constraints that an engineer at Google does not necessarily see. SEO teams are rarely consulted during tech stack or architecture decisions; they discover the damage in the testing phase, when everything is already in place.
What's missing from this statement is a concrete playbook for imposing this requirement within organizations. Splitt speaks to developers, but developers don't decide alone. Product managers, CTOs, and business leaders must also be convinced. And that's a political battle, not a technical one.
What nuances should be added to this claim?
Firstly, not all sites have the same organic visibility stakes. A B2B SaaS in its early stages with a 100% outbound strategy may legitimately prioritize product speed over crawlability. An e-commerce site, however, wagers its survival on SEO traffic — here, yes, search engine visibility must be a core requirement.
Secondly, Splitt doesn’t mention the cost of this requirement. Making a React site crawlable properly (SSR, pre-rendering) adds complexity, slows down deployments, and requires additional resources. It’s easy to say, “consider SEO like performance” when you’re not writing the code yourself or managing an overloaded product backlog. [To verify]: does Google have data on the real ROI of an SEO-first architecture versus post-optimization?
In what cases does this rule not fully apply?
There are contexts where search engine visibility is not critical. Private web applications (connected SaaS, intranets, dashboards) have no reason to be crawlable. The same applies to certain institutional or niche sites that generate their traffic through direct sources or referrals, without ambitions for organic growth.
Additionally, some experimental sites or MVPs may legitimately sacrifice crawlability to move faster. The problem arises when this technical debt becomes permanent — when “we’ll address it later” turns into “it’s too late; we already have 500,000 indexed URLs with poor parameters and a broken client rendering.”
Practical impact and recommendations
What concrete steps should be taken to implement this recommendation?
Start by integrating a crawlability and indexability audit into your technical specifications, even before the first development sprint. This means documenting how URLs will be generated, what type of rendering will be used (client, server, hybrid), how critical content will be served to bots, and how metadata will be managed.
Next, establish SEO validation criteria on the same level as performance criteria. For instance: “All priority pages must be indexable without JavaScript execution,” “TTFB must remain under 600ms for Googlebot,” “No page should return a 404 without a business reason.” These criteria must be tested in CI/CD, not discovered in production.
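As a hedged sketch of what such a CI gate could look like, the Node 18+ script below (run as an ES module, e.g. check-seo.mjs) checks a list of pages against those criteria; the URLs, the critical-content marker, and the thresholds are invented for the example, and measuring fetch resolution time is only a rough stand-in for TTFB:

```js
// Minimal SEO gate for CI: URLs, marker, and thresholds are hypothetical.
// Requires Node 18+ (global fetch); run as an ES module (check-seo.mjs).

const PAGES = [
  "https://www.example.com/",
  "https://www.example.com/products/",
];
const CRITICAL_MARKER = "<h1";  // content that must be in the raw HTML
const MAX_TTFB_MS = 600;       // mirrors the "TTFB under 600ms" criterion

async function checkPage(url) {
  const start = performance.now();
  const res = await fetch(url, { redirect: "manual" });
  const ttfb = performance.now() - start; // rough TTFB approximation
  const html = await res.text();

  const failures = [];
  if (res.status !== 200) failures.push(`status ${res.status}`);
  if (ttfb > MAX_TTFB_MS) failures.push(`TTFB ~${Math.round(ttfb)}ms`);
  // Critical content must be present without executing any JavaScript.
  if (!html.includes(CRITICAL_MARKER)) failures.push("critical content missing from raw HTML");
  if (/<meta[^>]+noindex/i.test(html)) failures.push("unexpected noindex");
  return failures;
}

let failed = false;
for (const url of PAGES) {
  const failures = await checkPage(url);
  if (failures.length > 0) {
    failed = true;
    console.error(`FAIL ${url}: ${failures.join(", ")}`);
  } else {
    console.log(`PASS ${url}`);
  }
}
process.exit(failed ? 1 : 0);
```

Wire this into the pipeline as a blocking step, exactly like a performance budget check.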
What mistakes should be avoided during implementation?
Don’t fall into the trap of “we’ll make everything SSR to be sure” without understanding the implications. SSR adds server complexity, lengthens build times, and can degrade user experience if poorly implemented. Sometimes, static pre-rendering or progressive hydration does a better job.
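As an illustration of the pre-rendering alternative, here is a minimal sketch for a Next.js Pages Router project (fetchAllSlugs and fetchProduct are hypothetical stand-ins for your data layer): pages are built ahead of time, so the critical content ships in the initial HTML without paying the per-request cost of full SSR.

```js
// pages/products/[slug].js: static pre-rendering sketch (Next.js Pages Router).
// fetchAllSlugs and fetchProduct are hypothetical data-layer helpers.

export async function getStaticPaths() {
  const slugs = await fetchAllSlugs();
  return {
    paths: slugs.map((slug) => ({ params: { slug } })),
    fallback: "blocking", // unknown slugs render on first request, then get cached
  };
}

export async function getStaticProps({ params }) {
  const product = await fetchProduct(params.slug);
  if (!product) return { notFound: true }; // crawlers get a real 404
  return { props: { product }, revalidate: 3600 }; // regenerate at most hourly
}

export default function ProductPage({ product }) {
  // This markup is in the initial HTML; no client-side JavaScript required.
  return (
    <main>
      <h1>{product.name}</h1>
      <p>{product.description}</p>
    </main>
  );
}
```

Incremental static regeneration (the revalidate field) keeps pages fresh without rebuilding the whole site on every change.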
Another classic mistake is assuming that Google's deferred JavaScript rendering (the infamous "second wave" of indexing) relieves you of optimizing the initial HTML. It doesn't. Deferred rendering is a safety net, not a solution: sites that rely on it face indexing delays, crawl budget issues, and sometimes a fuzzy understanding of their content by Google.
How do I verify that my site meets this requirement?
Use Google Search Console, especially the Coverage report and the URL Inspection tool. Check that strategic pages are properly indexed, that the initial HTML contains the critical content, and that internal links are crawlable (real <a href> links, not links that only exist after JavaScript executes). Compare the raw source code (curl or "View Source") with what you see in the browser: any discrepancy must be justified.
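A hedged sketch of automating that comparison with Node 18+ and Puppeteer (the URL and the phrase to look for are placeholders):

```js
// Compare what the server sends (raw HTML) with what the browser builds
// after JavaScript runs. The URL and phrase are placeholders.
// Requires Node 18+ and `npm install puppeteer`; run as an ES module.
import puppeteer from "puppeteer";

const url = "https://www.example.com/some-page";
const criticalPhrase = "your critical product description";

// 1. The raw HTML, roughly what a first-wave crawl sees.
const rawHtml = await (await fetch(url)).text();

// 2. The DOM after JavaScript execution, what a user (or the renderer) sees.
const browser = await puppeteer.launch();
const page = await browser.newPage();
await page.goto(url, { waitUntil: "networkidle0" });
const renderedHtml = await page.content();
await browser.close();

console.log("in raw HTML:     ", rawHtml.includes(criticalPhrase));
console.log("in rendered DOM: ", renderedHtml.includes(criticalPhrase));
// A "false / true" result means the content only exists after rendering:
// indexing then depends entirely on Google's deferred JavaScript rendering.
```

A mismatch is not automatically fatal, but every gap between the two outputs should be a documented, deliberate choice.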
Supplement this with a crawler like Screaming Frog or Oncrawl to simulate Googlebot's behavior and detect orphan pages, redirect chains, duplicate content, and canonical errors. If your performance monitoring tool (Lighthouse CI, WebPageTest) doesn't also test crawlability, you have a blind spot.
- Integrate a technical SEO audit into the initial specifications, before any development.
- Define crawlability/indexability validation criteria on the same level as performance criteria.
- Test initial HTML rendering without JavaScript to ensure critical content is accessible.
- Implement continuous crawl monitoring (Search Console, server logs, a recurring crawler).
- Train development teams on the basics of crawlability and indexing.
- Document technical decisions impacting SEO (framework choices, routing management, rendering strategy).
❓ Frequently Asked Questions
Do all sites need to treat SEO as a priority technical requirement?
How do you convince a development team to factor in SEO from the design phase?
Is Google's dynamic rendering enough to compensate for a site poorly designed for crawling?
Which JavaScript frameworks are most compatible with this SEO-first approach?
How can I measure whether my site meets this technical visibility requirement?