
Official statement

In a pre-rendered React SPA, serving an HTTP 404 code to Googlebot (via pre-render) while the user sees a 200 error page is generally not considered cloaking, unless you are doing something really dubious. If the 200 page for the user is also an error page, Google will detect it as a soft 404.
🎥 Source video

Extracted from a Google Search Central video

⏱ 51:17 💬 EN 📅 12/05/2020 ✂ 37 statements
Watch on YouTube (9:35) →
Other statements from this video (36)
  1. 1:02 Should you ignore the Lighthouse score when optimizing your SEO?
  2. 1:02 Is page speed really a Google ranking factor?
  3. 1:42 Are Lighthouse and PageSpeed Insights really useless for ranking?
  4. 2:38 Do Google's Web Vitals really model user experience?
  5. 3:40 Is page speed really as decisive a ranking factor as people claim?
  6. 7:07 Should you really inject the canonical tag via JavaScript?
  7. 7:27 Can you really inject the canonical tag via JavaScript without SEO risk?
  8. 8:28 Does Google Tag Manager really slow down your site, and should you drop it?
  9. 8:31 Does GTM really sabotage your load time?
  10. 10:06 Is serving a 404 to Googlebot and a 200 to users really cloaking?
  11. 16:16 Are 301, 302, and JavaScript redirects really equivalent for SEO?
  12. 16:58 Are JavaScript redirects really equivalent to 301s for Google?
  13. 17:18 Is server-side rendering really essential for ranking on Google?
  14. 17:58 Should you really invest in server-side rendering for SEO?
  15. 19:22 Does serialized JSON in your JavaScript apps count as duplicate content?
  16. 20:02 Does application state stored as JSON in the DOM create duplicate content?
  17. 20:24 Does Cloudflare Rocket Loader pass Googlebot's SEO test?
  18. 20:44 Should you test Cloudflare Rocket Loader and other third-party tools before enabling them for SEO?
  19. 21:58 Should you ignore 'Other Error' statuses in Search Console and the Mobile-Friendly Test?
  20. 23:18 Should you really worry about the 'Other Error' status in Google's testing tools?
  21. 27:58 Should you choose one JavaScript framework over another for SEO?
  22. 31:27 Does JavaScript really consume crawl budget?
  23. 31:32 Does JavaScript rendering consume crawl budget?
  24. 33:07 Should you abandon dynamic rendering for SEO?
  25. 33:17 Should you really abandon dynamic rendering for ranking purposes?
  26. 34:01 Should you really abandon client-side JavaScript to get product links indexed?
  27. 34:21 Does asynchronous post-load JavaScript really block Google indexing?
  28. 36:05 Should you really move to a dedicated server to improve your SEO?
  29. 36:25 Shared or dedicated server: does Google really notice the difference?
  30. 40:06 Is client-side hydration really an SEO problem?
  31. 40:06 Is SSR plus client-side hydration really safe for Google SEO?
  32. 42:12 Should you stop tracking the overall Lighthouse score and focus on the Core Web Vitals metrics relevant to your site?
  33. 42:47 Should you really aim for 100 on Lighthouse, or is that a waste of time?
  34. 45:24 Will 5G really speed up your site, or is that an illusion?
  35. 49:09 Does Googlebot really ignore your WebP images served via Service Workers?
  36. 49:09 Why does Googlebot ignore your WebP images served by a Service Worker?
Official statement from 12/05/2020 (5 years ago)
TL;DR

Google allows a pre-rendered React SPA to serve an HTTP 404 code to Googlebot while the user sees a 200 page displaying an error. This is generally not considered cloaking, unless there is clear manipulation. If the 200 page shown to the user is indeed an error page, Google will detect it as a soft 404 anyway.

What you need to understand

Why is there tolerance for divergent HTTP codes?

In modern Single Page Applications, pre-rendering has become essential for facilitating indexing. Tools like Prerender.io or Rendertron intercept Googlebot's requests and serve them static HTML, while human visitors load the client-side JavaScript application.

The issue arises when a route does not exist: the SPA visually displays an error message with a 200 code, while the pre-rendering sends a 404 to Googlebot. Technically, this involves serving different content based on the user-agent — the classic definition of cloaking. However, Google recognizes that the intent here is not fraudulent.
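
The divergence described above boils down to a small routing decision inside the pre-render layer. Here is a minimal sketch, assuming a hypothetical route table and bot list (neither comes from the video — real setups delegate this to middleware such as Prerender.io's):

```python
# Sketch of the status-code divergence in a pre-rendered SPA.
# BOT_AGENTS and KNOWN_ROUTES are illustrative placeholders for your
# own middleware configuration and routing table.

BOT_AGENTS = ("googlebot", "bingbot")          # crawlers routed to pre-render
KNOWN_ROUTES = {"/", "/products", "/about"}    # routes the SPA actually serves

def is_bot(user_agent: str) -> bool:
    """Rough user-agent sniffing, as pre-render middlewares do."""
    ua = user_agent.lower()
    return any(bot in ua for bot in BOT_AGENTS)

def status_for(path: str, user_agent: str) -> int:
    """HTTP status each audience receives for a given path."""
    if is_bot(user_agent):
        # The pre-renderer knows the route table, so it can send a true 404.
        return 200 if path not in KNOWN_ROUTES else 200 if path in KNOWN_ROUTES else 404
    # The SPA shell always loads with 200; the error is only visual.
    return 200
```

The key point is that the 404 is derived from the same route table the SPA itself uses, so the signal sent to Googlebot stays truthful rather than manipulative.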

This clarification addresses a legitimate anxiety: many developers fear that optimizing the Googlebot experience through pre-rendering may trigger a manual penalty. Martin Splitt clarifies this ambiguity for standard cases.

What defines the line between optimization and manipulation?

The critical nuance lies in the expression ‘something really dubious’. Google does not precisely define this threshold, but the context suggests that it pertains to cases where the 200 content for the user would be rich and functional, while Googlebot would systematically receive 404s to conceal entire sections.

If the user indeed sees an error page (“Page not found”, “This resource no longer exists”), then serving a 404 to Googlebot aligns with the reality of the user experience. It is even more honest than allowing Google to index an empty 200 that would trigger a soft 404.

How does Google detect soft 404s?

A soft 404 occurs when a server returns a 200 code for a page that logically should be a 404 — very sparse content, a visible error message, degraded UX signals. Google uses content heuristics: text/HTML ratio, linguistic patterns typical of errors, absence of usual structural elements.
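
Those heuristics can be approximated with a toy classifier. The phrase list and the 200-character threshold below are illustrative assumptions, not Google's actual (non-public) signals:

```python
import re

# Illustrative error phrasing; Google's real patterns are not public.
ERROR_PATTERNS = re.compile(
    r"page not found|no longer exists|nothing here|error 404",
    re.IGNORECASE,
)

def looks_like_soft_404(visible_text: str) -> bool:
    """Flag a 200 page whose visible content reads like an error page."""
    text = visible_text.strip()
    if ERROR_PATTERNS.search(text):
        return True
    # Extremely thin content is the other classic soft-404 signal;
    # the 200-character floor is an arbitrary example threshold.
    return len(text) < 200
```

A real detector would also weigh structural signals (missing navigation, no internal links), but even this crude version shows why the HTTP code alone does not decide the outcome.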

In the case of a SPA, if the 200 page served to visitors is indeed an error, Google will classify it as a soft 404 even without receiving the appropriate HTTP code. That is why serving the true 404 to Googlebot via pre-rendering is actually an improvement: you align the HTTP signal with the content reality.

  • Pre-rendered 404 for Googlebot + visual error 200 page for the user: tolerated by Google, considered a legitimate technical optimization.
  • Soft 404: detected through content analysis, not just the HTTP code — Google identifies empty pages or error messages even with a 200.
  • Cloaking prohibited: hiding existing content from Googlebot by serving systematic 404s or showing rich content to the bot while displaying an error to visitors.
  • Vague threshold: Google does not clarify ‘really dubious’ — caution dictates documenting any divergence and ensuring it accurately reflects the user experience.

SEO Expert opinion

Is this statement consistent with field observations?

Overall, yes. For years, sites using pre-rendering with differentiated HTTP codes have not faced visible manual penalties, as long as the user experience remains consistent. Comparison crawls between Googlebot and standard browsers show that Google tolerates these technical discrepancies when they support indexing.

However, the phrase ‘something really dubious’ remains vague: no numerical metric or precise tolerance threshold is given. We remain in the territory of subjective judgment by Quality Raters or anti-spam algorithms. A setup may fly under the radar for months and then be flagged when its usage pattern shifts.

What gray areas should be monitored?

The real danger is not a legitimate SPA 404, but a gradual drift: a developer starting to serve 404s for marginal pages ‘just to see’, then extending the practice to entire categories. Or worse, serving a rich 200 to the visitor and a 404 to Googlebot to control indexing without using robots.txt.

Another trap: pre-rendering configuration errors. I’ve seen sites where the pre-rendering service served 404s by default due to a poorly calibrated timeout, while the JavaScript page eventually finished loading on the client-side. Google can interpret this as instability, or even manipulation if the pattern is systematic.

Attention: If your pre-rendering serves 404s for pages that truly exist on the client-side and are not errors, you cross the red line. Google can detect this through UX metrics (session time, bounce rate, navigation) and via random headless Chrome crawls.

In what cases does this rule not apply?

This tolerance only concerns true error pages. If you serve a 404 to Googlebot for an active product page, a service sheet, or a blog post, that is pure cloaking — even if the user sees a 200 with content.

Similarly, if you use this technique to hide duplicate content or low-quality pages that you do not want to index, Google might see it as a manipulation attempt. The best practice remains using noindex or canonical, not a selective 404 based on user-agent.
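
For reference, the sanctioned mechanisms mentioned above look like this — the `/low-quality-section/` path is a made-up example:

```python
# Three legitimate ways to keep a page out of the index, instead of a
# user-agent-conditional 404.

# 1. HTTP response header (also works for non-HTML resources):
headers = {"X-Robots-Tag": "noindex"}

# 2. Meta robots tag in the document <head>:
meta_tag = '<meta name="robots" content="noindex">'

# 3. robots.txt — note this blocks crawling, not indexing of URLs
#    Google already knows about:
robots_txt = "User-agent: *\nDisallow: /low-quality-section/"
```

All three apply identically to every user-agent, which is precisely what keeps them out of cloaking territory.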

Practical impact and recommendations

What concrete steps should you take to stay compliant?

First, audit your pre-rendering: list all the routes serving a different HTTP code to Googlebot compared to users. Document each case with a clear justification — ‘page removed’, ‘invalid parameter’, ‘resource never created’. If you cannot justify the divergence in 10 seconds, it is probably a red flag.

Next, test the visual consistency: navigate to the URLs that return a 404 to Googlebot. Does the user actually see an error message, a broken layout, or empty content? If the page displays useful content, align the HTTP code — serve a 200 everywhere or a 404 everywhere.
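
Both checks can be folded into one audit helper. This is my own sketch, not an official tool: the fetch function, the Googlebot user-agent string used for triggering your pre-render middleware, and the verdict labels are all illustrative:

```python
from urllib.error import HTTPError
from urllib.request import Request, urlopen

GOOGLEBOT_UA = ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                "+http://www.google.com/bot.html)")

def fetch_status(url: str, user_agent: str, timeout: float = 10.0) -> int:
    """HTTP status code for `url` when fetched with `user_agent`."""
    req = Request(url, headers={"User-Agent": user_agent})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as err:  # 4xx/5xx responses also carry a code
        return err.code

def audit_route(bot_status: int, user_status: int,
                user_sees_error_page: bool) -> str:
    """Classify one route; the labels are illustrative."""
    if bot_status == user_status:
        return "aligned"
    if bot_status == 404 and user_status == 200:
        # Tolerated only when the user's 200 is a genuine error page.
        return "legitimate divergence" if user_sees_error_page else "red flag"
    return "investigate"
```

Typical use: call `fetch_status` twice per route (once with `GOOGLEBOT_UA`, once with a browser UA), answer the error-page question manually, and feed the three values to `audit_route`.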

What mistakes should you absolutely avoid?

Never create a whitelist/blacklist based on user-agent to serve strategic 404s. This is exactly the pattern that anti-spam algorithms detect. If you need to block indexing, use the robots.txt file, the noindex meta tag, or the X-Robots-Tag header.

Avoid the default 404: some pre-rendering services are configured to return 404s in case of timeout or JavaScript errors. This can mask real bugs and create an illegitimate gap with the user experience. Configure generous timeouts and log rendering errors.

How can I check if my implementation is compliant?

Use the URL Inspection tool in Google Search Console to compare the version rendered by Googlebot with the one viewed in a standard browser. Look for discrepancies in HTTP codes, but also in content — a massive gap signals a problem.

Monitor the index coverage reports for spikes in soft 404s. If the pages you serve as clean 404s via pre-rendering show up under "Not found (404)" rather than as soft 404s, that's a good sign — it confirms that both signals align. Conversely, if you see soft 404s on URLs where you never sent an HTTP 404, dig deeper.

  • Document each route serving a different HTTP code to Googlebot versus users, with clear business justification.
  • Manually test pre-rendered URLs as 404s: does the user actually see a visual error page?
  • Configure generous timeouts on the pre-rendering service to avoid accidental 404s due to JavaScript delays.
  • Use the URL Inspection tool in Search Console to compare Googlebot rendering and standard browser rendering.
  • Monitor coverage reports to spot anomalies (mass soft 404s, unexpected de-indexing).
  • Avoid any whitelist/blacklist user-agent logic — prefer robots.txt, noindex, or canonical to control indexing.
Martin Splitt's statement explicitly allows a common technical pattern in SPA architectures, provided that the HTTP code discrepancy accurately reflects the user experience. Specifically, if your page is a true error for the visitor, serving a clean 404 to Googlebot via pre-rendering is actually recommended — it’s more honest than a soft 404. However, implementing this requires a nuanced understanding of pre-rendering mechanisms, monitoring tools, and Search Console signals. If you manage a complex SPA with thousands of dynamic routes, auditing and configuration can quickly become time-consuming. In this context, working with an SEO agency specializing in JavaScript and modern architectures can accelerate compliance and avoid costly errors that impact indexing.

❓ Frequently Asked Questions

Is serving a 404 to Googlebot and a 200 to visitors always allowed?
Yes, if the 200 page shown to the user is indeed a visual error page. Google tolerates this gap when it faithfully reflects the user experience, not when it is used to hide content.
What is a soft 404, and how does Google detect it?
A soft 404 is a page that returns a 200 code but displays error content. Google detects it through content heuristics: a low text/HTML ratio, linguistic patterns typical of errors, and degraded UX signals.
Can I use this technique to de-index low-quality pages?
No, that is considered manipulation. To control indexing, use robots.txt, the noindex meta tag, or the X-Robots-Tag header — not a selective 404 based on user-agent.
How can I verify that my pre-rendering does not create unintentional cloaking?
Use URL Inspection in Search Console to compare Googlebot's rendering with a standard browser's. Also monitor the coverage reports to catch unexpected soft 404s.
What are the risks if I misconfigure my pre-rendering service?
A timeout that is too short can serve accidental 404s to Googlebot for valid pages, creating an illegitimate gap with the user experience. Google may interpret this as instability, or even manipulation.

