Official statement
Google states that if JavaScript redirects to a URL that returns a valid HTTP error code, Googlebot interprets the original page as redirecting to an error. In practice, the source page is unlikely to be indexed, since the bot follows the redirection and reads the final HTTP status. This mechanism assumes that the target URL really does return a 404 or 410 on the server side; otherwise, Googlebot may see a 200 with a JavaScript error message, a far more ambiguous scenario.
What you need to understand
What happens when JavaScript redirects to an error page?
When JavaScript performs a redirection (window.location, a dynamically injected meta refresh, or similar), Googlebot follows it while rendering the page. If the destination URL returns an HTTP error code such as 404 or 410, the bot records that the original page leads to an error.
What matters here is the combination: the script triggers the redirection AND the target page returns a genuine HTTP error status. If the target answers 200 OK with only an error message rendered by JavaScript, Googlebot technically sees an indexable page, even though the user visually perceives an error.
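As an illustration, here is a minimal client-side sketch of the correct pattern, assuming a /not-found route that the server is configured to answer with a real 404 (the route name and helper function are hypothetical, not from the video):

```javascript
// Minimal sketch: redirect the client to a dedicated error URL.
// Assumes the server answers /not-found with a genuine HTTP 404.
async function loadProduct(productId) {
  const response = await fetch(`/api/products/${productId}`);
  if (response.status === 404) {
    // replace() avoids leaving the dead URL in the browser history
    window.location.replace('/not-found');
    return;
  }
  const product = await response.json();
  renderProduct(product); // placeholder for your app's view logic
}
```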
Why is there this distinction from traditional server-side redirects?
Server-side redirects (301, 302, 307) are processed before rendering: Googlebot reads the HTTP header and follows it, full stop. With JavaScript, the bot must first load the original page, execute the JS, detect the redirection, then crawl the target URL. This introduces delay and a risk of misinterpretation.
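For contrast, a hedged sketch of a server-side redirection (assuming an Express setup, which the video does not mention): the signal lives entirely in the HTTP headers, before any HTML or JavaScript is fetched.

```javascript
// Assumed Express setup: a server-side 301 is visible in the headers alone
const express = require('express');
const app = express();

app.get('/old-page', (req, res) => {
  res.redirect(301, '/new-page'); // Googlebot reads this without rendering anything
});

app.listen(3000);
```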
Martin Splitt's statement specifies that when the process is correctly set up, Googlebot signals the error. In other words: the source page is unlikely to be indexed, and no PageRank will be passed. It's equivalent to a direct 404 but with a detour through JavaScript rendering.
In what cases does this mechanism fail?
If the destination page returns a 200 instead of a 404/410, Googlebot sees a redirection to a valid page. The result: the original page disappears from the index, replaced by the error URL, which can in turn get indexed with thin content. This is a common scenario on sites with soft 404s: pages that visually display an error but do not return the correct HTTP status.
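This is how soft 404s typically arise in a single-page app: a catch-all route serves the SPA shell with a 200 for every path, and the client router displays a "not found" view that Googlebot never sees as an error. A hedged sketch of the anti-pattern, assuming an Express server:

```javascript
// Anti-pattern sketch (assumed Express setup): every URL, including
// non-existent ones, gets a 200 and the SPA shell. The client router
// then shows an error view, but the HTTP status stays 200: a soft 404.
const express = require('express');
const path = require('path');
const app = express();

app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'build', 'index.html'));
});

app.listen(3000);
```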
Another trap: if JavaScript is blocked by robots.txt or if rendering fails for some reason, Googlebot may never see the redirection. It will index the original page without knowing it should lead to an error. This is a classic issue on sites with heavily JS-based architectures that are poorly configured.
- A JavaScript redirection to an error is interpreted correctly by Googlebot only if the target URL returns a genuine HTTP error code on the server side.
- The bot follows the redirection during rendering, which adds delay and depends on the proper execution of JavaScript.
- If the target URL returns a 200, Googlebot will see a redirection to an indexable page, not an error.
- Blocking JavaScript or poorly configuring rendering prevents Googlebot from detecting the redirection, creating indexed orphan pages.
- This mechanism is less reliable than a direct 404 on the server side but works if all technical elements are in place.
SEO Expert opinion
Is this statement consistent with what we observe in the field?
Yes, broadly speaking. When a JavaScript page properly redirects to a URL that returns a server-side 404 or 410, Google does eventually de-index the source page. The delay varies, sometimes a few days, sometimes several weeks, but the final behavior is consistent.
The problem is that Splitt's statement remains vague on a critical point: what happens if the JavaScript redirection is detected but the target URL returns a 200 with an error message displayed client-side? [To verify] Does Google treat this as a soft 404, or does it blindly index the target URL? Tests show erratic behavior: sometimes Google detects the soft 404, sometimes it indexes the empty page.
What nuances should be added to this statement?
The statement assumes that JS rendering works correctly on every Googlebot visit. Yet we know that rendering is asynchronous, sometimes deferred several days after the initial crawl. If the JavaScript redirection takes too long to execute or depends on slow external resources, Googlebot may miss it on the first render.
A second nuance: Splitt says Googlebot 'signals' an error, but he does not specify whether this consumes crawl budget in the same way as a direct server 404. My field observation: yes, it does, and arguably more, since the bot has to render the original page AND crawl the target URL. On a large site with thousands of error pages managed in JS, this can become a real drain.
In what cases does this method cause problems?
Let's be honest: managing errors in JavaScript rather than on the server side is rarely a good idea for SEO. It complicates debugging, introduces points of failure (JS blocked, rendering failed, timeout), and slows down indexing. Google may eventually understand, but why take this risk?
The legitimate use case is a full-JavaScript site (SPA, React, Vue, Angular) where the architecture forces all routing into JS. Even then, hybrid solutions (pre-rendering, SSR) make it possible to return a clean server-side 404 without going through client-side routing, as in the sketch below. If you have a choice, always prefer server-side handling of error codes.
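A hedged sketch of that hybrid approach, assuming an Express SSR setup (the route list and render helpers are hypothetical): the server matches the route first and sends a real 404 status while still serving the app's error view.

```javascript
// Assumed Express SSR setup: unknown paths get a genuine HTTP 404
const express = require('express');
const app = express();

const knownRoutes = ['/', '/products', '/about']; // illustrative list

app.get('*', (req, res) => {
  if (!knownRoutes.includes(req.path)) {
    // Real 404 in the headers, plus the app's error view in the body
    return res.status(404).send(renderErrorView()); // hypothetical helper
  }
  res.send(renderApp(req.path)); // hypothetical helper
});

app.listen(3000);
```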
Practical impact and recommendations
What should you concretely do if you're using JavaScript redirections?
First, check that the target URL indeed returns an HTTP error code on the server side (404, 410), not a 200 with an error message displayed in JavaScript. Test with curl or DevTools: the raw HTTP response should show the correct status even before JS executes.
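For batch checks, a small Node script (Node 18+, run as an ES module) does the same job as `curl -I`: it reads the status the server sends before any JavaScript runs. The URLs are illustrative.

```javascript
// Reads the raw HTTP status of each error URL, before any JS executes.
// 'manual' prevents fetch from silently following redirections.
const errorUrls = [
  'https://example.com/not-found',
  'https://example.com/deleted-product',
];

for (const url of errorUrls) {
  const res = await fetch(url, { redirect: 'manual' });
  // Expect 404 or 410 here; a 200 means the page is a soft 404 candidate
  console.log(res.status, url);
}
```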
Next, use Google Search Console and the 'URL Inspection' tool to verify that Googlebot sees the redirection during rendering. Look at the 'Rendered Page' section: if the bot detects a redirection to a 404, it will clearly indicate it. If nothing appears, rendering has failed, or JS is blocked.
What errors should be absolutely avoided?
Never redirect in JavaScript to a page that returns a 200 OK with just a 'Page not found' message displayed on the screen. Googlebot will index this page as a valid page, creating a soft 404 that clutters the index. Always configure the server to return the correct HTTP status.
Another common trap: blocking JavaScript files in robots.txt. If Googlebot cannot download your scripts, it will never see the redirection. Result: the original page remains indexed even though it should be de-indexed. Ensure that all your critical JS files are crawlable.
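For illustration, this is the kind of directive that silently breaks the mechanism (the path is hypothetical):

```
# robots.txt anti-pattern: Googlebot cannot fetch the scripts
# that perform the redirection, so it never sees it
User-agent: Googlebot
Disallow: /static/js/
```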
How can I check if my site complies with this mechanism?
Audit your error pages with Screaming Frog or a similar crawler configured to execute JavaScript. Compare the HTTP codes returned before and after rendering. If an error page returns 200 before JS executes and still resolves to a 200 after rendering, you have a configuration issue.
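If you want a quick spot check without a full crawl, a hedged sketch using Puppeteer (assumed to be installed; the URL is illustrative) compares the raw status with what a rendering browser ends up on:

```javascript
// Compares the HTTP status before and after JavaScript rendering.
const puppeteer = require('puppeteer');

async function compareStatuses(url) {
  // Raw status, before any JavaScript executes (Node 18+ fetch)
  const raw = await fetch(url, { redirect: 'manual' });

  // Rendered result: load the page and let its JavaScript run
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  const rendered = await page.goto(url, { waitUntil: 'networkidle0' });
  console.log(`raw: ${raw.status} | rendered: ${rendered.status()} | final URL: ${page.url()}`);
  await browser.close();
}

compareStatuses('https://example.com/some-error-page');
```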
Also monitor your server logs: if Googlebot is crawling error URLs heavily, it is detecting them, but they still consume budget. Ideally, reduce the number of JS redirections to errors by fixing broken links at the source; it is always cleaner than letting Googlebot follow dozens of JavaScript redirections only to land on 404s.
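A rough log-audit sketch (assuming a common/combined access log format; adjust the regex to your server) that counts Googlebot hits ending in a 404 or 410, so the most-crawled error URLs can be fixed first:

```javascript
// Counts Googlebot requests that ended in 404/410, per URL.
const fs = require('fs');
const readline = require('readline');

async function auditLog(logPath) {
  const counts = new Map();
  const rl = readline.createInterface({ input: fs.createReadStream(logPath) });
  for await (const line of rl) {
    if (!line.includes('Googlebot')) continue;
    // Common log format: "GET /path HTTP/1.1" 404 ...
    const match = line.match(/"(?:GET|HEAD) (\S+) [^"]*" (404|410)/);
    if (!match) continue;
    counts.set(match[1], (counts.get(match[1]) || 0) + 1);
  }
  // Most-crawled error URLs first: the best candidates for a fix
  for (const [url, n] of [...counts.entries()].sort((a, b) => b[1] - a[1])) {
    console.log(n, url);
  }
}

auditLog('/var/log/nginx/access.log'); // path is illustrative
```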
- Check that each target error URL returns a 404 or 410 on the server side, not a 200 with a JS error.
- Test rendering with Google Search Console and the 'URL Inspection' tool to confirm that Googlebot detects the redirection.
- Ensure that JavaScript files are not blocked in robots.txt and that rendering works properly.
- Audit error pages with a JavaScript crawler (Screaming Frog, OnCrawl, Botify) to detect soft 404s.
- Monitor the server logs to spot excessive crawling of error URLs and optimize crawl budget.
- Prefer server-side handling of error codes over JavaScript whenever technically possible.
❓ Frequently Asked Questions
Does Googlebot systematically follow JavaScript redirections?
What happens if the target URL returns a 200 instead of a 404?
Do JavaScript redirections consume crawl budget?
Can JavaScript redirections be used to hide content from Googlebot?
How can I check that Googlebot correctly detects my JavaScript redirection?
Source: Google Search Central video · duration 5 min · published on 14/10/2020