Official statement
Google treats URL changes via the History API as standard redirects. Specifically, if your page changes its URL during loading, the bot will attempt to crawl and index the final URL on its next visit. The destination URL must be directly accessible; otherwise, you risk severe indexing issues.
What you need to understand
What is the History API and why do sites use it?
The History API allows you to change the URL displayed in the browser's address bar without reloading the page. It's a very popular JavaScript mechanism in modern web applications and single-page applications (SPAs).
Developers mainly use it to enhance the user experience: smooth navigation, no full reload, and a responsive interface. Typically, when you navigate a site and the URL changes without the page blinking, that’s the History API working behind the scenes.
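To make the mechanism concrete, here is a minimal sketch (the route and the renderRoute helper are hypothetical, not taken from the video) of how an SPA typically changes the URL with pushState without reloading the page:

```typescript
// Minimal sketch of SPA navigation with the History API.
// The route "/products/red-shoes" and renderRoute() are hypothetical.
function navigate(path: string): void {
  // Update the address bar without a full page reload.
  history.pushState({ path }, "", path);
  // Re-render the view client-side for the new path.
  renderRoute(path);
}

// Hypothetical client-side renderer: swaps the main view in place.
function renderRoute(path: string): void {
  const main = document.querySelector("main");
  if (main) {
    main.textContent = `Rendered view for ${path}`;
  }
}

// A click on an internal link triggers a "soft" navigation: no request
// is sent to the server for the new URL, only the address bar changes.
navigate("/products/red-shoes");

// Keep the browser back/forward buttons working.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const state = event.state as { path?: string } | null;
  renderRoute(state?.path ?? location.pathname);
});
```

Note that the server receives no request for the new URL here: only the address bar and the client-side view change, which is exactly why Google has to decide how to interpret the final URL.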
How does Google really interpret these URL changes?
Mueller's statement is clear: Google treats these modifications as redirects, not as merely cosmetic URL changes but as a genuine redirect signal.
The process is as follows: Googlebot loads the initial URL, detects the URL change via JavaScript, records the final URL, and then during the next crawl, it will attempt to crawl the destination URL directly. The initial URL will gradually be abandoned in favor of the new one.
What is the major technical constraint here?
The critical point — and this is where many sites fail — is that the final URL must be directly accessible. If a user or Googlebot types the destination URL in their browser, the page must load normally.
If the final URL only exists within the context of the JavaScript application and returns a 404 or redirects elsewhere when accessed directly, you are creating an indexing nightmare. Google will try to crawl a URL that doesn't actually exist as an independent server resource.
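One common way to avoid that trap, sketched below with Express purely as an example (the video does not prescribe any stack, and the routes and renderPage helper are hypothetical), is to make sure every URL the client-side router can produce also resolves server-side with a 200 and real content:

```typescript
// Minimal sketch, assuming an Express server behind a client-side router.
// The routes and the renderPage() helper are hypothetical.
import express from "express";

const app = express();

// Static assets (JS bundle, CSS, images) are served directly.
app.use(express.static("public"));

// Every URL the client-side router can produce must also resolve
// server-side, so Googlebot and direct visitors get a 200 with content.
app.get("/products/:slug", (req, res) => {
  // Ideally server-rendered or prerendered HTML, not an empty shell.
  res.status(200).send(renderPage(req.params.slug));
});

// Anything else is a real 404; do not silently redirect to the homepage.
app.use((_req, res) => {
  res.status(404).send("Not found");
});

// Hypothetical renderer returning full HTML for the requested product.
function renderPage(slug: string): string {
  return `<!doctype html><html><body><main>Product: ${slug}</main></body></html>`;
}

app.listen(3000);
```

Server-side rendering or prerendering, mentioned again in the checklist below, is simply a more complete version of the same idea.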
- The History API is not invisible to Google — the bot interprets it as a redirect
- The destination URL will gradually replace the source URL in the index
- Each final URL must be independently crawlable, without going through the JavaScript path
- This logic applies even if the URL change occurs quickly after the initial load
- Sites that heavily use the History API without corresponding server URLs risk structural indexing issues
SEO Expert opinion
Is this statement consistent with field observations?
Overall, yes. We do see that Google monitors JavaScript URL changes and attempts to index the final URLs. Laboratory tests confirm that Googlebot registers the final state of the URL after executing the JavaScript.
But — and this is a big but — the delay between detecting and effectively crawling the final URL varies tremendously. Mueller mentions "next crawl," which can mean days, weeks, or even months depending on your crawl budget. [To be verified]: the typical duration of this cycle is not officially documented.
What are the gray areas of this explanation?
Mueller does not specify how Google handles conflicts. What happens if both the source URL and the destination URL exist as valid server pages, with different content? Which version prevails in the index?
Another unclear point: the behavior with complex SPAs that make multiple successive URL changes within a few seconds. Does Google capture the URL after a fixed delay (like 5 seconds) or does it wait for the URL to stabilize? [To be verified] on real cases with multiple rapid transitions.
In which cases does this rule not apply as expected?
First observable exception: sites with extremely limited crawl budgets. If Google crawls your site once a month, the final URL may take a long time to be discovered and indexed, creating a prolonged period of uncertainty.
Second problematic case: applications with authentication. If the final URL requires a login to be accessible, but the initial page is public, Google will detect a redirect to a URL it can never actually crawl. Result: potential confusion in the index.
Practical impact and recommendations
What should you prioritize auditing on your site?
First step: identify all pages using pushState or replaceState (the two methods of the History API). A full JavaScript audit is necessary, not just a quick glance at the code.
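A practical way to run that audit at runtime (a sketch of mine, not an official method) is to wrap both methods in a QA build or in the browser console and log every URL change while you browse the site:

```typescript
// Sketch: log every History API URL change while browsing the site.
// Paste into the console or ship in a QA-only build; not for production.
const originalPushState = history.pushState.bind(history);
const originalReplaceState = history.replaceState.bind(history);

history.pushState = (state: unknown, title: string, url?: string | URL | null) => {
  console.log("[audit] pushState ->", url, "from", location.href);
  return originalPushState(state, title, url);
};

history.replaceState = (state: unknown, title: string, url?: string | URL | null) => {
  console.log("[audit] replaceState ->", url, "from", location.href);
  return originalReplaceState(state, title, url);
};
```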
For each detected URL change, test manually: is the final URL accessible by typing the address directly into the browser? Does it return a 200 OK with the expected content, without requiring any prior JavaScript navigation?
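That manual check can also be scripted; the sketch below (Node 18+, with hypothetical example URLs) flags any final URL that does not answer a direct 200:

```typescript
// Sketch: verify that each final (post-History API) URL answers 200 directly.
// The URLs listed here are hypothetical examples.
const finalUrls = [
  "https://example.com/products/red-shoes",
  "https://example.com/products/blue-shoes",
];

async function checkDirectAccess(urls: string[]): Promise<void> {
  for (const url of urls) {
    // redirect: "manual" so a 301/302 is reported instead of being followed.
    const response = await fetch(url, { redirect: "manual" });
    const ok = response.status === 200;
    console.log(`${ok ? "OK  " : "FAIL"} ${response.status} ${url}`);
  }
}

checkDirectAccess(finalUrls);
```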
What mistakes should you absolutely avoid?
Classic mistake number one: creating "virtual" URLs that only exist client-side. Your JavaScript router handles /products/red-shoes, but the server returns a 404 if accessed directly. Google will try to crawl this URL and fail.
Second trap: using the History API for minor variations (sorting, filters) thinking Google will ignore these changes. If you modify the URL, Google treats it as a redirect — hence as a different page to crawl. Think twice before changing the URL for every filter click.
How to check if your implementation is compliant?
Use Search Console and compare the URLs actually indexed against the URLs you thought you submitted. If you find that Google is massively indexing the final (post-History API) URLs but some of them come back as 404 errors, that's a warning sign.
Follow up with a test in the URL Inspection tool: submit the source URL and check which URL Google ultimately detects. If the rendered URL differs from the source URL, Google will treat it as a redirect on the next crawl.
- List all uses of pushState/replaceState in your codebase
- Test each final URL in direct access (curl, private browsing mode)
- Ensure the server returns 200 for all final URLs, not 404 or 302
- Check in Search Console for discrepancies between submitted URLs and indexed URLs
- If you have SPAs, implement server-side rendering (SSR) or prerendering for critical URLs
- Clearly document which URLs are meant to be redirects and which are standalone pages
❓ Frequently Asked Questions
If I use replaceState instead of pushState, does Google treat it differently?
How long does it take Google to crawl the final URL after detecting the change?
Does Google keep the source URL in its index, or does it replace it entirely?
Can the History API be used to clean up URL parameters without any SEO impact?
Are SPAs necessarily problematic for SEO because of the History API?