Official statement
Other statements from this video (49)
- 1:38 Does Google really track HTML links that are hidden by JavaScript?
- 1:46 Can JavaScript really hide your links from Google without destroying them?
- 3:43 Is it really necessary to optimize the first link on a page for SEO?
- 3:43 Does Google really combine signals from multiple links pointing to the same page?
- 5:20 Do site-wide links in the menu and footer really dilute the PageRank of your strategic pages?
- 6:22 Is it really necessary to nofollow site-wide links to your legal pages to optimize PageRank?
- 7:24 Should you really keep nofollow on your footer links and service pages?
- 10:10 Why does Google make it impossible to use Search Console Insights without Analytics?
- 11:08 Does Nofollow still affect crawling without passing on PageRank?
- 11:08 Does nofollow really block indexing, or can Google still crawl those URLs?
- 13:50 Why is Google so tight-lipped about its indexing incidents?
- 15:58 Should you really index all paginated pages to optimize your SEO?
- 15:59 Is it really necessary to index all pagination pages to optimize your SEO?
- 19:53 Are URL parameters still an obstacle for organic search?
- 19:53 Are URL parameters really a non-issue for SEO anymore?
- 21:50 Is it true that Google is blocking the indexing of new sites?
- 23:56 Do links in embedded tweets really affect your SEO?
- 25:33 Are sitemaps really essential for Google indexing?
- 26:03 How does Google really discover your new URLs?
- 27:28 Why does Google require a canonical on ALL AMP pages, including standalone ones?
- 27:40 Is the rel=canonical really mandatory on all AMP pages, even standalone ones?
- 28:09 Should you really implement hreflang across an entire multilingual site?
- 28:41 Should you really implement hreflang on every page of a multilingual website?
- 29:08 Is it true that AMP is a speed factor for Google?
- 29:16 Should you still invest in AMP to optimize speed and ranking?
- 29:50 Why does Google measure Core Web Vitals on the actual page version your visitors are really viewing?
- 30:20 Do Core Web Vitals really measure what your users actually see?
- 31:23 Should you manually deindex old pagination URLs after changing your site's architecture?
- 31:23 Is it really necessary to manually de-index your old pagination URLs?
- 32:08 Is advertising on your site harming your SEO?
- 32:48 Does having ads on your site really hurt your Google rankings?
- 34:47 Is rel=canonical in syndication really reliable for controlling indexing?
- 34:47 Does rel=canonical really protect your syndicated content from ranking theft?
- 38:14 Do security alerts in Search Console really block Google's crawling?
- 38:14 Can a hacked site lose its crawl budget due to Google security alerts?
- 39:20 Have links in guest posts really lost all SEO value?
- 39:20 Do guest post links really have no SEO value?
- 40:55 Why does Google ignore identical modification dates in your sitemaps?
- 40:55 Why does Google ignore the lastmod dates in your XML sitemap?
- 42:00 Should you really update the lastmod date of the sitemap for every minor change?
- 42:21 Does a poorly configured sitemap really diminish your crawl budget?
- 43:00 Can a misconfigured sitemap really cut down your crawl budget?
- 44:34 Do you really have to choose between reducing duplicate content and using canonical tags?
- 44:34 Is it really necessary to eliminate all duplicate content or should you rely on rel=canonical?
- 45:10 Should you really set a crawl limit in Search Console?
- 45:40 Should you really let Google decide your crawl limit?
- 47:08 Do internal 301 redirects really dilute PageRank?
- 47:48 Do cascading internal 301 redirects really drain SEO juice?
- 49:53 Can Google really treat URL changes made by JavaScript and the History API as redirects?
Google may interpret a URL change via the JavaScript History API as a redirect and index the modified URL instead of the original one. This behavior depends on the timing and context of the script execution, with no absolute rules. The URL Inspection tool allows you to check which version Google retains as canonical, but this unpredictability poses a risk of unintentional cannibalization.
What you need to understand
What is the History API and why is Google interested in it?
The History API is a JavaScript interface that allows manipulation of the browser's navigation history without reloading the page. It is mainly used in Single Page Applications (SPAs) to change the URL displayed in the address bar when users navigate between sections. The pushState() and replaceState() methods change the URL visible to the client, creating a smooth experience without an HTTP request.
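The mechanics can be sketched in a few lines. This is a minimal illustration, not production code: `nav` stands in for `window.history` so the logic can also run (and be tested) outside a browser, and the `/product-123/blue` path is a hypothetical example.

```javascript
// Minimal sketch of SPA-style navigation with the History API.
// `nav` abstracts window.history for testability outside a browser.
function spaNavigate(nav, path, state = {}) {
  // pushState changes the address-bar URL without any HTTP request;
  // the server (and a crawler fetching the original URL) is not involved.
  nav.pushState(state, "", path);
}

function spaReplace(nav, path, state = {}) {
  // Same idea, but rewrites the current history entry instead of adding one.
  nav.replaceState(state, "", path);
}

// In a browser: spaNavigate(window.history, "/product-123/blue");
```

Because no request is sent, the server never knows the visible URL changed; only a JavaScript-rendering crawler like Googlebot sees the new one.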
Google crawls and indexes the web as its rendering engine sees it, not necessarily as the user perceives it. When Googlebot loads a page and executes the JavaScript that modifies the URL, it may consider this change as a navigation instruction — just like a classic 301 or 302 redirect. The engine then has to choose which URL to index: the one initially requested or the one modified by the script.
When does Google interpret this change as a redirect?
Mueller clarifies that this behavior depends on context and timing. If the URL change occurs quickly after the initial load, before the main content is rendered, Google may interpret it as a classic server-side redirect. Conversely, if the change happens after a delay or as a result of user interaction, Googlebot may consider the two URLs as distinct.
The issue is that Google does not document a precise threshold. A pushState() executed 50 ms vs 500 ms vs 2000 ms after DOMContentLoaded can trigger different behaviors. This timing gray area leaves developers uncertain: it is impossible to predict with certainty which URL will be canonical without manual testing via the URL Inspection tool.
How can you check which URL Google actually retains?
The URL Inspection tool in Google Search Console remains the only reliable means. You need to test both the initially requested URL AND the URL modified by the script to compare the indexed versions. Google displays the selected canonical URL, the rendered HTML, and sometimes warning messages about detected redirects.
What does this mean in practice? Run an inspection on both URLs, compare the HTML snapshots, and verify the canonical tags injected on the server vs client side. If Google treats the change as a redirect, you will see the modified URL appear as the canonical URL selected by Google, even if you requested the initial URL. This manual diagnosis is time-consuming but essential to avoid unpleasant surprises in production.
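For sites with many URL patterns, the same check can be scripted through the Search Console URL Inspection API rather than the UI. The endpoint and field names below reflect Google's public API documentation at the time of writing, but verify them against the current reference before relying on them.

```javascript
// Hedged sketch: build a request for the Search Console URL Inspection API.
// Endpoint and body shape per Google's published API docs (verify before use).
function buildInspectionRequest(siteUrl, inspectionUrl) {
  return {
    endpoint:
      "https://searchconsole.googleapis.com/v1/urlInspection/index:inspect",
    body: { siteUrl, inspectionUrl }, // siteUrl must match a verified property
  };
}

// Send with fetch() plus an OAuth 2.0 bearer token, then read
// response.inspectionResult.indexStatusResult.googleCanonical and diff it
// against both the requested URL and the script-modified URL.
```

Running this for both the original and the pushState()-modified URL of each pattern turns the manual diagnosis into a repeatable batch job.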
- The History API modifies the client-side URL without a server request, creating ambiguity for Googlebot
- Google may interpret this change as a redirect and index the modified URL rather than the original one
- The timing and execution context influence the decision, with no documented threshold
- The URL Inspection tool is the only way to verify which version Google retains
- Risk of cannibalization if multiple URLs point to the same content with conflicting canonical signals
SEO Expert opinion
Is this statement consistent with real-world observations?
Yes and no. Real-world tests do show that Google can index the URL modified by pushState() in certain React, Vue, or Angular applications. But the rule is not systematic. On technically identical sites, sometimes the initial URL is indexed, sometimes the modified one, without a clear pattern. [To verify]: Google has never published technical documentation specifying the exact conditions triggering this behavior.
The problem is that Mueller speaks of “context and timing” without quantifying. A delay of 100ms? 500ms? 2 seconds? This imprecision is typical of Google's statements on JavaScript rendering — many general principles, few actionable thresholds. SEOs managing SPAs know that reproducibility is low: the same code can yield different results depending on Googlebot's server load, crawl budget, or infrastructure variations.
What concrete risks does this ambiguity pose?
The main risk is unintentional cannibalization. Imagine a product page initially loaded at /product-123, where JavaScript modifies the URL to /product-123/blue after detecting the default color. If Google indexes /product-123/blue as canonical, you have two competing URLs: one with the color and one without. Ranking signals (backlinks, anchors, engagement metrics) get dispersed.
Another insidious scenario: listing filters. A /shoes page initially loads a default filter via JavaScript and modifies the URL to /shoes?size=42. Google indexes this parameterized URL as canonical, creating a conflict with your server-side canonical directives pointing to /shoes. The result: authority dilution, perceived duplication, and sometimes deindexing of the desired version.
Let's be honest: this statement resolves nothing. It confirms an already observed behavior without providing technical safeguards. Mueller suggests using the URL Inspection tool — OK, but this forces you to manually test every URL pattern on a site with thousands of pages. Not exactly scalable.
When does this behavior become blocking?
E-commerce sites and content sites with heavy pagination are the most exposed. Faceted navigation, dynamic filters, infinite scrolling: anything that relies on the History API to maintain navigation state without a full reload can trigger this mechanism. If your architecture relies on a "clean" indexable URL with non-indexable parameterized variants, Google may ignore your signals and canonicalize the variants.
In contrast, simple showcase sites or blogs with little JavaScript are less affected. The problem concentrates on complex SPA architectures where the URL reflects a rich application state. And that’s where it gets tricky: these architectures are precisely the ones where SEO is most difficult to manage, and Google adds a layer of unpredictability.
Practical impact and recommendations
What should be prioritized in auditing your site?
Start by listing all uses of pushState() and replaceState() in your JavaScript code. Identify the patterns: section navigation, filters, pagination, UI states. For each pattern, check in Search Console which URLs are actually indexed. Compare them with the URLs you want to canonicalize. Any divergence signals a potential issue.
Use the URL Inspection tool on a representative sample: at least 20-30 URLs covering the main use cases. Check the selected canonical URL, the rendered HTML, and the injected canonical tags. If Google consistently retains the modified URL via JavaScript instead of the original URL, you have confirmation that this mechanism applies to your site. At this point, two options: adapt your architecture or force the server-side canonical signals.
What technical modifications reduce risks?
Option 1: Delay pushState() until the main content is fully rendered and indexable. Add an explicit delay, or condition execution on a specific DOM event (for example, after the full rendering of critical content). This reduces the likelihood that Google interprets the change as an immediate redirect. But be careful: there is no guaranteed threshold; it's trial and error.
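A minimal sketch of this deferral, assuming a polling check on a hypothetical `#main-content` selector; the interval and timeout values are arbitrary, since Google documents no threshold. `nav` and `doc` stand in for `window.history` and `document`.

```javascript
// Sketch of "Option 1": only change the URL once the critical content is in
// the DOM. Selector, interval, and timeout are illustrative assumptions.
function deferredPushState(nav, doc, selector, path, timeoutMs = 3000) {
  return new Promise((resolve) => {
    const tick = 100; // polling interval in ms
    const tryPush = () => {
      if (doc.querySelector(selector)) {
        // Critical content is rendered: changing the URL now is less likely
        // to be read as an immediate redirect (no guarantee, per Mueller).
        nav.pushState({}, "", path);
        resolve(true);
      } else if ((timeoutMs -= tick) <= 0) {
        resolve(false); // give up and keep the original URL
      } else {
        setTimeout(tryPush, tick);
      }
    };
    tryPush();
  });
}

// In a browser:
// deferredPushState(window.history, document, "#main-content", "/product-123/blue");
```

The key design point is that the URL change becomes conditional on rendered content rather than firing on load, which moves it out of the window where Googlebot tends to read it as a server-style redirect.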
Option 2: Align the initial URL and the modified URL on the same canonical structure. If JavaScript changes /product to /product/blue, make sure /product/blue is the canonical URL served by the server from the start. Avoid contradictions between the server-side and client-side canonicals. Google generally favors the server signal, but against the History API this is no longer a certainty.
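On the client side, this alignment can be sketched as a single helper that updates the canonical `<link>` in the same step as the URL change, so the two signals never diverge. The `https://example.com` origin is a hypothetical value, and the page is assumed to already ship a canonical `<link>` from the server.

```javascript
// Sketch of "Option 2": keep the client-side canonical consistent with the
// URL that pushState is about to set. `doc` and `nav` stand in for
// document and window.history; the origin is an illustrative assumption.
function syncCanonicalWithUrl(doc, nav, origin, path) {
  const link = doc.querySelector('link[rel="canonical"]');
  if (link) link.href = origin + path; // update canonical before the URL moves
  nav.pushState({}, "", path);
  return link ? link.href : null;
}
```

This does not guarantee which URL Google selects, but it removes the conflicting-signals scenario the article warns about, where the canonical points one way and the History API another.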
Option 3: Abandon the History API for critical SEO navigations and revert to classic links with page reloads. Less sexy in terms of UX, but infinitely more predictable for indexing. Reserve pushState() for non-SEO interactions (modals, tabs, temporary UI states). This hybrid approach limits exposure to risk while maintaining smooth navigation where it matters for the user.
How to continuously monitor this phenomenon?
Set up Search Console alerts for variations in indexed URLs. Compare monthly the list of indexed URLs with your reference XML sitemap. Any significant deviation (parameterized URLs indexed when they shouldn’t be, main URLs deindexed) should trigger a manual audit. Coverage reports and server logs cross-referenced with GSC data often reveal these discrepancies.
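One way to automate that monthly comparison is a small diff between the sitemap's `<loc>` entries and an indexed-URL export from Search Console. The regex-based parsing below is a simplification that assumes a single well-formed sitemap file (not a sitemap index); a real pipeline would use an XML parser.

```javascript
// Sketch: diff the URLs declared in an XML sitemap against the URLs
// reported as indexed (e.g. exported from Search Console).
function sitemapDiff(sitemapXml, indexedUrls) {
  const declared = new Set(
    [...sitemapXml.matchAll(/<loc>\s*([^<]+?)\s*<\/loc>/g)].map((m) => m[1])
  );
  const indexed = new Set(indexedUrls);
  return {
    // e.g. parameterized variants Google canonicalized on its own
    indexedNotDeclared: [...indexed].filter((u) => !declared.has(u)),
    // e.g. main URLs that were deindexed in favor of a variant
    declaredNotIndexed: [...declared].filter((u) => !indexed.has(u)),
  };
}
```

Any entry in either list is exactly the "significant deviation" described above and a candidate for a manual URL Inspection.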
Use third-party crawl tools (Screaming Frog, OnCrawl, Botify) configured to execute JavaScript and compare the initial URL vs the final URL after script execution. Automate this monthly crawl and compare snapshots. If a category of URLs suddenly starts being implicitly redirected by Google due to a front-end code change, you’ll detect it before it impacts traffic.
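The comparison step itself can be scripted once the crawler's output is exported. The `{requested, final}` record shape below is an assumption about that export, not any specific tool's format.

```javascript
// Sketch: post-process results from a JavaScript-rendering crawler. Each
// record pairs the URL that was requested with the URL observed after
// script execution; any mismatch is a candidate "implicit redirect".
function findUrlMutations(results) {
  return results
    .filter((r) => r.requested !== r.final)
    .map((r) => ({ from: r.requested, to: r.final }));
}
```

Diffing one month's mutation list against the previous month's immediately surfaces a front-end change that started rewriting URLs, before it shows up in traffic.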
- List all occurrences of pushState()/replaceState() in the JavaScript code
- Manually inspect 20-30 representative URLs via the URL Inspection tool
- Compare indexed URLs in Search Console with the reference sitemap
- Check the consistency of canonical tags on the server-side and client-side
- Test a delay or condition for pushState() to reduce interpretation as a redirect
- Set up Search Console alerts for indexing variations
❓ Frequently Asked Questions
Does Google systematically treat a pushState() as a redirect?
How can you tell which URL Google has indexed when you use the History API?
Can you force Google to ignore a URL change made via pushState()?
Do server-side canonical tags take precedence over the History API?
Does this behavior affect crawl budget?
🎥 From the same video (49)
Other SEO insights extracted from this same Google Search Central video · duration 55 min · published on 21/08/2020
🎥 Watch the full video on YouTube →