Official statement
Other statements from this same Google Search Central video (11 in total · duration 54 min · published on 08/08/2019):
- 1:37 Should you really test every new Google feature?
- 7:18 Does Google Tag Manager really slow down your SEO?
- 9:24 Why do large sites struggle to switch to mobile-first indexing?
- 14:01 Does Google really treat multilingual sites as duplicate content?
- 18:01 Does Google really have a predictable schedule for its algorithm updates?
- 20:17 Does Google Search Console only notify you of major indexing errors?
- 30:08 Mobile-first, desktop-last: why do your rankings fluctuate by device?
- 32:27 How should job postings be optimized for indexing, according to Google?
- 40:29 Do cookie banners really hurt your site's SEO?
- 48:10 Can your mobile navigation kill your rankings under mobile-first indexing?
- 51:42 Should you abandon classic pagination in favor of a view-all page?
Google only indexes links generated dynamically by JavaScript if they use <a href> tags. Links that rely solely on onclick events, without an href attribute, may not be followed by Googlebot. In practice, starting from standard HTML with href and progressively enhancing it with JavaScript ensures optimal crawling.
What you need to understand
Why does Google make this distinction between href and onclick?
Mueller's statement highlights a common web architecture issue: developers sometimes implement links solely via JavaScript, without an underlying HTML structure. An element <div onclick="navigate()"> visually resembles a link to the user, but Googlebot does not recognize it as such.
The engine primarily relies on raw HTML before JavaScript execution. If the href attribute is present in the <a> tag, Google can extract the URL directly from the initial DOM, even if JavaScript later modifies the behavior. Without href, the bot must execute the JavaScript, analyze the event handlers, and guess where the click leads — a costly and uncertain process.
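A minimal illustration of the two patterns (URLs are hypothetical):

```html
<!-- Crawlable: the URL sits in the initial HTML, no JavaScript needed -->
<a href="/products/shoes">Shoes</a>

<!-- Not reliably crawlable: no href, so Googlebot must execute and
     interpret the handler to discover the destination -->
<div onclick="window.location.href = '/products/shoes'">Shoes</div>
```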
What is the practical difference between these two approaches?
A standard link <a href="/page"> works even if JavaScript is disabled or fails. This is the basis of progressive enhancement: HTML provides the minimal functionality, and JavaScript enriches it. This approach ensures that Googlebot finds the URL in all cases.
In contrast, a button <button onclick="goTo('/page')"> has no navigation semantics in HTML. The bot must guess that goTo triggers navigation and then extract the URL from the string passed as a parameter. This heuristic parsing is not guaranteed — especially if the logic is complex or obfuscated.
Does Google really index all modern JavaScript?
Google does execute JavaScript using a recent Chromium engine, but budget and technical limitations persist. Executing JavaScript consumes crawl budget: sites with thousands of pages cannot rely on exhaustive crawling of every onclick function.
Furthermore, certain patterns (conditional navigation, delegated events, complex asynchronous code) remain challenging to interpret. Mueller implicitly recommends simplicity: if the URL is in a href, there is no ambiguity.
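As an illustration, a delegated handler like the following (attribute names are invented for the example) exposes no URL anywhere in the markup of the clickable element itself:

```javascript
// Event delegation: a single listener on <body> handles clicks on any
// element carrying a data-product-id attribute. The destination URL is
// computed at click time and never appears in the HTML.
document.body.addEventListener('click', (event) => {
  const card = event.target.closest('[data-product-id]');
  if (card) {
    window.location.href = '/product/' + card.dataset.productId;
  }
});
```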
- Always prioritize <a href> tags for all internal and external links
- Use JavaScript to enhance behavior (tracking, animations), not to create basic navigation
- Audit the code to identify fake links (div, span, button with onclick) and refactor them
- Test with a browser that has JavaScript turned off: if the link doesn't work, Googlebot will have the same issue
- Monitor Search Console coverage reports for potential orphan pages caused by unindexable links
SEO Expert opinion
Is this statement consistent with real-world observations?
Absolutely. SEO audits regularly reveal orphan pages created by poorly implemented JavaScript links. Modern frameworks (React, Vue, Angular) often generate Single Page Applications where navigation relies on <router-link> or <Link> components that, if misconfigured, do not expose an href attribute in the initial DOM.
Google has made progress in executing JavaScript, but deferred rendering remains costly. In tests with unoptimized React sites, we sometimes observe delays of 3 to 7 days between HTML crawling and post-JavaScript-rendering indexing. This delay can penalize news sites or e-commerce platforms with limited stock.
What nuances should be made to this rule?
The href/onclick distinction is not binary. A link can have href="#" with onclick logic that completely changes the destination, an anti-pattern that Google struggles to handle: the bot sees an href, but the URL points to an empty fragment.
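A sketch of that anti-pattern (the destination is hypothetical):

```html
<!-- Anti-pattern: the crawlable href is an empty fragment; the real
     destination exists only inside the JavaScript handler -->
<a href="#" onclick="event.preventDefault(); window.location.href = '/real-article';">
  Read the article
</a>
```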
Another subtlety: links added dynamically after user interaction (infinite scroll, "load more" buttons) are generally invisible to Googlebot, which does not click, scroll, or otherwise interact with the page.
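A common mitigation, sketched below with a hypothetical URL, is to keep a real paginated link in the HTML and let JavaScript intercept it:

```html
<!-- The link works and is crawlable without JavaScript; a script can
     intercept the click to load the next batch of content in place -->
<a href="/blog?page=2" class="js-load-more">Load more articles</a>
```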
Practical impact and recommendations
What should be prioritized in an audit of an existing site?
Start with a crawl with JavaScript disabled (Screaming Frog in "Render: Text" mode, or a simple curl). Compare the number of pages discovered with a JavaScript-enabled crawl: a significant gap points to links that, in the worst case, Google cannot discover at all.
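As a quick approximation of the JavaScript-disabled pass, a short Node.js script can dump the hrefs present in the raw HTML. This is a rough sketch (Node 18+ assumed for the built-in fetch, regex extraction instead of a real HTML parser):

```javascript
// audit.mjs - run with: node audit.mjs https://example.com/
// Lists the href values visible in the raw HTML, i.e. without any
// JavaScript execution, roughly what Googlebot sees before rendering.
const url = process.argv[2] || 'https://example.com/';

const html = await (await fetch(url)).text();

// Crude extraction: good enough for a quick audit, not a full parser.
// Skips hrefs that start with "#" (empty fragments).
const hrefs = [...html.matchAll(/<a\b[^>]*\bhref="([^"#][^"]*)"/gi)].map((m) => m[1]);

const unique = new Set(hrefs);
console.log(`${unique.size} distinct href links found in the raw HTML:`);
for (const href of unique) console.log(' -', href);
```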
Next, inspect the HTML source code of the main templates: look for onclick= and onmousedown= patterns without a corresponding href. <div> and <button> elements with navigation events are red flags. Use Chrome DevTools to filter the DOM: $$('[onclick]:not(a[href])') lists all clickable elements without an href.
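Wrapped as a console snippet, the same check might look like this:

```javascript
// Paste into the Chrome DevTools console. $$ is the DevTools shorthand
// for document.querySelectorAll; this logs every element with an inline
// onclick that is not an <a> carrying an href.
$$('[onclick]:not(a[href])').forEach((el) =>
  console.log(el.tagName, el.getAttribute('onclick'))
);
```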
How can JavaScript code be refactored to be SEO-friendly?
The golden rule: always start with a functional HTML link, then add JavaScript on top. A clean pattern looks like <a href="/page" class="js-track"> with an event listener attached via addEventListener that sends analytics but allows the browser to follow the href by default.
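A minimal sketch of that pattern, where sendAnalytics stands in for whatever tracking call the site actually uses:

```javascript
// Progressive enhancement: the <a href="/page" class="js-track"> link
// works without JavaScript; the listener only adds tracking and lets the
// browser follow the href as usual (sendAnalytics is a hypothetical helper).
document.querySelectorAll('a.js-track').forEach((link) => {
  link.addEventListener('click', () => {
    sendAnalytics('navigation', link.href); // no preventDefault(): navigation proceeds
  });
});
```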
For Single Page Applications, configure the router to generate real href values. In React Router, <Link to="/page"> produces an <a href="/page"> element; make sure this markup is present server-side (SSR or pre-rendering). If the framework doesn't provide it by default, implement HTML pre-rendering of critical routes with tools like Prerender.io or Rendertron.
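For illustration, assuming React Router v6, a navigation component along these lines renders crawlable anchors once the HTML is server-rendered:

```jsx
// Illustrative component: <Link to="..."> renders a real <a href="...">
// in the DOM, crawlable as soon as the HTML is delivered server-side.
import { Link } from 'react-router-dom';

export function MainNav() {
  return (
    <nav>
      {/* Renders as <a href="/products">Products</a> */}
      <Link to="/products">Products</Link>
      <Link to="/contact">Contact</Link>
    </nav>
  );
}
```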
What tools can check compliance?
The URL Inspection Tool in Search Console shows the DOM as Googlebot sees it after JavaScript rendering. Compare the "crawled" version with the live version in a browser. If links are missing in Google's version, the issue is confirmed.
The coverage report in Search Console sometimes flags pages as "Discovered - currently not indexed", a common symptom of orphan pages reachable only via onclick. Cross-reference these URLs with an internal crawl to identify access paths.
For ongoing verification, set up end-to-end tests (Playwright, Puppeteer) that check for href attributes on all navigation elements before each deployment. A simple test: expect(link.getAttribute('href')).not.toBeNull().
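Expanded into a complete test, that check might look like the following sketch (URL and selector are placeholders):

```javascript
// Playwright sketch: fails the build if any navigation element ships
// without an href attribute in the rendered DOM.
const { test, expect } = require('@playwright/test');

test('all navigation links expose an href', async ({ page }) => {
  await page.goto('https://example.com/');
  const links = await page.locator('nav a').all();
  expect(links.length).toBeGreaterThan(0);
  for (const link of links) {
    expect(await link.getAttribute('href')).not.toBeNull();
  }
});
```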
- Crawl the site with JavaScript disabled and compare the number of discovered pages
- Audit the source code to identify clickable elements without href
- Refactor onclick links into <a href> tags with progressive enhancement through JavaScript
- Configure the SPA router to generate real href values server-side
- Test each template with the URL Inspection Tool in Search Console
- Automate href verification in CI/CD before deployment
❓ Frequently Asked Questions
Is a link with href="#" and onclick considered crawlable by Google?
Do frameworks like React or Vue automatically generate correct href attributes?
Should onclick events be avoided entirely for SEO?
How can you tell whether Googlebot has crawled your JavaScript links?
Can an SPA rank properly without href attributes in the initial HTML?