Official statement
Googlebot does not recognize HTML <button> elements as links — they are invisible to the crawler. If you use buttons with JavaScript for navigation, you create dead ends in your architecture. The solution: systematically replace those buttons with CSS-styled <a> tags to keep the button's appearance without sacrificing crawlability.
What you need to understand
What is an HTML button and why does Googlebot ignore it?
An HTML button (<button>) is an interactive element designed to trigger an action — submit a form, open a modal, launch a script. It was never intended as a navigation vector. Googlebot, on the other hand, looks for hyperlinks (<a href="...">) to discover and index your pages.
When a developer uses <button onclick="navigateTo('/page')"> with JavaScript to simulate a link, they create a black hole for the crawler. No href attribute, no native HTML signal: Googlebot passes right by. It doesn't matter that the button works perfectly on the user side — to the engine, it simply doesn't exist.
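To make the contrast concrete, here is a minimal sketch of the anti-pattern and its fix (the URL, class name, and function name are illustrative):

```html
<!-- Anti-pattern: navigation hidden behind JavaScript.
     No href attribute, so an HTML-only crawler sees no link at all. -->
<button onclick="navigateTo('/category/shoes')">Shoes</button>

<!-- Fix: a native link. Googlebot reads the URL directly from the markup,
     and CSS can still make it look like a button. -->
<a href="/category/shoes" class="btn">Shoes</a>
```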
Why do some sites fall into this trap?
The mistake often stems from confusion between design and semantics. Frontend teams want a visually distinct button (color, relief, animation) and instinctively choose <button>. They then add an onClick JavaScript handler to manage navigation, assuming Google will "understand" the intent.
Modern JavaScript frameworks (React, Vue, Angular) exacerbate the problem by making it easy to create clickable button components without going through <a> tags. The result: Single Page Applications (SPAs) full of buttons invisible to Googlebot, and a catastrophically low discovery rate for deep pages.
What is the difference between a styled link and a JavaScript button?
An ordinary HTML link (<a href="/category/product">) is natively crawlable — Googlebot reads the URL, follows the link, indexes the destination. You can style it with CSS (background, border-radius, padding) to make it look pixel-for-pixel like a button, without losing its navigation function.
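As a sketch of that CSS-only approach (the class name and colors are illustrative), the following keeps the href intact while giving the link a button's appearance:

```html
<style>
  /* Visually a button, semantically still a crawlable link */
  a.btn {
    display: inline-block;
    padding: 10px 20px;
    background: #1a73e8;
    color: #fff;
    border-radius: 4px;
    text-decoration: none;
  }
</style>

<a href="/category/product" class="btn">View product</a>
```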
A button with JavaScript (<button onclick="router.push('/product')">) works for the user but forces Google to execute the JS to detect the target URL. The problem: Google limits the rendering budget and does not guarantee complete execution of JavaScript on all pages. The result: some URLs are never discovered.
- <a> tag with href: guaranteed crawl, immediate indexing, PageRank passed along.
- <button> + JavaScript: unreliable crawl, delayed or no discovery, no authority passed.
- CSS styling on a link: keeps all the SEO benefits of a native link with the appearance of a button.
- Progressive enhancement: always start with a functional <a>, then enhance with JS if necessary.
- Regular audits: track down button-links in the source code and replace them systematically.
SEO Expert opinion
Is this statement consistent with field observations?
Absolutely — and it confirms what has been observed for years on poorly configured SPA sites. Crawl audits regularly reveal orphan pages accessible through user navigation but absent from the Search Console coverage report. The cause? Button components without href.
I have seen e-commerce sites lose 40% of their product listings from the index because category navigation relied on <button> React components with history.push(). Googlebot crawled the homepage, found zero outgoing links, and stopped there. A switch to <Link> Next.js (which compiles into <a> HTML) was enough to recover indexing within two weeks.
What nuances should be added to this rule?
Google can sometimes discover URLs via JavaScript — but it’s a risky bet. The rendering depends on crawl budget, page priority, server load. On a site with 10,000 pages, not all will be rendered. Relying on JS for navigation is like playing Russian roulette with your indexing.
Another point: internal links in JavaScript do not guarantee PageRank transmission. Google has confirmed that links discovered after rendering “may” pass juice, but without specifying the conditions. [To verify]: no public study quantifies PageRank loss on JS links versus native HTML links. My field intuition? A minimum loss of 20-30%.
In what cases is a button still acceptable?
A <button> is legitimate for non-navigational actions: submitting a form (type="submit"), opening a modal, toggling a dropdown menu, launching an AJAX filter. As soon as the action changes the URL or leads to a new page, it’s a link, not a button.
Concrete example: a button “Add to Cart” that opens a popup without changing the page? Perfect <button>. A button “View Product” that redirects to /product/123? <a href="/product/123"> styled like a button. HTML semantics matter — they determine what Googlebot sees.
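Applying that decision rule, a hedged sketch might look like this (IDs and handler names are illustrative):

```html
<!-- Legitimate buttons: actions that do not change the URL -->
<button type="submit">Add to cart</button>
<button type="button" onclick="openModal('cart-popup')">View cart popup</button>

<!-- Navigation: the URL changes, so this must be a link -->
<a href="/product/123" class="btn">View product</a>
```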
No <a href>, no crawl. Always plan an HTML fallback for the discovery of critical URLs.
Practical impact and recommendations
What should be done concretely on an existing site?
First step: a source code audit. Use Chrome DevTools or a crawler like Screaming Frog in “Render JS” mode vs. “HTML only.” Compare the two: if links only appear after rendering, they rely on JavaScript. Identify all <button> elements with onClick that trigger navigation.
Next, systematically refactor these buttons into <a href="...">. Keep the existing CSS classes to preserve the design. If you use React/Vue, prioritize <Link> or <router-link> components, which generate valid <a> tags in the HTML. Then verify in the raw HTML source (Ctrl+U) that the href attributes are present.
What mistakes should be avoided during the migration?
Don't just add an href to a <button> — that remains invalid HTML5: a button cannot carry an href attribute. The only valid solution is to replace the <button> with an <a> and apply your CSS styles (display: inline-block, padding, background, etc.).
Another trap: leaving a preventDefault() JavaScript that blocks the native link navigation. If you enhance a <a> with JS (analytics, animations), ensure that the default click works without JavaScript. Progressive enhancement: the link should work even if the script fails.
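A hedged sketch of that progressive enhancement (the tracking call is illustrative): the script adds behavior on click but never calls preventDefault, so the href keeps working even if the JavaScript fails to load:

```html
<a href="/product/123" class="btn" id="view-product">View product</a>

<script>
  // Enhancement only: track the click, then let the browser navigate normally.
  // Crucially, no event.preventDefault() — native link navigation is preserved.
  document.getElementById("view-product").addEventListener("click", () => {
    sendAnalytics("click", "/product/123"); // illustrative tracking function
  });
</script>
```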
How to verify that the fix worked?
Crawl your site with Screaming Frog in “HTML only” mode (JavaScript disabled). All your critical navigation links must appear. If a page is only discovered in “Render JS” mode, it remains vulnerable. Also check the Search Console coverage report: previously orphaned pages should be reindexed within 2-4 weeks.
Test the internal PageRank passing with a tool like Oncrawl or Botify. Deep pages accessible only via JS buttons should see their internal authority score increase after migration to <a>. If not, check that old buttons weren’t mistakenly replaced by nofollow links.
- Audit the source code to identify <button onClick> navigation.
- Replace all button-links with <a href>, keeping identical CSS classes.
- Check that the href attributes are visible in the raw HTML (Ctrl+U).
- Crawl in "HTML only" mode to confirm the discovery of critical pages.
- Monitor the indexing rate in Search Console post-migration.
- Test behavior without JavaScript (navigation must remain functional).
❓ Frequently Asked Questions
Can Google still discover my pages via JavaScript buttons?
Does a button styled as a link lose its interactive features?
Do frameworks like React or Vue cause crawl problems?
Do I need to rewrite my entire site if I have navigation buttons?
Is a button with an href a valid alternative?
Source: Google Search Central video · duration 1h00 · published on 15/01/2021