
Official statement

Google does not click on buttons. For pagination, it is better to use static links (with href) to page 2, page 3, etc. JavaScript can then override that behavior to load the content for users without a full page reload.
🎥 Source video

Extracted from a Google Search Central video

⏱ 465:56 💬 EN 📅 24/03/2021 ✂ 13 statements
Watch on YouTube (196:12) →
Other statements from this video (12)
  1. 10:15 Do Core Web Vitals really measure repeat loads, or only the first visit?
  2. 22:39 Should links that appear only in the initial HTML be removed?
  3. 60:22 Is Server-Side Rendering really essential for SEO in 2025?
  4. 76:24 Does hydration JSON at the bottom of the page hurt SEO?
  5. 121:54 Has Googlebot really become infallible with JavaScript?
  6. 152:49 Why does the switch to Evergreen Chrome transform how Google renders pages?
  7. 183:08 Does Google really render ALL your JavaScript pages?
  8. 226:28 Should the cumulative content of infinite pagination really be hidden from Google?
  9. 251:03 Can you really serve Google a different navigation without risking a cloaking penalty?
  10. 271:04 Does Googlebot really click on your site's JavaScript buttons and links?
  11. 303:17 Should you create one page per day for a multi-day event, or canonicalize to a single page?
  12. 402:37 Is JavaScript really compatible with modern SEO?
📅 Official statement from 24/03/2021 (5 years ago)
TL;DR

Google does not click on buttons: not Load More, not Show More, not any interactive JavaScript element. For pagination to be crawlable by Googlebot, classic static links with href attributes pointing to page 2, page 3, and so on are necessary. JavaScript can then intercept the clicks on these links to provide a seamless user experience without a full page reload. This is the progressive enhancement approach, which ensures both bot accessibility and a modern UX.

What you need to understand

What's the issue with Load More buttons for Googlebot?

Googlebot does not simulate complex user interactions. It does not click on buttons, does not submit forms, and does not scroll to trigger infinite lazy-loads. Its behavior remains that of a classic crawler: it follows the static href links it finds in the rendered DOM.

When your pagination relies exclusively on a button such as <button onclick="loadMore()">, Googlebot hits a dead end. It has no way to discover pages 2, 3, 4 and their associated content. The result: only page 1 is indexed, and everything else disappears from the engine's view.
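
For illustration, a minimal sketch of the two patterns side by side (the loadMore() handler and the URLs are hypothetical):

  <!-- Dead end: no URL in the markup, so Googlebot has nothing to follow -->
  <button onclick="loadMore()">Load More</button>

  <!-- Crawlable: a real URL that works even before any JavaScript runs -->
  <a href="/products?page=2">Load More</a>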

Why does Martin Splitt emphasize static links with href?

A classic HTML link <a href="/products?page=2"> is universally crawlable: by Googlebot, by competing crawlers, by screen readers, by users without JavaScript enabled. It’s the basis of an accessible and indexable web.

JavaScript can then intercept the click event on this link via event.preventDefault() and load the content via AJAX without reloading the page. The user gets a modern, seamless navigation, and Googlebot still finds its way. This is the principle of progressive enhancement: HTML first, JavaScript as an enhancement layer.
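
In its simplest form, the interception looks like this (a minimal sketch; the .pagination selector and the injection step are illustrative assumptions):

  // The href stays crawlable; JavaScript merely upgrades the click for users.
  document.querySelectorAll('.pagination a').forEach(link => {
    link.addEventListener('click', (e) => {
      e.preventDefault();              // skip the full page reload
      fetch(link.href)                 // load the next page in the background
        .then(res => res.text())
        .then(html => { /* inject the new items into the current page */ });
    });
  });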

What is the practical impact on indexing if I keep my buttons?

Content located on subsequent pages will likely never be crawled or indexed. If your e-commerce site displays 20 products per page with a Load More button, Googlebot will only see the first 20 products. The other 200 remain invisible.

Some sites compensate with exhaustive XML sitemaps listing every individual product URL, but that is a stopgap. Natural internal linking via pagination remains the most reliable way to pass PageRank and guarantee regular crawling.

  • Googlebot does not click on buttons; it only follows static href links
  • Pagination built on classic HTML links remains the only universally crawlable solution
  • JavaScript can intercept these links to provide a modern UX without reloading
  • Without static links, pages 2+ risk never being indexed
  • XML sitemaps do not replace coherent internal linking

SEO Expert opinion

Is this recommendation really new or revolutionary?

Absolutely not. It's a web architecture principle that every technical SEO has known for at least 15 years. Progressive enhancement is not a trend; it has been official W3C doctrine since the 2000s. What's interesting is that Google still needs to repeat it publicly.

This means a significant proportion of sites continue to ship JavaScript-only pagination that is inaccessible to crawlers. Modern front-end frameworks (React, Vue, Angular) default to SPA architectures that often neglect bot crawlability. Splitt's reminder is a signal: indexability remains a secondary concern in many modern stacks.

In which cases does this rule not strictly apply?

If your paginated content has no SEO value — for example, internal user comments in a SaaS app reserved for logged-in members — you can afford a pure JavaScript Load More. No crawl desired = no constraint.

Another exception: sites using server-side rendering (SSR) or static site generation (SSG) with frameworks like Next.js, Nuxt, or SvelteKit. In these architectures, each URL such as /products?page=2 returns full HTML from the server before any JavaScript executes, so Googlebot receives a directly usable DOM. But beware: badly configured SSR can mislead you. [To verify]: always test with a curl request or a Screaming Frog crawl to make sure the raw HTML really contains the links.
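
As a minimal sketch of that idea, assuming a Next.js pages-router project (the file name, route, and fetchProducts helper are hypothetical):

  // pages/products.js
  // fetchProducts is a hypothetical helper standing in for your own data layer.
  async function fetchProducts(page) {
    const res = await fetch(`https://api.example.com/products?page=${page}`);
    return res.json();
  }

  // Runs on the server for every request: the link below is already present
  // in the HTML that Googlebot receives, before any client-side JavaScript.
  export async function getServerSideProps({ query }) {
    const page = parseInt(query.page, 10) || 1;
    const products = await fetchProducts(page);
    return { props: { products, page } };
  }

  export default function Products({ products, page }) {
    return (
      <main>
        <ul>
          {products.map((p) => (
            <li key={p.id}>{p.name}</li>
          ))}
        </ul>
        {/* A plain href, crawlable with or without JavaScript */}
        <a href={`/products?page=${page + 1}`}>Next Page</a>
      </main>
    );
  }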

What nuance should be considered regarding Google's JavaScript rendering?

Google has indexed JavaScript for years, that's true. But the WRS (Web Rendering Service) introduces delays, additional crawl budget costs, and potential bugs. Martin Splitt himself has repeatedly stressed that JS rendering remains a fallback, not a first-choice solution.

In concrete terms: even if Googlebot executes your JS and could theoretically click a button via a script, it does not do so by default. It has no heuristic to guess that a "Load More" button should be clicked. Relying on JS rendering to compensate for a failing architecture is playing Russian roulette with your indexing.

Beware: Do not confuse "Google executes JavaScript" with "Google simulates all possible user interactions". The WRS renders the DOM, it does not interact with it.

Practical impact and recommendations

What concrete steps should be taken to correct problematic pagination?

Replace your <button>Load More</button> buttons with classic HTML links: <a href="/category?page=2">Next Page</a>. If you want to maintain a modern UX, intercept the click with JavaScript to load the content via AJAX without reloading.

Simplified code example: document.querySelectorAll('.pagination a').forEach(link => link.addEventListener('click', function (e) { e.preventDefault(); fetch(this.href).then(r => r.text()).then(html => { /* inject the content */ }); }));. A fuller version follows below. You keep seamless navigation for the user, and Googlebot follows the hrefs normally. This is the approach the engines themselves recommend.
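
A more complete sketch of the same pattern, assuming a .pagination container of <a href> links, a #product-list element to append into, and paginated URLs that return full HTML (all illustrative names, not from the video):

  document.querySelectorAll('.pagination a').forEach(link => {
    link.addEventListener('click', async (e) => {
      e.preventDefault(); // JS-enabled users skip the full page reload
      const res = await fetch(link.href);
      const html = await res.text();
      // Parse the fetched page and pull out its product list fragment.
      const doc = new DOMParser().parseFromString(html, 'text/html');
      const incoming = doc.querySelector('#product-list');
      if (incoming) {
        document.querySelector('#product-list').append(...incoming.children);
      }
      // Keep the address bar in sync so the state stays shareable.
      history.pushState({}, '', link.href);
    });
  });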

What critical mistakes should be avoided in implementation?

Do not fall into the trap of the empty JavaScript link, such as <a href="#" onclick="loadPage(2)">. This is no better than a button: Googlebot sees an href pointing to the same page. You need a functional href that returns usable HTML even without JS.

Another common mistake: forgetting the rel="next" and rel="prev" tags. Even though Google officially stopped using them for indexing in 2019, they remain useful for other engines and for documenting the logical structure of your pagination. Consider them an optional but clean metadata layer.
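
If you keep them, they are plain link elements in the document head (the URLs here are illustrative):

  <!-- On /category?page=2: optional metadata documenting the sequence -->
  <link rel="prev" href="/category?page=1">
  <link rel="next" href="/category?page=3">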

How can I check that my site is compliant after correction?

Use curl or wget on the command line to retrieve the raw HTML of your listing page: curl -A "Googlebot" https://mysite.com/category. Check that the href links to page=2, page=3 appear in the response, before any JavaScript executes.
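
For example, a quick check that the raw response already contains the pagination hrefs (same illustrative domain as above):

  # Fetch the page as Googlebot would, before any JavaScript runs,
  # then list the pagination hrefs found in the raw HTML.
  curl -s -A "Googlebot" "https://mysite.com/category" | grep -o 'href="[^"]*page=[0-9]*"'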

Run a crawl with Screaming Frog in "JavaScript disabled" mode. If pages 2+ are not discovered, your pagination remains inaccessible to bots. Also validate in Google Search Console that paginated URLs appear in the coverage report — if they remain absent after several weeks, it’s a red flag.

  • Replace all Load More buttons with functional <a href> links
  • Intercept clicks with JavaScript to maintain a UX without reloading
  • Test the raw HTML with curl or wget to confirm the presence of links
  • Crawl the site with Screaming Frog in disabled JS mode
  • Check in Search Console that the paginated pages are indexed
  • Keep rel="next/prev" tags for documentation and compatibility
Static link pagination remains a non-negotiable SEO fundamental. Any modern architecture must respect this crawlability principle for bots, even if the final UX relies on advanced JavaScript. These technical optimizations, spanning progressive enhancement, SSR, and crawlability testing, can quickly become complex depending on your stack. If your team lacks the time or expertise to audit and correct a pagination architecture, calling on a specialized technical SEO agency can accelerate compliance and prevent months of content staying invisible to Google.

❓ Frequently Asked Questions

Does Google click on buttons if the site uses Server-Side Rendering (SSR)?
No. Even with SSR, Googlebot does not click on buttons. SSR guarantees that the initial HTML already contains the links, but it does not enable any bot interaction with interactive elements.
Are the rel="next" and rel="prev" tags still necessary?
Google has not used them for indexing since 2019, but they remain useful for other engines (Bing, Yandex) and for documenting the logical structure of your pagination. Keep them if your CMS generates them automatically.
Can an exhaustive XML sitemap compensate for the absence of static links?
Partially, but it is not an optimal solution. The sitemap helps with initial discovery, but natural internal linking via pagination passes PageRank and guarantees regular crawling and a better distribution of authority.
How do you intercept clicks on pagination links without breaking accessibility?
Use addEventListener('click', function(e) { e.preventDefault(); fetch(this.href)... }) on your <a href> links. The link remains functional without JS, and JavaScript improves the UX for those who have it enabled. This is classic progressive enhancement.
Which tools can verify that my pagination is crawlable by Googlebot?
curl or wget on the command line to retrieve the raw HTML, Screaming Frog with JavaScript disabled to crawl, and Google Search Console to confirm over several weeks that the paginated pages are actually indexed.
