Official statement
Other statements from this video
- 4:54 Should you really respect the 500 KB per page limit imposed by Google?
- 7:25 Why doesn't fixing a Lighthouse recommendation always speed up your page as much as promised?
- 8:47 Why doesn't Lighthouse reflect your site's real performance?
- 11:21 Is AMP really useless for Google ranking?
- 14:02 Should you really aim for a Lighthouse score of 100 to rank better on Google?
Martin Splitt confirms that page speed remains a ranking signal, but it is far from a priority compared to content relevance. When two results of equivalent quality compete, the faster one gains the edge; Google will never sacrifice relevance on the altar of performance. In practice, optimizing speed is still worthwhile, but only after consolidating your content and thematic authority.
What you need to understand
Why does Google deliberately downplay the importance of speed?
Google walks a tightrope. If page speed became a dominant factor, the engine might display quick but irrelevant results, sacrificing user satisfaction. Splitt admits it frankly: relevant content always takes priority. A slow but comprehensive site will beat a fast but superficial one.
This hierarchy is not new: it dates back to the introduction of speed as a ranking signal in 2010 for desktop, then in 2018 for mobile. What is changing is the clarity of the discourse: Google now openly frames performance as a mere tie-breaker, a deciding factor only when everything else is equal.
What does 'equivalent quality' mean in this context?
The term is deliberately vague. Google refers to results of equivalent quality but never defines that threshold. Semantic relevance? E-E-A-T score? Content depth? Freshness? No quantified criteria.
In practice, two pages are rarely perfectly equivalent. One may have better internal linking, another may have stronger backlinks, a third may be fresher. Speed only plays a role in this residual delta — when all other signals neutralize each other. A statistically rare case in competitive SERPs.
Does speed indirectly influence other signals?
This is where it gets complicated. Splitt talks about direct ranking, but speed also affects user behavior: bounce rate, time spent, pages viewed. These UX signals can impact ranking themselves — notably through the Core Web Vitals integrated since 2021.
A slow site degrades the experience, generates frustration, and reduces engagement. Google captures these behavioral signals. So even if technical speed doesn’t weigh heavily directly, its indirect effects create a vicious cycle that is hard to quantify.
- Content relevance: remains the king signal, no technical factor can surpass it
- Speed as a tie-breaker: only intervenes between results of nearly identical quality
- Indirect effects: speed impacts UX, which in turn influences ranking through behavioral signals
- Core Web Vitals: a subset of speed metrics integrated into the page experience signal since 2021
- Undefined threshold: Google never specifies at what level two results are considered 'equivalent'
SEO Expert opinion
Is this statement consistent with on-the-ground observations?
Yes and no. A/B testing shows that fixing a catastrophic LCP (>4s) down to a correct one (<2.5s) rarely results in a jump in positions — unless the site was already languishing at the bottom of page 1 or on page 2. In that case, a page can move from rank 12 to rank 8, for example. But a site in position 3 will not rise to 1 on speed alone.
On the other hand, structurally slow sites (>5s loading) struggle more to stay in the top 3 on competitive queries. Correlation or causation? Hard to unravel. Technically neglected sites often have other weaknesses: superficial content, shaky UX, nonexistent linking.
What nuances should be added to this official discourse?
Splitt speaks of ranking, not crawling or indexing. However, speed also impacts crawl budget: a slow site consumes more Googlebot resources, which reduces its crawl frequency. The result: your new pages take longer to be discovered and indexed. This isn’t ranking, but it affects your overall visibility.
Another point: Core Web Vitals do not only measure raw speed. CLS (Cumulative Layout Shift) and FID/INP (interactivity) have nothing to do with classic loading time. Google often mixes 'speed' and 'page experience', creating semantic confusion that serves to blur the lines. [To be verified]: the real impact of isolated CLS on ranking remains difficult to measure outside of Google case studies.
In what cases does this rule not apply?
On ultra-competitive queries, where the top 10 results all have exhaustive content, solid backlinks, and impeccable E-E-A-T, speed may become the decisive micro-advantage. But this is marginal — and again, we’re talking about going from 1.8s to 1.2s, not from 5s to 2s.
Another exception: e-commerce sites on transactional queries. Google prioritizes the shopping experience, and a lagging cart drives customers away. Here, speed carries more weight — not by algorithmic intent, but because it directly impacts the behavioral signals Google captures (abandonment, quick SERP return, etc.).
Practical impact and recommendations
What should you concretely prioritize on a slow site?
Content first. If your pages do not precisely meet search intent, optimizing LCP will be futile. Audit your top landing pages: do they cover all semantic angles? Do they provide more value than competitors in positions 1-3? If not, rewrite before touching the code.
Next, track down critical UX friction points: confusing navigation, invisible CTAs, aggressive pop-ups. A frustrated user will bounce quickly — and this behavioral signal likely weighs more than 500ms of LCP gained. Only after that do you tackle the Core Web Vitals: lazy loading, image compression, reducing blocking JS.
What mistakes should be avoided in speed optimization?
Never sacrifice useful content for the sake of speed. Removing explanatory images, reducing the depth of an article, or trimming FAQs to lighten the page is shooting yourself in the foot. Google wants relevance — a fast but shallow site will not rank.
Another trap: the obsession with PageSpeed Insights score. A 95/100 on PSI does not imply a better ranking than a 72/100, if the latter offers a superior real-world experience. Field data (CrUX) matter more than lab tests. Focus on the actual metrics collected by Chrome from your visitors.
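To make the field-data point concrete, here is a minimal Python sketch that classifies p75 field metrics against Google's published "Good" ceilings (2.5s for LCP, 200ms for INP, 0.1 for CLS). The `sample` dictionary only mimics the shape of a CrUX API response; its numbers are invented for illustration, and the binary verdict collapses "Needs Improvement" and "Poor" into one bucket.

```python
# "Good" ceilings published by Google for the p75 of field data
GOOD_THRESHOLDS = {
    "largest_contentful_paint": 2500,   # ms
    "interaction_to_next_paint": 200,   # ms
    "cumulative_layout_shift": 0.1,     # unitless
}

def classify_field_metrics(record: dict) -> dict:
    """Return 'good' / 'needs improvement' per metric, based on the p75 value."""
    verdicts = {}
    for metric, ceiling in GOOD_THRESHOLDS.items():
        # CrUX reports CLS p75 as a string; coerce everything to float
        p75 = float(record["metrics"][metric]["percentiles"]["p75"])
        verdicts[metric] = "good" if p75 <= ceiling else "needs improvement"
    return verdicts

# Illustrative fragment shaped like a CrUX API payload (numbers are made up)
sample = {"metrics": {
    "largest_contentful_paint": {"percentiles": {"p75": 2100}},
    "interaction_to_next_paint": {"percentiles": {"p75": 310}},
    "cumulative_layout_shift": {"percentiles": {"p75": "0.04"}},
}}

print(classify_field_metrics(sample))
```

In a real audit you would feed this the response of a CrUX API `queryRecord` call for your origin, not a hardcoded sample.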
How to check if your optimizations are having an SEO effect?
Monitor your positions on key queries before/after optimization — allowing a delay of 4 to 8 weeks for Google to recrawl and reassess. Also compare UX metrics in Google Analytics: bounce rate, pages per session, average duration. If speed improves but engagement stagnates, the issue lies elsewhere.
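The before/after comparison above can be sketched in a few lines of Python. The queries and average positions below are entirely hypothetical; in practice you would export per-query average positions from Search Console for the two periods (respecting the 4 to 8-week delay).

```python
def position_deltas(before: dict, after: dict) -> dict:
    """Positive delta = the query gained positions (a lower number is a better rank)."""
    return {q: round(before[q] - after[q], 1) for q in before if q in after}

# Invented example data: average position per query, before and after optimization
before = {"page speed seo": 12.4, "core web vitals": 8.1, "lcp optimization": 15.0}
after  = {"page speed seo": 9.8,  "core web vitals": 8.3, "lcp optimization": 11.2}

print(position_deltas(before, after))
```

A small negative delta, as on the second query here, is noise, not a regression; judge trends over many queries, not single keywords.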
Utilize Search Console to monitor the evolution of Core Web Vitals. The 'Page Experience' tab shows you the % of URLs in 'Good' vs 'Needs Improvement' vs 'Poor' status. Aim for at least 75% of URLs to be 'Good' — but don’t expect miracles in ranking if your fundamentals (content, backlinks, E-E-A-T) are weak.
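The 75% target is a one-line computation once you have per-URL statuses, for instance copied out of the Page Experience report. The statuses below are made-up examples, assuming the three labels Search Console uses.

```python
def good_url_share(statuses: list[str]) -> float:
    """Fraction of URLs whose Core Web Vitals status is 'Good'."""
    return statuses.count("Good") / len(statuses)

# Hypothetical statuses for 8 URLs
statuses = ["Good", "Good", "Needs Improvement", "Good", "Poor", "Good", "Good", "Good"]
share = good_url_share(statuses)
print(f"{share:.0%} Good - target met: {share >= 0.75}")
```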
- Audit relevance and completeness of content before any technical optimization
- Prioritize critical UX friction points: navigation, CTAs, intrusive pop-ups
- Optimize Core Web Vitals by targeting real CrUX data, not just lab scores
- Never sacrifice useful content to artificially lighten the page
- Measure the impact on positions and engagement with a 4 to 8-week gap
- Use Search Console to track the evolution of 'Page Experience' status
❓ Frequently Asked Questions
Is page speed still a ranking factor in 2025?
Can a slow site still rank in the top 3?
Do Core Web Vitals have a direct impact on ranking?
Should you aim for a PageSpeed Insights score of 90+?
Does speed impact crawl budget?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 14 min · published on 27/07/2020
🎥 Watch the full video on YouTube →