Official statement
Google is happy to communicate its general goals (relevant content, speed, user experience) but keeps the exact weighting of each factor a secret. According to Martin Splitt, disclosing these weights would encourage gaming the system and trigger a race to the bottom that would impoverish the ecosystem. For SEOs, this means prioritizing a holistic approach rather than searching for a magic formula.
What you need to understand
What is Google really looking for by refusing to reveal the exact weights?
The logic is simple: Google wants to avoid webmasters optimizing solely for the algorithm instead of for the user. If tomorrow Mountain View announced that speed accounts for 15% and backlinks for 22%, every site would focus exclusively on those two levers.
The result? A uniform, predictable web that is easy to manipulate. Sites would all look alike, mechanically applying a checklist without caring about the actual experience. Google would lose its ability to discern quality, and users would inherit a poorer web.
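To make the mechanism concrete, here is a minimal sketch of a hypothetical weighted scoring model. The signals, weights, and numbers are invented for illustration (Google has never published such a formula); the point is only the incentive that published weights would create.

```typescript
// Purely hypothetical illustration: Google has never published such a formula
// or such weights. The point is the incentive, not the numbers.

type Signals = {
  relevance: number;   // 0..1, expensive to improve, hard to fake
  speed: number;       // 0..1, cheap to optimize mechanically
  backlinks: number;   // 0..1, easy to inflate with link farms
  experience: number;  // 0..1, expensive to improve, hard to fake
};

// Assumed public weights: the "speed 15%, backlinks 22%" scenario above.
const publicWeights: Signals = { relevance: 0.45, speed: 0.15, backlinks: 0.22, experience: 0.18 };

function score(site: Signals, w: Signals): number {
  return (
    site.relevance * w.relevance +
    site.speed * w.speed +
    site.backlinks * w.backlinks +
    site.experience * w.experience
  );
}

const site: Signals = { relevance: 0.5, speed: 0.6, backlinks: 0.3, experience: 0.6 };

// Cheap play: max out the two signals whose weights were published.
const gamed: Signals = { ...site, speed: 1.0, backlinks: 1.0 };
// Expensive play: genuinely better content, a modest relevance gain.
const improved: Signals = { ...site, relevance: 0.6 };

console.log(score(site, publicWeights).toFixed(3));     // 0.489
console.log(score(gamed, publicWeights).toFixed(3));    // 0.703 <- biggest jump, content untouched
console.log(score(improved, publicWeights).toFixed(3)); // 0.534
```

With published weights, the gamed variant gains the most score for the least real improvement, which is exactly the dynamic Splitt says Google wants to avoid.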
How does this limited transparency protect the ecosystem?
Splitt talks about a "race to the bottom", a phrase that deserves attention. Imagine Google revealed that loading time counts for 10% of the score. Sites would sacrifice visual richness, advanced features, anything that weighs down the page, even if it meant publishing thin but ultra-fast pages.
The same goes for backlinks: if Google displayed an exact weight, link farms would explode again. We would see a return to the excesses of the 2000s, with thousands of spammy links planted mechanically. Opacity prevents simplistic reasoning and forces consideration of the overall balance.
Is this stance compatible with a stated desire for transparency?
Google publishes more and more — Search Console Insights, Core Web Vitals, official documentation — but never provides the complete formula. It’s directed transparency: we know the ingredients, but not the proportions.
Some see this as hypocrisy. Others, including Splitt, believe it’s the only way to preserve a healthy ecosystem. The debate remains open, but the reality on the ground confirms that Google will not change course on this point.
- General objectives: Google communicates about the major families of factors (relevance, speed, authority) without quantifying their weight.
- Gaming the system: Revealing exact weights would encourage mechanical optimization at the expense of user experience.
- Race to the bottom: Complete transparency would push sites to favor signals that are easy to manipulate, impoverishing the web.
- Directed transparency: Google provides indicators (CWV, E-A-T, etc.) but keeps the weighting under lock and key.
SEO Expert opinion
Is this statement consistent with observed practices on the ground?
Yes, and it is actually one of the rare statements from Google that matches reality perfectly. For years, Mountain View has published guidelines, recommendations, and measurement tools, but never a transparent scoring formula. The most refined SEO audits rely on correlations, A/B tests, and reverse engineering, not on official weights.
The absence of a public formula has created an ecosystem of experts who test, observe, and share. This is precisely what Google wants: a market built on expertise rather than on a magic recipe. Let's be honest: if the weights were public, there would be no need for experienced SEOs, just a spreadsheet and a calculator.
What nuances should be added to this position?
Splitt's discourse implies that all webmasters are potential cheaters. That bias is understandable from Google's side, since it has to handle millions of manipulation attempts every day. But in practice, many sites are simply trying to understand what is expected of them.
Opacity also creates perverse effects: a proliferation of contradictory advice, dependence on third-party tools (which sell “scores”), and legitimate frustration among honest publishers. Google could probably be more precise about certain thresholds — for example, clarifying what constitutes a “good” LCP beyond simply “< 2.5 s.” [To be verified]: Google claims to protect the ecosystem but never publicly measures whether this opacity actually produces better user results.
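For context, the 2.5-second threshold comes from the published Core Web Vitals guidance, and LCP can be observed on real visits with the browser's standard PerformanceObserver API. Below is a minimal sketch; the rating labels mirror the documented "good" / "needs improvement" / "poor" buckets, and the console.log is a stand-in for whatever reporting you actually use.

```typescript
// Minimal field measurement of Largest Contentful Paint (LCP) in the browser.
// Published Core Web Vitals buckets: good <= 2.5 s, needs improvement <= 4 s, poor beyond.
const lcpObserver = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const latest = entries[entries.length - 1]; // the most recent LCP candidate
  if (!latest) return;
  const lcpMs = latest.startTime;             // milliseconds since navigation start
  const rating =
    lcpMs <= 2500 ? 'good' : lcpMs <= 4000 ? 'needs improvement' : 'poor';
  console.log(`LCP candidate: ${Math.round(lcpMs)} ms (${rating})`);
});

// `buffered: true` replays LCP entries emitted before this script ran.
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```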
In what cases does this rule not apply?
Paradoxically, Google has already revealed precise weights in a few cases — notably for Core Web Vitals, where we know that the “page experience signal” has a measured impact (and that it is minor compared to relevance). Similarly, nofollow links had a binary function until Google switched to a “hint” model.
These exceptions confirm the rule: Google only discloses a weight when it believes that this transparency serves its interests (rapid adoption, standardization of the web). But for the core of the algorithm — relevance, authority, freshness signals — opacity remains total.
Practical impact and recommendations
What should be done practically in the face of this opacity?
First rule: stop looking for the magic formula. If you optimize your site on the assumption that one day you will discover "the" right mix, you are wasting your time. Google will never tell you, and even if it did, it would change the weights three months later.
Second rule: adopt a multi-signal approach. Rather than betting everything on speed or backlinks, work on all the pillars simultaneously: technical, content, authority, experience. The sites that perform sustainably are those with no gaping holes in any area, not those that excel at a single lever.
What mistakes should be avoided in this context?
The classic mistake: over-optimizing one signal at the expense of the others. I have seen sites sacrifice their editorial richness to shave 0.2 seconds off their LCP, then wonder why they lost traffic. Google does not rank sites on a single criterion; it looks for overall balance.
Another pitfall: blindly relying on third-party tools that assign scores. Ahrefs, SEMrush, Moz have their metrics (DR, DA, etc.), but these are just correlations. No external tool knows Google’s formula. Use these scores as relative indicators, never as absolute values.
How can I verify that my site remains competitive without knowing the exact weights?
Benchmark your direct competitors on all axes simultaneously: speed, technical structure, content depth, link profile, user signals. If you are consistently outperformed on 3 out of 5 axes, you have your action plan — regardless of the exact weights.
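As a rough illustration of that benchmark, here is a minimal sketch that tallies on how many axes the competitor median beats you. The axes, scores, and the normalization to a 0-100 scale are assumptions made for the example (whatever your own audits produce for each axis), not an official scoring model.

```typescript
// Illustrative gap analysis: scores are whatever your own audits produce
// (Lighthouse, crawl depth, word counts, referring domains, CTR, ...),
// normalized to a comparable 0-100 scale. The numbers below are made up.
type AxisScores = Record<string, number>;

const mySite: AxisScores = { speed: 72, technical: 85, content: 60, links: 40, userSignals: 55 };

const competitors: AxisScores[] = [
  { speed: 80, technical: 70, content: 75, links: 65, userSignals: 60 },
  { speed: 65, technical: 78, content: 80, links: 70, userSignals: 58 },
];

function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// An axis is a critical gap when the competitor median beats your score on it.
const gaps = Object.keys(mySite).filter((axis) => {
  const competitorMedian = median(competitors.map((c) => c[axis]));
  return competitorMedian > mySite[axis];
});

console.log(`Outperformed on ${gaps.length}/5 axes: ${gaps.join(', ')}`);
// -> "Outperformed on 4/5 axes: speed, content, links, userSignals"
```

Feed it your real audit numbers and the list of gaps becomes your action plan, regardless of the exact weights Google applies.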
Keep testing continuously. Modern SEO is all about test & learn. Change one variable, measure the impact over 4-6 weeks, then adjust. Google’s opacity forces you to become empirical rather than theoretical — and that may be a good thing.
- Audit all SEO pillars (technical, content, popularity, UX) instead of focusing on a single signal.
- Compare performance with competitors on each axis to identify critical gaps.
- Avoid over-optimizing one criterion (e.g., extreme speed at the expense of content richness).
- Regularly test changes over periods of 4-6 weeks to measure real impact.
- Use third-party scores (DR, DA, etc.) as relative indicators, never as absolute targets.
- Maintain active monitoring of Google’s official guidelines to anticipate priority changes.
❓ Frequently Asked Questions
Why doesn't Google publish the exact weighting of its ranking factors?
Has Google ever revealed precise weights for certain factors?
Do third-party tool scores (Ahrefs DR, Moz DA) reflect Google's real weights?
What is the best SEO strategy in the face of this opacity?
Doesn't this opacity create a dependence on SEO experts and paid tools?
🎥 Source: Google Search Central video, published on 06/05/2021.