Official statement
Google calculates the average position by counting only the best-ranked URL among all of your URLs that appear for a given query. If three of your pages rank at positions 5, 8, and 12, only position 5 is counted in Search Console. This logic radically changes how performance curves should be read and can mask SEO cannibalization issues that a manual audit would reveal.
What you need to understand
Why does this statement disrupt the reading of Search Console data?
Most SEO practitioners interpret the average position as an accurate summary of all URLs ranking for a query. This view is incorrect. Google does not average the positions of all your pages — it systematically chooses the best position to calculate the displayed metric.
Specifically, if your site places four different URLs in positions 3, 7, 11, and 15 for "accounting software for SMEs," the Search Console will only display position 3. The other three URLs exist in the SERPs, consume crawl budget, may dilute your topical authority — but they are invisible in the metric you check daily.
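To make the arithmetic concrete, here is a toy illustration (not Google's actual code) of the gap between the position Search Console reports and the true mean across all ranking URLs:

```python
# Toy illustration of the aggregation rule described above (not Google's code):
# for one query, four URLs rank at positions 3, 7, 11, and 15.
positions = [3, 7, 11, 15]

reported = min(positions)                     # what Search Console displays
true_mean = sum(positions) / len(positions)   # what many SEOs assume it shows

print(reported)   # 3
print(true_mean)  # 9.0
```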
How is this different from what SEOs thought until now?
Many experts assumed that Google averaged the positions, or that it counted each URL separately in trend curves. This clarification reveals that the Search Console adopts an optimistic logic: it shows you your best score, not your overall actual performance.
This approach skews the analysis of fluctuations. A drop in the average position may signal that your dominant URL has fallen — but it can also mask that a new cannibalizing URL has surged to position 4 while the old leader has dropped to position 6. The graph will show you a misleading stability around position 4-5.
How does this mechanism impact cannibalization diagnostics?
SEO cannibalization becomes partially invisible in aggregated metrics. You may have three URLs competing for the same query, with weekly rotations between positions 2, 5, and 9, and observe in the Search Console a stable average position around 2-3.
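A small simulation of that rotation scenario, using hypothetical weekly positions, shows how the reported metric stays flat while the underlying URLs trade places:

```python
# Hypothetical weekly positions of three competing URLs for one query.
# Each week the pages rotate, but the best position is always 2,
# so the reported average position never moves.
weekly_positions = [
    {"/page-a": 2, "/page-b": 5, "/page-c": 9},
    {"/page-a": 9, "/page-b": 2, "/page-c": 5},
    {"/page-a": 5, "/page-b": 9, "/page-c": 2},
]

for week, ranks in enumerate(weekly_positions, start=1):
    print(f"week {week}: reported={min(ranks.values())}, per-URL={ranks}")
# week 1: reported=2, per-URL={'/page-a': 2, '/page-b': 5, '/page-c': 9}
# week 2: reported=2, per-URL={'/page-a': 9, '/page-b': 2, '/page-c': 5}
# week 3: reported=2, per-URL={'/page-a': 5, '/page-b': 9, '/page-c': 2}
```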
To detect these situations, you need to export data at the URL level, query by query, and cross-reference with a crawler that identifies similar content. Third-party tools like Oncrawl or Botify often reveal rotation patterns that the Search Console smooths out completely.
- Google only retains the best position among all your URLs for a given query in the calculation of the average position.
- This metric masks cannibalization phenomena where multiple pages compete for the same keyword.
- Fluctuations in average position can be misleading if you do not track individual URLs.
- A granular export URL by URL remains essential to diagnose ranking rotations among multiple pages.
- Third-party tools that crawl your site often reveal content duplications that the Search Console does not directly report.
SEO Expert opinion
Is this statement consistent with field observations?
Yes, and it explains several anomalies that practitioners encounter regularly. Many SEO audits reveal sites with a flattering average position in the Search Console but an abnormally low click-through rate. The reason: multiple URLs rank for the same query, Google only displays the best one, but the user may see the version ranked at position 8 or 12.
I have observed cases where a client was tracking an average position of 3.2 for a strategic keyword, convinced he had a solid ranking, while five different URLs were alternating weekly between positions 2 and 15. The actual CTR was no more than 1.8%, whereas a stable position 3 should generate an 8-12% CTR. [To be verified]: Google does not document whether this logic also applies to click and impression data, or only to the average position.
What nuances should be added to Mueller's statement?
Mueller does not specify how Google handles multiple rich results. If your site occupies three placements at once (for example, positions 1, 2, and 3 via a featured snippet, a video result, and a classic blue link), which position is retained? The snippet's position 1, or the average of the three placements?
Another grey area: queries with mixed intents. Google sometimes displays two URLs from the same site to cover an informational intent (position 3) and a transactional intent (position 9). Search Console will retain position 3, but this does not reflect the actual user journey, in which the searcher sees two entries from your domain.
In what cases does this rule pose a problem for SEO analysis?
This mechanism makes historical comparisons risky. If you had a dominant URL at position 2 six months ago, then published new content that ranks at position 1 while the old page fell to position 7, the average position curve will show an improvement, even though you may have created an unnecessary duplicate.
SEO agencies need to rethink their client dashboards. Presenting only the average position without auditing multiple URLs per query amounts to ignoring half the diagnosis. I systematically recommend a weekly export of performance per URL, with an automatic flag when more than two URLs from the same domain rank for a high-volume query.
Practical impact and recommendations
What should you do concretely to audit this phenomenon?
Export Search Console data at the most granular level: query × URL × date. Filter the queries generating more than 100 monthly impressions, then identify those where at least two different URLs appear. A Python script with the Search Console API can automate this detection in 30 minutes of setup.
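A minimal sketch of such a script, assuming a service account already has read access to the property (the site URL and key file below are hypothetical placeholders), could look like this:

```python
# Minimal sketch: flag queries where several URLs of the same property rank.
# Assumes a service account with read access; SITE_URL and the key file
# are hypothetical and must be replaced with your own.
from collections import defaultdict
from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE_URL = "https://www.example.com/"  # hypothetical property

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
service = build("searchconsole", "v1", credentials=creds)

body = {
    "startDate": "2021-03-01",
    "endDate": "2021-03-31",
    "dimensions": ["query", "page"],
    "rowLimit": 25000,
}
resp = service.searchanalytics().query(siteUrl=SITE_URL, body=body).execute()
rows = resp.get("rows", [])

# Group pages by query, then keep queries with >= 100 impressions and >= 2 URLs.
pages_by_query = defaultdict(list)
impressions_by_query = defaultdict(int)
for row in rows:
    query, page = row["keys"]
    pages_by_query[query].append((page, row["position"]))
    impressions_by_query[query] += row["impressions"]

flagged = {
    q: sorted(pages, key=lambda p: p[1])
    for q, pages in pages_by_query.items()
    if impressions_by_query[q] >= 100 and len(pages) >= 2
}
for query, pages in flagged.items():
    print(query, pages)
```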
Cross-reference this data with a semantic crawl of your site. Tools like Screaming Frog with NLP integration or Oncrawl's dedicated modules detect pages that cover the same subject with a content similarity greater than 70%. If these pages rank for the same queries, you have a proven case of cannibalization.
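As an illustration of the similarity check (the page texts below are assumptions, and this TF-IDF cosine comparison is a stand-in, not Oncrawl's or Screaming Frog's internal method), the 70% threshold could be applied like this:

```python
# Hedged sketch: pairwise content similarity over crawled page text.
# `pages` (URL -> extracted main text) would come from your crawler export.
from itertools import combinations
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_pairs(pages, threshold=0.70):
    urls = list(pages)
    matrix = TfidfVectorizer().fit_transform([pages[u] for u in urls])
    sims = cosine_similarity(matrix)
    return [
        (urls[i], urls[j], round(float(sims[i, j]), 2))
        for i, j in combinations(range(len(urls)), 2)
        if sims[i, j] >= threshold
    ]

pages = {  # hypothetical crawl extract
    "/accounting-software-smb": "comparison of accounting software for small businesses ...",
    "/best-accounting-tools": "the best accounting software tools for SMEs compared ...",
}
print(similar_pairs(pages))  # pairs above the 70% similarity threshold
```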
What mistakes should be avoided in interpreting performance curves?
Never compare two periods solely based on the overall average position. An improvement from 8.2 to 6.7 may seem positive, but it can mask that a strategic URL has lost 15 positions while a secondary page has gained 25. The balance is mathematically positive, but the business impact is catastrophic.
Also, avoid relying on automated alerts from Search Console that signal a "position improvement." These notifications are based on the metric as Google calculates it — that is, the best position only. A positive alert may coincide with the emergence of a new cannibalizing URL that has just stolen your traffic.
How to restructure your editorial strategy to avoid these pitfalls?
Map your thematic clusters and assign a primary target query to each URL. Use a dashboard that cross-references search intent, the URL currently ranking, and the secondary URLs appearing for the same query. If two pages cover the same subject, decide which one becomes the canonical reference and consolidate the other via a 301 redirect, or rework it as complementary content.
Implement a strict editorial policy: every new publication must pass through a cannibalization filter. Before going live, check in Search Console and in your crawling tool whether an existing URL already covers that intent. If so, strengthen the existing page instead of creating a duplicate.
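As a sketch of such a filter (reusing the `service` object and `SITE_URL` from the export script above; the filter syntax follows the Search Console API's `dimensionFilterGroups`), you could list the URLs already ranking for a target query before publishing:

```python
# Hedged sketch of a pre-publish cannibalization check: list URLs that
# already rank for the target query, reusing `service` and SITE_URL above.
def urls_already_ranking(service, target_query, start_date, end_date):
    body = {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["page"],
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "query",
                "operator": "equals",
                "expression": target_query,
            }]
        }],
    }
    resp = service.searchanalytics().query(siteUrl=SITE_URL, body=body).execute()
    return [(row["keys"][0], row["position"]) for row in resp.get("rows", [])]

# If this returns any URL, strengthen that page instead of publishing a new one.
print(urls_already_ranking(service, "accounting software for SMEs",
                           "2021-03-01", "2021-03-31"))
```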
These optimizations require a solid command of APIs, semantic analysis tools, and the ability to interpret complex datasets. If your team lacks technical resources, or if your site exceeds 10,000 indexed URLs, it may be wise to rely on a specialized SEO agency with the dedicated workflows and tooling to automate these audits and prioritize quick-impact actions.
- Export Search Console data at the level of query × URL × date to identify queries where multiple URLs rank.
- Cross-reference these exports with a semantic crawl to detect similar content that cannibalizes.
- Set up a dashboard that automatically flags queries with more than two URLs from the same domain in the top 20.
- Revise the editorial strategy to assign a unique target query per URL and avoid future duplicates.
- Audit monthly CTR fluctuations against average positions to detect inconsistencies indicative of cannibalization.
- Consider consolidation via 301 redirects or redesign of content for internally competing pages.
❓ Frequently Asked Questions
If three of my URLs rank for the same query, which one does Google use to calculate the average position?
Does this calculation logic also apply to impressions and clicks in Search Console?
How can I detect whether several of my URLs are cannibalizing each other on the same query?
Does an improvement in average position always mean my SEO is progressing?
Should secondary URLs that rank for the same queries always be removed or redirected?