Official statement
Google states that the average position displayed in Search Console results from a calculation that includes all variations in position according to various contextual factors. In practice, this metric aggregates heterogeneous data that can obscure significant performance discrepancies between audience segments. For an SEO, relying blindly on this average is akin to driving with a truncated dashboard: it’s essential to segment by device, location, and query to grasp the reality.
What you need to understand
What does this "average" calculated by Google really mean?
The average position presented in Search Console corresponds to an arithmetic aggregation of all the positions held by a given page, regardless of the search context. Google calculates this average by considering factors such as the device used (mobile vs desktop), the geographical location of the user, the personalization of results, and the search history.
As a result: a page may appear in position 2 for a mobile user in Paris and in position 12 for a desktop user in Marseille. The average displayed smooths out these discrepancies and may show position 7, which does not reflect either of the two real-life scenarios. This aggregation becomes problematic as soon as we try to analyze the actual performance of a URL across a specific segment.
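The smoothing effect is easy to reproduce. A minimal sketch, using the illustrative Paris/Marseille numbers from the example above (not real Search Console data):

```python
from statistics import mean

# Illustrative per-impression positions for one URL (numbers from the example above)
impressions = [
    {"device": "mobile", "location": "Paris", "position": 2},
    {"device": "desktop", "location": "Marseille", "position": 12},
]

# The global figure an equal-weight aggregation would report
global_avg = mean(i["position"] for i in impressions)
print(global_avg)  # 7.0: matches neither real-world context

# Per-segment averages recover the two real scenarios
by_device = {}
for imp in impressions:
    by_device.setdefault(imp["device"], []).append(imp["position"])
for device, positions in by_device.items():
    print(device, mean(positions))
```

Filtering by device in Search Console performs the equivalent of the per-segment grouping above.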
What factors cause this position to vary from one query to another?
Google mentions "several factors" without being exhaustive, but practical experience identifies at least five major variables. The device remains the most obvious: a mobile-first site may rank higher on smartphones. Geolocation greatly influences local intent queries, even without explicit geographical mention in the query.
Personalization — search history, language preferences, behavioral signals — creates result bubbles. SERP features (featured snippets, People Also Ask, carousels) shift organic positions and further distort the average. Finally, temporal fluctuations — algorithm updates, freshness spikes — generate variations that the average completely flattens.
Why can this metric be misleading?
Because an average always conceals more than it reveals. A site in position 3 on mobile and 18 on desktop displays an average position of 10.5 — a figure that corresponds to no real user experience. Worse, if 80% of traffic comes from mobile, this average gives equal weight to both segments while one is marginal.
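A traffic-weighted average, a hypothetical correction that Search Console does not compute, makes the distortion explicit. The numbers reuse the 80/20 example above:

```python
# Positions and traffic shares from the example above (illustrative values)
segments = {
    "mobile":  {"position": 3,  "traffic_share": 0.80},
    "desktop": {"position": 18, "traffic_share": 0.20},
}

# Unweighted mean, as an equal-weight aggregation would report it
unweighted = sum(s["position"] for s in segments.values()) / len(segments)

# Weighting by actual traffic share reflects what users really experience
weighted = sum(s["position"] * s["traffic_share"] for s in segments.values())

print(unweighted)          # 10.5
print(round(weighted, 1))  # 6.0: dominated by the mobile segment (80% of traffic)
```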
This metric also encourages a simplistic view of ranking. A practitioner who observes "average position stable at 8" may believe that nothing is changing, whereas in reality mobile dropped from 5 to 12 while desktop rose from 11 to 4. These inverse movements neutralize each other in the average, masking critical structural changes.
- The average position aggregates radically different search contexts (device, geo, personalization)
- The same average position can mask gaps of 10+ positions between segments
- SERP features and carousels distort the calculation by shifting organic positions
- The equal weighting of all contexts ignores the actual traffic distribution
- Temporal variations and Google's A/B tests create statistical noise
SEO expert opinion
Does this statement align with observed practices in the field?
Yes, though the statement remains characteristically guarded. Google confirms what every SEO experiences daily: the average position is an unstable metric that mixes apples and oranges. What Google does not say — and here’s where the issue lies — is how exactly these different factors are weighted in the calculation.
Field observations suggest that not all impressions carry the same weight. Does an impression in position 50 count as much as one in position 3 in the calculation? [To be verified] Google never specifies. Likewise, there’s no indication on how impressions without clicks are treated: are they included in the average even if the URL was technically visible but out of viewport?
What nuances should be added to this official statement?
The wording "the average position can vary" is almost euphemistic. In reality, for competitive queries, the gaps between segments can reach 15 to 20 positions. An e-commerce site will often have radically different positions between mobile (where Google favors PWAs, UX, speed) and desktop (where long content and internal linking weigh more).
Another critical nuance: Google does not mention the impact of SERP tests that it continuously deploys. During a test, a portion of users sees a modified SERP, creating artificial position variations. These parasitic fluctuations integrate into the average without being identifiable or filterable. [To be verified] No official documentation confirms if these tests are reported or excluded from the calculation.
In what cases does this metric become completely unusable?
Whenever we deal with queries with high positional variance. Typically: local queries without explicit geographical mention (e.g., "Italian restaurant" may rank position 1 in Lyon, 47 in Bordeaux), ambiguous queries where Google tests different intents, seasonal keywords where position varies by 30 positions between high and low seasons.
Multilingual or multi-regional sites are also trapped. If your page ranks position 2 in France, 8 in Belgium, 15 in Switzerland, the average at 8.3 means absolutely nothing for driving optimization by market. In these cases, the average position becomes a statistical artifact with no operational value.
Practical impact and recommendations
How should we analyze the average position to keep it actionable?
The first rule: never look at this metric at a global level. Systematically filter by device in Search Console — mobile vs desktop often reveals gaps of 5 to 10 positions. If your traffic is 75% mobile and you rank poorly in that segment, the inflated desktop average masks the real problem.
Next, segment by type of query: brand vs generic, navigational vs informational. Brand queries often show stable positions 1-2, while generic ones fluctuate from 5 to 20. Aggregating the two dilutes the analysis. Use regex query filters to isolate semantic clusters and observe positions by search intent.
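The regex clustering can be sketched as follows; the brand term `acme` and the local-intent patterns are hypothetical placeholders to replace with your own query vocabulary:

```python
import re

# Hypothetical patterns: swap "acme" for your actual brand terms
BRAND_RE = re.compile(r"\bacme\b", re.IGNORECASE)
LOCAL_RE = re.compile(r"\b(near me|restaurant|paris|lyon)\b", re.IGNORECASE)

def classify(query: str) -> str:
    """Assign a query to a coarse semantic cluster for segmented analysis."""
    if BRAND_RE.search(query):
        return "brand"
    if LOCAL_RE.search(query):
        return "local"
    return "generic"

queries = ["acme shoes", "best running shoes", "shoe store near me"]
print({q: classify(q) for q in queries})
# {'acme shoes': 'brand', 'best running shoes': 'generic', 'shoe store near me': 'local'}
```

The same patterns can be pasted into Search Console's regex query filter to isolate each cluster before reading its average position.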
What misinterpretation errors should absolutely be avoided?
The classic error: celebrating an "improvement of the average position from 12 to 9" without examining the underlying distribution. The gain may come from a single brand keyword that moved from position 5 to 1 while 20 generic queries slipped from 8 to 15. The headline improvement masks a structural erosion of visibility.
Another frequent pitfall: comparing average positions between two competing pages on the same query. If page A shows an average position of 7 and page B position 5, it does not mean B is performing better. Perhaps A dominates mobile (80% of traffic, position 4) but is weak on desktop, while B does the opposite. The CTR and real traffic are the only metrics that truly matter in the end.
What methodology should be adopted for reliable SEO management?
Build your own segmented dashboards. Regular exports from Search Console with device + country + query type segmentation. Cross-reference this data with Google Analytics 4 to verify that positions translate into real traffic. A rising average position without an increase in sessions signals a CTR or cannibalization issue.
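A minimal sketch of that cross-check, assuming the Search Console and GA4 exports have already been joined per URL (the field names here are illustrative, not the actual export schema):

```python
# Hypothetical merged rows: average position from Search Console and
# sessions from GA4, for the same pages over two comparable periods
pages = [
    {"url": "/guide", "pos_before": 12.0, "pos_after": 9.0,
     "sessions_before": 400, "sessions_after": 395},
    {"url": "/blog",  "pos_before": 8.0,  "pos_after": 7.5,
     "sessions_before": 300, "sessions_after": 410},
]

# Flag pages whose position improved (numerically lower) but whose traffic
# did not follow: a possible CTR or cannibalization issue, per the text above
suspicious = [
    p["url"] for p in pages
    if p["pos_after"] < p["pos_before"] and p["sessions_after"] <= p["sessions_before"]
]
print(suspicious)  # ['/guide']
```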
Set up position tracking on a panel of strategic queries via third-party tools (SEMrush, Ahrefs, Ranks). These tools track daily positions by device and precise geolocation, eliminating personalization biases. Compare this data with Search Console to identify discrepancies and refine your understanding of contextual variations.
- Systematically filter average position by device (mobile vs desktop) in Search Console
- Segment queries by semantic cluster and intent (brand, generic, local, informational)
- Cross-reference average position with CTR and real traffic to detect inconsistencies
- Use third-party tools to track precise positions by geo and device on a panel of strategic keywords
- Analyze the distribution of positions rather than the average: quartiles, median, standard deviations
- Compare position changes with algorithm deployments and content updates
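The last recommendation, analyzing the distribution rather than the average, can be sketched with the standard library; the daily positions below are illustrative:

```python
from statistics import mean, median, quantiles, stdev

# Illustrative daily positions for one keyword over two weeks
positions = [3, 4, 3, 5, 4, 18, 17, 3, 4, 19, 3, 4, 18, 3]

q1, q2, q3 = quantiles(positions, n=4)  # quartiles (default exclusive method)
print(f"mean={mean(positions):.1f} median={median(positions)} "
      f"q1={q1} q3={q3} stdev={stdev(positions):.1f}")
# The bimodal spread (positions ~3-5 vs ~17-19) is invisible in the mean alone
```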
❓ Frequently Asked Questions
Does Search Console's average position include impressions where my page was in position 50+?
Why is my average position improving while my organic traffic is declining?
Are SERP features such as featured snippets counted in the average position?
Should you rely on the average position or the median position to analyze performance?
How can you tell whether a change in average position is significant or just statistical noise?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 7 min · published on 28/10/2019