Official statement
Google requires that aggregate rating markup (AggregateRating) reflect <strong>all available reviews</strong>, without selection or curation. Isolated or personal testimonials must not be marked up with this Schema.org format. Practically speaking, if you cannot guarantee that your data is mathematically complete, you risk a manual action or the removal of star rich snippets in the SERPs.
What you need to understand
What is the difference between aggregate reviews and an isolated testimonial?
An aggregate review (AggregateRating in Schema.org) compiles dozens, hundreds, or thousands of customer reviews to derive an average rating and a total volume. An individual testimonial, even from a real customer, remains a personal opinion.
Google explicitly prohibits marking one or two selected testimonials with AggregateRating. The reason is simple: it would manipulate rich snippets by displaying stars for a biased sample. If you only have three reviews, you cannot claim a legitimate aggregate average.
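The distinction can be made concrete with the two JSON-LD shapes involved. Below is a minimal sketch built with Python's standard `json` module; the property names (`AggregateRating`, `ratingValue`, `reviewCount`, `Review`) follow Schema.org, while the product name, author, and numbers are hypothetical placeholders.

```python
import json

# AggregateRating: a statistical summary computed over ALL collected reviews.
aggregate = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Product",  # hypothetical product name
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": 4.2,   # average over the complete review set
        "reviewCount": 137,   # total number of reviews, unfiltered
    },
}

# Individual Review: one identified customer's opinion, never an aggregate.
single_review = {
    "@context": "https://schema.org",
    "@type": "Review",
    "author": {"@type": "Person", "name": "A. Customer"},  # hypothetical author
    "reviewRating": {"@type": "Rating", "ratingValue": 5},
}

print(json.dumps(aggregate, indent=2))
```

The key point: the first shape claims a statistic over a full dataset, the second claims only one opinion. A handful of hand-picked quotes belongs in the second shape, never the first.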
Why does Google require the complete set of reviews?
Google's goal is to display statistically representative data in the SERPs. If you select only the top five reviews out of a hundred, the average rating becomes misleading.
Users click on star rich snippets thinking they are viewing an objective summary. Any manipulation—even unintentional—destroys that trust and exposes the site to penalties: removal of rich snippets, or even manual action if Google detects fraudulent intent.
How do you technically define a complete set?
Google does not provide a precise numerical threshold. In practice, completeness means you must include all reviews collected through your platform, without any qualitative filtering or moderation other than that aimed at fake reviews or illegal content.
If you use a third-party solution (Trustpilot, Verified Reviews, Google Customer Reviews), the markup must point to the entirety of the data stream provided by that platform. No manual curation allowed: even 1-star reviews must be included in the calculation.
- Mandatory exhaustive aggregation: all available reviews must be counted in ratingValue and reviewCount.
- Isolated testimonials prohibited in AggregateRating: use individual Review markup instead if relevant.
- No qualitative filtering: you cannot exclude negative reviews to artificially improve the average.
- Third-party sources: if you delegate collection, the markup must reflect the entirety of the dataset provided by the platform.
- Possible penalties: removal of rich snippets or manual action if non-compliance is detected.
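The exhaustive-aggregation rule above can be sketched as a single function: the aggregate is computed from the complete list of ratings, and no markup is emitted at all when there are no reviews. This is a minimal illustration, not an official Google implementation; the function name and the idea of returning `None` for an empty set are my own choices.

```python
def build_aggregate_rating(ratings):
    """Compute ratingValue/reviewCount over the COMPLETE review set.

    `ratings` must contain every rating collected, including 1-star
    reviews. The only legitimate exclusions (spam, illegal content)
    happen upstream, before this list is built.
    """
    if not ratings:
        return None  # no reviews -> emit no AggregateRating markup at all
    return {
        "@type": "AggregateRating",
        "ratingValue": round(sum(ratings) / len(ratings), 1),
        "reviewCount": len(ratings),
    }

print(build_aggregate_rating([5, 4, 1, 3, 5]))
```

Note that the 1-star rating pulls the average down: that is exactly what the completeness rule requires.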
SEO Expert opinion
Is this rule consistent with practices observed in the field?
Yes and no. Google has always shown a de facto tolerance for sites that mark subsets of reviews, as long as the volume remains statistically credible. In practice, many e-commerce sites display stars based on 30-50 reviews while having received 300.
The problem is that no official documentation specifies the acceptable threshold. John Mueller says "complete set", but Google does not systematically penalize moderate deviations. This ambiguity creates a legal and SEO risk: you could lose your rich snippets overnight without warning. [To be verified]: the line between legitimate selection and manipulation remains unclear.
What nuances should be added to this statement?
Mueller's position is binary: completeness or nothing. In reality, however, several legitimate use cases require some form of scoping. For instance, if you sell a product in multiple variants (colors, sizes), it makes sense to aggregate only the reviews relevant to each SKU.
Similarly, multi-vendor platforms (marketplaces) sometimes need to isolate reviews by seller or category. Google has never clarified if these segmentations are allowed. My interpretation: as long as the scope is transparent and coherent, the risk remains limited. But this is just a field interpretation, not a guarantee.
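One way to keep such segmentation transparent and coherent is to aggregate per scope while remaining exhaustive inside each scope: every review for a given SKU is counted, none is dropped on quality. The sketch below illustrates that interpretation; as stated above, this is a field interpretation, not something Google's documentation explicitly endorses, and the data shape is hypothetical.

```python
from collections import defaultdict

def aggregates_by_sku(reviews):
    """Group reviews by SKU and aggregate each scope independently.

    Each review is a (sku, rating) pair. The segmentation is by scope
    only: within a SKU, every review is included in the average.
    """
    by_sku = defaultdict(list)
    for sku, rating in reviews:
        by_sku[sku].append(rating)
    return {
        sku: {"ratingValue": round(sum(r) / len(r), 1), "reviewCount": len(r)}
        for sku, r in by_sku.items()
    }

reviews = [("TEE-RED", 5), ("TEE-RED", 4), ("TEE-BLUE", 2)]
print(aggregates_by_sku(reviews))
```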
In what cases does this rule not apply?
If you publish editorial testimonials (client interviews, case studies), you should never use AggregateRating. Instead, use individual Review markup with an identified author, or better yet: no markup at all if the content is purely promotional.
B2B sites that display 3-4 client logos with a quote cannot claim star rich snippets. Google considers them as marketing content, not as a corpus of customer reviews. Trying to force the markup exposes you to almost certain manual action.
Practical impact and recommendations
What concrete steps should be taken to be compliant?
First step: audit your existing Schema.org markup. Open Google's rich results testing tool and verify that your AggregateRating correctly references the entirety of your review stream. If you use a third-party platform, ensure the API returns 100% of the reviews, not a pre-filtered subset.
Second step: remove any AggregateRating markup on testimonials. If you display 2-3 client quotes on your homepage, immediately remove the Schema. You can keep the visible content, but without structured markup. Rich snippets are not an acquired right: they are earned through technical compliance.
What mistakes should be absolutely avoided?
Do not attempt to artificially inflate the reviewCount to reach a psychological threshold (100 reviews, 500 reviews). Google cross-checks Schema data with behavioral signals: if your product pages display 200 reviews but no one clicks on the “See all reviews” link, the inconsistency will eventually trigger a flag.
Another frequent mistake: marking AggregateRating for reviews collected outside your control perimeter. If you aggregate reviews from Amazon, Google Reviews, and Trustpilot, technically you must include all of them. Choosing only Trustpilot because the rating is better constitutes a direct violation of the completeness rule.
How can I verify that my site complies with this rule?
Install a Schema.org monitoring tool (like Schema App or a custom script) that continuously compares the reviewCount declared in the JSON-LD with the actual number of reviews stored in your database. Any divergence should trigger an alert.
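The core of such a custom monitoring script is a single comparison. The sketch below assumes the JSON-LD string has already been extracted from the page and the actual count already queried from the database; the function name `review_count_divergence` is mine, while the field names (`aggregateRating`, `reviewCount`) follow Schema.org.

```python
import json

def review_count_divergence(jsonld_str, db_review_count):
    """Compare the reviewCount declared in a page's JSON-LD with the
    actual number of reviews in the database.

    Returns the signed difference (declared - actual); any non-zero
    value should trigger an alert.
    """
    data = json.loads(jsonld_str)
    declared = data.get("aggregateRating", {}).get("reviewCount", 0)
    return declared - db_review_count

snippet = '{"@type": "Product", "aggregateRating": {"@type": "AggregateRating", "reviewCount": 200}}'
print(review_count_divergence(snippet, 137))  # positive: markup overstates the count
```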
Also, test your pages in Search Console, under Enhancements > Review snippets. If Google detects an inconsistency, you will receive a warning before any sanction. Never ignore it: it is your last chance to correct the issue before the rich snippets are removed.
- Audit the existing Schema markup with Google's testing tool
- Verify that the third-party API returns 100% of reviews without filtering
- Remove any AggregateRating on isolated or editorial testimonials
- Implement automated monitoring of reviewCount vs database
- Monitor the Search Console warnings in the Enhancements section
- Train marketing teams never to manually modify JSON-LD
❓ Frequently Asked Questions
Can I mark up with AggregateRating only the customer reviews left on my own site, excluding those from Amazon?
What is the minimum number of reviews for an AggregateRating to be considered legitimate?
Must reviews moderated for spam or illegal content be included in the reviewCount?
If I lose my star rich snippets, how long does it take to recover them after fixing the issue?
Can AggregateRating be used for internal scores (employee satisfaction, NPS)?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 59 min · published on 06/03/2018