Official statement
Google offers two tools to validate structured data: the Structured Data Testing Tool and the Rich Results Test. The latter provides a visual preview of potential display in the SERP, but Martin Splitt emphasizes a crucial point: there is no guarantee of actual display. Specifically, a validated test does not automatically mean that your rich results will appear in production — Google reserves the right to filter based on other undisclosed criteria.
What you need to understand
What is the real difference between these two testing tools?
The Structured Data Testing Tool primarily checks the syntactic compliance of your schema.org markup. It detects format errors and missing or incorrectly typed properties, and confirms that your code adheres to the technical specification. It’s a pure validator.
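To illustrate what a pure syntax validator does, here is a minimal Python sketch. The required-property table is illustrative only and does not reflect Google's actual eligibility rules:

```python
import json

# Hypothetical required properties per schema.org type (illustrative,
# not Google's real rich-result eligibility criteria).
REQUIRED = {
    "Product": {"name", "offers"},
    "Recipe": {"name", "image"},
    "FAQPage": {"mainEntity"},
}

def validate_jsonld(raw):
    """Return a list of syntax/structure errors for a JSON-LD block."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e}"]
    errors = []
    if data.get("@context") != "https://schema.org":
        errors.append("missing or unexpected @context")
    schema_type = data.get("@type")
    if schema_type is None:
        errors.append("missing @type")
    else:
        missing = REQUIRED.get(schema_type, set()) - data.keys()
        errors.extend(f"missing property: {p}" for p in sorted(missing))
    return errors

snippet = '{"@context": "https://schema.org", "@type": "Product", "name": "Widget"}'
print(validate_jsonld(snippet))  # → ['missing property: offers']
```

This is exactly the kind of check such a tool can perform, and exactly its limit: it can tell you that `offers` is missing, but not whether Google will ever render the result.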
The Rich Results Test goes further by simulating the visual rendering in search results. It shows you what your product card, recipe, or FAQ would look like if Google decided to display it. But beware — and this is where Splitt sets up an essential safeguard — this preview remains hypothetical.
Why doesn't Google guarantee the display of validated rich results?
Because technical validation is just one step in a much more complex decision pipeline. Google applies post-validation quality filters that are not publicly documented. Your markup can be perfect and never appear in production for reasons of relevance, content quality, or user behavior.
There are also variations based on query type, geographic location, and even device (mobile vs. desktop). A test that validates on Tuesday may produce a different display on Thursday, not because your code has changed, but because the selection algorithm has evolved or competition on the SERP has intensified.
Do these tools actually detect all blocking errors?
No, and this point is rarely highlighted. The tools test the markup in isolation, not the full context of the page. A property can be technically valid yet contradict the visible content, which may trigger a manual or algorithmic filter that the tools do not signal.
Moreover, some of Google's rules regarding excessive promotional content or misleading information cannot be detected by an automatic parser. A product with a blatantly incorrect price may pass the technical test yet still be filtered from the SERP.
- The tools validate syntax and structure, not relevance or truthfulness of content.
- A preview in the Rich Results Test does not constitute a commitment to display in production.
- Google applies quality and contextual filters after technical validation.
- Display variations can be geographical, temporal, or dependent on query type.
- Some strategic errors (misleading content, manipulation) escape automated validators.
SEO Expert opinion
Is this statement consistent with real-world observations?
Absolutely, and it's even one of the rare cases where Google is transparent about the limits of its tools. We regularly see in Search Console pages with perfect markup that never display rich results. Conversely, some pages with minor warnings enjoy them fully — confirming that the final decision rests on non-technical criteria.
Real A/B tests show that the rich snippets display rate varies between 30% and 80% depending on verticals, even with identical markup. This suggests that Google adjusts aggressively based on SERP competition, historical user behavior on the query, and probably quality metrics known only to the algorithm.
What nuances should be added to this statement?
Splitt does not specify which tool to prioritize according to the use case. In practice, the Rich Results Test is more restrictive: it only tests types of rich results eligible for visual display. If you're implementing schema.org for Knowledge Graph or semantic understanding reasons (without expecting a visual card), the Structured Data Testing Tool remains essential.
Another point: Google deprecated the Structured Data Testing Tool in 2020 and, after protests from the SEO community, handed it over to schema.org, where it lives on as the Schema Markup Validator. This back-and-forth shows that even internally the tooling strategy is not settled; cross-reference the two tools and never rely on a single verdict. [To be verified]: Google has never published official statistics on actual post-validation display rates; all market data comes from third-party studies.
In what cases does this rule not apply or become counterproductive?
If you are optimizing for highly competitive markets (fashion e-commerce, high-tech), relying solely on technical validation is naive. Competitors all have their own markup — the difference lies in price freshness, real-time stock availability, and behavioral signals (CTR, return rates).
Conversely, in low-competition niches, even imperfect markup can trigger rich snippets simply because Google has no credible alternative. The tool test then becomes a luxury — the essential element is to have a minimal, usable structure.
Practical impact and recommendations
What should you do concretely to maximize display chances?
First, validate with both tools, not just one. The Structured Data Testing Tool detects compliance errors that the Rich Results Test may overlook, and vice versa. Then monitor Search Console regularly: it shows the actual indexing status and any post-crawl issues.
But above all, do not stop at technical validation. Work on semantic consistency: if you mark a price at €9.99, ensure it matches the hard-coded price displayed in the visible HTML. If you report an average rating of 4.5/5, make sure that this rating is calculated on a real and updated basis. Google cross-references sources.
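That consistency check can be automated. Below is a minimal sketch (the page snippet, the euro-price regex, and the single-block assumption are all hypothetical, not a production scraper) that compares the price declared in JSON-LD with the price visible in the HTML:

```python
import json
import re

def extract_jsonld_price(html):
    """Pull the offer price out of the first JSON-LD block, if any."""
    m = re.search(
        r'<script type="application/ld\+json">(.*?)</script>',
        html, re.DOTALL,
    )
    if not m:
        return None
    data = json.loads(m.group(1))
    return data.get("offers", {}).get("price")

def extract_visible_price(html):
    """Naive scan for a euro price in the visible markup (illustrative)."""
    m = re.search(r'(\d+(?:\.\d{2})?)\s*€', html)
    return m.group(1) if m else None

page = '''
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Product",
 "name": "Widget", "offers": {"@type": "Offer", "price": "9.99"}}
</script>
<p class="price">9.99 €</p>
'''

assert extract_jsonld_price(page) == extract_visible_price(page)
print("structured data and visible price agree")
```

Running a check like this across your catalog catches exactly the kind of markup/content divergence that passes the testing tools but gets filtered downstream.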
What mistakes should be absolutely avoided?
Never consider that a validated test = guarantee of display. That's the classic pitfall of beginners who celebrate too early. The real test is the SERP in real conditions — and even there, display can fluctuate based on query context and user profile.
Also avoid piling every available schema type onto your pages in the hope of increasing your chances. Google prefers targeted, relevant markup to indiscriminate accumulation. An Article with Person, Organization, BreadcrumbList, and Review nested together may validate technically, but it creates confusion that harms algorithmic interpretation.
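To make the contrast concrete, here is a hypothetical sketch (all names and values are invented) of a focused Product block next to the kind of mixed nesting the paragraph warns against:

```python
import json

# Focused: one primary type, the properties that matter for the rich result.
targeted = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Widget",
    "offers": {"@type": "Offer", "price": "9.99", "priceCurrency": "EUR"},
}

# Technically valid too, but the stacked nesting blurs what the page is about.
overloaded = {
    "@context": "https://schema.org",
    "@type": "Article",
    "author": {"@type": "Person", "name": "A. Author"},
    "publisher": {"@type": "Organization", "name": "Example Corp"},
    "mainEntity": {
        "@type": "Review",
        "reviewRating": {"@type": "Rating", "ratingValue": "4.5"},
    },
}

print(json.dumps(targeted, indent=2))
```

Both blocks pass a syntax validator; only the first sends an unambiguous signal about the page's primary entity.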
How can I check that my site is really benefiting from rich results?
Use the Search Console: go to the "Enhancements" section then "Rich Results". It lists the detected types, the number of eligible URLs, actual impressions, and blocking errors. This is your source of truth — not the testing tools.
Then, conduct incognito real queries on your target keywords. Note the display variations based on location, device, and time. Document the patterns: some types of rich snippets (FAQ, How-to) appear more on mobile, while others (Product) favor desktop.
- Validate the markup with the Structured Data Testing Tool AND the Rich Results Test.
- Monitor the Search Console "Enhancements" section weekly.
- Check the consistency between structured data and visible content (price, ratings, availability).
- Test SERP display in real conditions (incognito, multiple devices, varied geolocations).
- Avoid schema overload: prioritize relevance over quantity.
- Document display fluctuations to identify algorithmic patterns.
❓ Frequently Asked Questions
Does the Rich Results Test definitively replace the Structured Data Testing Tool?
Does markup that validates in the tools but never appears in the SERP indicate a penalty?
Do structured data have a direct impact on organic ranking?
Should you implement JSON-LD, Microdata, or RDFa?
How long after validation does it take for rich results to appear in the SERP?
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · duration 4 min · published on 29/05/2019
🎥 Watch the full video on YouTube →