Official statement
Other statements from this video
Other SEO insights extracted from the same Google Search Central video (55 min, published 19/02/2021):
- 4:14 Why doesn't Search Console display all the data from your indexed sitemaps?
- 4:47 Do server errors really kill your crawl budget?
- 5:48 Does server response time really slow Google's crawl more than rendering speed?
- 7:24 Does Google really recognize syndicated content and favor the original?
- 10:36 Does Google really rely on geolocation to rank syndicated content?
- 14:28 How does Google really handle canonicalization and hreflang on multilingual sites?
- 16:33 Why does Google show the canonical URL instead of the local URL in Search Console?
- 18:37 Do you really need to localize every product page to avoid duplicate content?
- 20:11 Why does Google struggle to understand your hreflang tags on large international sites?
- 20:44 Do you really need a country-selection banner on a multilingual site?
- 21:45 How do you identify and fix low-quality content after a Core Update?
- 23:55 Is passage ranking really independent of featured snippets?
- 24:56 Are nofollow links in guest posts really mandatory for Google?
- 25:59 Are PBNs really detected and neutralized by Google?
- 27:33 Does the number of backlinks really not matter to Google?
- 28:37 Is duplicate content really harmless for your SEO?
- 29:09 Should you really worry if the homepage outranks internal pages?
- 29:40 Is internal linking really the top signal for prioritizing your pages?
- 31:47 Should you still disavow spammy links in SEO?
- 32:51 Can the disavow file penalize your site?
- 35:30 Do Core Web Vitals already affect your rankings, or only once they are activated?
- 36:13 Why does Google struggle to understand pages saturated with ads?
- 37:05 Should you really index fewer pages to avoid thin content?
- 52:23 Do traffic and social signals really influence organic search rankings?
- 53:57 Does an article's length really influence its Google ranking?
Google confirms that CrUX data, and therefore Core Web Vitals, are measured at the full origin level: protocol + subdomain + domain. Each subdomain is evaluated in isolation: if blog.example.com delivers excellent performance while www.example.com is slow, only the fast subdomain gets the boost. This granularity allows performance segmentation, but it complicates site-wide diagnosis.
What you need to understand
What do we actually mean by 'origin' in the context of Core Web Vitals?
In the world of browsers and web security, origin is defined by three components: the protocol (http or https), the full hostname (including the subdomain), and the port. For Core Web Vitals, Google strictly applies this logic through the Chrome User Experience Report (CrUX).
This means that https://www.example.com, https://blog.example.com, and https://shop.example.com are three distinct origins. Each collects its own user experience metrics, independently of the others. This separation is not a decision made by Google — it is a technical constraint of the web security model.
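To make this concrete, here is a minimal sketch, using only Python's standard library, that derives the origin of a URL. The URLs are illustrative; an explicit port, when present, stays in the origin as well.

```python
from urllib.parse import urlsplit

def origin(url: str) -> str:
    """Return the web origin of a URL: scheme + host (+ explicit port)."""
    parts = urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}"

urls = [
    "https://www.example.com/some/page",
    "https://blog.example.com/post",
    "https://shop.example.com/cart",
    "http://www.example.com/some/page",  # http vs https: a distinct origin
]
for u in urls:
    print(origin(u))
# Four distinct origins, hence four independent CrUX datasets.
```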
Why is there granularity at the subdomain level instead of the root domain?
CrUX collects real-user data from Chrome users worldwide. This data is aggregated by origin to comply with privacy standards and with the segmentation logic of browsers. Google did not build CrUX specifically for SEO; it is first and foremost a generic user experience measurement tool.
As a result, each subdomain runs its own technical stack, sometimes on its own hosting, and can therefore perform very differently. A WordPress blog on a modest VPS and a React application behind a premium CDN cannot be expected to post the same numbers. Granularity by origin reflects this technical reality.
How does this separation affect Google search rankings?
Google uses CrUX data as a ranking signal within the "Page Experience" framework. Each origin is evaluated separately, so one subdomain can receive a ranking boost related to Core Web Vitals while another subdomain within the same root domain does not benefit from this.
Let's be honest: this is not a dominant signal. In competitive contexts where other factors are roughly equal, though, this difference can tip the balance for a given position. A high-performing subdomain does not compensate for a lagging main subdomain if the majority of SEO traffic goes through the latter.
- Each subdomain is measured as a distinct entity for Core Web Vitals
- The protocol (http vs https) also matters — two different origins
- CrUX data is never aggregated at the root domain level
- A subdomain without sufficient Chrome traffic will not have actionable CrUX data
- This logic applies to both desktop and mobile, with separate datasets
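To illustrate the last two points, here is a hedged sketch that queries the CrUX API once per form factor for a single origin. The endpoint and response fields follow the public v1 API as I understand it; the API key is a placeholder to replace with your own.

```python
import requests  # third-party: pip install requests

API_KEY = "YOUR_CRUX_API_KEY"  # placeholder
ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def query_crux(origin: str, form_factor: str) -> dict:
    """Fetch the CrUX record for one origin and one form factor."""
    resp = requests.post(
        ENDPOINT,
        params={"key": API_KEY},
        json={"origin": origin, "formFactor": form_factor},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["record"]

# Desktop and mobile are separate datasets for the same origin.
for ff in ("PHONE", "DESKTOP"):
    record = query_crux("https://www.example.com", ff)
    lcp_p75 = record["metrics"]["largest_contentful_paint"]["percentiles"]["p75"]
    print(f"{ff}: p75 LCP = {lcp_p75} ms")
```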
SEO Expert opinion
Is this statement consistent with what’s observed on the ground?
Yes, and it's easily verifiable. If you query the CrUX API or check the PageSpeed Insights report for different subdomains of the same site, you will find totally independent metrics. One subdomain may show "Good" for LCP while another is "Needs Improvement" — and this is the case across thousands of e-commerce sites where the blog is on a separate subdomain.
What's less clear is the actual impact on ranking. Google remains very vague about the exact weight of the Page Experience signal. In my audits, I have seen sites with catastrophic Core Web Vitals on their main subdomain continue to rank well thanks to other strong signals (authority, content, backlinks). The boost exists, but it is not miraculous, and one caveat stands: Google has never published numerical data on the exact weighting of this signal.
What nuances should we add to this rule?
First point: if a subdomain does not receive enough Chrome traffic, it simply will not have CrUX data. Google requires a minimum volume of data collected over a rolling 28-day window. In that case, the subdomain is neither penalized nor favored; it is simply absent from the dataset.
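Programmatically, this case is easy to detect: the v1 CrUX API answers 404 NOT_FOUND for an origin below the collection threshold. A small sketch, assuming that documented behavior:

```python
import requests  # pip install requests

def has_crux_data(origin: str, api_key: str) -> bool:
    """True if CrUX holds a record for this origin; a 404 means the
    28-day sample is too small: neither penalized nor favored."""
    resp = requests.post(
        "https://chromeuxreport.googleapis.com/v1/records:queryRecord",
        params={"key": api_key},
        json={"origin": origin},
        timeout=10,
    )
    if resp.status_code == 404:
        return False
    resp.raise_for_status()  # surface auth or quota errors instead of hiding them
    return True
```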
Second nuance: individual pages can also have their own CrUX data if they generate enough traffic. However, the ranking signal generally applies at the origin level. If your origin is “Poor,” an isolated page with good performance does not suffice to overturn the trend for the entire subdomain.
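The API mirrors this distinction directly: the same endpoint accepts either a `url` key for a page-level record or an `origin` key for the origin-level aggregate. A sketch with placeholder values:

```python
import requests  # pip install requests

ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"
PARAMS = {"key": "YOUR_CRUX_API_KEY"}  # placeholder

# Field data for one specific page (exists only if that page has enough traffic).
page_resp = requests.post(ENDPOINT, params=PARAMS, timeout=10,
                          json={"url": "https://www.example.com/landing"})

# Field data aggregated over the whole origin, which the ranking signal generally uses.
origin_resp = requests.post(ENDPOINT, params=PARAMS, timeout=10,
                            json={"origin": "https://www.example.com"})
```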
When does this logic pose problems?
For multi-subdomain architectures, this is a headache. Imagine an e-commerce site with www for the catalog, account for the customer area, blog for editorial content, and m for mobile. Each has its own stack, its own hosting, and its own third-party scripts. The result: heterogeneous performance that translates into fragmented ranking signals.
This is where it gets tricky: you cannot compensate for a slow origin with a fast origin. If 80% of your SEO traffic comes through www and that subdomain is failing on Core Web Vitals, the fact that your blog is impeccable won’t save you. Each origin needs to be treated as a distinct optimization project.
Practical impact and recommendations
What should be done practically to optimize Core Web Vitals by subdomain?
Start by mapping your origins. List all subdomains that receive significant SEO traffic. For each one, query the CrUX API or use PageSpeed Insights to retrieve actual metrics. Focus first on the subdomains generating the most organic page views — that’s where the impact will be most direct.
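One way to sketch that mapping in Python, with hypothetical subdomains and traffic figures standing in for what you would pull from your analytics; the 2.5 s "Good" threshold for LCP follows Google's published definition:

```python
import requests  # pip install requests

API_KEY = "YOUR_CRUX_API_KEY"  # placeholder
ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

# Hypothetical map: origin -> monthly organic pageviews from your analytics.
ORIGINS = {
    "https://www.example.com": 800_000,
    "https://blog.example.com": 120_000,
    "https://shop.example.com": 60_000,
}

def p75_lcp_ms(origin: str) -> float | None:
    """p75 LCP for an origin on mobile, or None if CrUX has no record."""
    resp = requests.post(ENDPOINT, params={"key": API_KEY}, timeout=10,
                         json={"origin": origin, "formFactor": "PHONE"})
    if resp.status_code == 404:  # not enough Chrome traffic for this origin
        return None
    resp.raise_for_status()
    metrics = resp.json()["record"]["metrics"]
    return float(metrics["largest_contentful_paint"]["percentiles"]["p75"])

# Work in descending traffic order: fix the origins that matter most first.
for origin, views in sorted(ORIGINS.items(), key=lambda kv: -kv[1]):
    lcp = p75_lcp_ms(origin)
    verdict = "no CrUX data" if lcp is None else ("Good" if lcp <= 2500 else "Needs work")
    print(f"{origin:32} {views:>9} views | p75 LCP: {lcp} | {verdict}")
```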
Then, treat each origin as a distinct technical project. A slow WordPress blog requires specific optimizations (lazy loading, server cache, CDN for images). A React application on another subdomain will need code-splitting, strategic preloading, and JavaScript optimization. Do not look for a one-size-fits-all solution — each stack has its own bottlenecks.
What mistakes should be avoided in this context?
A classic mistake: blindly optimizing the wrong subdomain. I’ve seen teams spend weeks improving a staging subdomain or an old abandoned blog that receives no traffic, while the main subdomain remained catastrophic. Always check the distribution of organic traffic before prioritizing your efforts.
Another pitfall: believing that a good Lighthouse score is sufficient. Lighthouse tests in a controlled environment — Core Web Vitals are measured with real-world CrUX data. A site can score 95/100 on Lighthouse and be "Poor" in CrUX if real users have slow connections, weak devices, or interact with heavy elements that Lighthouse does not test.
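The gap is visible in a single PageSpeed Insights API call, which returns both the Lighthouse lab run and the CrUX field data side by side. A sketch assuming the v5 response shape; the key is a placeholder:

```python
import requests  # pip install requests

PSI = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"

resp = requests.get(PSI, params={
    "url": "https://www.example.com/",
    "strategy": "mobile",
    "key": "YOUR_API_KEY",  # placeholder
}, timeout=60)
resp.raise_for_status()
data = resp.json()

lab_score = data["lighthouseResult"]["categories"]["performance"]["score"] * 100
field = data.get("originLoadingExperience", {}).get("overall_category", "NO DATA")
print(f"Lighthouse (lab): {lab_score:.0f}/100 | CrUX origin (field): {field}")
# The two can disagree: the lab run simulates one device and network,
# while field data reflects real users over the trailing 28 days.
```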
How to ensure each subdomain is optimally configured?
Use Search Console: it aggregates CrUX data by origin and shows you which URLs are "Good", "Needs Improvement", or "Poor". This is your prioritization dashboard. For each failing origin, identify the problematic metrics (LCP, INP, CLS) and their root causes with tools like WebPageTest or Chrome DevTools in throttling mode.
Also monitor trends over 28 days. Core Web Vitals are rolling averages — an improvement is not reflected instantly. If you deploy a fix, wait at least 2-3 weeks before judging its effectiveness in CrUX. And keep an eye on regressions: a third-party script added by marketing can ruin your efforts in 24 hours.
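To track those rolling windows without clicking through dashboards, the CrUX History API exposes a weekly time series. A sketch, assuming the current response shape of the v1 `queryHistoryRecord` endpoint; key and origin are placeholders:

```python
import requests  # pip install requests

resp = requests.post(
    "https://chromeuxreport.googleapis.com/v1/records:queryHistoryRecord",
    params={"key": "YOUR_CRUX_API_KEY"},  # placeholder
    json={"origin": "https://www.example.com", "formFactor": "PHONE"},
    timeout=10,
)
resp.raise_for_status()
record = resp.json()["record"]

# One p75 value per weekly collection period, each covering a rolling 28 days.
p75s = record["metrics"]["largest_contentful_paint"]["percentilesTimeseries"]["p75s"]
print("p75 LCP, oldest to newest:", p75s)
# A fix deployed today surfaces only gradually, as old days roll out of the window.
```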
- Map all subdomains receiving significant organic traffic
- Query the CrUX API or PageSpeed Insights for each distinct origin
- Prioritize optimizations based on organic traffic volume by subdomain
- Treat each technical stack separately — no one-size-fits-all multi-origin solution
- Monitor the Search Console for origin-level regressions
- Wait 28 days after a deployment to assess real impact in CrUX
❓ Frequently Asked Questions
If I migrate my content from a slow subdomain to a fast one, do I immediately inherit the good metrics?
Is a subdomain without CrUX data penalized on Core Web Vitals?
Are http://exemple.com and https://exemple.com considered two distinct origins?
Can CrUX data from several subdomains be aggregated for a root-domain overview?
If my main subdomain has poor Core Web Vitals but my most important pages are individually good, am I protected?