What does Google say about SEO?

Official statement

FCP and LCP do not necessarily happen simultaneously, especially on devices with slow CPUs or slow network connections. Lighthouse run locally can yield different results than PageSpeed Insights executing in the cloud because the testing conditions vary.
🎥 Source video

Extracted from a Google Search Central video

⏱ 28:49 💬 EN 📅 01/07/2020 ✂ 23 statements
Watch on YouTube (18:18) →
TL;DR

First Contentful Paint and Largest Contentful Paint do not occur at the same moment, especially on slow connections or weak CPUs. Lighthouse run locally and PageSpeed Insights in the cloud test in different environments, which explains the measurement discrepancies. For reliable diagnosis, it's essential to cross-reference lab and field data sources rather than rely on a single tool.

What you need to understand

What is the fundamental difference between FCP and LCP?

First Contentful Paint marks the moment when the browser renders the first visible content element — text, image, canvas, SVG. It's an indicator of initial responsiveness perceived by the user.

Largest Contentful Paint, on the other hand, measures when the largest visible element in the viewport finishes rendering. This element can be a hero image, a main text block, or a video. LCP better captures when the main content actually becomes viewable.

Thus, these two metrics do not measure the same thing. On a well-optimized site with progressive rendering, FCP can occur at 0.8s while LCP may take until 2.3s. On a 3G mobile network with a limited CPU, this gap widens further: the HTML parser quickly delivers lightweight content (FCP), but loading and decoding a large hero image (LCP) can take several additional seconds.
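You can watch the two metrics fire at different moments with the standard `PerformanceObserver` API. This is a minimal sketch meant for a browser context (e.g. pasted into the DevTools console); the `latestLcp` helper is just an illustrative name, and the observer wiring is guarded so it only runs where these entry types exist:

```javascript
// Pick the most recent LCP candidate: the browser keeps emitting
// larger candidates until first user input, and the last one wins.
function latestLcp(entries) {
  return entries.length ? entries[entries.length - 1] : null;
}

// Browser-only wiring: 'paint' and 'largest-contentful-paint' entry
// types exist only in browsers, so guard for other runtimes.
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    for (const e of list.getEntries()) {
      if (e.name === 'first-contentful-paint') {
        console.log('FCP at', Math.round(e.startTime), 'ms');
      }
    }
  }).observe({ type: 'paint', buffered: true });

  new PerformanceObserver((list) => {
    const entry = latestLcp(list.getEntries());
    if (entry) console.log('LCP candidate at', Math.round(entry.startTime), 'ms');
  }).observe({ type: 'largest-contentful-paint', buffered: true });
}
```

On a page with a heavy hero image, you will typically see the FCP log appear first, then one or more LCP candidate logs arriving later — the gap discussed above, made visible.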

Why do Lighthouse locally and PageSpeed Insights show different results?

Lighthouse running locally uses your machine and your network connection. If you're testing from a MacBook Pro M2 on fiber, you're simulating a high-end desktop environment. Even with simulated throttling, some performance aspects aren’t accurately replicated.

PageSpeed Insights runs in the Google cloud, on standardized server configurations, with more realistic network and CPU throttling. The testing conditions are controlled but don't always reflect your actual infrastructure. As a result, PSI might show an LCP of 3.1s where your local Lighthouse indicates 1.8s.

Geographic latency also plays a role. If your server is in Paris and PSI tests from a US datacenter, the TTFB will be higher, delaying FCP and LCP. Locally, you're testing from your network, likely closer to the origin server or benefiting from an optimized edge CDN.

What does the FCP/LCP gap reveal about your page structure?

A significant gap between FCP and LCP generally indicates that your main content is heavy or rendered late. The browser quickly displays lightweight content (header, menu, text) but takes time to deliver the hero element.

A typical case: a page with FCP at 1s but LCP at 4s. This often indicates an unoptimized hero image, loaded via poorly configured lazy-load JavaScript, or served from a third-party domain without preconnect. The progressive rendering is good (correct FCP), but the user experience remains poor (degraded LCP).

On mobile, this gap inevitably widens: lower CPU, slower image decoding, and more variable network latency. A site with a 2.5s LCP on desktop can explode to 4.8s on a Moto G4 in 3G. That's why Google emphasizes CrUX field data rather than lab tests.

  • FCP and LCP measure two distinct phases of loading, not the same event.
  • Lighthouse locally tests in your machine/network environment; PageSpeed Insights tests in a standardized cloud environment.
  • The gaps between tools are normal and reflect different testing conditions, not a measurement error.
  • A large FCP/LCP gap signals that the main content is heavy or rendered late.
  • Devices with weak CPUs and slow connections amplify this gap.

SEO Expert opinion

Is this statement consistent with field observations?

Yes, and this is even a point we encounter daily in audits. Clients often arrive with a local Lighthouse report showing a score of 95/100, then discover a PSI score of 62/100. Their first reaction: "The tool is buggy." But that's not the case — it's just the measurement conditions that differ.

The real issue is that many SEOs and developers only test locally, on high-end machines, with good connections. As a result, they optimize for an environment that doesn’t reflect their actual audience. CrUX data captures real user experiences — and that's often where the pain point lies.

What nuances should be added to this claim?

Martin Splitt remains intentionally vague on one point: which tool does Google trust? Neither Lighthouse nor PSI is directly used by the ranking algorithm. Google relies on CrUX data collected via Chrome in the field. If your site lacks sufficient Chrome traffic, you won’t have CrUX data, and Google resorts to other, less transparent signals. [To be verified] — Google has never clarified how sites without CrUX data are evaluated for Page Experience.

Another nuance: the FCP/LCP gap isn't always a problem. On certain editorial sites, it's normal for FCP to arrive early (light text) and LCP later (heavy editorial image). What counts is that LCP remains under 2.5s at the 75th percentile in field data. If that’s the case, the gap matters little.
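Google's published Core Web Vitals thresholds for LCP make this check mechanical: "good" is 2.5s or less, "poor" is above 4s, applied to the p75 field value. A simple sketch (the function name is illustrative):

```javascript
// Classify a p75 LCP value (in milliseconds) against Google's
// documented Core Web Vitals thresholds:
// good <= 2500 ms, poor > 4000 ms, "needs improvement" in between.
function classifyLcp(p75Ms) {
  if (p75Ms <= 2500) return 'good';
  if (p75Ms <= 4000) return 'needs improvement';
  return 'poor';
}

console.log(classifyLcp(2300)); // "good"
console.log(classifyLcp(4800)); // "poor"
```

The key point: this classification applies to the field p75 value, not to a single lab run — a page whose p75 LCP is 2.3s passes regardless of how wide its FCP/LCP gap is.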

In what cases does this rule not apply?

On very lightweight pages (minimalist landing page, simple text page), FCP and LCP can be nearly simultaneous. If your largest visible element is an H1 heading rendered in a webfont, FCP and LCP often fire in the same rendering frame. In this case, there's no significant gap.

Be cautious of measurement artifacts. Lighthouse may detect an LCP element different from that captured in production by CrUX, especially if content appears dynamically (carousel, A/B test, conditional lazy-load). In this scenario, FCP/LCP gaps in the lab do not reflect the field reality. That’s why it's essential to always cross-check lab and field data.

Warning: never rely on a single local Lighthouse test to validate your Core Web Vitals. Lab scores are indicators, not absolute truths. Only CrUX data counts for ranking.

Practical impact and recommendations

What concrete steps should you take to reduce the FCP/LCP gap?

First, identify which element triggers the LCP on your strategic pages. Use Lighthouse or WebPageTest to see which element is detected (often highlighted in blue in the timeline). If it's an image, optimize it as a priority: WebP/AVIF compression, appropriate dimensions, fetchpriority="high" on the img tag.
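When you run Lighthouse from the CLI with JSON output, the metrics live under `audits` (the detected LCP node is also reported, in the `largest-contentful-paint-element` audit). A minimal sketch of reading them — the `report` object here is a hand-trimmed sample shaped like real `lighthouse --output=json` output, not actual data:

```javascript
// Trimmed-down sample shaped like a Lighthouse JSON report.
// In a real workflow you would JSON.parse the report file instead.
const report = {
  audits: {
    'first-contentful-paint': { numericValue: 812.4 },   // ms
    'largest-contentful-paint': { numericValue: 2310.7 }, // ms
  },
};

const fcp = report.audits['first-contentful-paint'].numericValue;
const lcp = report.audits['largest-contentful-paint'].numericValue;
console.log(
  `FCP ${Math.round(fcp)} ms, LCP ${Math.round(lcp)} ms, ` +
  `gap ${Math.round(lcp - fcp)} ms`
);
```

Tracking this gap across deploys is a cheap way to spot a regression in how the hero element is prioritized, even before field data catches up.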

If the LCP is a text block, ensure your webfonts are preloaded with rel="preload" and that you use font-display: swap. A render-blocking webfont that hasn't finished loading delays both FCP and LCP. Also consider inlining critical CSS so the browser doesn't wait for an external stylesheet before painting.
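A font-preload setup might look like this (the file path and font name are hypothetical placeholders):

```html
<!-- Preload the critical webfont so it is fetched early (path is illustrative) -->
<link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
<style>
  @font-face {
    font-family: "Brand";
    src: url("/fonts/brand.woff2") format("woff2");
    font-display: swap; /* paint fallback text immediately, swap in the webfont once loaded */
  }
</style>
```

Note the `crossorigin` attribute on the preload: font requests are CORS-mode fetches, so omitting it causes the preloaded file to be downloaded twice.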

For images, add preconnect directives for your third-party domains (image CDNs, analytics scripts). A DNS lookup plus TLS handshake can add 300-800ms on mobile, delaying LCP by the same amount. Example: <link rel="preconnect" href="https://cdn.example.com">.

What mistakes should you avoid when optimizing FCP/LCP?

Don't rely on a single local Lighthouse test alone. This is the classic mistake: optimizing until you achieve a score of 100/100 locally, then finding out that CrUX data remains poor. Test on multiple devices (low-end mobile, 3G throttling), cross-check with PSI, and consult the CrUX report in Search Console.

Avoid lazy-loading your LCP element. If your hero image is set to loading="lazy", the browser defers the download until the image approaches the viewport. On mobile, this can delay LCP by 1-2s. Only lazy-load images below the fold.
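The do/don't above can be summarized in two lines of markup (image paths and dimensions are illustrative):

```html
<!-- Hero image (likely LCP element): eager by default, high fetch priority, never lazy -->
<img src="/img/hero.avif" fetchpriority="high" width="1200" height="600" alt="Product hero">

<!-- Below-the-fold image: lazy-loading is appropriate here -->
<img src="/img/footer-banner.avif" loading="lazy" width="1200" height="300" alt="Footer banner">
```

Explicit width/height attributes also reserve layout space, which helps keep CLS low while these images load.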

Another pitfall: optimizing FCP at the expense of LCP. Some developers inject ultra-fast minimal content (empty skeleton, spinner) to improve FCP, but delay the loading of the main content. The outcome: FCP at 0.6s, LCP at 4.2s, high CLS. Google values consistent progressive rendering, not artificially inflated FCP.

How do you check if your optimizations work in production?

Regularly check the Core Web Vitals report in Search Console. It's Google's source of truth. If your URLs are classified as "Poor" or "Needs Improvement" despite your optimizations, it means that the CrUX field data doesn't yet reflect your changes — or that your actual audience encounters more degraded conditions than your tests.

Also use WebPageTest in public testing with realistic mobile profiles (Moto G4, 3G throttling). Compare filmstrips and waterfalls before/after optimization. Ensure the LCP element appears earlier in the timeline. If the gain is visible in testing but not in CrUX, it may be a caching issue, geographic latency, or server performance problem.

  • Identify the exact LCP element via Lighthouse/WebPageTest and optimize it as a priority.
  • Preload critical webfonts with rel="preload" and font-display: swap.
  • Add fetchpriority="high" on the hero image if it triggers the LCP.
  • Avoid lazy-loading on above-the-fold elements, especially the LCP element.
  • Test on multiple devices and network conditions, cross-check lab and field data.
  • Consult the Core Web Vitals report in Search Console to validate optimizations in production.

The FCP/LCP gap often reveals heavy or poorly prioritized main content. Optimizing LCP requires a multi-tool approach, realistic testing, and field monitoring via CrUX. These technical optimizations can quickly become complex, especially on modern stacks with JS frameworks, multi-zone CDNs, and international audiences. If you lack the time or internal expertise to diagnose and fix these friction points, a technical SEO agency can assist you with a Core Web Vitals audit and a prioritized action plan.

❓ Frequently Asked Questions

Which PageSpeed testing tool should I favor for my audits?
Cross-reference several sources: Lighthouse locally for quick tests, PageSpeed Insights for a standardized environment, and above all CrUX data in Search Console for field reality. No lab tool replaces real-user data.
Is a large gap between FCP and LCP always a problem?
Not necessarily. What matters is that LCP stays under 2.5s at the 75th percentile in field data. A gap can be normal if your main content is visually rich, as long as loading remains fast.
Why is my local Lighthouse score excellent but my PSI score poor?
Local Lighthouse tests on your machine and connection, often far better than PSI's standardized environment. PSI simulates more realistic conditions with CPU/network throttling, which explains the gap.
How do I pinpoint the element that triggers LCP on my page?
Use Lighthouse or WebPageTest: the LCP element is usually highlighted in blue in the timeline. You can also inspect the Lighthouse JSON report to see which DOM node is detected as largest-contentful-paint.
How long does CrUX data take to reflect my optimizations?
CrUX aggregates data over a rolling 28-day window. If you deploy an optimization today, expect 3-4 weeks before the impact shows in Search Console, provided you have enough Chrome traffic.
