
Official statement

It is possible to have an excellent First Input Delay but poor Time to Interactive and Total Blocking Time, probably due to JavaScript blocks that do not affect the FID. If the actual user experience is good, these metrics may not reflect reality. This is an edge case to report to the Lighthouse team.
21:22
🎥 Source video

Extracted from a Google Search Central video

⏱ 39:51 💬 EN 📅 17/06/2020 ✂ 51 statements
TL;DR

Google acknowledges that a site can show excellent First Input Delay while accumulating poor scores on Time to Interactive and Total Blocking Time, without impacting the actual experience. This discrepancy arises from JavaScript blocks that do not interfere with the FID but degrade other metrics. For SEO practitioners, this means prioritizing real user experience over blindly focusing on all Lighthouse indicators.

What you need to understand

Why can these three metrics diverge so radically?

The First Input Delay measures the response time to the first user interaction — a click, a tap, a keystroke. It is a narrowly targeted metric, capturing a single moment: the gap between when the user acts and when the browser responds.

The Time to Interactive and Total Blocking Time, on the other hand, cover a much broader time window. TTI marks the point after which the page stays stable and responsive for 5 consecutive seconds. TBT sums the blocking portion (anything beyond 50 ms) of every long JavaScript task on the main thread between the First Contentful Paint and TTI.
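To make the arithmetic concrete, here is a minimal sketch of how TBT aggregates long tasks. It is a simplified model with made-up task timings, not the exact Lighthouse implementation:

```javascript
// Sketch: how Total Blocking Time aggregates long main-thread tasks.
// Task timings below are illustrative, not taken from a real trace.
function totalBlockingTime(tasks, fcp, tti) {
  // Only tasks running between First Contentful Paint and Time to Interactive
  // count, and only the portion of each task beyond the 50 ms threshold.
  return tasks
    .filter((t) => t.start >= fcp && t.start < tti)
    .reduce((sum, t) => sum + Math.max(0, t.duration - 50), 0);
}

// Three long tasks of 200 ms each contribute 3 × 150 ms of blocking time.
const tbt = totalBlockingTime(
  [
    { start: 1200, duration: 200 }, // e.g. analytics SDK parse
    { start: 2500, duration: 200 }, // e.g. chat widget init
    { start: 4000, duration: 200 }, // e.g. ad tag execution
  ],
  1000, // FCP at 1 s
  6000  // TTI at 6 s
);
console.log(tbt); // 450
```

Note that a single 200 ms task contributes only 150 ms of blocking time, which is why a handful of mid-sized tasks can quietly add up to a red TBT.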

In practical terms? A heavy script that runs after the user has clicked — for instance, an analytics SDK, a chat widget, an advertising tracking script — drags down TTI and TBT without touching FID. The user has already interacted, the browser responded quickly, and FID is excellent. Yet, technically, the page remains unstable for several seconds because of these asynchronous JS blocks.

Google says that 'the real user experience is good' — what does that really mean?

Google here refers to the field data from the Chrome User Experience Report (CrUX), which aggregates metrics measured on real browsers, under real network conditions, with real users. If your CrUX FID is green, it means your actual visitors are not experiencing noticeable latency when they interact.

The trap is that Lighthouse — which runs in a lab, in a controlled environment — can show you a catastrophic TTI and a bright red TBT while your users notice nothing. Why? Because Lighthouse simulates a CPU four times slower than the test machine and a throttled mobile connection, and scores every script under that load. On a real device, a modern CPU chews through the same JS blocks far faster, and the user has often already scrolled or clicked before the last widget finishes loading.

Should we ignore TTI and TBT if FID is good?

No. But we must contextualize. If your CrUX FID is excellent and your users raise no complaints about a blocked interface, then yes, a mediocre TTI in the lab could be a false positive. This is what Martin Splitt refers to as an 'edge case'.

Conversely, if your FID is good but your TTI is catastrophic because of JS blocks running while the user scrolls, you risk micro-freezes, choppy animations, and clicks that don't respond immediately. These are not FID issues — this is about continuous responsiveness, which FID does not measure. And that degrades the experience, even if Lighthouse doesn't capture it perfectly.

  • FID measures initial input latency, not the overall fluidity of the interface after the first click.
  • TTI and TBT capture CPU load during the loading phase but can be misleading if JS blocks occur after user interaction.
  • CrUX data is the benchmark — if your field metrics are good, the lab Lighthouse scores can be contextualized.
  • A high TTI may signal a real problem if the interface remains blocked while the user interacts — do not systematically ignore it.
  • Google recommends reporting these edge cases to the Lighthouse team to refine lab scoring models.
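The contextualization logic above can be condensed into a small decision helper. The thresholds (field FID < 100 ms and INP < 200 ms considered good, lab TTI > 7 s considered suspicious) are this article's own rules of thumb, and the function name is hypothetical:

```javascript
// Hedged sketch of the lab-vs-field triage described above.
// Thresholds are rules of thumb from this article, not official cutoffs.
function classifyTtiDivergence({ cruxFidMs, cruxInpMs, labTtiMs }) {
  const fieldGood = cruxFidMs < 100 && cruxInpMs < 200;
  const labBad = labTtiMs > 7000;
  if (fieldGood && labBad) return "edge-case";     // lab TTI is likely a partial false positive
  if (!fieldGood && labBad) return "real-problem"; // JS is blocking real interactions
  return "no-divergence";
}

console.log(classifyTtiDivergence({ cruxFidMs: 90, cruxInpMs: 180, labTtiMs: 12000 })); // "edge-case"
console.log(classifyTtiDivergence({ cruxFidMs: 90, cruxInpMs: 350, labTtiMs: 12000 })); // "real-problem"
```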

SEO Expert opinion

Is this statement consistent with field observations?

Absolutely. I have seen dozens of sites showing a CrUX FID of 100 ms and a Lighthouse TTI of 12 seconds. The most frequent culprit? Third-party scripts — Google Tag Manager triggering 15 cascading tags, advertising SDKs parsing the DOM, chat widgets injecting dynamic HTML.

The issue is that these scripts often run after the user has scrolled, clicked a link, or started reading. FID remains excellent because the browser responded to the first interaction before the chaos began. But if you open the DevTools and record a performance trace, you see blocks of 200-300 ms blocking the main thread for several seconds. And Lighthouse detects this — rightly so.

Now, does it impact ranking? [To be verified]. Google says it uses Core Web Vitals (including FID, since replaced by INP in March 2024) as a ranking signal, but we do not know what weight, if any, is given to TTI or TBT. My hypothesis — and it's just a hypothesis — is that Google aggregates multiple experience signals, and an excellent FID may offset a mediocre TTI, especially if CrUX data shows a smooth experience. But Google does not confirm this anywhere officially.

What nuances should be added to this logic?

Let's be honest: FID is a limited metric. It only measures the latency of the first interaction; it captures none of the subsequent clicks, animations, or scroll jank. Google actually replaced it with Interaction to Next Paint (INP) in March 2024, precisely because FID was too permissive.

So, if you rely solely on FID to say 'my site is fast', you’re missing half the picture. A site can have a 50 ms FID and a 400 ms INP if each subsequent interaction triggers a heavy re-render or blocking API fetch. And users feel that — even if FID doesn’t show it.

Another nuance: TTI and TBT are lab metrics, not field metrics. They are useful for diagnosing CPU load issues, but they do not necessarily reflect what your users experience on a modern CPU, a 4G connection, and a browser that optimizes JS execution in the background. If Lighthouse is screaming at you with a TTI of 15 seconds but your CrUX FID is at 90 ms, then yes, it’s probably an edge case. But if your CrUX INP is in the red, then your lab TTI is telling you a truth — your JS is too heavy.

In what cases does this rule not apply?

If your site is an interactive web application — a SaaS, an online editor, a dashboard — then TTI and TBT are much more relevant. Why? Because your users aren’t just making a single click and leaving. They interact continuously, open modals, fill out forms, and trigger asynchronous actions.

In this case, a high TTI signals that your interface remains blocked for several seconds after the initial load, and that each subsequent interaction may run into unresolved JS. FID can be good — because the first click arrives before the chaos — but the overall experience will be catastrophic. And here, Lighthouse is right to signal the problem.

Warning: If your site relies on heavy frameworks (React, Vue, Angular) with a JS bundle > 500 KB, then your TTI will likely be poor by default, even if your FID is correct. This is not an edge case — it’s a structural front-end architecture issue.

Practical impact and recommendations

What should you actually do if you find yourself in this situation?

First, prioritize CrUX data. Open Google Search Console, check the Core Web Vitals report, and look at what your real users' browsers are reporting. If your FID (or INP, since March 2024) is green and your URLs pass the 75% good experience threshold, then your lab TTI can wait.
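If you would rather pull those field numbers programmatically than read the Search Console report, the public CrUX API exposes the same p75 values. A hedged sketch, where the API key and origin are placeholders and the fetch function is injectable purely so the helper can be exercised without network access:

```javascript
// Sketch: query the public CrUX API (records:queryRecord) for the p75 INP of
// an origin. The key and origin are placeholders; fetchFn is injectable for
// testing and defaults to the global fetch (Node 18+ / browsers).
async function fetchCruxInpP75(origin, apiKey, fetchFn = fetch) {
  const res = await fetchFn(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ origin, metrics: ["interaction_to_next_paint"] }),
    }
  );
  const data = await res.json();
  // p75 is the value Google compares against the "good" threshold (200 ms for INP).
  return data.record.metrics.interaction_to_next_paint.percentiles.p75;
}

// Real usage: fetchCruxInpP75("https://example.com", "YOUR_API_KEY");
```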

Next, identify the responsible JavaScript blocks. Open Lighthouse, expand the 'Diagnostics' section, and look for 'Avoid long main-thread tasks'. You will see a list of scripts blocking the main thread. If they are third-party scripts — analytics, ads, widgets — ask yourself if they are truly indispensable above the fold. Often, they can be deferred with a simple defer or async, or even lazy-loaded after the first scroll.
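As a sketch of the lazy-load-after-first-scroll pattern mentioned above, here is one way to inject a non-critical third-party script only once the user scrolls. The script URL is a placeholder, and the window/document parameters are injected purely so the helper can be exercised outside a browser:

```javascript
// Sketch: defer a non-critical third-party script until the first scroll.
// `win` and `doc` are injected so the helper is testable without a browser.
function loadOnFirstScroll(src, win, doc) {
  const inject = () => {
    const s = doc.createElement("script");
    s.src = src;
    s.async = true; // don't block parsing once it does load
    doc.head.appendChild(s);
  };
  // `once: true` removes the listener after the first scroll event fires.
  win.addEventListener("scroll", inject, { once: true });
}

// In a real page:
// loadOnFirstScroll("https://example.com/chat-widget.js", window, document);
```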

If the JS blocks come from your own code — for example, a front-end framework parsing a large initial state — then it’s a classic optimization project: code splitting, tree shaking, reducing bundle size, partial server-side rendering. No miracles, just engineering.
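For the code splitting part, the usual tool is a dynamic import() that bundlers turn into a separate chunk, fetched only when the feature is actually used. A minimal sketch, with the module loader injected so the example stays self-contained; in a real app the loader would be `() => import("./editor.js")`, and all names here are illustrative:

```javascript
// Sketch: route-level code splitting. The heavy "editor" module is loaded on
// demand instead of being bundled into the initial payload.
async function openEditor(loadEditorModule, rootElement) {
  // The chunk is fetched only when this function runs, not at page load.
  const { mountEditor } = await loadEditorModule();
  return mountEditor(rootElement);
}

// Real usage (browser, with a bundler that splits dynamic imports):
// openEditor(() => import("./editor.js"), document.getElementById("app"));
```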

What mistakes should be avoided when interpreting these metrics?

First mistake: confusing lab metrics with field metrics. Lighthouse gives you a snapshot under degraded conditions — slow CPU, slow network, no cache. It’s useful for detecting regressions, but it’s not your users' reality. CrUX data, on the other hand, aggregates millions of real sessions. If Lighthouse shows red and CrUX shows green, trust CrUX.

Second mistake: optimizing for a Lighthouse score at the expense of real experience. I have seen developers remove useful features — a support chat, a search tool, a product carousel — just to get the TTI down from 8 to 4 seconds. The result: the score goes up, but conversions drop. If a feature adds value and users don’t complain about it, keep it. Optimize it, yes, but don’t sacrifice it.

Third mistake: completely ignoring TTI and TBT just because FID is good. These metrics capture a form of technical debt. If your TTI is catastrophic, it means your front-end is executing too much JS on load. Sooner or later, it will impact the experience — either via INP, or via micro-freezes, or via an increase in bounce rate. Treat these signals as early alerts, not noise.

How do I check that my site is not in this edge case?

Compare your CrUX data with your Lighthouse scores. If your CrUX FID is < 100 ms and your Lighthouse TTI is > 7 seconds, then yes, you are probably in the edge case described by Martin Splitt. Then check INP — if the CrUX INP is good too (< 200 ms), then your site responds well to actual interactions, and the lab TTI is just a partial false positive.

Now, if your CrUX INP is in the orange or red, then your lab TTI tells you a truth: your JS is blocking the interface, and your users feel it. In that case, you need to investigate — open a performance trace in Chrome DevTools, record 5-6 seconds of loading, and identify the long blocks. Often, it’s a misconfigured third-party script, a too-heavy JS bundle, or an unnecessary React re-render.
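What you hunt for in that trace are long main-thread tasks, meaning anything over 50 ms. As a sketch, here is a small summarizer you could feed with entries recorded by a PerformanceObserver; the durations below are made up:

```javascript
// Sketch: summarize long main-thread tasks recorded during a trace.
// The summarizer is pure so it can be exercised with sample entries.
function summarizeLongTasks(entries) {
  const long = entries.filter((e) => e.duration > 50);
  return {
    count: long.length,
    totalBlockingMs: long.reduce((s, e) => s + (e.duration - 50), 0),
    worstMs: long.length ? Math.max(...long.map((e) => e.duration)) : 0,
  };
}

// In a real page (browser only):
// new PerformanceObserver((list) => console.log(summarizeLongTasks(list.getEntries())))
//   .observe({ type: "longtask", buffered: true });

console.log(summarizeLongTasks([
  { duration: 230 }, { duration: 40 }, { duration: 310 },
])); // { count: 2, totalBlockingMs: 440, worstMs: 310 }
```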

  • Check the Core Web Vitals report in Google Search Console for real CrUX data.
  • Compare field FID/INP with lab TTI/TBT — strong divergence may indicate an edge case.
  • Identify responsible JavaScript blocks via Lighthouse (section 'Avoid long main-thread tasks').
  • Defer non-critical third-party scripts with defer, async, or lazy loading after the first scroll.
  • Monitor INP in addition to FID — it captures continuous responsiveness, not just the first interaction.
  • If your TTI remains poor despite good field FID/INP, report the case to the Lighthouse team to contribute to model improvements.
These front-end performance optimizations — analyzing JS blocks, balancing lab and field metrics, implementing targeted lazy loading strategies — require sharp technical expertise and a solid grasp of the diagnostic tools. If your team is not comfortable with Chrome DevTools, performance traces, or interpreting CrUX reports, it may be wise to bring in an SEO agency specializing in Core Web Vitals for tailored support and recommendations.

❓ Frequently Asked Questions

Why is my FID excellent while my TTI is catastrophic?
FID measures the latency of the first interaction, which often happens before the heavy scripts execute. TTI, on the other hand, waits for the page to be completely stable, which can take several seconds if JavaScript keeps running in the background after the first click.
Should I prioritize CrUX data or Lighthouse scores?
Always prioritize CrUX data, which reflects your users' real experience. Lighthouse is useful for diagnosing problems in the lab, but it does not replace field metrics.
Can a poor TTI impact my Google ranking?
Google does not communicate officially on how TTI is weighted in ranking. The official Core Web Vitals (LCP, FID/INP, CLS) are confirmed signals, but TTI remains a diagnostic metric, not a declared ranking criterion.
How do I know whether I am in an edge case like the one Martin Splitt describes?
Compare your CrUX FID (< 100 ms) with your Lighthouse TTI (> 7 seconds). If the gap is huge and your users report no responsiveness problems, you are probably in the edge case. Check INP as well to confirm.
Which scripts are most often responsible for this divergence?
Third-party scripts — Google Tag Manager, advertising SDKs, chat widgets, A/B testing tools — are the usual culprits. They execute after the first click and inflate TTI without touching FID.
