Official statement
Google states that Googlebot typically follows the directives set in the robots.txt file. If a URL appears in search results despite a robots.txt block, it is because the URL is known through external links; the content itself is not crawled.
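This distinction matters in practice: a Disallow rule stops crawling, not indexing. The behavior of such a rule can be checked locally with Python's standard-library robots.txt parser; the domain and paths below are hypothetical, for illustration only:

```python
from urllib import robotparser

# Hypothetical robots.txt blocking /private/ for all crawlers
rules = [
    "User-agent: *",
    "Disallow: /private/",
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A blocked path is not crawlable, but its URL can still be
# indexed if other sites link to it.
print(rp.can_fetch("Googlebot", "https://example.com/private/page"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/public/page"))   # True
```

To keep a page out of the index entirely, the page must be crawlable and carry a `noindex` robots meta tag or `X-Robots-Tag` header; blocking it in robots.txt would prevent Googlebot from ever seeing that directive.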
Other statements from this video (12)
- 3:17 Is it true that Google claims there’s no widespread indexing problem?
- 3:17 Why does Google quickly announce major indexing issues?
- 3:17 Why does Google limit indexing based on a site's perceived quality?
- 3:17 Is Content the Key to Solving Your Indexing Issues?
- 20:02 Why is it essential to report spam through Google's specific tool?
- 23:07 Should You Still Use 301 Redirects Months After a URL Change?
- 28:07 Does typing a URL in Chrome really guarantee its indexing?
- 31:48 Is it true that showing different content on desktop and mobile can harm your SEO?
- 38:05 Why are hreflang tags crucial for multilingual SEO success?
- 43:53 How are Chrome Data in Discover Changing Your SEO Insights?
- 51:35 Why should you use the nofollow attribute as a fallback solution in SEO?
- 53:00 Why should you ditch separate URLs for mobile and desktop?
Official statement (4 years ago)

⚠ A more recent statement exists on this topic: "Why do so many SEO professionals still confuse robots.txt and no-index? Here's w..."
TL;DR
Googlebot generally adheres to the robots.txt file. If a page still appears in search results despite a block, it is because the URL is known from external links; the content itself is not crawled.
Other SEO insights were extracted from the same Google Search Central video, published on 25/11/2021.