Official statement
Crawling is blocked using the robots.txt file. Random users on the web can still access the website, while well-behaved search engines will know not to crawl it.
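A minimal robots.txt illustrating this kind of crawl block might look like the following (the `/private/` path is a hypothetical example, not one from the video):

```
# Hypothetical example: ask all compliant crawlers not to crawl /private/
User-agent: *
Disallow: /private/
```

Note that robots.txt is purely advisory: it only stops crawlers that choose to honor it, and it does not prevent users or non-compliant tools from fetching the blocked URLs directly.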
Other statements from this video
- How can you hide your site from Google's search results while keeping it accessible?
- Do you really need a password to protect private content?
- Why doesn't robots.txt guarantee that a page won't be indexed?
- How does the noindex tag affect your SEO indexing strategy?
- How does password protection influence the SEO of private content?
- When should you block crawling or indexing to optimize your SEO?
(Statement published 4 years ago.)
⚠ A more recent statement exists on this topic: "Should You Really Block the GoogleOther Crawler in Your Robots.txt?"
TL;DR
The robots.txt file prevents compliant search engines from crawling, but it doesn't restrict access for users or less cooperative tools. Implication: protect sensitive content by other means, such as password protection, if necessary.
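Since robots.txt controls crawling rather than indexing, a page that must stay out of search results needs a noindex directive instead. A sketch of the two common forms (illustrative, not from the video):

```html
<!-- In the page's <head>: ask search engines not to index this page -->
<meta name="robots" content="noindex">
```

The same directive can be sent as an HTTP response header (`X-Robots-Tag: noindex`), which also covers non-HTML files such as PDFs. One caveat: a noindex directive only works if the page can be crawled, so combining it with a robots.txt block is counterproductive, because the crawler never sees the directive.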
From the same video: other SEO insights extracted from this Google Search Central video, published on 24/11/2021.