Official statement
Blocking crawling or indexing can be a good option when the content is not private, or when it simply involves parts of a website that you want to keep out of search results.
Other statements from this video
- How can you conceal your site from Google search results while keeping it accessible?
- Is using a password the most effective way to safeguard private content?
- Can you really rely on the robots.txt file to stop search engines from crawling your site?
- Is it true that robots.txt doesn't guarantee your URLs won't be indexed?
- How does the noindex tag influence your SEO indexing strategy?
- How does password protection impact the SEO of private content?
⚠ A more recent statement exists on this topic: Should You Really Block the GoogleOther Crawler in Your Robots.txt?
TL;DR
According to Google, it makes sense to block crawling or indexing for non-private content, or for parts of the site that you want to exclude from search results. This keeps such content from appearing in the SERPs.
❓ Frequently Asked Questions
Does blocking crawling impact SEO?
Yes, it reduces the visibility of certain pages that should not appear in search results.
Can you prevent indexing via robots.txt?
No, robots.txt controls crawling; use a 'noindex' meta tag to control indexing.
When should you block parts of a site?
When they are not essential to SEO, or when they involve test pages or duplicate content.
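As a minimal sketch of the crawl/index distinction discussed above (the path `/private-reports/` is a hypothetical example): a robots.txt rule stops crawling but does not guarantee the URL stays out of the index, while a `noindex` meta tag keeps a page out of the index but only works if the page remains crawlable.

```
# robots.txt — blocks crawling of the section, but a disallowed URL
# can still be indexed if other pages link to it
User-agent: *
Disallow: /private-reports/
```

```html
<!-- In the page's <head>: keeps the page out of Google's index.
     The page must NOT be blocked in robots.txt, otherwise the
     crawler never sees this tag. -->
<meta name="robots" content="noindex">
```

In practice, this means you should not combine the two for the same URL: pick robots.txt to save crawl budget on unimportant sections, or `noindex` to reliably exclude a page from search results.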
🎥 From the same video
Other SEO insights extracted from this same Google Search Central video · published on 24/11/2021