Crawlability
Crawlability is whether crawlers can fetch your pages and their resources; robots.txt rules, auth walls, and error status codes can block them.
Definition
Crawlability describes whether search engine bots can successfully fetch your pages and the resources needed to render them. Blocking key paths or assets via robots.txt, authentication, or bad status codes can prevent proper rendering and understanding.
Why it matters
- No crawling means no indexing and no rankings
- Blocking CSS/JS can break rendering-based indexing decisions
- Efficient crawling spends crawl budget on pages you want indexed instead of wasting it on errors, redirect chains, and duplicates
How to implement
- Review robots.txt so it does not block important paths or the assets needed for rendering (a programmatic check is sketched below)
- Return HTTP 200 for key pages and avoid long redirect chains (see the second sketch below)
- Use XML sitemaps and internal links to improve discovery
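A minimal sketch of the robots.txt review from the first bullet, using Python's standard-library urllib.robotparser. The site, paths, and user agent are placeholder assumptions, not values from this page.

```python
# Check whether key URLs are crawlable under robots.txt.
# example.com, the paths, and the user agent are hypothetical placeholders.
from urllib.robotparser import RobotFileParser

ROBOTS_URL = "https://example.com/robots.txt"  # assumed site
KEY_URLS = [
    "https://example.com/",                    # key landing page
    "https://example.com/products/widget",     # important content page
    "https://example.com/assets/app.js",       # JS needed for rendering
    "https://example.com/assets/styles.css",   # CSS needed for rendering
]

parser = RobotFileParser(ROBOTS_URL)
parser.read()  # fetches and parses robots.txt

for url in KEY_URLS:
    allowed = parser.can_fetch("Googlebot", url)  # substitute the crawler you care about
    print(("ALLOWED" if allowed else "BLOCKED"), url)

# robots.txt may also declare sitemaps, which help discovery (third bullet).
print("Sitemaps declared:", parser.site_maps())
```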
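For the second bullet, this sketch checks that key pages return HTTP 200 and flags redirect chains. The URLs are placeholders, and the third-party requests package is assumed to be installed.

```python
# Verify status codes and count redirect hops for a few key pages.
import requests

KEY_URLS = [
    "https://example.com/",
    "https://example.com/products/widget",
]

for url in KEY_URLS:
    # allow_redirects follows the chain; resp.history records each hop
    resp = requests.get(url, allow_redirects=True, timeout=10)
    hops = len(resp.history)
    print(f"{url} -> HTTP {resp.status_code} after {hops} redirect(s)")
    if resp.status_code != 200:
        print("  warning: key page does not return HTTP 200")
    if hops > 1:
        print("  warning: redirect chain; link directly to the final URL")
```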