
    Crawl-delay

    Crawl-delay is a robots.txt directive for controlling how often crawlers request pages. Learn which search engines support crawl-delay, how to configure it correctly, and what to use instead for Google, which does not support it.

    Definition

    Crawl-delay is a non-standard robots.txt directive that asks a search engine crawler to wait a specified number of seconds between consecutive requests, reducing server load. Syntax: 'Crawl-delay: N', where N is the number of seconds. Important: Googlebot does NOT support this directive. Bing and Yandex support it, and Yahoo does partially. For Google, improve server performance so Googlebot can adjust its crawl rate automatically; the old Search Console crawl-rate setting has been retired, and returning temporary 429 or 503 responses is the remaining way to slow Googlebot quickly. Use crawl-delay cautiously: excessive delays can severely slow indexing.
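
    The directive applies per User-agent group and takes a value in seconds. As a quick way to check which delay a given crawler would pick up from a robots.txt file, the sketch below uses Python's standard-library parser, whose crawl_delay() method returns the applicable value; the robots.txt lines shown are a made-up example.

    from urllib.robotparser import RobotFileParser

    # Hypothetical robots.txt content, given as a list of lines.
    robots_lines = [
        "User-agent: Bingbot",
        "Crawl-delay: 10",
        "Allow: /",
    ]

    parser = RobotFileParser()
    parser.parse(robots_lines)

    # crawl_delay() returns the delay for the matching User-agent group,
    # or None when no Crawl-delay applies (as for Googlebot here).
    print(parser.crawl_delay("Bingbot"))    # 10
    print(parser.crawl_delay("Googlebot"))  # None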

    Why it matters

    • Reduce the load that non-Google crawlers such as Bingbot and YandexBot place on your servers
    • Protect small or shared hosting servers from crawler traffic overload
    • Prioritize real users when server resources are limited
    • Prevent aggressive crawlers from consuming excessive bandwidth and CPU
    • Give database-intensive pages buffer time between requests
    • Temporarily slow crawling during traffic peaks
    • Build broader crawler management alongside WAF or rate-limiting rules (see the sketch after this list)
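
    Crawl-delay only helps with crawlers that honor it. As a complement, request throttling can be enforced server-side; production setups usually do this at the WAF or reverse-proxy layer, but the minimal Python sketch below illustrates the idea in-process. The thresholds and user-agent substrings here are illustrative assumptions, not part of any particular product.

    import time
    from collections import defaultdict

    # Minimum seconds between requests per crawler (illustrative values).
    MIN_INTERVAL = {"bingbot": 5.0, "yandex": 10.0}
    _last_seen = defaultdict(float)

    def allow_request(user_agent, now=None):
        """Return False when a known crawler is requesting too frequently
        (a candidate for an HTTP 429 response); True otherwise."""
        now = time.monotonic() if now is None else now
        ua = user_agent.lower()
        for crawler, interval in MIN_INTERVAL.items():
            if crawler in ua:
                if now - _last_seen[crawler] < interval:
                    return False
                _last_seen[crawler] = now
                return True
        return True  # non-crawler traffic is not throttled here

    # A second Bingbot request inside 5 seconds is rejected.
    print(allow_request("Mozilla/5.0 (compatible; bingbot/2.0)"))  # True
    print(allow_request("Mozilla/5.0 (compatible; bingbot/2.0)"))  # False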

    How to implement

    • First verify that the target crawler supports the directive: Googlebot does not, Bingbot and YandexBot do
    • Set it per User-agent group in robots.txt, e.g. a 'User-agent: Bingbot' line followed by 'Crawl-delay: 10'
    • Recommended values: 1-10 seconds; anything over 30 seconds severely slows indexing
    • For Google: improve server capacity so Googlebot adjusts its rate automatically, or return temporary 429/503 responses (the Search Console crawl-rate setting has been retired)
    • Monitor server logs to verify that crawler behavior actually changes (see the log-analysis sketch after this list)
    • Combine with caching strategies (CDN, page cache) to reduce server load
    • Don't set a crawl-delay for every User-agent; target only the problematic crawlers
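
    To check whether a crawler is honoring the configured delay, you can measure the spacing of its requests in your access logs. The Python sketch below assumes a combined-format log at a hypothetical path (/var/log/nginx/access.log) and matches crawlers by user-agent substring; adjust both for your own setup.

    import re
    from collections import defaultdict
    from datetime import datetime
    from statistics import median

    LOG_PATH = "/var/log/nginx/access.log"   # assumption: adjust to your server
    CRAWLERS = ["bingbot", "yandex"]         # matched case-insensitively

    # Combined log format: ... [10/Oct/2024:13:55:36 +0000] ... "user agent"
    line_re = re.compile(r'\[(?P<ts>[^\]]+)\].*"(?P<ua>[^"]*)"\s*$')

    timestamps = defaultdict(list)
    with open(LOG_PATH, encoding="utf-8", errors="replace") as log:
        for line in log:
            m = line_re.search(line)
            if not m:
                continue
            ua = m.group("ua").lower()
            for crawler in CRAWLERS:
                if crawler in ua:
                    ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
                    timestamps[crawler].append(ts)

    for crawler, hits in sorted(timestamps.items()):
        hits.sort()
        gaps = [(b - a).total_seconds() for a, b in zip(hits, hits[1:])]
        if gaps:
            print(f"{crawler}: {len(hits)} requests, median gap {median(gaps):.1f}s")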

    Examples

    # robots.txt example: Different delays for different crawlers
    
    # Googlebot ignores crawl-delay, so none is set
    User-agent: Googlebot
    Allow: /
    
    # Bingbot supports crawl-delay
    User-agent: Bingbot
    Crawl-delay: 5
    Allow: /
    
    # Yandex supports crawl-delay
    User-agent: Yandex
    Crawl-delay: 10
    Allow: /
    
    # Default for other crawlers
    User-agent: *
    Crawl-delay: 2
    Allow: /
    
    Sitemap: https://example.com/sitemap.xml

    # Anti-patterns: Don't do this!
    
    # ❌ Wrong 1: Setting it for Googlebot (ignored, has no effect)
    User-agent: Googlebot
    Crawl-delay: 30
    
    # ❌ Wrong 2: Delay too long (severely impacts indexing)
    User-agent: *
    Crawl-delay: 60
    
    # ❌ Wrong 3: Using crawl-delay instead of proper Disallow
    # If you don't want a path crawled, use Disallow, not delay
