robots.txt Generator

Create a valid robots.txt file to guide search engines.

What is a `robots.txt` file?

The `robots.txt` file implements the Robots Exclusion Protocol, a standard that tells web crawlers (like Googlebot) which pages or files they may request from your site. It is a fundamental part of technical SEO.

Key Directives

  • User-agent: The name of the bot the rule applies to (e.g., `*` for all, `Googlebot` for Google).
  • Disallow: Asks the bot not to crawl a specific path. It's useful to prevent bots from wasting time on unimportant sections (like admin pages).
  • Allow: Explicitly allows crawling of a sub-path within a blocked directory.
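The directives above combine into a plain text file served at the root of your site (e.g. `/robots.txt`). A minimal example, using placeholder paths and a placeholder sitemap URL:

```
User-agent: *
Allow: /admin/public/
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
```

This asks all bots to skip everything under `/admin/` except the `/admin/public/` sub-path, and points them to the sitemap.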

Important: Blocking a URL in `robots.txt` does not prevent it from being indexed if it is linked from other sites. To reliably prevent indexing, use the <meta name="robots" content="noindex" /> meta tag in the page's HTML, and leave the page crawlable: if the page is blocked in `robots.txt`, the crawler never fetches it and never sees the noindex tag.
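To check how a given `robots.txt` would be interpreted before deploying it, you can parse it with Python's standard-library `urllib.robotparser`. A small sketch, using made-up rules and an `example.com` placeholder domain (note that this parser applies rules in file order, so the more specific `Allow` line is listed first):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules; in production you would point set_url() at
# your live https://example.com/robots.txt instead.
rules = """\
User-agent: *
Allow: /admin/public/
Disallow: /admin/
""".splitlines()

rp = RobotFileParser()
rp.parse(rules)

# Blocked by the Disallow rule:
print(rp.can_fetch("*", "https://example.com/admin/secret.html"))    # False
# Explicitly allowed sub-path inside the blocked directory:
print(rp.can_fetch("*", "https://example.com/admin/public/faq.html")) # True
# No rule matches, so crawling is allowed by default:
print(rp.can_fetch("*", "https://example.com/blog/post"))             # True
```
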