Generate a robots.txt file to control search engine crawler access to your website pages
How to Use
1. Set the User-agent (use * for all bots, or a specific bot name like Googlebot)
2. Enter paths to disallow, one per line (e.g. /admin, /private)
3. Optionally add paths to explicitly allow
4. Add your sitemap URL if you have one
5. Click Generate and copy the output to a file named robots.txt at your site root
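Following the steps above, a generated file might look like this (the paths and sitemap URL are placeholders for illustration):

```
User-agent: *
Disallow: /admin
Disallow: /private
Allow: /admin/public
Sitemap: https://yourdomain.com/sitemap.xml
```

Each Disallow line blocks one path prefix; an Allow line can re-open a subpath inside a disallowed section.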
Frequently Asked Questions
What is a robots.txt file?
A robots.txt file is a plain text file placed at the root of your website that tells search engine crawlers which pages or sections of the site they may or may not crawl.
Does robots.txt guarantee that blocked pages stay out of search results?
No. Robots.txt is a directive, not a hard block. Well-behaved bots like Googlebot will respect it, but malicious bots may ignore it. To guarantee a page is not indexed, use a "noindex" meta tag or HTTP header instead.
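For example, either of the following keeps a page out of search indexes (note that the crawler must be able to fetch the page to see these, so the page should not also be disallowed in robots.txt):

```
<!-- In the page's HTML <head> -->
<meta name="robots" content="noindex">
```

or, as an HTTP response header: X-Robots-Tag: noindex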
What does User-agent: * mean?
User-agent: * applies the rules that follow it to all web crawlers. You can instead name individual bots, such as Googlebot or Bingbot, to create bot-specific rules.
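As an illustration, a file can combine a general group with a bot-specific one; crawlers follow the most specific group that matches their name (the paths here are hypothetical):

```
# Rules for all crawlers
User-agent: *
Disallow: /private

# Stricter rules that apply only to Googlebot
User-agent: Googlebot
Disallow: /private
Disallow: /drafts
```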
Where do I put the robots.txt file?
The robots.txt file must be placed at the root of your domain, so that it is accessible at https://yourdomain.com/robots.txt. A robots.txt file in a subdirectory is not valid and will be ignored by crawlers.