Enter a website URL to fetch its robots.txt file, then check whether
specific paths are allowed or disallowed for web crawlers.
Note: Because this tool fetches robots.txt directly from your browser, CORS
restrictions may prevent it from working with some websites. For best results,
use sites with permissive CORS policies or test against your own development servers.
Enter the base URL of the website (e.g., https://example.com)
Robots.txt Content
Check URL Permissions
Select the user-agent (bot) to check permissions for
Enter a path (e.g., /about) or a full URL to check