Control which AI crawlers can access your content. Block training bots, allow browsing bots, or configure each one individually. Live preview, download, and deploy.
Choose a starting point, then fine-tune below
These bots collect your content to train AI models
Collects data for AI model training
Feeds content into Gemini AI training
Builds open dataset used by many AI labs
TikTok's crawler for AI training
These bots fetch pages live when users ask AI assistants questions
Browses pages live when ChatGPT users ask
Powers Perplexity's AI search answers
Fetches pages for Claude AI web access
These bots feed AI-powered features like Siri and Google AI
Feeds Apple Intelligence, Siri, Spotlight
Google's catch-all for non-search AI tasks
Points crawlers to your XML sitemap.
# Your robots.txt will appear here as you configure bots
Enter a path to see which bots would be allowed or blocked.
We won't spam you, just a heads-up when everything is live.
Ooty SEO shows which bots visit every page, how often they return, and which content they skip entirely. Query your crawl data and AI visibility inside ChatGPT, Gemini, or Claude.
See Ooty SEO
14-day money-back guarantee. No questions asked.
Copy and deploy your robots.txt
Save the generated file as robots.txt at your domain root (e.g. yoursite.com/robots.txt). Every bot checks this location.
Test it in Google Search Console
Use the robots.txt Tester in Google Search Console to confirm Googlebot can reach the pages you intend to allow.
Check AI crawler access
Run the AI Readiness Checker to confirm GPTBot, ClaudeBot, and PerplexityBot have the access your robots.txt intends to grant.
Validate your sitemap
Make sure your sitemap is referenced in your robots.txt and accessible. Run the Sitemap Validator to check it is well-formed and indexed.
AI Readiness Checker
Check if AI crawlers can access your site
SEO Content Analyzer
44-check SEO audit for any URL
Schema Markup Validator
Validate JSON-LD and check rich result eligibility
Meta Tag Analyzer
Analyze title, description, and OG tags
Sitemap Validator
Validate XML sitemap structure and URL count
Topic Cluster Analyzer
Visualize your site's topic distribution
HTTP Status Checker
Bulk check with AI crawler user-agent testing
Nine AI crawlers now visit websites regularly. They fall into three categories: training crawlers that collect data to improve AI models, browsing crawlers that fetch pages in real time when users ask AI assistants questions, and multi-purpose crawlers that do both.
GPTBot (OpenAI), CCBot (Common Crawl), Bytespider (ByteDance), Google-Extended (Google). These collect content to train language models. Blocking them means your content is not used for training, but also not available when the AI needs to reference it from memory.
ChatGPT-User (OpenAI), PerplexityBot (Perplexity). These fetch pages live during conversations. If a user asks ChatGPT to check your pricing page, ChatGPT-User visits it in real time. Blocking these bots makes your site invisible during live AI interactions.
GoogleOther, ClaudeBot, and Applebot-Extended serve multiple purposes. GoogleOther feeds experimental Google products. ClaudeBot collects training data and powers search. Applebot-Extended feeds Apple Intelligence.
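As a sketch of how these categories translate into rules, a robots.txt that blocks the training crawlers above while allowing the browsing bots might look like the following (the user-agent tokens are the names documented on this page; the Sitemap URL is a placeholder to replace with your own):

```txt
# Block AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Bytespider
Disallow: /

User-agent: Google-Extended
Disallow: /

# Allow live browsing bots
User-agent: ChatGPT-User
Allow: /

User-agent: PerplexityBot
Allow: /

# Everything else, including search engine crawlers
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Multi-purpose bots such as ClaudeBot or Applebot-Extended get their own `User-agent` blocks in the same way, depending on which side of the trade-off you want for each.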
Use the AI Readiness Checker to verify your robots.txt is working as intended and that crawler access matches your strategy.
Place robots.txt at the root of your domain (e.g., https://example.com/robots.txt) and serve it with Content-Type: text/plain. For specific CMS platforms: WordPress places it at the site root automatically (but check for conflicting plugin rules); Shopify manages robots.txt through the admin panel; for static sites (Next.js, Gatsby, Hugo), place the file in your public directory.
Use Disallow: / to block the entire site. Crawlers request example.com/robots.txt and read your rules before fetching anything else. After generating your robots.txt, validate your XML sitemap to make sure it is referenced correctly. Run the SEO Analyzer on key pages to check crawlability, and use the HTTP Status Checker to confirm pages return the expected status codes.
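If you want to sanity-check a draft locally before deploying it, Python's standard-library urllib.robotparser can evaluate your rules per user agent. A minimal sketch, using a hypothetical rules string rather than your actual generated file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical draft: block GPTBot site-wide, allow every other bot.
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# can_fetch(user_agent, url) answers: may this bot request this path?
print(parser.can_fetch("GPTBot", "https://example.com/pricing"))        # False
print(parser.can_fetch("PerplexityBot", "https://example.com/pricing")) # True
```

This only checks your rule logic, not the live file, so it complements rather than replaces a crawler-access check against the deployed robots.txt.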