15 April 2026 · 14 min read

ChatGPT SEO Audit: How to Audit Your Site with AI (Step by Step)

Run an SEO audit using ChatGPT. Covers technical checks, content quality, Core Web Vitals, and schema validation with specific prompts for each step.

By Maya Torres

A ChatGPT SEO audit is a manual site review where you feed page data, crawl output, or Google Search Console exports into ChatGPT and use targeted prompts to identify technical issues, content gaps, and ranking opportunities. It works best for analysis and prioritization, not for live crawling or real-time metric pulls.

This guide gives you the exact prompts for each audit step, explains where ChatGPT adds genuine value, and flags the points where you need dedicated tools instead. If you want a broader checklist that covers the full audit workflow, start with our SEO audit checklist.

What a ChatGPT SEO audit can and cannot do

ChatGPT is strong at pattern recognition, text analysis, and structured reasoning. That makes it useful for reviewing title tags in bulk, spotting thin content, evaluating schema markup, and prioritizing a list of 200 issues into the ten that actually matter. It can process a CSV of crawl data and surface the anomalies you would miss scanning rows manually.

It cannot crawl your site. It cannot pull live PageSpeed data, check your robots.txt in real time, or verify that Google has indexed a specific URL. Every technical check requires you to supply the data first, either by pasting it directly or uploading an export from a crawler like Screaming Frog, Sitebulb, or Google Search Console.

This distinction matters because ignoring it leads to hallucinated audit findings. ChatGPT will confidently tell you your robots.txt blocks JavaScript crawling if you ask without providing the file. Always supply the raw data first, then ask for analysis.

Technical SEO checks

Start with the foundation. If search engines cannot crawl and index your pages correctly, nothing else in the audit matters.

Robots.txt review

Copy your robots.txt file and paste it into ChatGPT with this prompt:

Here is my robots.txt file:

[paste contents]

Review it for:
1. Are any important page types accidentally blocked?
2. Is the sitemap URL declared?
3. Are there conflicting rules between user-agents?
4. Any security-sensitive paths that should be blocked but are not?

List issues by severity: critical, moderate, minor.
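If you want to sanity-check the first question yourself before prompting, Python's standard library can parse the same file. A minimal sketch, using a hypothetical robots.txt and placeholder paths:

```python
from urllib import robotparser

# Hypothetical robots.txt contents -- substitute your own file.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /assets/js/
Sitemap: https://example.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Paths Googlebot should be able to reach. Blocking JS/CSS is a common
# mistake: Google needs those files to render the page.
for path in ["/blog/post", "/assets/js/app.js", "/admin/login"]:
    allowed = rp.can_fetch("Googlebot", f"https://example.com{path}")
    print(f"{path}: {'allowed' if allowed else 'BLOCKED'}")
```

Run this against your real file and real URL paths, then hand the results plus the file itself to ChatGPT for the judgment calls.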

Common findings: blocking CSS or JS files (which prevents Google from rendering pages), missing sitemap declarations, and overly broad disallow rules that catch pages you actually want indexed. For a deeper look at how search engines handle JavaScript rendering, see our JavaScript SEO guide.

XML sitemap analysis

Export your sitemap (or paste it if it is short enough) and use this prompt:

Here is my XML sitemap:

[paste or upload]

Analyze it for:
1. Total URL count vs expected page count
2. Any URLs returning non-200 status codes
3. URLs that should not be in the sitemap (redirects, noindex pages, parameter URLs)
4. Missing high-priority pages
5. Last modification dates: are they realistic or all the same?

Flag anything that could hurt crawl efficiency.

A sitemap with 5,000 URLs where 800 are redirects and 200 return 404s wastes crawl budget and signals poor site maintenance.
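Point 5 in that prompt, identical lastmod dates, is easy to pre-screen locally before uploading anything. A sketch using the standard library, with a made-up sitemap:

```python
import xml.etree.ElementTree as ET

# Hypothetical sitemap -- replace with your exported file contents.
SITEMAP = """\
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc><lastmod>2026-03-01</lastmod></url>
  <url><loc>https://example.com/pricing</loc><lastmod>2026-03-01</lastmod></url>
  <url><loc>https://example.com/blog/a</loc><lastmod>2026-03-01</lastmod></url>
</urlset>
"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP)
entries = root.findall("sm:url", NS)
lastmods = {u.findtext("sm:lastmod", namespaces=NS) for u in entries}

print(f"{len(entries)} URLs in sitemap")
if len(entries) > 1 and len(lastmods) == 1:
    print("Warning: every lastmod is identical -- likely auto-generated, not real")
```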

Crawl data analysis

This is where ChatGPT earns its value. Export your crawl data from Screaming Frog or any crawler as a CSV, upload it, and use:

I have uploaded a crawl export. Analyze it for:
1. Pages with missing or duplicate title tags
2. Pages with missing or duplicate meta descriptions
3. Orphan pages (no internal links pointing to them)
4. Redirect chains longer than 2 hops
5. Pages with response times over 1 second
6. Broken internal links (4xx status codes)

Prioritize findings by: number of affected pages x likely traffic impact.
Give me a table with columns: issue, affected URLs count, severity, recommended fix.

Uploading the data is the key step. Without it, you get generic advice. With it, you get specific findings tied to your actual pages.
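The duplicate, slow-page, and broken-link checks are also scriptable once the export is on disk, which is a useful cross-check on what ChatGPT reports. A sketch over hypothetical crawl rows:

```python
from collections import defaultdict

# Hypothetical rows from a crawl export: (url, title, status_code, response_ms).
crawl = [
    ("https://example.com/",         "Acme Home", 200, 420),
    ("https://example.com/pricing",  "Acme Home", 200, 380),   # duplicate title
    ("https://example.com/blog/a",   "Post A",    200, 1350),  # slow response
    ("https://example.com/old-page", "Old Page",  404, 95),    # broken page
]

by_title = defaultdict(list)
for url, title, status, ms in crawl:
    by_title[title].append(url)

duplicate_titles = {t: urls for t, urls in by_title.items() if len(urls) > 1}
slow = [url for url, _, _, ms in crawl if ms > 1000]
broken = [url for url, _, status, _ in crawl if 400 <= status < 500]

print("duplicate titles:", duplicate_titles)
print("slow (>1s):", slow)
print("broken (4xx):", broken)
```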

On-page SEO audit prompts

Once the technical foundation checks out, move to on-page elements. Export a list of your top 50 pages by traffic (from Google Analytics or Search Console) with their title tags, meta descriptions, H1s, and word counts.

Title tag and meta description review

Here are my top 50 pages with their title tags and meta descriptions:

[paste or upload CSV]

For each page, evaluate:
1. Is the title tag under 60 characters? If over, what gets truncated?
2. Does the title include the primary keyword near the front?
3. Is the meta description under 155 characters and does it contain a clear value proposition?
4. Are any titles or descriptions duplicated across pages?
5. Do any titles use filler words that waste character space?

Output a table: URL, current title, issue, suggested revision.
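Checks 1 and 4 in that prompt are deterministic, so it is worth verifying ChatGPT's character counts yourself. A few lines of Python, with invented page data:

```python
TITLE_LIMIT, DESC_LIMIT = 60, 155

# Hypothetical (url, title, meta_description) rows.
pages = [
    ("/pricing", "Pricing", "Simple plans for every team."),
    ("/features",
     "Features and Tools and Integrations and Everything Else We Offer",
     "See all features."),
]

for url, title, desc in pages:
    issues = []
    if len(title) > TITLE_LIMIT:
        issues.append(f"title is {len(title)} chars; '{title[TITLE_LIMIT:]}' truncates")
    if len(desc) > DESC_LIMIT:
        issues.append(f"description is {len(desc)} chars")
    print(url, issues or "OK")
```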

Heading structure analysis

Here is the heading structure for [URL]:

[paste H1-H3 hierarchy]

Review for:
1. Is there exactly one H1?
2. Does the H1 match the search intent for the target keyword?
3. Is the heading hierarchy logical (H2s under H1, H3s under H2s)?
4. Are headings descriptive or generic ("Our Services", "Why Choose Us")?
5. Do headings include relevant secondary keywords naturally?

Generic headings like "Our Approach" or "Learn More" waste opportunities to signal relevance. Every H2 should tell both readers and search engines what that section covers.

Content depth assessment

Here is the full text content of [URL], targeting the keyword "[keyword]":

[paste content]

Evaluate:
1. Does the content fully answer the search intent behind this keyword?
2. What subtopics do top-ranking competitors cover that this page misses?
3. Is there thin content (sections under 50 words that say nothing specific)?
4. Are claims supported with data, examples, or citations?
5. Reading level: is it appropriate for the target audience?

Suggest specific additions, not vague "add more detail" recommendations.

Core Web Vitals analysis

ChatGPT cannot pull live Core Web Vitals data, but it can analyze PageSpeed Insights JSON exports or CrUX data you provide. Run your key pages through PageSpeed Insights, copy the results, and paste them in.

Here are my PageSpeed Insights results for 10 pages:

[paste data]

For each page:
1. Which Core Web Vital metric is failing or closest to failing?
2. What are the top 3 specific recommendations from the diagnostics?
3. Across all pages, what is the most common performance issue?

Prioritize fixes that would improve the most pages at once.
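If you pull results from the PageSpeed Insights API rather than the web UI, you can extract just the failing metrics before pasting. This sketch assumes the v5 response shape (field data under loadingExperience.metrics, with a FAST/AVERAGE/SLOW category per metric); the numbers are invented, so verify field names against the current API docs:

```python
# Invented numbers in the shape of a PageSpeed Insights API v5 response.
psi = {
    "loadingExperience": {
        "metrics": {
            "LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 3200, "category": "AVERAGE"},
            "CUMULATIVE_LAYOUT_SHIFT_SCORE": {"percentile": 4, "category": "FAST"},
            "INTERACTION_TO_NEXT_PAINT": {"percentile": 520, "category": "SLOW"},
        }
    }
}

# Keep only the metrics that are not passing cleanly.
failing = sorted(
    (name, m["percentile"])
    for name, m in psi["loadingExperience"]["metrics"].items()
    if m["category"] != "FAST"
)
for name, value in failing:
    print(f"{name}: p75 = {value} (not passing)")
```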

Why CWV data requires context

Raw CWV numbers mean nothing without understanding the user base. CrUX (Chrome User Experience Report) data varies enormously by geography and connection quality.

Consider these real CrUX figures by country. Austria has 82.8% 4G connectivity and an average round-trip time of 106ms, resulting in 46.2% of origins passing all three Core Web Vitals. Liechtenstein has similar 4G rates (80.2%) with a faster 59ms RTT, and hits 45.0% good CWV. Both seem like they should perform well on paper.

Then look at Moldova: 82.0% 4G connectivity and 117ms RTT. Nearly identical infrastructure numbers to Austria. But only 29.9% of origins pass all three CWV thresholds. Network speed alone does not determine user experience. Server locations, popular CMS platforms, and local web development practices all play a role.

The Faroe Islands are the counterexample. Only 68.3% 4G connectivity with a 113ms RTT, yet 47.1% of origins pass all CWV. That is the best CWV pass rate in this group, despite the weakest connection infrastructure. Smaller origin pools and lighter page weights can outperform raw bandwidth.

When ChatGPT analyzes your CWV data, give it your audience geography. A 3.2 second LCP might be acceptable if 80% of your users are on fast European connections. It is a critical problem if your audience is in the Philippines, where only 17.6% of origins pass all three CWV metrics, with LCP pass rates at just 38.7%.

Schema markup validation

Schema validation is one of ChatGPT's strongest audit areas because it is pure structured data analysis with no live crawling required.

Here is the JSON-LD schema markup from my homepage:

[paste JSON-LD]

Validate against schema.org specifications:
1. Are all required properties present for each schema type?
2. Are there any deprecated properties?
3. Does the markup accurately represent what is on the page?
4. Would Google's Rich Results Test likely flag any errors?
5. Are there schema types I should add based on the page content?

Show me the corrected JSON-LD if changes are needed.

For pages with multiple schema types (Article + Organization + BreadcrumbList), paste all of them. ChatGPT can check for conflicts between overlapping schemas that tools sometimes miss. Our schema markup types guide covers which types apply to which page types.
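You can catch the most basic failure, a missing required property, before prompting at all. The required-property sets below are illustrative only, not authoritative; Google's structured data documentation is the source of truth per type:

```python
import json

# Illustrative minimums only -- confirm against Google's structured data docs.
REQUIRED = {
    "Product": {"name", "offers"},
    "FAQPage": {"mainEntity"},
    "Article": {"headline"},
}

# Hypothetical JSON-LD pasted from a page.
raw = '{"@context": "https://schema.org", "@type": "Product", "name": "Widget"}'
data = json.loads(raw)

schema_type = data.get("@type")
missing = REQUIRED.get(schema_type, set()) - data.keys()
print(f"{schema_type}: missing {sorted(missing) or 'nothing'}")
```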

Product and FAQ schema

If you run an e-commerce site or have FAQ sections:

Review this Product schema for completeness:

[paste JSON-LD]

Check:
1. Are price, availability, and currency specified correctly?
2. Is the review/rating markup connected to actual reviews on the page?
3. Are offers structured with valid priceValidUntil dates?
4. Does anything violate Google's structured data guidelines
   (e.g., marking up content not visible to users)?

Google penalizes invisible schema. If your FAQ schema references answers that are not on the page, or your product schema claims a rating based on reviews that do not exist, you risk a manual action.

Internal linking audit

Export your internal link data from a crawler and upload it. This is another area where ChatGPT's pattern matching shines.

I have uploaded my internal linking data (source URL, destination URL, anchor text).

Analyze for:
1. Pages with fewer than 3 internal links pointing to them
2. Top pages by traffic that link to very few other pages (hoarding link equity)
3. Anchor text distribution: are we over-using exact match anchors?
4. Orphan pages with zero internal links
5. Cluster analysis: group pages by topic and show where
   cross-links between related topics are missing

Suggest 10 specific internal links to add, with source page, destination, and anchor text.

Internal linking is the one ranking factor entirely within your control. Most sites under-link their money pages from their informational content. If you have a guide about a topic that ranks well but does not link to your product or service page for that topic, you are leaving value on the table.
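The orphan and under-linked checks in that prompt can be reproduced locally as a cross-check. A sketch over made-up link rows (the page set and the homepage exclusion are assumptions for the example):

```python
from collections import Counter

# Hypothetical (source, destination, anchor_text) rows from a crawler export.
links = [
    ("/guide/topic",  "/blog/related", "related post"),
    ("/",             "/pricing",      "pricing"),
    ("/blog/related", "/pricing",      "see pricing"),
]
all_pages = {"/", "/guide/topic", "/blog/related", "/pricing", "/services/topic"}

inlinks = Counter(dst for _, dst, _ in links)
orphans = sorted(p for p in all_pages if inlinks[p] == 0 and p != "/")
under_linked = sorted(p for p in all_pages if 0 < inlinks[p] < 3)

print("orphans:", orphans)           # no internal link reaches these
print("under-linked:", under_linked) # fewer than 3 inlinks
```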

Content quality assessment

Upload a batch of pages (or a content inventory spreadsheet) for a systematic review.

Here is a content inventory with columns: URL, title, word count, publish date,
last updated, monthly organic sessions.

Identify:
1. Thin content: pages under 500 words on topics that need depth
2. Stale content: pages not updated in over 18 months with declining traffic
3. Cannibalizing pages: multiple URLs targeting the same primary keyword
4. High-word-count, low-traffic pages (possible quality or intent mismatch)
5. Content gaps: topics where we have related content but no dedicated page

For stale content, recommend: update, consolidate, or remove.

Content decay is real. A post that ranked well 18 months ago has probably been outranked by fresher competitors. The fix is often a content refresh rather than a new page. See our content refresh strategy for the full workflow.

Detecting keyword cannibalization

Here is a Search Console export showing queries, pages, impressions, clicks,
and positions:

[upload CSV]

Find queries where multiple pages from my site appear, especially where:
1. Both pages rank between positions 5-20 (neither dominant)
2. Impressions are split roughly evenly
3. Neither page has strong click-through rates

For each cannibalization instance, recommend whether to:
consolidate into one page, differentiate the targeting, or add canonical tags.

Cannibalization is one of the most overlooked issues in SEO. Two mediocre rankings are almost always worse than one strong one.
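The detection logic itself is simple enough to pre-filter your Search Console export before uploading, so ChatGPT only sees candidate queries. A sketch with invented rows:

```python
from collections import defaultdict

# Hypothetical Search Console rows: (query, page, impressions, clicks, position).
rows = [
    ("crm pricing", "/pricing",       900, 40, 6.2),
    ("crm pricing", "/blog/crm-cost", 850, 12, 9.8),
    ("crm guide",   "/blog/crm",     1200, 80, 3.1),
]

by_query = defaultdict(list)
for query, page, imp, clicks, pos in rows:
    by_query[query].append((page, imp, pos))

# Candidates: multiple pages for one query, all stuck in positions 5-20.
cannibalized = {
    q: pages for q, pages in by_query.items()
    if len(pages) > 1 and all(5 <= pos <= 20 for _, _, pos in pages)
}
for q, pages in cannibalized.items():
    print(q, "->", [p for p, _, _ in pages])
```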

Mobile-first audit considerations

Google indexes the mobile version of your site. A desktop-only audit misses what Google actually evaluates.

Why mobile performance varies so dramatically

The gap between mobile experiences across markets is far larger than most auditors assume. CLS (Cumulative Layout Shift) is generally the easiest Core Web Vital to pass, with global pass rates between 60% and 75%. INP (Interaction to Next Paint) is the newest bottleneck, with pass rates ranging from 15% to 46% depending on the country.

In Saudi Arabia, 40.2% of origins pass LCP thresholds, but only 20.0% pass all three CWV combined. That gap tells you INP and CLS are dragging down overall scores even where loading speed is acceptable. In the Philippines, the numbers are starker: 38.7% pass LCP, 73.5% pass CLS, but only 35.1% pass INP, pulling the combined pass rate to 17.6%.

The pattern is consistent across developing markets. CLS is usually fine because it is a layout problem, not a speed problem. LCP depends on connection speed and server infrastructure. INP depends on JavaScript complexity, and heavy JavaScript frameworks punish mobile devices with slower processors hardest.

At the extremes, the variance is enormous. Congo shows just 5.2% of origins passing all three CWV, while top-performing countries clear 45%. If your audience spans multiple regions, a single PageSpeed score is meaningless. You need CrUX data segmented by country.

Here are my Google Analytics audience demographics by country,
and here are my CrUX scores:

[paste both]

1. Which countries represent significant traffic but have poor CWV pass rates?
2. For those countries, which specific CWV metric is the bottleneck?
3. What server-side or CDN changes would improve scores for those regions?
4. Are there pages where mobile and desktop scores diverge by more than 20 points?

Mobile rendering check

Here is the mobile viewport HTML source for [URL]:

[paste source]

Check for:
1. Viewport meta tag present and configured correctly
2. Touch targets smaller than 48x48 pixels
3. Horizontal scrolling caused by fixed-width elements
4. Font sizes below 16px (causes zoom on iOS)
5. Images without responsive sizing (no srcset or sizes attributes)
6. Intrusive interstitials or popups that cover content on load
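Checks 1 and 5 in that prompt are mechanical enough to script, which frees ChatGPT for the judgment calls. A sketch using the standard library HTML parser on a made-up page source:

```python
from html.parser import HTMLParser

class MobileChecks(HTMLParser):
    """Collects a few mobile-readiness signals from raw HTML."""

    def __init__(self):
        super().__init__()
        self.viewport = None
        self.imgs_without_srcset = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name") == "viewport":
            self.viewport = a.get("content", "")
        elif tag == "img" and "srcset" not in a:
            self.imgs_without_srcset.append(a.get("src", "(no src)"))

# Hypothetical page source -- substitute the real mobile HTML.
html = (
    '<html><head><meta name="viewport" content="width=device-width, initial-scale=1">'
    '</head><body><img src="/hero.jpg"></body></html>'
)
checker = MobileChecks()
checker.feed(html)
print("viewport:", checker.viewport or "MISSING")
print("images without srcset:", checker.imgs_without_srcset)
```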

Building an automated audit workflow

Individual prompts are useful for one-off audits. For ongoing monitoring, build a repeatable workflow.

Step 1: Create a custom GPT or saved prompt chain. Save your audit prompts as a sequence. Start with technical checks, then on-page, then content quality. Each step's output feeds into the next.

Step 2: Standardize your data exports. Create a recurring export schedule: Search Console data weekly, crawl data monthly, CrUX data quarterly. Use consistent CSV formats so your prompts work without modification.

Step 3: Build a comparison template. The real value is trend analysis. Ask ChatGPT to compare this month's crawl data against last month's:

I have two crawl exports: one from March and one from April.

Compare and report:
1. New 4xx errors that appeared since March
2. Pages where response time increased by more than 500ms
3. New duplicate title tags or meta descriptions
4. Pages that lost internal links
5. Any new redirect chains

Show only changes, not the full audit. Prioritize regressions.

Step 4: Set severity thresholds. Define what matters for your site. A 4xx error on a page with 10,000 monthly visits is urgent. The same error on a page with 2 visits is not. Tell ChatGPT your traffic data so it can weight findings accordingly.
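The month-over-month comparison can also be pre-computed so ChatGPT only sees the delta, which keeps uploads small. A sketch with invented snapshots:

```python
# Hypothetical snapshots: {url: (status_code, response_ms)} per monthly crawl.
march = {"/": (200, 400), "/pricing": (200, 350), "/blog/a": (200, 600)}
april = {"/": (200, 420), "/pricing": (404, 90),  "/blog/a": (200, 1300)}

# Pages that newly return a 4xx since the last crawl.
new_4xx = [
    u for u, (status, _) in april.items()
    if status >= 400 and march.get(u, (200, 0))[0] < 400
]
# Pages that got more than 500ms slower.
slower = [
    u for u, (_, ms) in april.items()
    if u in march and ms - march[u][1] > 500
]

print("new 4xx since March:", new_4xx)  # regressions to fix first
print(">500ms slower:", slower)
```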

Limitations and when to use dedicated tools

ChatGPT is a general-purpose language model. It is not an SEO crawler, a rank tracker, or a site monitoring tool. Knowing the boundaries prevents wasted effort and false confidence.

ChatGPT cannot crawl your site live. It analyzes data you provide. If you skip a page in your export, that page does not exist in the audit. Dedicated crawlers like Screaming Frog or Sitebulb discover pages you forgot about, including ones you did not know existed.

It cannot access real-time search data. Rankings change daily. ChatGPT works with whatever snapshot you feed it. For ongoing rank tracking, position monitoring tools are necessary.

It hallucinates technical details. Ask ChatGPT about a specific Google algorithm update and it may invent dates, thresholds, or impacts. Always verify technical claims against official documentation. This is not unique to SEO. It is a limitation of all large language models.

It misses visual layout issues. CLS problems, overlapping elements, and broken responsive layouts all require visual inspection. A PageSpeed screenshot or a manual mobile check catches what text analysis cannot.

When to use dedicated tools instead:

  • Live crawling and monitoring: Screaming Frog, Sitebulb, or Ahrefs Site Audit.
  • Rank tracking over time: Any rank tracking tool with daily position monitoring.
  • Automated CWV monitoring: CrUX API integrations or tools that pull PageSpeed data programmatically.
  • Log file analysis: Screaming Frog Log Analyzer or similar. ChatGPT cannot process raw server logs at the scale needed for meaningful crawl analysis.

For teams that want AI-powered SEO analysis connected to live data sources (Search Console, PageSpeed, CrUX) rather than manual CSV exports, tools like Ooty SEO bridge that gap by giving AI assistants direct API access to your real metrics. And if you are comparing ChatGPT with other AI approaches, our guide on how to use AI for SEO covers the broader landscape.

The best audit workflow combines both. Use dedicated tools for data collection and live monitoring. Use ChatGPT (or any capable LLM) for analysis, pattern detection, and prioritization. The data gathering is the commodity. The interpretation is where AI adds real leverage.