Find Wasted Google Ads Spend in 15 Minutes with AI
Step-by-step: use ChatGPT, Gemini, or Claude to audit your Google Ads account. Find wasted keywords, fix budget pacing, and get a specific action list in one session.
Most Google Ads accounts waste 15-30% of budget on search terms that will never convert. Not "underperforming" terms. Terms with zero purchase intent that are burning money every single day. You can find them in 15 minutes with ChatGPT, Gemini, or Claude. Export your reports, upload them to any AI assistant, and ask five diagnostic questions. No spreadsheets. No VLOOKUP. Just a conversation that surfaces what the Google Ads interface deliberately buries across dozens of report views. The technique works because AI cross-references spend, conversion rate, ROAS, and search term quality simultaneously. A human reviewing 200+ keywords sorts by one column at a time. ChatGPT Google Ads analysis handles all columns at once and catches patterns that single-column sorting physically cannot reveal.
Here's the thing about "wasted" spend that nobody wants to hear. The waste isn't usually where people look. Most PPC managers open the account, sort by cost descending, and start pausing expensive keywords. That's backwards. The expensive keywords are expensive because they get traffic. Sometimes that traffic converts. The real waste hides in the medium-spend keywords, the ones burning $15-30/day with terrible search term quality that nobody checks because they're not expensive enough to trigger alarm bells.
I've seen accounts where the top 10 keywords by spend had a 4x ROAS. Perfectly healthy. But 40 keywords in the $10-25/day range had a combined ROAS of 0.3x. That's where $12,000/month was disappearing. Nobody noticed because no single keyword looked catastrophic.
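That leaky middle tier is easy to check from a keyword export. A minimal sketch — the spend bands, thresholds, and field names here are my assumptions, not Google Ads API fields:

```python
# Group keywords into daily-spend bands and compute the combined ROAS per
# band, so a healthy top tier can't hide a money-losing middle tier.

def roas_by_spend_band(keywords):
    """keywords: list of dicts with 'daily_spend' and 'daily_conv_value'."""
    bands = {"top (>$25/day)": [], "middle ($10-25/day)": [], "tail (<$10/day)": []}
    for kw in keywords:
        spend = kw["daily_spend"]
        if spend > 25:
            bands["top (>$25/day)"].append(kw)
        elif spend >= 10:
            bands["middle ($10-25/day)"].append(kw)
        else:
            bands["tail (<$10/day)"].append(kw)
    report = {}
    for name, kws in bands.items():
        spend = sum(k["daily_spend"] for k in kws)
        value = sum(k["daily_conv_value"] for k in kws)
        report[name] = round(value / spend, 2) if spend else None
    return report
```

Run on the account described above, the top band would report a healthy multiple while the middle band reports a combined ROAS well under 1x — the number no single-keyword view surfaces.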
The AI Ads Analysis Workflow
Nine steps from account overview to prioritized action list -- in one session
1. Account Overview: spend, CTR, conversions, ROAS across all campaigns
2. Keyword Performance: best and worst keywords by ROAS and spend
3. Search Term Audit: find irrelevant queries burning budget
4. Budget Pacing: identify campaigns hitting limits or under-spending
5. Period Comparison: what changed and why it changed
6. Ad Copy Review: which variants win and which drag CTR
7. Diagnostic Deep-Dive: trace a symptom to its root cause
8. Report Summary: plain-language summary for stakeholders
9. Action List: prioritized changes with expected impact
What you'll have at the end: A short list of specific changes (keywords to pause, negatives to add, budgets to adjust) that'll immediately improve your account efficiency.
What you'll need
A Google Ads account with at least 30 days of campaign data
An AI assistant (ChatGPT, Gemini, or Claude)
Your Google Ads reports (exportable for free as CSV)
Optional: Ooty Ads for live API access (skips the export step)
Two ways to follow this tutorial
The manual way: Export your Google Ads reports as CSV (campaign performance, keyword report, search terms report). Upload them to ChatGPT, Gemini, or Claude. Ask the same diagnostic questions from this tutorial. It works and costs nothing beyond your AI subscription. The limitation: you're working with a snapshot, so you'll need to re-export for follow-up questions.
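If you want to pre-screen an export before uploading it, the zero-conversion filter is a few lines of Python. The column names ("Keyword", "Cost", "Conversions") are assumptions — match them to the headers in your actual CSV:

```python
import csv

def load_report(path):
    """Load a Google Ads CSV export as a list of row dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def zero_conversion_rows(rows, min_cost=20.0):
    """Rows spending over min_cost with zero conversions, biggest first."""
    flagged = []
    for row in rows:
        cost = float(row["Cost"].replace(",", ""))  # strip thousands separators
        if cost > min_cost and float(row["Conversions"]) == 0:
            flagged.append((row["Keyword"], cost))
    return sorted(flagged, key=lambda kv: -kv[1])
```

This is the same question you'll ask the AI in minute 4; having it as code just means you can sanity-check the AI's answer against a deterministic pass.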
The connected way: Use Ooty Ads (or similar MCP tools) to connect the Google Ads API directly to your AI assistant. Same prompts, but the AI pulls live data. Faster for follow-up questions because the AI can query the API on the fly. See the getting started guide if you need setup help.
This tutorial uses the connected approach, but every prompt works with exported CSVs too.
Minutes 1-3: The account-level scan (and why most people skip it)
Everyone jumps straight to keyword reports. It feels productive. You see numbers. You make decisions. But you're making decisions without context.
An account-level scan takes 90 seconds and answers the question that keyword analysis can't: where is the money actually going? Not at the keyword level. At the campaign level. Because campaign-level budget allocation is where most accounts go wrong, and no amount of keyword optimization fixes a bad budget split.
Give me an overview of my Google Ads account for the last 30 days.
Total spend, impressions, clicks, CTR, conversions, CPA, ROAS.
Flag anything unusual: campaigns with high spend and zero
conversions, very low CTR, or anomalous spend patterns.
ChatGPT (or whichever assistant you're using) pulls your campaign performance and highlights what matters: "Your Search campaigns are spending $1,240 with a 4.2x ROAS. One campaign, 'Brand Protection', consumed 18% of your budget with zero conversions this period."
That finding alone would take 15-20 minutes to reach manually. You'd need to open the Campaigns tab, add the right columns, export, sort, calculate percentages. The AI does it in one pass.
The pattern I see constantly: branded campaigns eating budget that should go to non-brand prospecting. Brand traffic converts well, so the ROAS looks great. But those people were going to convert anyway. You're paying for clicks you'd get organically. Meanwhile, the non-brand campaigns that actually grow the business are budget-limited and missing conversions every afternoon.
Prompt to Claude: “My overall ROAS has dropped from about 5x last month to 3.2x this month. Walk me through a diagnostic -- what are the most likely causes and how do we find which one is actually happening?”
Minutes 4-7: The keyword audit (where the real money is)
This is where the waste lives. Not in your campaign structure. Not in your bid strategy settings. In the keywords themselves.
Pull keyword performance for the last 30 days. Show me:
- Keywords with the best ROAS
- Keywords with the worst ROAS
- Keywords spending more than $20 with zero conversions
Two patterns show up in almost every account. First: keywords burning budget with no conversions. The instinct is to pause everything with zero conversions. Don't. Some keywords need 60-90 days of data before you can call them dead. The ones to pause immediately are the ones with 50+ clicks and zero conversions. At that point, the keyword isn't unlucky. It's bad.
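The 50-click rule of thumb can be made more principled with a quick binomial check: given your account's baseline conversion rate, how likely is a zero by pure chance? A sketch — the thresholds and field names are illustrative:

```python
# Binomial sanity check for "dead" keywords: under a baseline conversion
# rate, P(0 conversions in n clicks) = (1 - cvr)^n. Pause only when chance
# alone can't plausibly explain the zero.

def prob_zero_conversions(clicks, baseline_cvr):
    """Probability of seeing zero conversions in `clicks` clicks."""
    return (1 - baseline_cvr) ** clicks

def pause_candidates(keywords, baseline_cvr=0.03, max_luck=0.10):
    """Pause when bad luck explains the zero less than max_luck of the time."""
    return [
        k["keyword"]
        for k in keywords
        if k["conversions"] == 0
        and prob_zero_conversions(k["clicks"], baseline_cvr) < max_luck
    ]
```

Worth noting: at a 3% baseline conversion rate, 50 clicks with zero conversions still happens by chance about one time in five, which is why the check weighs your baseline rate rather than the raw click count alone. A high-converting account can call keywords dead sooner than a low-converting one.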
Second: high-performing keywords that are under-bid. These are the ones generating conversions at a strong ROAS, but impression share is below 50%. You're leaving money on the table. If you run Instagram or YouTube ads alongside search, the same principle applies. Find what's working. Cut what isn't. Fund the winners.
Now check what's actually triggering those keywords:
For the keywords with high spend and no conversions, show me
the actual search terms triggering them. Are people searching
for what I think they're searching for?
This is the moment that changes how most people think about ChatGPT Google Ads analysis. Because the keyword looks fine. "Project management software" seems perfectly relevant if you sell project management software. Then you look at the search terms, and the actual queries triggering that keyword are "free project management templates," "project management certification courses," and "what is project management." Different intent entirely.
Broad match does this constantly. It expands into adjacent topics that share words but not intent. The keyword report looks clean. The search terms report tells the real story. Most people never look.
Minutes 8-11: The search terms cleanup (highest ROI activity in PPC)
Search term analysis is the single highest-ROI activity in Google Ads management. It's also the one that gets deprioritized first when things get busy. Every irrelevant term you block saves money on every future impression. It compounds.
Here's the counterintuitive part. Well-managed accounts need search term audits more than neglected accounts. An account that hasn't been touched in six months has obvious waste you can see from the campaign tab. But an account that's been optimized at the keyword level? The waste is hidden in the search terms. Match types broaden silently. Google's algorithm gets "creative" with your targeting. You won't see it unless you look.
Show me my top search terms by spend for the last 30 days.
I sell [your product/service]. Flag anything irrelevant.
You'll almost certainly find irrelevant queries burning money. One agency found 23% of a client's spend going to terms that had zero purchase intent. The account manager had been optimizing keywords diligently for months. Never once pulled the search terms report. That's not unusual. It's the norm.
From those search terms, which ones should I add as negative
keywords? Give me a list I can paste directly into Google Ads.
That's a 30-minute manual task done in 2 minutes. ChatGPT Google Ads workflows are especially good at categorizing negatives into themed lists (competitors, informational queries, wrong geography) so you can apply them at the right campaign level. This matters more than people think. A negative keyword at the wrong level blocks traffic you actually want.
The other thing the AI catches that humans consistently miss: near-duplicate search terms splitting your data. "Project management tool" and "project management tools" might both be triggering clicks, but Google treats them as separate terms. Your conversion data is split across two entries, so neither one looks strong enough to bid up. The AI spots these instantly because it's reading the entire list at once.
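Pooling those split entries before judging a term is mechanical. A naive sketch — stripping a trailing "s" stands in for real stemming, which would be more robust:

```python
from collections import defaultdict

# Merge near-duplicate search terms ("project management tool" vs
# "project management tools") so conversion data split across variants
# is pooled before you decide whether to bid up or negative out.

def normalize(term):
    """Lowercase, strip naive plurals, and ignore word order."""
    tokens = [t.rstrip("s") if len(t) > 3 else t for t in term.lower().split()]
    return " ".join(sorted(tokens))

def pool_terms(rows):
    """rows: iterable of (term, clicks, conversions) tuples."""
    pooled = defaultdict(lambda: {"terms": [], "clicks": 0, "conversions": 0})
    for term, clicks, conversions in rows:
        key = normalize(term)
        pooled[key]["terms"].append(term)
        pooled[key]["clicks"] += clicks
        pooled[key]["conversions"] += conversions
    return dict(pooled)
```

Two entries at 40 and 35 clicks each look marginal; one pooled entry at 75 clicks with 3 conversions looks like a keyword worth funding.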
Minutes 12-14: Budget pacing (the silent performance killer)
Budget pacing is the problem nobody talks about because it's boring. It's not a clever optimization. It's plumbing. But it kills performance more reliably than bad keywords.
A campaign that hits its daily limit by 2pm is leaving afternoon conversions on the table. A campaign that under-spends is either too narrowly targeted or bidding too conservatively. Both problems are invisible unless you check pacing explicitly. And the Google Ads interface doesn't surface pacing issues in any standard report.
Check budget pacing across all active campaigns. Are any hitting
their daily limit before end of day? Are any significantly
under-spending? Should I reallocate?
Budget-limited campaigns miss afternoon and evening conversions, which often convert at higher rates for B2C. This is especially true for impulse-purchase categories. Someone browsing at 9pm on their phone is in a different buying mode than someone at 10am on their laptop. If your budget runs out at 2pm, you never reach those evening buyers.
Under-spending campaigns are the other silent problem. If a campaign consistently spends 40% of its daily budget, something is wrong. The targeting is too narrow, the bids are too low, or the ad copy isn't generating clicks. The AI can diagnose which one by cross-referencing impression share, average CPC, and CTR. Each cause has a different signature.
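Those signatures can be written down as rough rules. The thresholds below are illustrative, not Google benchmarks:

```python
# Rough diagnostic signatures for an under-spending campaign:
# - bids too low: losing impression share to ad rank with a cheap CPC
# - weak ad copy: shown plenty, clicked rarely
# - narrow targeting: winning most of a tiny auction pool

def diagnose_underspend(impr_share, lost_is_rank, ctr, avg_cpc, account_cpc):
    """All inputs are fractions except the CPCs (currency).
    lost_is_rank is impression share lost to ad rank."""
    if lost_is_rank > 0.40 and avg_cpc < 0.7 * account_cpc:
        return "bids too low: losing auctions on ad rank with a cheap CPC"
    if impr_share > 0.80 and ctr < 0.01:
        return "ad copy: you're being shown but not clicked"
    if impr_share > 0.80 and ctr >= 0.01:
        return "targeting too narrow: winning most of a tiny auction pool"
    return "mixed signals: pull search terms and auction insights"
```

This is exactly the cross-referencing the AI does when you ask it why a campaign spends 40% of budget; the value of the conversational version is that it already has all five inputs loaded.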
Diagnostic Reasoning: how Claude traces a ROAS drop through multiple data points to find the root cause.

Symptom: ROAS dropped.
Which campaigns dropped? Shopping campaigns, not Search.
Did CPAs increase? Yes, +45% in Shopping.
Did click volume change? No -- similar clicks, higher cost.
Search term quality? Stable -- not the cause.
Root cause: a bid strategy or product feed issue in the Shopping campaigns.
Minute 15: The action list
Everything above was diagnosis. Now the AI synthesizes the full conversation into specific, executable actions. This is where the ChatGPT Google Ads workflow pays off most. The AI has context from every question you asked. It doesn't just list problems. It prioritizes by impact.
Based on everything we've reviewed, give me the 3 most impactful
changes I should make today. Specific actions, not general advice.
A typical output: "1. Add 23 negative keywords to Non-Brand Search (stops roughly $200/week in wasted spend). 2. Pause the 3 keyword variants with $50+ spend and zero conversions. 3. Increase the daily budget for Top Products Shopping by 30%; it hits its limit by 2pm and leaves conversions on the table."
Three actions. Executable in 10 minutes. No 40-page audit report. No consulting fee. No two-week turnaround.
The uncomfortable truth about most Google Ads audits: agencies charge $2,000-5,000 for what is essentially this same analysis done manually over two weeks. The data is the same. The questions are the same. The output is the same. The difference is speed and who keeps the insight. When you run the audit yourself, you learn the account. When someone else runs it, you get a PDF.
Analysis Time Comparison: minutes per task, manual Google Ads workflow vs Claude + Ads.

Account overview: 15m manual, 1m with AI
Keyword analysis: 25m manual, 2m with AI
Search term audit: 30m manual, 3m with AI
Budget pacing check: 10m manual, 1m with AI
Period comparison: 20m manual, 2m with AI
Ad copy review: 15m manual, 2m with AI
Report writing: 30m manual, 3m with AI

Total: 145 minutes manual vs 14 minutes with AI -- roughly 90% of the time saved.
Going deeper (when you have more time)
The 15-minute version catches the obvious waste. When you have more time, these prompts dig into the structural problems that separate accounts that plateau from accounts that scale.
Period comparison:
Compare my campaign performance this month vs last month.
What changed? Which campaigns improved, which declined,
and what's the most likely cause of the biggest changes?
This is the prompt most PPC managers should run weekly and almost none of them do. Performance doesn't decline because of one bad keyword. It declines because of compounding shifts: a bid strategy that changed behavior, search terms that drifted, a competitor that entered the auction. The AI traces the decline to its source because it can check all the variables simultaneously.
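The comparison the AI runs is essentially spend-weighted deltas per campaign, so small ROAS wobbles on big budgets outrank big wobbles on tiny ones. A sketch with assumed field names:

```python
# Month-over-month changes per campaign, ranked by spend-weighted ROAS
# swing so the biggest movers surface first.

def compare_periods(last_month, this_month):
    """Both args: dict of campaign name -> {'spend': float, 'roas': float}."""
    changes = []
    for name, now in this_month.items():
        before = last_month.get(name)
        if before is None:
            changes.append((name, "new campaign", now["spend"]))
            continue
        roas_delta = now["roas"] - before["roas"]
        impact = abs(roas_delta) * now["spend"]  # weight the swing by spend
        changes.append((name, f"ROAS {before['roas']}x -> {now['roas']}x", impact))
    return sorted(changes, key=lambda c: -c[2])
```

A 0.2x dip on a $500 brand campaign ranks below a 1.5x dip on a $2,100 non-brand campaign, which is the ordering you want when deciding what to investigate first.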
Ad copy audit:
Show me ad performance by copy variation for my top 3 campaigns
by spend. CTR, conversion rate, ROAS per variant. Flag anything
significantly underperforming the top variant.
Most accounts have ad copy variants that have been running for months with statistically significant performance differences. Nobody paused the loser because it wasn't losing badly enough to notice. Across 10 campaigns, those "small" differences add up to 10-15% wasted spend on inferior copy.
Diagnostic mode (when something specific went wrong):
My ROAS dropped from 5x to 3.2x this month. Walk me through
a diagnostic. What are the most likely causes, and which one
is actually happening?
The AI runs multiple data pulls in sequence without you directing each one. It checks campaign-level changes, budget pacing, search term quality, bid strategy shifts, and tells you where the drop is concentrated. Instead of spending an hour testing hypotheses one at a time, you get the answer in one question. For broader PPC strategy across platforms, see the ChatGPT for PPC guide.
Client reporting:
Generate a weekly performance summary, this week vs last week.
Format as bullet points for a client: overall spend and ROAS,
top campaign, biggest opportunity, one thing we're fixing.
Plain language a non-technical stakeholder can read in 60 seconds. This alone saves most agency account managers 30-45 minutes per client per week. Multiply that across a portfolio and it's a full day back.
Why this works better than the way you're doing it now
Google Ads spreads performance data across dozens of views. Campaign performance, keyword reports, search terms, auction insights, budget pacing, ad copy metrics. Each one requires a separate export or tab. Manual analysis means opening five reports, exporting to Excel, writing VLOOKUP formulas, and spending 45-60 minutes before finding anything actionable. By the time you find something, you've forgotten the context from the first report.
AI assistants collapse that into a single conversation. You ask a question, the AI pulls the relevant data, cross-references it, and gives you the answer with context from everything discussed before. The time savings compound when you do this weekly instead of quarterly. Weekly audits catch problems before they become expensive. Quarterly audits find problems that have been burning money for months.
The limitation is real and worth stating clearly: AI recommendations need human judgment. The AI might suggest pausing a keyword that has brand-strategic value. It might miss context like a recent landing page change or a competitor entering the market. It doesn't know your business the way you do. The data is accurate (pulled directly from the Google Ads API). The interpretation needs a human who understands what the numbers mean for this specific business.
But here's the thing. The AI doesn't replace your judgment. It replaces the 45 minutes of data gathering that happens before you can exercise judgment. You still make the decisions. You just make them faster, with more context, and without the cognitive fatigue of switching between 12 report tabs.
Tips for better results
Tell the AI your goals. "I prioritize ROAS over volume" vs "I'm in growth mode and care about conversions more than CPA" changes the analysis significantly. Without this context, the AI defaults to efficiency recommendations, which may not match your business stage.
Don't analyze in isolation. If you changed landing pages, ran a promotion, or a competitor entered the market, mention it. Context changes how to interpret the numbers. A 20% ROAS drop after a landing page change means something completely different than a 20% ROAS drop with no changes.
Repeat the search terms audit regularly. Match types broaden over time. Google actively expands what triggers your keywords, and the pace of that expansion has accelerated. Adding negatives is an ongoing process, not a one-time task. Monthly for active accounts, quarterly at minimum.
Layer in quality score data. Ask the AI to pull quality scores alongside keyword performance. Low quality scores mean you're paying more per click than competitors for the same position. The gap is bigger than people realize. A Quality Score of 5 versus 8 can mean 30-40% higher CPCs. The Google Ads quality score guide explains why this matters and how to fix it.
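The arithmetic behind that 30-40% figure follows from the classic simplified CPC formula Google has published (actual CPC = ad rank of the advertiser below you / your Quality Score + $0.01). Modern auctions are more complex, but the leverage is similar. The auction numbers below are made up for illustration:

```python
# Classic simplified Google Ads CPC formula:
#   actual CPC = (ad rank of the advertiser below you / your QS) + $0.01

def actual_cpc(ad_rank_below, quality_score):
    return ad_rank_below / quality_score + 0.01

# Same auction, same competitor below you -- only your QS differs.
cpc_qs5 = actual_cpc(16, 5)  # Quality Score 5
cpc_qs8 = actual_cpc(16, 8)  # Quality Score 8
savings = 1 - cpc_qs8 / cpc_qs5
```

With these illustrative numbers the QS 8 advertiser pays about 37% less per click for the same position, squarely in the 30-40% range above.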
Run this weekly, not quarterly. The 15-minute version is fast enough to do weekly. Quarterly audits are post-mortems. Weekly audits are preventive maintenance. The compounding effect of catching waste early, before it runs for three months, is significant.
Check your AI readiness. Not sure if your marketing stack is ready for AI-assisted analysis? Run the AI readiness assessment to find out where you stand.