For a while now, something in our Google Search Console data has been bugging me. We’re seeing AI-style search queries mixed in with normal traffic, on our own site and across client accounts. Long, structured strings that look like prompts rather than how humans actually search.
Things like:
- “difference between $5000 website and $20000 website features functionality cost breakdown”
- “what roas should I target for google ads ecommerce business 40% profit margin realistic expectations”
- “average cac customer acquisition cost e-commerce coffee tea kitchen appliances benchmarks”
These queries run 10 to 15 words. They read like someone talking to ChatGPT, not typing into Google. And they’re showing repeated impressions over time. It’s hard to believe these are all manual searches. The phrasing is too consistent and too prompt-like.
If you want to spot these in your own data, here’s what to look for: multi-clause queries with constraints (profit margins, benchmarks, “realistic expectations”), structured “difference between X and Y” comparisons, queries that read like a brief to a consultant (scenario + industry + metric), and often impressions with low or zero clicks.
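If you want to automate that first pass, the heuristics above can be sketched as a simple filter. The word-count threshold and regex patterns here are illustrative assumptions, not firm rules; tune them to your own data.

```python
import re

# Signals of "prompt-style" queries: comparison structures and explicit
# constraints (margins, budgets, benchmarks). Patterns are assumptions.
COMPARISON = re.compile(r"\bdifference between\b|\bvs\.?\b", re.IGNORECASE)
CONSTRAINT = re.compile(r"\d+%|\$\d+|benchmarks?|realistic expectations",
                        re.IGNORECASE)

def looks_prompt_like(query: str, min_words: int = 9) -> bool:
    """Flag long, multi-clause queries that read like AI prompts."""
    if len(query.split()) < min_words:
        return False
    # A long query with a comparison or an explicit constraint is worth
    # a manual look.
    return bool(COMPARISON.search(query) or CONSTRAINT.search(query))

print(looks_prompt_like("what roas should I target for google ads "
                        "ecommerce business 40% profit margin "
                        "realistic expectations"))  # True
print(looks_prompt_like("coffee shops near me"))    # False
```

This is a triage filter, not a classifier: it over-flags on purpose so a human can review the shortlist.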
So I finally decided to dig in and figure out what’s actually going on. What I found changed how I think about this data entirely.
Where these queries come from

After some research and conversations with other SEO practitioners, I think there are a few things happening here, possibly all at once.
People are searching differently now. Google’s Year in Search 2025 data shows conversational queries like “Tell me about…” are up 70% year over year. People are treating Google more like an AI assistant than a keyword machine. Some of these long queries might genuinely be humans typing the way they’d talk to ChatGPT.
Monitoring tools are generating queries. We use a tool called Peec.ai to track how our brand shows up in AI search results across ChatGPT, Perplexity, Claude, and others. Tools like this (Otterly, Profound, and similar products) run the same prompts repeatedly to monitor outputs. Depending on how those systems retrieve information, some of that activity can leak into web search analytics.
AI systems might be evaluating content. Google’s documentation is explicit that traffic from AI features (AI Overviews and AI Mode) is included in Search Console’s Performance reporting under “Web.” Some of these queries look like evaluation-style prompts and show impressions with zero clicks. I can’t prove what’s generating them, but the pattern is consistent enough that I treat it as signal.
In late 2025, this got more concrete. Developers started noticing full ChatGPT prompts appearing in their Search Console data. Not summaries or search terms. Actual conversational prompts that people had typed into ChatGPT. Researchers traced it to a bug where ChatGPT was sending raw prompts to Google instead of cleaned-up search queries. OpenAI acknowledged and fixed it, but the incident confirmed something important: AI assistants can route user prompts through traditional search in ways that leave footprints in analytics.
One caveat worth noting: you won’t always be able to isolate AI Mode queries cleanly in Search Console. Google rolls AI Mode and AI Overview data into your standard Performance reporting. The query patterns still show up if you know what to look for, but you’re working with blended data.
I can’t tell you definitively which explanation accounts for any specific query in our data. Probably all three are happening. But I stopped treating this like a forensic problem and started treating it like strategy input.
A window into how people actually prompt AI
Whether these queries come from humans searching conversationally, marketers monitoring AI visibility, or AI systems evaluating content, they represent something valuable: a proxy for how people are actually using AI to find information.
Think about the monitoring tool angle for a second. If a marketer is tracking whether AI cites content about “average CAC for e-commerce benchmarks,” they chose that prompt because they believe it represents real user intent. They’re not picking random strings. They’re tracking the questions they think matter in their space. That’s market intelligence.
And if humans are actually typing these long, specific queries into Google? That tells us how search behavior is evolving. People are asking questions the way they’d ask a consultant, not the way they’d query a search engine.
Either way, these prompt-style queries are signals. They tell us what questions are being asked in our space, what level of specificity people expect, and what kind of content would actually be useful.
The question is what to do with that information.
Two ways to use this data

I’ve started treating these weird queries as a strategic tool rather than noise to filter out. There are two specific applications that have been useful.
First: content gap identification.
When you see a prompt-style query related to your expertise, ask yourself: do we have content that would answer this well?
Not “do we have a blog post that touches on this topic.” Do we have something substantive and specific enough that an AI system would confidently cite it as a source?
Take “difference between $5000 website and $20000 website features functionality cost breakdown.” That’s a real question people have. If we don’t have content that addresses it with actual specifics (not vague “it depends” answers, but real frameworks and numbers), that’s a gap.
The prompt-style queries showing up in your Search Console are essentially a list of questions being asked in your space. If those questions resonate with your expertise and you don’t have content that answers them well, you’ve found your next content priorities.
This is different from traditional keyword research. Keyword tools show you search volume for terms. This shows you the actual questions people are bringing to AI, phrased the way they’d phrase them.
Second: citation source research.
Take a prompt-style query from your Search Console and run it through ChatGPT, Perplexity, or Google’s AI Mode. See who gets cited.
The sources that show up in AI-generated answers aren’t random. They’re the sites and publications that AI systems have determined are trustworthy and relevant for that type of question. That’s your hit list.
If a particular publication consistently gets cited for queries in your space, getting mentioned there has compounding value. It’s not just the direct traffic or backlink. It’s that AI systems will see that mention as a signal of your credibility, which influences whether they cite you in the future.
We’ve started keeping a running list: queries that matter to us, who’s currently getting cited for them, and where the gaps are. It’s become part of how we think about both content strategy and outreach.
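A running list like that doesn’t need tooling; even a small script-friendly structure works. This sketch is hypothetical, with placeholder domains and invented entries, just to show the shape: query, who’s cited, and a gap check.

```python
# Illustrative placeholder data, not real citation results.
watchlist = [
    {"query": "average cac e-commerce benchmarks",
     "cited": ["benchmarkreport.example", "industryblog.example"]},
    {"query": "difference between $5000 and $20000 website",
     "cited": ["ourdomain.example", "competitor.example"]},
]

def citation_gaps(entries, our_domain):
    """Return queries where our domain is absent from the cited sources."""
    return [e["query"] for e in entries if our_domain not in e["cited"]]

print(citation_gaps(watchlist, "ourdomain.example"))
# ['average cac e-commerce benchmarks']
```

The point of the structure is the gap check: every query where you’re not on the cited list is either a content brief or an outreach target.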
What makes content citable
Running these queries through AI tools has also taught us something about what kind of content actually gets cited.
It’s not the generic explainer posts. When I run “average cac customer acquisition cost e-commerce benchmarks” through Perplexity, it’s not citing “What is Customer Acquisition Cost: A Complete Guide.” It’s citing specific benchmark reports with actual numbers, industry-specific breakdowns, and methodology explanations.
The queries themselves tell you what’s needed. Look at them:
- “difference between $5000 website and $20000 website features functionality cost breakdown” wants specifics and structure
- “what roas should I target for google ads ecommerce business 40% profit margin realistic expectations” wants actual benchmarks for a specific scenario
- “average cac customer acquisition cost e-commerce coffee tea kitchen appliances benchmarks” wants industry-specific data
These are consultant-level questions. They want consultant-level answers.
The content that gets cited is the content that actually provides that depth: real numbers, specific frameworks, clear methodology, evidence of expertise. And here’s what I’ve noticed: if ten sites say roughly the same thing, AI picks the one with the unique data point or the clearest breakdown.
In a world where AI can synthesize ten identical blog posts in seconds, it prioritizes Information Gain: new data, a unique framework, or a first-person case study that doesn’t exist elsewhere. Being “comprehensive” in the sense of covering everything superficially doesn’t cut it. Being comprehensive in the sense of actually answering the question with substance that nobody else provides? That’s what gets cited.
The practical workflow

Here’s how we’re actually using this:
Monthly, we export Search Console queries and filter for length. Anything over 8-10 words gets flagged for review. We’re looking for queries that read like AI prompts rather than traditional searches.
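That filtering step takes a few lines of Python. This sketch assumes a CSV export from the Search Console UI with a "Top queries" column; adjust the column name and threshold to match your export.

```python
import csv

def flag_long_queries(path, min_words=9):
    """Return queries with at least min_words words, longest first."""
    with open(path, newline="", encoding="utf-8") as f:
        flagged = [row["Top queries"] for row in csv.DictReader(f)
                   if len(row["Top queries"].split()) >= min_words]
    # Longest queries first, since they're the most prompt-like.
    return sorted(flagged, key=lambda q: len(q.split()), reverse=True)

# Usage: flag_long_queries("Queries.csv") -> shortlist for manual review.
```

The output is a review shortlist, not a verdict; the categorization step that follows is still manual.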
We categorize by relevance. Some of these queries are completely outside our expertise. We ignore those. But the ones that relate to what we do? Those go on a list.
For each relevant query, we ask two questions. Do we have content that would answer this well? If not, it’s a content opportunity. If we do have content, is it actually getting cited? We test by running the query through AI tools.
We track who’s getting cited instead of us. If there’s a query we should be winning and we’re not, we look at who is. That tells us either what our content is missing or where we need to pursue mentions.
We update the list as new queries appear. This isn’t a one-time audit. The prompt-style queries in Search Console keep evolving as AI usage patterns change.
Worth noting: Search Console caps query reporting at 1,000 queries and filters rare queries for privacy. So what you’re seeing is the visible portion of a much bigger pattern.
It’s not complicated. It’s just treating a weird data artifact as the signal it actually is.
Here’s a concrete example: we found “difference between $5000 website and $20000 website” showing up in our data. We had blog content that touched on pricing, but nothing that actually broke down what you get at different investment levels with real specifics. So we briefed a new piece designed to be the answer to that exact question. Actual deliverables at each tier, what drives cost up or down, the trade-offs people should understand. We structured it to be citable: specific enough that an AI could confidently reference it, not so hedged that it says nothing.
That’s the shift. From “we should write about website pricing sometime” to “people are asking this specific question and we should be the source AI points to.”
The bigger picture
Traditional SEO gave us keyword research tools to understand what people search for. AI is creating a new behavior pattern: conversational prompts asking specific questions. And we’re getting a glimpse of it through these strange Search Console queries.
The companies that will do well in AI-influenced search aren’t just optimizing for rankings. They’re paying attention to what questions are being asked, creating content substantive enough to be cited as a source, and building the kind of credibility signals that AI systems trust.
The queries no human would type are actually pretty useful once you know what to do with them. They’re a proxy for AI prompt behavior. They’re a content gap finder. They’re a citation hit list.
We’re still early in figuring this out. But the weird stuff in your Search Console data isn’t noise. It’s signal.
If you’re seeing similar patterns in your data and want to think through what it means for your strategy, let’s talk. This stuff is genuinely interesting to me, and sometimes a quick conversation can clarify what to pay attention to and what to ignore.







