AI / LLM Visibility Audit
ChatGPT, Perplexity, Gemini, and AI Overviews. They send traffic now, and they recommend brands. This audit finds out whether yours is one of the brands they recommend, and what to do if it isn't.
Why this matters (and where I'm cautious)
I genuinely don't know how big this gets long-term. What I can tell you is what I see in the data right now: 5 to 15% of referral traffic on the sites I work with is starting to come from LLMs and AI search. That's not nothing. It's also not a reason to redo your whole content strategy.
Most of the "GEO" content out there is hype and guesswork. This audit is the unhyped version: what's actually happening on your site, what major LLMs say about your brand today, and the small set of things that genuinely move the needle.
What's included
Bot crawl data, prompt testing, and the structural fixes that actually help
AI bot crawl analysis
GPTBot, ClaudeBot, PerplexityBot, Google-Extended, Bytespider, and the rest. From your server logs: which bots crawl, how often, what they fetch, what they ignore. Plus whether you're accidentally blocking ones you shouldn't.
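The log analysis above can be sketched in a few lines. This is a minimal, hypothetical version that assumes combined-format access logs; the sample lines and bot list are illustrative, not a complete inventory of AI crawlers.

```python
from collections import Counter

# User-agent substrings for the AI crawlers mentioned above (not exhaustive).
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "Bytespider"]

def count_ai_bot_hits(log_lines):
    """Tally requests per AI bot from combined-format access log lines."""
    hits = Counter()
    for line in log_lines:
        for bot in AI_BOTS:
            if bot in line:  # naive substring match on the user-agent field
                hits[bot] += 1
    return hits

# Hypothetical sample log lines for illustration.
sample = [
    '1.2.3.4 - - [01/May/2025:10:00:00 +0000] "GET /pricing HTTP/1.1" 200 512 "-" "Mozilla/5.0 (compatible; GPTBot/1.2)"',
    '5.6.7.8 - - [01/May/2025:10:01:00 +0000] "GET /blog HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (compatible; ClaudeBot/1.0)"',
    '9.9.9.9 - - [01/May/2025:10:02:00 +0000] "GET / HTTP/1.1" 200 1024 "-" "Mozilla/5.0 (regular browser)"',
]
print(count_ai_bot_hits(sample))
```

The real audit also breaks hits down by URL and date to see what each bot fetches and how often, but the matching step is essentially this.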
Brand mention testing
I run a real set of prompts in ChatGPT, Perplexity, Gemini, and Claude. What do they say about your brand? Which competitors get mentioned ahead of you? Which sources are they citing? Repeated runs to catch variance.
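To make "structured set of prompts" concrete, here is a minimal sketch of the four query categories. The brand, category, and wording are placeholders; the real set is tailored per client and each prompt is run several times per assistant.

```python
# Hypothetical prompt set for brand-mention testing.
BRAND = "Example Co"                              # placeholder brand
CATEGORY = "invoicing software for freelancers"   # placeholder category

PROMPTS = {
    "informational": f"What is {BRAND} and what does it do?",
    "comparison": f"Compare the best {CATEGORY} options.",
    "recommendation": f"Which {CATEGORY} would you recommend and why?",
    "troubleshooting": f"Invoices from {BRAND} aren't sending. What should I check?",
}

for kind, prompt in PROMPTS.items():
    print(f"[{kind}] {prompt}")
```

For each run, the outputs are scored on three things: is the brand mentioned, which competitors appear first, and which sources get cited.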
llms.txt setup
A working llms.txt file tuned to what you actually want LLMs to use. Adoption is still patchy, but the spec is becoming the de facto entry point and it's cheap to get right.
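For reference, a minimal llms.txt looks like this, following the llmstxt.org proposal (an H1 site name, a one-line blockquote summary, then linked sections). The site name and URLs below are placeholders:

```markdown
# Example Co

> Example Co makes invoicing software for freelancers.

## Product

- [Pricing](https://example.com/pricing): plans and what's included
- [Feature overview](https://example.com/features): what the product does

## Optional

- [Blog](https://example.com/blog): guides and comparisons
```

The file sits at the site root (`/llms.txt`). Most of the work is deciding which pages to point at, not writing the file.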
Structured data review
FAQ, HowTo, Product, Organization, Author. The schema types that LLMs and AI Overviews actually pull from. What you have, what's broken, and what's worth adding.
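As an example of the kind of markup reviewed, here is a minimal FAQ snippet in schema.org JSON-LD. The question and answer are placeholders; a real page would mirror its visible FAQ content:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "Does Example Co integrate with Stripe?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Yes. Example Co connects to Stripe for payment collection."
      }
    }
  ]
}
```

Broken or mismatched markup (schema that contradicts the visible page) is a common finding, and it's worse than no markup at all.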
Citation-friendly content review
Why some content gets quoted by LLMs and some gets ignored. Specific data, named experts, original analysis, clear definitions. We pick the pages worth retrofitting and leave the rest alone.
AI Overviews coverage
Where you currently appear in Google's AI Overviews, where competitors do, and the queries with overview boxes you're missing. Pulled from a sample of your real GSC queries.
What you get
A document showing how you currently appear in the major LLMs (with screenshots), what's pulling traffic from AI bots, and a prioritised list of fixes. No 50-page deck. The signal-to-noise ratio in this space is already bad enough.
A 30-minute call to walk through it. Optional quarterly re-runs if you want to track how this evolves, since the LLM landscape is moving fast and any snapshot is going to age.
Common questions
Is this real or just hype?
Both, honestly. The traffic is real but small for most sites. Whether it's worth the work depends on your audience. B2B SaaS, software comparison, and high-consideration purchases are where I'm seeing the most movement. Generic e-commerce, much less.
Do you actually run prompts in ChatGPT and Perplexity?
Yes. That's the only way to see what they say about your brand. I run a structured set of queries (informational, comparison, recommendation, troubleshooting) and capture the outputs across multiple runs to catch variance.
Should I block AI bots?
It depends. If your business is content licensing or paywalled IP, blocking GPTBot makes sense. If you're trying to be discovered in AI assistants, blocking is shooting yourself in the foot. The audit gives you a stance per bot rather than a blanket answer.
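A per-bot stance ends up as a few robots.txt rules. This is one hypothetical stance, not a recommendation; the right mix depends on your business:

```text
# Allow assistants you want to be discovered in
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

# Block crawlers with no discovery upside for you
User-agent: Bytespider
Disallow: /

# Opt out of Gemini training/grounding without affecting Google Search indexing
User-agent: Google-Extended
Disallow: /
```

Note that robots.txt is voluntary compliance, not enforcement; well-known bots generally respect it, but it's a request, not a firewall.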
Is this a one-off or ongoing?
One-off audit by default. Some clients add quarterly re-runs because the space is moving fast and they want to track changes. You can decide that after the first one.
Curious how LLMs see your brand?
Send me your site and the categories or competitors you'd want me to test prompts around. I'll come back with a scope and a realistic take on whether this audit is worth it for you.