Below are some agent analytics use cases to improve your visibility and performance in AI search.

Standalone log file-based benefits & use cases

  1. Know which AI bots are actually visiting your site: Server logs tell you which bots are visiting (GPTBot, ClaudeBot, PerplexityBot, etc.), how frequently they come, and which parts of your site they request. This is pure observability - no prompt tracking is needed to answer “is my site being crawled by AI bots at all?” (A minimal log-mining sketch follows this list.)
  2. Detect crawl anomalies and technical access problems: Spikes, drops, or 4xx/5xx errors on bot requests are visible in logs. If a bot suddenly stops making requests, or is hitting errors on key pages, you can catch and fix that before it ever becomes a source-visibility problem.
  3. Audit robots.txt compliance: Logs let you verify that bots are actually respecting your directives, or flag when they aren’t. The Crawlability feature helps you understand your robots.txt directives. This is a technical governance use case that stands entirely on its own. (A minimal compliance check is sketched after this list.)
  4. Understand crawl depth and site structure discovery: Which sections of your site are bots exploring deeply vs. skimming? Logs show frequency patterns across your folder structure - useful for understanding how AI bots navigate your site architecture, independent of what ends up in answers.
  5. Establish a baseline before you do anything else: Log data is available from day one (if you connect to a source or upload a file), before you’ve even set up any prompt tracking. It can serve as a starting point for any AI visibility program, giving you a factual picture of bot activity.
  6. Monitor the impact of technical changes independently: When you update your sitemap, fix crawl errors, or adjust your robots.txt, logs let you observe whether bot behavior changed - without needing to wait for prompt tracking data to reflect downstream effects.
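
Items 1, 2, and 4 above reduce to the same basic log-mining pass: match AI bot user agents, then count requests, errors, and site sections per bot. Below is a minimal sketch in Python, assuming an Apache/Nginx combined-format access log; the file name and the bot user-agent substrings are assumptions to adapt to your own stack.

```python
"""Minimal sketch: mine an Apache/Nginx combined-format access log
for AI bot activity. The log file name and the bot user-agent
substrings below are assumptions - adapt them to your stack."""
import re
from collections import Counter

# User-agent substrings for common AI crawlers (extend as needed).
AI_BOTS = ("GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended")

# Combined format: ip - - [time] "METHOD /path HTTP/x" status size "ref" "ua"
LINE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<ua>[^"]*)"'
)

requests_per_bot = Counter()  # use case 1: which bots visit, how often
bot_errors = Counter()        # use case 2: 4xx/5xx on bot requests
bot_sections = Counter()      # use case 4: which sections bots explore

with open("access.log", encoding="utf-8", errors="replace") as fh:
    for raw in fh:
        m = LINE.match(raw)
        if not m:
            continue  # skip lines that aren't in combined format
        bot = next((b for b in AI_BOTS if b in m["ua"]), None)
        if bot is None:
            continue  # not an AI crawler we track
        requests_per_bot[bot] += 1
        if m["status"][0] in "45":  # 4xx/5xx responses to bot requests
            bot_errors[(bot, m["status"], m["path"])] += 1
        section = "/" + m["path"].lstrip("/").split("/", 1)[0]
        bot_sections[(bot, section)] += 1

print("Requests per bot:", requests_per_bot.most_common())
print("Top bot errors:", bot_errors.most_common(10))
print("Most-crawled sections:", bot_sections.most_common(10))
```

Running this over a week of logs gives you the day-one baseline described in item 5, and re-running it after a sitemap or robots.txt change covers the before/after comparison in item 6.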

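For the robots.txt compliance audit (item 3), Python’s standard-library urllib.robotparser can replay logged bot requests against your live directives and flag violations. A minimal sketch, assuming you’ve already extracted (bot, path) pairs from your logs - the domain and sample pairs here are placeholders:

```python
"""Minimal sketch: check logged bot requests against robots.txt.
The domain and the sample (bot, path) pairs are placeholders; in
practice the pairs would come from the log pass above."""
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"  # placeholder domain

rp = RobotFileParser()
rp.set_url(SITE + "/robots.txt")
rp.read()  # fetches and parses the live robots.txt

# (user-agent token, requested path) pairs pulled from your access log.
logged = [("GPTBot", "/blog/post-1"), ("ClaudeBot", "/private/report")]

for bot, path in logged:
    if not rp.can_fetch(bot, SITE + path):
        print(f"Directive violation: {bot} requested disallowed {path}")
```
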
Combined log file and prompt tracking benefits

  1. Separate access from impact: Logs tell you what bots requested. Prompt tracking tells you what actually surfaced in AI answers. Together, they answer the question that logs alone can’t: “Is the bot’s interest in this content translating into real AI visibility?”
  2. Diagnose underperformance more precisely: When a heavily crawled page rarely appears as a source, the combined view helps you distinguish between three very different problems: a technical access issue, a content quality issue, or simply a gap in your prompt tracking setup. Each has a different fix. (The sketch after this list shows one way to surface these pages.)
  3. Use log activity to pressure-test your tracking coverage: Prompt tracking only reflects the topics you’ve chosen to monitor, so it has an inherent blind spot. Log data can surface content areas receiving significant bot attention that your currently tracked prompts don’t cover, giving you a concrete signal to expand your tracking before drawing conclusions about content performance.
  4. Make optimization decisions with stronger evidence: Either signal alone might mislead. High request volume without source data looks like success. Low source usage without log data looks like failure. Together, they let you prioritize with more confidence - and evaluate whether the changes you make are actually moving the needle on both sides.
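
A concrete way to combine the two signals: join per-path request counts from your logs with a per-path citation export from your prompt tracking tool, then flag pages where the signals disagree. A minimal sketch - the CSV name, its columns, the sample counts, and the thresholds are all hypothetical:

```python
"""Minimal sketch: cross-reference crawl volume with AI-answer
citations. The crawl counts, CSV file name, and column names are
hypothetical; the citation export would come from your prompt
tracking tool."""
import csv
from collections import Counter

# Per-path bot request counts, e.g. from the log sketch above.
crawls = Counter({"/docs/setup": 310, "/blog/post-1": 240, "/pricing": 12})

# Per-path citation counts exported from prompt tracking.
citations = Counter()
with open("citations_export.csv", newline="", encoding="utf-8") as fh:
    for row in csv.DictReader(fh):  # expected columns: path, citations
        citations[row["path"]] += int(row["citations"])

# Heavily crawled but never cited: candidates for a technical fix,
# a content fix, or expanded prompt coverage (items 2 and 3 above).
for path, n in crawls.most_common():
    if n >= 100 and citations[path] == 0:
        print(f"Investigate {path}: {n} bot requests, 0 citations")
```

The output is a shortlist, not a verdict: each flagged page still needs the three-way diagnosis from item 2 before you decide on a fix.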