The Peec AI MCP Server has 8 read-only tools. Your AI assistant calls these automatically based on your questions. You don’t need to invoke them directly, but this reference helps you understand what data is available.
All tools except list_projects require a project_id. Your AI assistant handles this automatically after you pick a project.
list_projects
Lists all projects your account has access to. This is always called first.
Returns: {data: [{id, name, status}]}
list_brands
Lists brands (your brand and tracked competitors) in a project.
| Parameter | Type | Required | Description |
|---|---|---|---|
| project_id | string | Yes | The project ID |
| limit | number | No | Max results (default: 100) |
| offset | number | No | Results to skip (default: 0) |
Returns: {data: [{id, name, domains, is_own}]}
The is_own field indicates whether this is your brand (true) or a competitor (false).
list_topics
Lists topic groupings in a project. Each prompt belongs to one topic.
| Parameter | Type | Required | Description |
|---|---|---|---|
| project_id | string | Yes | The project ID |
| limit | number | No | Max results (default: 100) |
| offset | number | No | Results to skip (default: 0) |
Returns: {data: [{id, name}]}
list_tags
Lists tags (cross-cutting labels) in a project.
| Parameter | Type | Required | Description |
|---|---|---|---|
| project_id | string | Yes | The project ID |
| limit | number | No | Max results (default: 100) |
| offset | number | No | Results to skip (default: 0) |
Returns: {data: [{id, name}]}
list_prompts
Lists prompts in a project. You can filter by topic or tag.
| Parameter | Type | Required | Description |
|---|---|---|---|
| project_id | string | Yes | The project ID |
| topic_id | string | No | Filter by topic ID |
| tag_id | string | No | Filter by tag ID |
| limit | number | No | Max results (default: 100) |
| offset | number | No | Results to skip (default: 0) |
Returns: {data: [{id, text, tags: [{id}], topic: {id} | null}]}
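All of the list tools above share the same limit/offset pagination, so a client that invokes them directly can page through results with one loop. A minimal sketch, assuming a hypothetical `call_tool(name, arguments)` function that returns the parsed `{data: [...]}` payload (that helper is not part of this reference):

```python
def fetch_all(call_tool, tool_name, arguments, limit=100):
    """Page through a list_* tool, advancing offset until a short page signals the end."""
    items, offset = [], 0
    while True:
        page = call_tool(tool_name, {**arguments, "limit": limit, "offset": offset})
        batch = page["data"]
        items.extend(batch)
        if len(batch) < limit:  # fewer results than requested: this was the last page
            return items
        offset += limit
```

For example, `fetch_all(call_tool, "list_prompts", {"project_id": "..."})` would collect every prompt in a project, 100 at a time.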
get_brand_report
Returns brand visibility, sentiment, position, and share of voice across AI search engines.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| project_id | string | Yes | The project ID |
| start_date | string | Yes | Start date (YYYY-MM-DD) |
| end_date | string | Yes | End date (YYYY-MM-DD) |
| limit | number | No | Max results (default: 100) |
| offset | number | No | Results to skip (default: 0) |
| dimensions | string[] | No | Break down by: prompt_id, model_id, tag_id, topic_id, date, country_code, chat_id |
| filters | object[] | No | Filter results (see Filtering) |
Response fields
| Field | Type | Description |
|---|---|---|
| brand | object | {id, name} |
| visibility | number | 0 to 1. Fraction of AI responses that mention the brand |
| mention_count | number | Total times the brand was mentioned |
| share_of_voice | number | 0 to 1. Brand's share of total mentions across all brands |
| sentiment | number | 0 to 100. How positively AI platforms describe the brand. Most brands score 65 to 85 |
| position | number | Average rank when mentioned. Lower is better (1 = mentioned first) |
The response also includes raw aggregation fields (visibility_count, visibility_total, sentiment_sum, sentiment_count, position_sum, position_count) for custom calculations across segments.
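The raw fields matter because per-segment averages cannot simply be averaged again: that would weight a segment with 2 chats the same as one with 200. Summing the raw numerators and denominators first gives correctly volume-weighted totals. A sketch of that recombination, assuming rows shaped like the brand report response above:

```python
def combine_brand_rows(rows):
    """Re-derive brand metrics across segments from the raw aggregation fields.

    Averaging per-segment averages would weight every segment equally; summing
    the raw sums and counts first weights each segment by its actual volume.
    """
    vis_count = sum(r["visibility_count"] for r in rows)
    vis_total = sum(r["visibility_total"] for r in rows)
    sent_sum = sum(r["sentiment_sum"] for r in rows)
    sent_count = sum(r["sentiment_count"] for r in rows)
    pos_sum = sum(r["position_sum"] for r in rows)
    pos_count = sum(r["position_count"] for r in rows)
    return {
        "visibility": vis_count / vis_total if vis_total else None,
        "sentiment": sent_sum / sent_count if sent_count else None,
        "position": pos_sum / pos_count if pos_count else None,
    }
```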
get_domain_report
Returns source domain retrieval and citation metrics across AI search engines.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| project_id | string | Yes | The project ID |
| start_date | string | Yes | Start date (YYYY-MM-DD) |
| end_date | string | Yes | End date (YYYY-MM-DD) |
| limit | number | No | Max results (default: 100) |
| offset | number | No | Results to skip (default: 0) |
| dimensions | string[] | No | Break down by: prompt_id, model_id, tag_id, topic_id, date, country_code, chat_id |
| filters | object[] | No | Filter results (see Filtering) |
Response fields
| Field | Type | Description |
|---|---|---|
| domain | string | The source domain (e.g. example.com) |
| classification | string | Domain type: OWN, CORPORATE, EDITORIAL, INSTITUTIONAL, UGC, REFERENCE, COMPETITOR, or OTHER |
| retrieved_percentage | number | 0 to 1. Fraction of chats that retrieved this domain |
| retrieval_rate | number | Average URLs retrieved per chat. Can exceed 1.0 (this is an average, not a percentage) |
| citation_rate | number | Average citations when retrieved. Can exceed 1.0 |
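The difference between retrieved_percentage and retrieval_rate is easy to miss: the first counts chats, the second counts URLs. A sketch with invented chat-level numbers showing how metrics like these could be derived from the field descriptions above (the exact server-side aggregation is an assumption, and the data is illustrative only):

```python
# Hypothetical chat-level data for one domain: how many of its URLs each
# chat retrieved, and how many times the chat cited it (illustrative only).
chats = [
    {"urls_retrieved": 3, "citations": 2},
    {"urls_retrieved": 0, "citations": 0},
    {"urls_retrieved": 1, "citations": 2},
    {"urls_retrieved": 2, "citations": 0},
]

retrieving_chats = [c for c in chats if c["urls_retrieved"] > 0]

# Fraction of chats that retrieved the domain at all -- always between 0 and 1.
retrieved_percentage = len(retrieving_chats) / len(chats)

# Average URLs retrieved per chat -- an average, so it can exceed 1.0.
retrieval_rate = sum(c["urls_retrieved"] for c in chats) / len(chats)

# Average citations among chats that retrieved the domain -- can also exceed 1.0.
citation_rate = sum(c["citations"] for c in retrieving_chats) / len(retrieving_chats)
```

With these numbers, 3 of 4 chats retrieved the domain (retrieved_percentage 0.75), 6 URLs were retrieved across 4 chats (retrieval_rate 1.5), and the 3 retrieving chats produced 4 citations (citation_rate about 1.33).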
get_url_report
Returns URL-level retrieval and citation metrics across AI search engines.
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| project_id | string | Yes | The project ID |
| start_date | string | Yes | Start date (YYYY-MM-DD) |
| end_date | string | Yes | End date (YYYY-MM-DD) |
| limit | number | No | Max results (default: 100) |
| offset | number | No | Results to skip (default: 0) |
| dimensions | string[] | No | Break down by: prompt_id, model_id, tag_id, topic_id, date, country_code, chat_id |
| filters | object[] | No | Filter results (see Filtering) |
Response fields
| Field | Type | Description |
|---|---|---|
| url | string | The full source URL |
| classification | string | Page type: HOMEPAGE, CATEGORY_PAGE, PRODUCT_PAGE, LISTICLE, COMPARISON, PROFILE, ALTERNATIVE, DISCUSSION, HOW_TO_GUIDE, ARTICLE, or OTHER |
| title | string | Page title (if available) |
| retrievals | number | Total times this URL was retrieved |
| citation_count | number | Total citations across all chats |
| citation_rate | number | Average citations per retrieval. Can exceed 1.0 |
Filtering
The three report tools support filters to narrow results. Each filter looks like this:
```json
{
  "field": "model_id",
  "operator": "in",
  "values": ["chatgpt-scraper", "perplexity-scraper"]
}
```
Filter fields
| Field | Available in | Description |
|---|---|---|
| model_id | All reports | AI search engine (e.g. chatgpt-scraper, perplexity-scraper, gemini-scraper, google-ai-overview-scraper, google-ai-mode-scraper, gpt-4o-search, claude-sonnet-4, microsoft-copilot-scraper, grok-scraper, deepseek-r1) |
| topic_id | All reports | Topic grouping ID |
| tag_id | All reports | Tag ID |
| prompt_id | All reports | Individual prompt ID |
| country_code | All reports | ISO 3166-1 alpha-2 code (e.g. US, DE, GB) |
| brand_id | Brand report | Brand ID |
| domain | Domain and URL reports | Domain name |
| url | Domain and URL reports | Full URL |
| chat_id | All reports | Individual chat/conversation ID |
Operators
| Operator | Description |
|---|---|
| in | Include only matching values |
| not_in | Exclude matching values |
You can combine multiple filters. They’re joined with AND logic.
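A sketch of those AND semantics, expressed with filter objects written as Python dicts in the same shape as the JSON example above. The `matches` helper is illustrative only; it is not part of the server:

```python
# Two filters combined: a row must satisfy BOTH to be included.
filters = [
    {"field": "model_id", "operator": "in",
     "values": ["chatgpt-scraper", "perplexity-scraper"]},
    {"field": "country_code", "operator": "in", "values": ["US"]},
]

def matches(row, filters):
    """Check one result row against a filter list, AND-ing the filters together."""
    for f in filters:
        hit = row.get(f["field"]) in f["values"]
        if f["operator"] == "in" and not hit:
            return False
        if f["operator"] == "not_in" and hit:
            return False
    return True
```

A ChatGPT result from the US passes both filters; a Gemini result from the US fails the first and is dropped.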
Dimensions
Dimensions break down results into rows grouped by a specific field. Without dimensions, results are totals for the entire date range.
| Dimension | What it does | When to use it |
|---|---|---|
| date | Daily breakdown (YYYY-MM-DD) | Tracking trends over time |
| model_id | Per AI search engine | Comparing performance across ChatGPT, Perplexity, etc. |
| topic_id | Per topic grouping | Finding your strongest and weakest topic areas |
| tag_id | Per tag | Analyzing custom segments |
| prompt_id | Per individual prompt | Drilling into specific queries |
| country_code | Per country | Checking geographic differences |
| chat_id | Per individual AI conversation | Inspecting specific responses |
You can combine dimensions. For example, ["date", "model_id"] gives you daily trends per AI model.
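Conceptually, combining dimensions means one result row per distinct combination of the grouped fields. A sketch of that grouping over invented per-chat observations (the data and the `mentioned` field are illustrative, not part of the report schema):

```python
from collections import defaultdict

# Hypothetical per-chat observations (illustrative only).
observations = [
    {"date": "2024-06-01", "model_id": "chatgpt-scraper", "mentioned": True},
    {"date": "2024-06-01", "model_id": "chatgpt-scraper", "mentioned": False},
    {"date": "2024-06-01", "model_id": "perplexity-scraper", "mentioned": True},
    {"date": "2024-06-02", "model_id": "chatgpt-scraper", "mentioned": True},
]

def group_by_dimensions(rows, dimensions):
    """Mimic dimensions=["date", "model_id"]: one visibility value per key combination."""
    groups = defaultdict(list)
    for r in rows:
        groups[tuple(r[d] for d in dimensions)].append(r)
    return {
        key: sum(r["mentioned"] for r in rs) / len(rs)  # fraction of chats with a mention
        for key, rs in groups.items()
    }
```

With `["date", "model_id"]`, the four observations collapse into three rows: ChatGPT on June 1 (visibility 0.5), Perplexity on June 1 (1.0), and ChatGPT on June 2 (1.0).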