
Prove MCP overview

The Prove Model Context Protocol (MCP) server exposes tools to connected AI clients so assistants can search Prove documentation and read full pages from a virtual documentation filesystem. It complements other doc access patterns such as plain Markdown and llms.txt. If your team uses AI-powered editors (for example Cursor, VS Code with MCP, Claude Code, or similar), you can point your client at Prove’s hosted MCP endpoint—no separate Prove package to run on your infrastructure for this service.

Hosted endpoint

Prove hosts an HTTP MCP server at:

https://developer.prove.com/mcp

Register this URL in your MCP client using that product’s workflow for HTTP MCP servers. Exact steps depend on the vendor: follow their documentation for adding or enabling MCP servers, ensure your network allows HTTPS to developer.prove.com, and then confirm that the prove (or equivalent) server appears connected in the client’s MCP or tools UI.
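Many MCP-capable clients accept a JSON configuration for HTTP servers. As a rough sketch only: the server name prove, the file location, and the key names (mcpServers, url) all vary by client and are assumptions here, so check your client’s documentation for the exact format:

```json
{
  "mcpServers": {
    "prove": {
      "url": "https://developer.prove.com/mcp"
    }
  }
}
```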

Tools available to assistants

Tools are exposed to connected AI clients as follows.

search_prove

Search across the Prove knowledge base to find relevant information, code examples, API references, and guides. Use this tool when you need to answer questions about Prove, find specific documentation, understand how features work, or locate implementation details. The search returns contextual content with titles and direct links to the documentation pages. If you need the full content of a specific page, use the query_docs_filesystem_prove tool to head or cat the page path (append .mdx to the path returned from search — for example, head -200 /api-reference/create-customer.mdx). Optional parameters such as version and language can narrow results.
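Under the hood, MCP clients invoke tools with a JSON-RPC tools/call request. A hedged sketch of such a request for this tool; the argument names (query, language) are illustrative assumptions, not confirmed by the tool’s published schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search_prove",
    "arguments": {
      "query": "rate limiting",
      "language": "en"
    }
  }
}
```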

query_docs_filesystem_prove

Run a read-only shell-like query against a virtualized, in-memory filesystem rooted at / that contains only the Prove documentation pages and OpenAPI specs. This is not a shell on any real machine: nothing runs on the user’s computer, the server host, or any network. The filesystem is a sandbox backed by documentation chunks.

This is how you read documentation pages; there is no separate “get page” tool. To read a page, pass its .mdx path (for example, /quickstart.mdx or /api-reference/create-customer.mdx) to head or cat. To search the docs with exact keyword or regex matches, use rg. To understand the docs structure, use tree or ls.

Workflow: Start with the search tool for broad or conceptual queries like “how to authenticate” or “rate limiting”. Use query_docs_filesystem_prove when you need exact keyword or regex matching, structural exploration, or the full content of a specific page by path.

Supported commands: rg (ripgrep), grep, find, tree, ls, cat, head, tail, stat, wc, sort, uniq, cut, sed, awk, jq, plus basic text utilities. No writes, no network, no process control. Run --help on any command for usage.

Stateless calls: Each call is stateless: the working directory always resets to / and no shell variables, aliases, or history carry over between calls. If you need to operate in a subdirectory, chain commands in one call with && or pass absolute paths (for example, cd /api-reference && ls, or ls /api-reference). Do not assume that cd in one call affects the next call.

Examples
  • tree / -L 2 — see the top-level directory layout
  • rg -il "rate limit" / — find all files mentioning “rate limit”
  • rg -C 3 "apiKey" /api-reference/ — show matches with three lines of context around each hit
  • head -80 /quickstart.mdx — read the top 80 lines of a specific page
  • head -80 /quickstart.mdx /installation.mdx /guides/first-deploy.mdx — read multiple pages in one call
  • cat /api-reference/create-customer.mdx — read a full page when you need everything
  • cat /openapi/spec.json | jq '.paths | keys' — list OpenAPI endpoints
Output is truncated to 30KB per call. Prefer targeted rg -C or head -N over broad cat on large files. To read only the relevant sections of a large file, use rg -C 3 "pattern" /path/file.mdx. Batch multiple file reads into a single head or cat call whenever possible. For how Prove structures documentation for AI and humans, see Build Prove with LLMs.
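The cat … | jq pattern above can also be tried outside the sandbox on any machine with jq installed. A minimal sketch using a hypothetical two-endpoint spec; the file name and paths are made up for illustration:

```shell
# Write a tiny, made-up OpenAPI document to a temp file.
cat > /tmp/prove-demo-spec.json <<'EOF'
{"openapi": "3.0.0", "paths": {"/customers": {}, "/sessions": {}}}
EOF

# List the endpoint paths, exactly as in the sandbox example above.
jq '.paths | keys' /tmp/prove-demo-spec.json
# prints a JSON array containing "/customers" and "/sessions"
```

jq’s keys filter returns the object’s keys sorted alphabetically, which is why it gives a stable endpoint listing regardless of the order in the spec file.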