## Concepts and definitions
To better understand the technical infrastructure of Prove’s documentation, here are the key concepts and terms used:

- Machine Readability: The design of content in a format that can be processed and “understood” by computer programs or AI, rather than being optimized only for human visual consumption.
- LLM (Large Language Model): AI systems, such as GPT-4 or Claude, that process and generate human-like text.
## Plain text docs
Prove designs its documentation for AI consumption. Every page maintains a parallel Markdown representation, accessible via the .md extension. This lets AI tools and agents consume Prove content directly.
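As a minimal sketch of that convention, the snippet below derives the Markdown URL for a rendered page by appending `.md` to its path. The domain and path are placeholders, not real Prove URLs:

```python
from urllib.parse import urlsplit, urlunsplit

def markdown_url(page_url: str) -> str:
    """Return the plain-Markdown counterpart of a rendered docs page URL."""
    parts = urlsplit(page_url)
    path = parts.path.rstrip("/")  # drop any trailing slash before appending
    return urlunsplit((parts.scheme, parts.netloc, path + ".md", parts.query, ""))

# docs.example.com is a hypothetical placeholder; substitute a real docs page.
print(markdown_url("https://docs.example.com/guides/identity"))
# → https://docs.example.com/guides/identity.md
```

Fetching that URL (for example with `urllib.request`) returns the page as raw Markdown rather than rendered HTML.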
| Feature | Markdown (.md) | HTML/JS Rendered Pages |
|---|---|---|
| Token Efficiency | High. Minimal syntax means more actual content fits in the context window. | Low. Dense with tags (`<div>`, `<span>`) and scripts that waste tokens. |
| Data Extraction | Direct. Content is ready to be parsed as-is without extra processing. | Complex. Requires a browser engine to execute JS before content is visible. |
| Content Visibility | Complete. Includes all text, including content hidden in UI tabs or toggles. | Partial. Hidden or “lazy-loaded” content is often missing from the initial scrape. |
| Contextual Hierarchy | Explicit. Headers (#, ##) signal the importance and relationship of data. | Inferred. AI must guess hierarchy based on nested tags or CSS classes. |
| Formatting Noise | Minimal. Focuses on the data, reducing the risk of the AI getting distracted. | Significant. Inline styles and attributes add “noise” to the signal. |
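The token-efficiency and hierarchy rows can be illustrated concretely. Both snippets below are invented examples, not excerpts from Prove’s docs; the same content is shorter in Markdown, and its heading levels are explicit in the syntax itself:

```python
# Same content, two representations (both illustrative).
md = "# Auth\n\n## Tokens\n\nUse short-lived tokens."
html = (
    '<div class="doc"><h1><span>Auth</span></h1>'
    '<div class="sec"><h2>Tokens</h2><p>Use short-lived tokens.</p></div></div>'
)

# Token efficiency: Markdown carries the same information in fewer characters.
print(len(md), len(html))

# Contextual hierarchy: heading depth is readable directly from the markers.
headings = [(line.count("#"), line.lstrip("# ")) for line in md.splitlines()
            if line.startswith("#")]
print(headings)  # → [(1, 'Auth'), (2, 'Tokens')]
```

Extracting the same hierarchy from the HTML would require parsing nested tags and guessing which wrappers are structural.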
## Contextual menu
Plain Markdown and /llms.txt help tools find content. The next problem is getting the page you are reading into an assistant or agent without retyping or fragile copy-paste. The contextual menu exists for that handoff: one place, at the top of every page, to treat the current doc as grounding context, whether you paste Markdown into a chat, open the same page in an external LLM app, or connect your editor to Prove’s hosted tooling.
From that menu you can:
- Copy page — copy the current page as Markdown for use as context in AI tools
- View as Markdown — open the Markdown representation of the current page
- Ask assistant — open the assistant with the current page as context
- Open in ChatGPT, Claude, Perplexity, Grok, Google AI Studio, Devin, or Windsurf — start a session in that tool with this documentation loaded as context
- MCP and editor shortcuts — for example Copy MCP server URL, Copy MCP install command, Connect to Cursor, Connect to VS Code, and Connect to Devin for the hosted MCP server
The Prove Model Context Protocol (MCP) exposes tools that AI agents can use to search Prove’s documentation and read full pages from a virtual documentation filesystem.
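MCP messages are JSON-RPC 2.0, so an agent invoking one of these tools sends a `tools/call` request. The sketch below shows that message shape; the tool name (`search_docs`) and its argument keys are assumptions for illustration, since a client discovers the real tool names from the server’s `tools/list` response:

```python
import json

# Hypothetical tools/call request an MCP client might send to search the docs.
# "search_docs" and the "query" argument are assumed names, not confirmed API.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",
        "arguments": {"query": "contextual menu"},
    },
}
print(json.dumps(request, indent=2))
```

The server replies with a result whose content the agent can read as page text, the same Markdown the contextual menu exposes.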

