Concepts and definitions

To understand the technical infrastructure behind Prove’s documentation, here are the key concepts and terms used:
  • Machine Readability: The design of content in a format that can be processed and “understood” by computer programs or AI, rather than just being optimized for human visual consumption.
  • LLM (Large Language Model): AI systems, such as GPT-4 or Claude, that process and generate human-like text.

Plain text docs

Prove designs its documentation for AI consumption. Every page maintains a parallel Markdown representation accessible via the .md extension. This helps AI tools and agents consume Prove content.
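The .md convention means any page URL maps to its Markdown twin with a simple path rewrite. A minimal sketch of that mapping, assuming the append-`.md` convention described above (the hostname is illustrative, not Prove’s real domain):

```python
from urllib.parse import urlparse, urlunparse

def markdown_url(page_url: str) -> str:
    """Return the parallel Markdown representation of a docs page URL.

    Assumes the convention above: the same path with a .md extension
    appended. Root paths fall back to /index (an assumption).
    """
    parts = urlparse(page_url)
    path = parts.path.rstrip("/") or "/index"
    if not path.endswith(".md"):
        path += ".md"
    return urlunparse(parts._replace(path=path))

print(markdown_url("https://docs.example.com/concepts/overview"))
# https://docs.example.com/concepts/overview.md
```

An AI agent can apply this rewrite before fetching, skipping the rendered HTML entirely.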
| Feature | Markdown (.md) | HTML/JS Rendered Pages |
| --- | --- | --- |
| Token Efficiency | High. Minimal syntax means more actual content fits in the context window. | Low. Dense with tags (`<div>`, `<span>`) and scripts that waste tokens. |
| Data Extraction | Direct. Content is ready to be parsed as-is without extra processing. | Complex. Requires a browser engine to execute JS before content is visible. |
| Content Visibility | Complete. Includes all text, including content hidden in UI tabs or toggles. | Partial. Hidden or “lazy-loaded” content is often missing from the initial scrape. |
| Contextual Hierarchy | Explicit. Headers (`#`, `##`) signal the importance and relationship of data. | Inferred. AI must guess hierarchy based on nested tags or CSS classes. |
| Formatting Noise | Minimal. Focuses on the data, reducing the risk of the AI getting distracted. | Significant. Inline styles and attributes add “noise” to the signal. |
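The token-efficiency row is easy to see concretely. A toy comparison, using character count as a rough proxy for token cost (the snippet below is an invented example, not Prove content):

```python
# The same paragraph, once as rendered HTML and once as Markdown.
html_version = (
    '<div class="content"><h2 class="title">Getting started</h2>'
    '<p><span style="font-weight:600">Install</span> the SDK, then '
    '<a href="/auth" class="link">configure auth</a>.</p></div>'
)
markdown_version = (
    "## Getting started\n"
    "**Install** the SDK, then [configure auth](/auth).\n"
)

# Character count is a crude stand-in for tokens, but the ratio is telling:
# the markup carries most of the bytes while adding no information.
print(len(html_version), len(markdown_version))
```

The same information survives in both forms; only the Markdown version spends nearly all of its length on actual content.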
Prove hosts /llms.txt and /llms-full.txt files, which tell AI tools and agents how to retrieve the plain-text versions of Prove pages. /llms.txt is an emerging standard for making websites and their content more accessible to LLMs.
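An llms.txt file is itself Markdown, which makes it trivial to parse. A minimal sketch, assuming the emerging-standard layout (an H1 title, an optional blockquote summary, and H2 sections whose bullets are links to plain-text pages); the sample content and URLs are illustrative:

```python
import re

SAMPLE = """\
# Prove Docs

> Documentation in an LLM-friendly form.

## Docs

- [Concepts](https://docs.example.com/concepts.md): Key terms
- [Quickstart](https://docs.example.com/quickstart.md): First steps
"""

# Matches "- [Title](url): optional description" bullet lines.
LINK = re.compile(r"^- \[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?:: (?P<desc>.*))?$")

def parse_llms_txt(text: str) -> dict:
    """Extract the title and per-section link lists from an llms.txt file."""
    doc = {"title": None, "sections": {}}
    section = None
    for line in text.splitlines():
        if line.startswith("# "):
            doc["title"] = line[2:].strip()
        elif line.startswith("## "):
            section = line[3:].strip()
            doc["sections"][section] = []
        elif section is not None:
            m = LINK.match(line)
            if m:
                doc["sections"][section].append(m.groupdict())
    return doc

index = parse_llms_txt(SAMPLE)
print(index["title"], len(index["sections"]["Docs"]))
# Prove Docs 2
```

From the parsed index, an agent can fetch exactly the pages relevant to a query instead of crawling the whole site.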

Contextual menu

Plain Markdown and /llms.txt help tools find content. The next problem is getting the page you are reading into an assistant or agent without retyping or fragile copy-paste. The contextual menu, at the top of every page, exists for that handoff: whether you paste Markdown into a chat, open the same page in an external LLM app, or connect your editor to Prove’s hosted tooling, it treats the current doc as grounding context. From that menu you can:
  • Copy page — copy the current page as Markdown for use as context in AI tools
  • View as Markdown — open the Markdown representation of the current page
  • Ask assistant — open the assistant with the current page as context
  • Open in ChatGPT, Claude, Perplexity, Grok, Google AI Studio, Devin, or Windsurf — start a session in that tool with this documentation loaded as context
  • MCP and editor shortcuts — for example Copy MCP server URL, Copy MCP install command, Connect to Cursor, Connect to VS Code, and Connect to Devin for the hosted MCP server
The Prove Model Context Protocol (MCP) exposes tools that AI agents can use to search Prove’s documentation and read full pages from a virtual documentation filesystem.
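Under the Model Context Protocol, an agent invokes such tools with JSON-RPC 2.0 requests. A sketch of the request shape for a documentation-search call; the tool name "search" and its argument schema are assumptions here, since the real tools are whatever the server advertises in its tools/list response:

```python
import json

# Illustrative MCP tools/call request. The "search" tool name and the
# {"query": ...} argument shape are hypothetical; an actual client
# discovers the available tools and schemas from the server first.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search",
        "arguments": {"query": "contextual menu"},
    },
}

print(json.dumps(request, indent=2))
```

The server responds with the tool's result content, which the agent can then feed into its context window alongside full pages read from the virtual documentation filesystem.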