Concepts and definitions
To better understand the technical infrastructure of Prove’s documentation, here are the key concepts and terms used:
- Machine Readability: Designing content in a format that computer programs or AI can process and “understand,” rather than optimizing it solely for human visual consumption.
- LLM (Large Language Model): AI systems, such as GPT-4 or Claude, that process and generate human-like text.
Plain text docs
Prove designs its documentation for AI consumption. Every page maintains a parallel Markdown representation accessible via the .md extension, which lets AI tools and agents consume Prove content directly.
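As a minimal sketch of this idea, a tool could derive a page’s Markdown URL by appending `.md` to its path. The exact URL scheme here is an assumption for illustration, not taken from Prove’s site:

```python
from urllib.parse import urlsplit, urlunsplit

def markdown_url(page_url: str) -> str:
    """Derive the parallel Markdown URL for a docs page by appending .md
    to its path (hypothetical scheme, for illustration only)."""
    parts = urlsplit(page_url)
    path = parts.path.rstrip("/") or "/index"  # handle bare domain roots
    return urlunsplit((parts.scheme, parts.netloc, path + ".md",
                       parts.query, parts.fragment))

print(markdown_url("https://docs.example.com/guides/getting-started"))
# → https://docs.example.com/guides/getting-started.md
```

An agent could fetch this `.md` URL instead of the rendered page and receive clean Markdown with no browser rendering step.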
| Feature | Markdown (.md) | HTML/JS Rendered Pages |
|---|---|---|
| Token Efficiency | High. Minimal syntax means more actual content fits in the context window. | Low. Dense with tags (`<div>`, `<span>`) and scripts that waste tokens. |
| Data Extraction | Direct. Content is ready to be parsed as-is without extra processing. | Complex. Requires a browser engine to execute JS before content is visible. |
| Content Visibility | Complete. Includes all text, including content hidden in UI tabs or toggles. | Partial. Hidden or “lazy-loaded” content is often missing from the initial scrape. |
| Contextual Hierarchy | Explicit. Headers (#, ##) clearly signal the importance and relationship of data. | Inferred. AI must guess hierarchy based on nested tags or CSS classes. |
| Formatting Noise | Minimal. Focuses on the data, reducing the risk of the AI getting distracted. | Significant. Inline styles and attributes add “noise” to the signal. |
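The token-efficiency row can be illustrated with a rough character-count comparison. The two snippets below are hypothetical equivalents of the same content; character count is only a crude proxy for token count, but the gap shows how much of an HTML page is markup rather than content:

```python
# Illustrative only: the same heading and sentence in Markdown vs rendered HTML.
markdown_version = "# Getting started\n\nInstall the SDK, then call the API.\n"
html_version = (
    '<div class="doc-page"><h1 class="title">Getting started</h1>'
    '<p class="body-text">Install the SDK, then call the API.</p></div>'
)

# Fewer characters roughly means fewer tokens spent on markup instead of content.
print(len(markdown_version), len(html_version))
```

On a real page, scripts, inline styles, and navigation chrome widen this gap considerably.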
Context menu
The documentation provides a drop-down menu offering an array of relevant links and actions for LLM use. These include:
- copying the current page as Markdown.
- viewing the Markdown version of the current page.
- opening popular LLM tools to ask a question about the current page.
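The last action can be sketched as building a deep link that opens an LLM tool with a prefilled question. The `chatgpt.com` base URL and the `?q=` prefill parameter are assumptions for illustration; real tools differ in how (and whether) they accept prefilled prompts:

```python
from urllib.parse import quote

def ask_llm_link(page_url: str, tool_base: str = "https://chatgpt.com/") -> str:
    """Build a link that opens an LLM tool with a prefilled question about a
    docs page. The ?q= parameter is a hypothetical prefill mechanism."""
    prompt = f"Answer questions about the documentation page at {page_url}"
    # Percent-encode the prompt so it survives as a single query-string value.
    return f"{tool_base}?q={quote(prompt)}"

print(ask_llm_link("https://docs.example.com/guides/getting-started"))
```

A context menu would attach links like this to each page, so a reader can hand the page to an LLM in one click.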

