
Hypercontext Documentation

Hypercontext is a self-referential, context-aware agent framework for building Python and TypeScript agents that compress context, track lineage, archive successful generations, and coordinate self-modification workflows.

This documentation site is the canonical GitHub Pages guide for the project. It is designed to be practical first: install, run, inspect, and extend.

  • :material-download: Install

    Get started with the self-contained package manager flow.

    Python package Npm package

  • :material-console-terminal: CLI

    Learn how to run compression, archive, and orchestration commands from the terminal.

    CLI usage

  • :material-server: Providers

    Set up Claude, OpenAI, Ollama, OpenAI-compatible servers, or local transformers models and wire them into agents.

    Provider recipes

  • :material-rocket-launch: Ollama

    Run Hypercontext fully locally against Ollama models with the same CLI, TUI, MCP, and agent stack.

    Ollama guide

  • :material-keyboard: Terminal UI

    Open the dedicated curses dashboard, pin commands, and run them again from the shell without leaving the UI.

    TUI usage

  • :material-account-group: Agents

    Choose from TaskAgent, MetaAgent, LLMClient, and HyperContext depending on the workflow you need.

    Using different agents

  • :material-link-variant: Integrations

    Learn how to launch the stdio MCP daemon for Claude Desktop, Claude Code, Codex, and other assistants, or use the HTTP server for browser and web integrations.

    Integrations

Start here

If you only want to get working quickly, go straight to the installation pages:

  • Python package
  • Npm package

The live package pages on the registries are the fastest route when you want to install straight from a registry instead of reading the longer docs first.
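As a quick sketch, assuming the distributions are published under the name `hypercontext` (check the package pages for the exact registry names):

```shell
# Python package: full runtime, CLI, TUI, MCP daemons, dashboards
pip install hypercontext

# TypeScript SDK only (Node.js)
npm install hypercontext
```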

What Hypercontext Gives You

  • A Python package with orchestration, agents, scoring, memory, compression, deduplication, convergence, validation, archive, and MCP helpers
  • Operational provider recipes for Claude, OpenAI, Ollama, OpenAI-compatible endpoints, local transformers models, and mock testing
  • A dedicated Ollama guide for local model workflows across the full Hypercontext stack
  • Named provider presets so you can keep multiple provider/runtime configs in one YAML file and resolve them by name at runtime
  • A TypeScript SDK for the same core ideas in Node.js environments
  • Example scripts that exercise the major features offline
  • A CLI for compression, archive queries, provider discovery, orchestration, and terminal browsing
  • A built-in web dashboard, a dedicated terminal dashboard, an MCP HTTP server, and a dedicated stdio MCP daemon for desktop and terminal assistants, all of which accept --workdir so you can point them directly at a project root or working directory
  • Integration notes for Claude Desktop, Codex, and other assistants through the stdio daemon, HTTP server, CLI, or SDK
  • A Close button in the browser dashboard that cleanly shuts down the dashboard process and its local backend from the UI itself
  • MCP tools and daemons that ship with the repo, with no separate mcp package install required
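Named provider presets, for example, might look like this in a single YAML file. This is an illustrative sketch only; the preset names and keys shown here are invented, and the actual schema is documented in the provider recipes:

```yaml
# providers.yaml — hypothetical preset layout; keys are illustrative
presets:
  local-dev:
    provider: ollama
    model: llama3.1:8b
    base_url: http://localhost:11434
  hosted:
    provider: openai
    model: gpt-4o
```

At runtime you would then resolve a preset by its name (for example `local-dev`) instead of repeating the full provider configuration in every command.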

Distribution Coverage

| Path | Ships full Hypercontext? | Notes |
| --- | --- | --- |
| Python package | Yes | Full runtime, CLI, TUI, stdio MCP daemon, HTTP server, browser launcher, providers, agents, archive, memory, scoring, and examples |
| npm package | No | SDK only for Node.js; does not ship the Python CLI, TUI, MCP daemons, or browser launcher |
| Ollama provider path | Yes, with the Python package | The full stack works against any chat-capable Ollama model once the Python runtime is configured |

Core Concepts

Context

Hypercontext treats context as something you can inspect, compress, score, and revise instead of something static that is merely passed into a prompt.
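A minimal illustration of that idea, with invented names (the framework's real compression APIs live in the Python package): a context object that can report its size and compress itself down to a budget.

```python
# Hypothetical sketch: context as a first-class, inspectable object.
from dataclasses import dataclass, field

@dataclass
class Context:
    chunks: list[str] = field(default_factory=list)

    def size(self) -> int:
        """Total characters across all chunks (a crude token proxy)."""
        return sum(len(c) for c in self.chunks)

    def compress(self, budget: int) -> "Context":
        """Keep the most recent chunks that fit within the budget."""
        kept, used = [], 0
        for chunk in reversed(self.chunks):
            if used + len(chunk) > budget:
                break
            kept.append(chunk)
            used += len(chunk)
        return Context(list(reversed(kept)))

ctx = Context(["a" * 40, "b" * 40, "c" * 40])
smaller = ctx.compress(budget=100)
assert smaller.chunks == ["b" * 40, "c" * 40]  # oldest chunk dropped
```

The point of the sketch is the shape, not the policy: because context is an object rather than a string, a compression strategy can be swapped out, scored, and revised.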

Lineage

Each generation can be tracked as a node in a tree so you can answer questions like:

  • Which generation produced the best score?
  • Which parent led to this result?
  • Which branch is becoming stagnant?
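Those questions map naturally onto a tree of generation nodes. A minimal sketch, with hypothetical field names rather than the framework's actual API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class GenerationNode:
    output: str
    score: float
    parent: Optional["GenerationNode"] = None
    children: list["GenerationNode"] = field(default_factory=list)

    def spawn(self, output: str, score: float) -> "GenerationNode":
        """Record a new generation as a child of this one."""
        child = GenerationNode(output, score, parent=self)
        self.children.append(child)
        return child

def best(root: GenerationNode) -> GenerationNode:
    """Which generation in this subtree produced the best score?"""
    winner = root
    for child in root.children:
        candidate = best(child)
        if candidate.score > winner.score:
            winner = candidate
    return winner

root = GenerationNode("v0", score=0.2)
a = root.spawn("v1a", score=0.5)
b = root.spawn("v1b", score=0.4)
a.spawn("v2", score=0.9)

assert best(root).output == "v2"   # best-scoring generation
assert best(root).parent is a      # the parent that led to it
```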

Archive

The archive stores scored generations so later runs can reuse successful strategies, compare branches, and identify the strongest evolution path.
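In spirit, the archive behaves like a score-indexed store of past generations. The class and method names below are invented for illustration:

```python
class Archive:
    """Stores scored generations so later runs can reuse strong strategies."""

    def __init__(self):
        self.entries = []  # (score, strategy, output) tuples

    def add(self, score: float, strategy: str, output: str) -> None:
        self.entries.append((score, strategy, output))

    def top(self, n: int = 3):
        """Return the n highest-scoring entries, best first."""
        return sorted(self.entries, key=lambda e: e[0], reverse=True)[:n]

archive = Archive()
archive.add(0.4, "greedy", "draft-1")
archive.add(0.8, "tree-search", "draft-2")
archive.add(0.6, "greedy", "draft-3")
assert archive.top(1)[0][1] == "tree-search"  # strongest strategy so far
```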

Memory

Persistent memory and episodic memory let the framework remember lessons across runs and within a single session.
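One way to picture the split, as a hypothetical sketch: persistent memory survives across runs (here backed by a JSON file), while episodic memory is scoped to a single session and discarded when it ends.

```python
import json
import os
import tempfile

class PersistentMemory:
    """Lessons that survive across runs, backed by a JSON file."""

    def __init__(self, path: str):
        self.path = path
        if os.path.exists(path):
            with open(path) as f:
                self.lessons = json.load(f)
        else:
            self.lessons = []

    def remember(self, lesson: str) -> None:
        self.lessons.append(lesson)
        with open(self.path, "w") as f:
            json.dump(self.lessons, f)

class EpisodicMemory:
    """Events scoped to one session; gone when the object is discarded."""

    def __init__(self):
        self.events = []

    def record(self, event: str) -> None:
        self.events.append(event)

path = os.path.join(tempfile.mkdtemp(), "memory.json")
PersistentMemory(path).remember("cache provider handshakes")
# A later "run" re-reads the same file and still sees the lesson.
assert PersistentMemory(path).lessons == ["cache provider handshakes"]
```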

Agents

The repo includes lightweight agent classes for task solving, meta reasoning, and tool use. These are intentionally simple enough to inspect and adapt.
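As an illustration of that "simple enough to inspect" spirit (hypothetical code, not the framework's actual classes), a tool-using agent can be little more than a registry plus a dispatch loop:

```python
from typing import Callable

class ToolAgent:
    """Minimal agent: routes a task to a registered tool, records the result."""

    def __init__(self):
        self.tools: dict[str, Callable[[str], str]] = {}
        self.history: list[tuple[str, str]] = []

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def run(self, tool: str, task: str) -> str:
        result = self.tools[tool](task)
        self.history.append((task, result))
        return result

agent = ToolAgent()
agent.register("shout", str.upper)
assert agent.run("shout", "hello") == "HELLO"
assert agent.history == [("hello", "HELLO")]
```

Because the dispatch and history are explicit, adapting an agent like this means editing a few lines rather than learning a framework-internal abstraction.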


Example Coverage

The repo includes runnable examples that are useful when you want a tour of the system:

  • examples/python/basic_evolution.py
  • examples/python/lineage_tracking.py
  • examples/python/self_modifying_agent.py
  • examples/python/feature_gallery.py
  • examples/python/agent_tooling_demo.py
  • examples/python/provider_catalog.py
  • examples/python/provider_agent_workflow.py
  • packages/hypercontext-sdk/examples/basic_evolution.ts
  • packages/hypercontext-sdk/examples/lineage_tracking.ts
  • packages/hypercontext-sdk/examples/feature_gallery.ts