CLI Usage

Hypercontext ships a command-line interface built entirely on the standard library. It is the fastest way to inspect the package, run example workflows, and exercise the core evolution loop from the terminal.

Entry Point

Show the full command list:

python -m hypercontext --help

Show version information:

python -m hypercontext version

Inspect provider availability:

python -m hypercontext providers

Command Reference

The CLI currently exposes these commands:

  • version
  • providers
  • run
  • compress
  • validate
  • evaluate
  • archive
  • benchmark
  • mcp
  • serve
  • ui
  • tui
  • docker

Each command below includes a practical example, what it does, and the most useful options or caveats.

version

Print the installed package version.

python -m hypercontext version

Use this when you want to confirm that the local environment is pointing at the expected installed release.

providers

List the provider backends registered in the current environment.

python -m hypercontext providers

This is the quickest way to verify whether the mock provider is available and whether optional provider SDKs are installed.

run

Run the evolutionary self-improvement loop.

python -m hypercontext run --generations 5 --output-dir ./runs/exp1 --workdir .

Useful flags:

  • --config to load YAML or JSON settings
  • --generations to control the run length
  • --domains to pass comma-separated benchmark domains
  • --output-dir to choose where the run artifacts land
  • --parent-selection to choose a parent-selection strategy
  • --resume to continue from a checkpoint directory
  • --workdir to run relative to a specific repo or project root

Recommended workflow:

  1. Start with a small generation count.
  2. Use a dedicated runs/ directory.
  3. Review the archive and lineage output before scaling up.
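
The flags above can also be captured in a file for --config. A hypothetical YAML sketch; the key names below simply mirror the CLI flags and are assumptions, not a documented schema:

```yaml
# Hypothetical config sketch; keys mirror the CLI flags and are assumptions.
generations: 5
domains: mmlu,gsm8k
output_dir: ./runs/exp1
parent_selection: tournament   # assumed strategy name, check --help for real values
workdir: .
```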

compress

Compress a text file to reduce prompt size.

python -m hypercontext compress path/to/input.txt --intensity adaptive

Compression levels:

  • light
  • medium
  • heavy
  • aggressive
  • adaptive
  • caveman

Useful flags:

  • --budget to force a maximum token budget
  • --output to write to a file instead of stdout

The adaptive mode chooses a compression level based on the input size. The legacy browser/MCP compressor also accepts the newer aliases aggressive, adaptive, and caveman so the dashboard and the CLI stay in sync.
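
As an illustration of the adaptive idea, a size-based level picker might look like the sketch below. The thresholds and the mapping are assumptions for illustration, not Hypercontext's actual adaptive logic:

```python
def pick_level(text: str) -> str:
    """Choose a compression level from input size.

    Illustrative sketch only; the word-count thresholds are assumptions,
    not Hypercontext's actual adaptive heuristics.
    """
    n = len(text.split())  # rough token proxy: whitespace word count
    if n < 500:
        return "light"
    if n < 2000:
        return "medium"
    if n < 8000:
        return "heavy"
    return "aggressive"

print(pick_level("word " * 100))  # small input -> light
```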

validate

Validate that a compressed file still preserves the important content from the original.

python -m hypercontext validate original.txt compressed.txt

This is useful after compress if you want to confirm that code blocks, headings, URLs, or important terms were not lost.
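
The kind of check validate performs can be sketched as follows. This is a simplified illustration of one content class (URLs), not the tool's actual heuristics:

```python
import re

def preserved_urls(original: str, compressed: str) -> bool:
    """Return True if every URL found in the original survives compression.

    Simplified sketch; the real validate command covers more content
    classes (code blocks, headings, important terms).
    """
    urls = set(re.findall(r"https?://\S+", original))
    return all(url in compressed for url in urls)

original = "See https://example.com/docs for details. Lots of filler text."
compressed = "See https://example.com/docs for details."
print(preserved_urls(original, compressed))  # True
```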

evaluate

Evaluate code against one or more benchmark domains.

python -m hypercontext evaluate path/to/code.py --domains mmlu,gsm8k --workdir .

The current build prints the configured provider from the active environment and then shows the evaluation scaffold. Use HYPERCONTEXT_PROVIDER and the other provider variables in your shell or .env file to steer which backend the command reports.
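
For example, a minimal .env fragment. Only HYPERCONTEXT_PROVIDER is named in this doc; the mock value is an assumption based on the providers section above:

```
HYPERCONTEXT_PROVIDER=mock
```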

archive

Inspect archive data from prior generations.

List entries:

python -m hypercontext archive list

Show the best generation:

python -m hypercontext archive best

Inspect lineage:

python -m hypercontext archive lineage

Show one generation:

python -m hypercontext archive show gen_001

Export the archive:

python -m hypercontext archive export --output archive_export.json

Use this command family when you want to inspect the results of a run without opening the underlying JSON files manually. The lineage view is rendered from the stored lineage state when available, or reconstructed from the archive checkpoint when only the checkpoint is present. The archive best-score view uses the saved generation scores, and the CLI falls back to checkpoint.json when the append-only archive log has not been written yet.
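
Once exported, the JSON can be inspected with standard tools. A hedged sketch, assuming the export contains per-generation entries with a score field; the field names here are hypothetical, not a documented schema:

```python
# Hypothetical export shape; "generations", "id", and "score" are assumed
# field names, not a documented Hypercontext schema.
export = {
    "generations": [
        {"id": "gen_001", "score": 0.42},
        {"id": "gen_002", "score": 0.57},
        {"id": "gen_003", "score": 0.51},
    ]
}

# Pick the entry with the highest score, mirroring `archive best`.
best = max(export["generations"], key=lambda g: g["score"])
print(best["id"])  # gen_002
```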

benchmark

Run the benchmark scaffolding across one or more domains.

python -m hypercontext benchmark --domains search --generations 3 --workdir .

This is the command to use when you want repeatable comparisons across domains, generations, or selection strategies. It also reports the currently configured provider from the environment so you can confirm which backend a run will use.

mcp

Start the stdio MCP daemon for desktop and terminal assistants.

python -m hypercontext mcp --workdir .

Useful flags:

  • --output-dir to choose where archive and evolution artifacts are written
  • --domains to set the default evaluation domains for tools such as evaluate, evolution_start, and benchmark_run
  • --parent-selection to choose the default evolution parent selector
  • --workdir to point the daemon at a repo or project root

Use mcp when:

  • an assistant launches Hypercontext as an MCP stdio subprocess
  • you want Claude Desktop or Claude Code to talk to Hypercontext directly
  • you want the same tool/resource surface without the HTTP server
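
Desktop MCP clients typically register stdio servers through a JSON config file. A sketch for Claude Desktop's claude_desktop_config.json; the server name hypercontext and the workdir path are placeholders you would adapt:

```json
{
  "mcpServers": {
    "hypercontext": {
      "command": "python",
      "args": ["-m", "hypercontext", "mcp", "--workdir", "/path/to/project"]
    }
  }
}
```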

serve

Start the HTTP MCP server.

python -m hypercontext serve --port 8080 --workdir .

The server exposes Hypercontext capabilities over HTTP for the browser UI and other tools that want JSON endpoints. The server is bundled with the repository and does not require a separate mcp package install. Like the web dashboard, the CLI launches it as a background process and prints the process ID so you can keep using the shell.

ui

Start the built-in web UI.

python -m hypercontext ui --port 3000 --workdir .

Current behavior:

  • The command prints Starting web UI on 0.0.0.0:3000...
  • It then reports that the web UI started as a background process, including the process ID (for example pid=1234).
  • Finally, it tells you to open the browser-based dashboard at the printed address.
  • The built-in dashboard starts a local MCP backend in the same launcher process, so the compression, archive, evaluation, and sandbox tabs can talk to real JSON endpoints immediately
  • No extra frontend package install is needed for the core local workflow
  • The benchmark tab includes selectable code presets with a live preview so you can switch between benchmark snippets before running evaluation
  • The benchmark tab also loads the repository's available domain catalog so you can pick real benchmark domains instead of typing them from memory
  • The sandbox tab uses the bundled local sandbox backend and shows a readable error if execution fails instead of leaving fields blank; it also falls back to the current Python interpreter when Docker is unavailable
  • --workdir lets you launch the dashboard from a project root so relative paths stay anchored to the repo you want to work in
  • When an evolution run completes, the dashboard refreshes generation history and archive size from the persisted archive store, and archive rows open a real generation detail view with code, mutations, lineage, and domain scores

Use the UI command when you want the dashboard entry point. It returns control to the shell immediately after starting the background process.

The dashboard header includes a Close button that shuts down the browser UI and its local backend so port 3000 is released cleanly.

tui

Start the dedicated terminal dashboard.

python -m hypercontext tui --workdir .

This launches the full-screen curses interface that lets you browse the major commands, inspect usage examples, pin a command, and run the pinned command again with a second Enter. If you move to another command and press Enter, the new highlight becomes the active pin. Press r to run the highlighted command immediately without pinning it. The output is captured and rendered in the detail pane so you can see the result without leaving the UI.

The TUI is useful when:

  • you are working over SSH
  • you want a quick command reference without leaving the terminal
  • you prefer keyboard navigation over a browser dashboard
  • you want to run commands relative to a repo or project root with --workdir

docker

Manage the Docker sandbox entry points.

Build the sandbox image:

python -m hypercontext docker build

Run a script in the sandbox:

python -m hypercontext docker run path/to/script.py

The Docker path is useful when you want to isolate code execution from your main environment.

End-To-End CLI Workflow

If you are just getting started, use this sequence:

  1. python -m hypercontext version
  2. python -m hypercontext providers
  3. python -m hypercontext compress ...
  4. python -m hypercontext validate ...
  5. python -m hypercontext run ...
  6. python -m hypercontext archive list

That path covers discovery, compression, validation, execution, and result inspection in the same order most people will use them.