harness

Run agents in a sandboxed container — ready to drop into any project.

Harness conveniently wraps Docker around three open-source coding agents — pi, opencode, and hermes — so you can point one at a directory (or file) without giving it access to your entire machine.

Features

  • Sandboxed by default — capability-dropped container with no-new-privileges; the agent only sees the directory (or file) you mount.
  • Three agents, one CLI — switch between pi, opencode, and hermes with -a. Same flags, same flow.
  • Supply-chain hardened — the image is signed and verified with cosign and SLSA provenance on every run; dependencies installed inside the container are pinned and verified where possible, and a 7-day "cooldown" mitigates supply-chain attacks.
  • Local-first — defaults to LM Studio with gemma-4-e4b. Drop in an --env-file to use Anthropic, OpenRouter, OpenAI, Gemini, and others.
  • Stateful or one-shot — interactive runs persist agent state under .harness/<agent>/; one-shot prompts (-p or piped stdin) stay ephemeral.
  • User skills — automatically mounts ~/.agents/skills and ~/.claude/skills into the container so agents can discover and use custom skills. Disable with --no-skills.
  • Zero install — npx @capotej/harness just works.

Quickstart

Docker is required. By default, harness uses LM Studio locally:

lms daemon up
lms get google/gemma-4-e4b

The container is preconfigured to use gemma-4-e4b via LM Studio's local API.

You can also specify a different local model with -m. HuggingFace-style names with slashes (e.g. qwen/qwen3.5-9b) work correctly in local mode:

npx @capotej/harness -m "qwen/qwen3.5-9b" -p "write a fizzbuzz in Go"

Using a cloud provider instead

pi (default agent)

If you pass an API key for a supported provider via --env-file, pi will use that provider instead of the local LM Studio setup. Supported keys:

Provider        Environment Variable
Anthropic       ANTHROPIC_API_KEY
OpenRouter      OPENROUTER_API_KEY
OpenAI          OPENAI_API_KEY
Google Gemini   GEMINI_API_KEY
Mistral         MISTRAL_API_KEY
Groq            GROQ_API_KEY
Cerebras        CEREBRAS_API_KEY
xAI             XAI_API_KEY
Hugging Face    HF_TOKEN

See the full list of supported providers for more options. When using LM Studio locally, 16k context is sufficient.
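For example, dropping a single key into an env file is enough to switch pi to that provider. A sketch — the key value here is a placeholder, not a real credential:

```shell
# Create an env file with a placeholder key (substitute a real one)
cat > .env <<'EOF'
ANTHROPIC_API_KEY=sk-ant-placeholder
EOF

# pi now uses Anthropic instead of the local LM Studio setup
npx @capotej/harness -e .env -p "explain this codebase"
```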

opencode agent

opencode uses LM Studio by default. To use OpenRouter instead, pass an env file containing OPENROUTER_API_KEY:

npx @capotej/harness -a opencode -e .env -p "write me a fizzbuzz in Go"

That's it. Your current directory is mounted at /workspace inside the container and the agent works against it.


Examples

# One-shot prompt
npx @capotej/harness -p "write me a fizzbuzz in Go"

# Pipe via stdin
echo "write me a fizzbuzz in Go" | npx @capotej/harness

# Interactive session (no -p, no piped stdin) — state persists under .harness/
npx @capotej/harness

# Use a cloud provider via env file
npx @capotej/harness -e .env -p "add a login endpoint"

# Override the model
npx @capotej/harness -m anthropic/claude-sonnet-4-5 -p "refactor the auth module"

# Mount a single file instead of the whole directory
npx @capotej/harness -f ./script.py -p "add type hints"

# Switch agents
npx @capotej/harness -a opencode -p "write me a fizzbuzz in Go"
npx @capotej/harness -a hermes -e .env -p "add tests"

npx, bunx, and pnpm dlx are interchangeable. Or install globally:

npm install -g @capotej/harness
# or
pnpm add -g @capotej/harness
# or
bun add -g @capotej/harness

Agents

Pick an agent with -a. Default is pi.

pi (default)

pi defaults to LM Studio with google/gemma-4-e4b (16k context is enough). Pass an --env-file containing any of the keys below and pi switches to that provider:

Provider        Environment Variable
Anthropic       ANTHROPIC_API_KEY
OpenRouter      OPENROUTER_API_KEY
OpenAI          OPENAI_API_KEY
Google Gemini   GEMINI_API_KEY
Mistral         MISTRAL_API_KEY
Groq            GROQ_API_KEY
Cerebras        CEREBRAS_API_KEY
xAI             XAI_API_KEY
Hugging Face    HF_TOKEN

See the full provider list. The -m flag is forwarded directly.

opencode

opencode defaults to LM Studio. Drop OPENROUTER_API_KEY into your env file and it switches to OpenRouter automatically (openrouter/auto if no -m). The -m flag takes a bare model name; the provider prefix is added for you.

npx @capotej/harness -a opencode -e .env -p "refactor the auth module"
npx @capotej/harness -a opencode -e .env -m anthropic/claude-sonnet-4-5 -p "add tests"

When using LM Studio locally, set the model's context length to at least 32k tokens.

hermes

hermes by NousResearch supports many providers. Pass an env file with your key and a provider/model to -m:

npx @capotej/harness -a hermes -e .env -m anthropic/claude-sonnet-4-5 -p "add tests"
npx @capotej/harness -a hermes -e .env -m openrouter/auto -p "add tests"

Common keys: ANTHROPIC_API_KEY, OPENROUTER_API_KEY, OPENAI_API_KEY, GOOGLE_API_KEY, and others. LM Studio context length should be at least 64k tokens.

Security model

Harness layers protections at runtime, image, and dependency level.

Sandbox

Each run starts the container with:

  • --cap-drop=ALL --cap-add=NET_RAW — minimal capability set
  • --security-opt no-new-privileges:true — block privilege escalation
  • Only your mounted directory (or single file with -f) is visible to the agent
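Conceptually, the hardened invocation resembles the following docker run. This is a simplified sketch, not harness's exact command, and the image reference is an assumption:

```shell
docker run --rm -it \
  --cap-drop=ALL --cap-add=NET_RAW \
  --security-opt no-new-privileges:true \
  -v "$PWD":/workspace \
  ghcr.io/capotej/harness:latest
```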

Image verification

By default, harness verifies that the container image was signed by the official CI workflow and carries a valid SLSA provenance attestation. This requires cosign:

brew install cosign

Verified digests are cached at ~/.cache/harness/cosign-verified.json so verification runs only once per image. Skip it with --no-verify (setting HARNESS_IMAGE_TAG also implies --no-verify):

npx @capotej/harness --no-verify -p "write me a fizzbuzz in Go"
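For reference, a manual keyless verification would look roughly like this. The identity regexp and issuer values are assumptions about the CI setup, not confirmed by the docs:

```shell
cosign verify ghcr.io/capotej/harness:latest \
  --certificate-identity-regexp 'github.com/capotej/harness' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com
```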

Dependency cooldown

The image build enforces a 7-day cooldown on dependency resolution — a guard against supply-chain compromises, since malicious releases are typically discovered and yanked within hours of publication.

  • pnpm: PNPM_MINIMUM_RELEASE_AGE=10080 (minutes) via environment variable
  • uv: --exclude-newer set to 7 days ago at image build time

The cooldown applies to transitive dependencies too. Older packages install normally.
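The pnpm setting and the uv cutoff express the same window; as a quick sanity check:

```shell
# 7 days in minutes — the value behind PNPM_MINIMUM_RELEASE_AGE
echo $(( 7 * 24 * 60 ))   # prints 10080
```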

Persistence

Interactive runs (no -p and no piped stdin) bind-mount .harness/<agent>/ from your working directory into the container. This lets agents resume sessions, skip database migrations on repeat runs, and retain memories across invocations.

One-shot runs (-p or piped stdin) are implicitly ephemeral — no .harness/ directory is created. Use --ephemeral to force-disable persistence on interactive runs.

Add .harness/ to your .gitignore.
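A quick, idempotent way to do that (a sketch; adjust to taste):

```shell
# Append .harness/ to .gitignore only if it's not already listed
grep -qxF '.harness/' .gitignore 2>/dev/null || echo '.harness/' >> .gitignore
```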

Reference

CLI flags

Flag          Alias   Description
--prompt      -p      Pass a prompt directly to the agent
--env-file    -e      Load environment variables into the container
--file        -f      Mount a single file instead of the current directory
--model       -m      Override the model used by the agent
--agent       -a      Select agent: pi, opencode, hermes (default: pi)
--volumes     -v      Additional volume mount (host:container[:opts]); may be repeated
--no-verify           Skip cosign signature and provenance verification
--no-skills           Disable mounting user skills directories (~/.agents/skills, ~/.claude/skills)
--ephemeral           Disable session persistence (implied by -p and piped stdin)
--help        -h      Show help

Environment variables

Variable            Description
HARNESS_IMAGE_TAG   Override the Docker image tag (defaults to the package version). Setting this implies --no-verify.

Agent-specific behavior

  • pi — -m is passed straight to the binary as --model.
  • opencode — -m is passed via the OPENCODE_MODEL env var. Provider is auto-detected from the env file (OPENROUTER_API_KEY → OpenRouter, otherwise LM Studio).
  • hermes — -m is passed as --model in provider/model form. Provider is auto-detected from whichever API key is present.

Deploying hermes as a claw

You can run hermes as a long-running "claw" — a persistent agent process reachable over a messaging gateway (e.g. Telegram). Two deployment targets are documented.

Developing

Link your local checkout globally:

pnpm link --global
# unlink with:
pnpm unlink --global @capotej/harness

Building the image

make image

Builds ghcr.io/capotej/harness with Debian stable-slim, Node.js v24, @mariozechner/pi-coding-agent, opencode-ai, hermes-agent, fd, ripgrep, jq, and curl. The hermes variant also includes the MCP Python SDK for connecting to MCP servers, and faster-whisper for local speech-to-text.

The base image is pinned by manifest-list digest (the OCI image index, not a per-platform manifest) for reproducible multi-arch builds. To bump it:

docker buildx imagetools inspect debian:stable-slim --format '{{.Manifest.Digest}}'
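The inspected digest can then be pinned in the Dockerfile's FROM line. A sketch assuming GNU sed and a plain FROM line; harness's actual update process may differ:

```shell
# Fetch the manifest-list digest and pin it in the Dockerfile
digest=$(docker buildx imagetools inspect debian:stable-slim --format '{{.Manifest.Digest}}')
sed -i "s|^FROM debian:stable-slim.*|FROM debian:stable-slim@${digest}|" Dockerfile
```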

Linting

pnpm lint           # all
pnpm lint:ts        # Biome
pnpm lint:md        # markdownlint
pnpm lint:sh        # shellcheck
pnpm lint:docker    # hadolint
pnpm lint:actions   # actionlint
pnpm format         # auto-format with Biome

shellcheck, hadolint, and actionlint are system binaries. Install with mise (recommended):

mise install

Or install manually:

brew install shellcheck hadolint actionlint
