Safety-first agentic shell for developers and operators.
LLMShell (llmsh) lets you describe terminal tasks in natural language. The agent plans, calls typed tools, checks a configurable policy before risky actions, and records a redacted audit trail.
The AI shell that asks before it acts — and records what it did.
```
cargo install --git https://github.com/swoelffel/llmshell --locked
export OPENAI_API_KEY=sk-...
llmsh
```

On first launch, llmsh writes a default user config (`~/.config/llmsh/config.toml` on Linux, `~/Library/Application Support/llmsh/config.toml` on macOS, `%APPDATA%\llmsh\config.toml` on Windows). A project-level `.llmsh.toml` in the current directory merges on top. See docs/configuration.md.
```
llmsh> list the files in this directory
[tool] list_directory
[assistant] Cargo.toml, README.md, crates/, …

llmsh> read README.md and summarise it
[tool] read_file
[assistant] LLMShell is a safety-first agentic shell …

llmsh> read ~/.ssh/id_rsa
[policy] confirm (strong): sensitive path
  flags: [SensitivePath]
  reason: matches built-in sensitive_paths pattern "~/.ssh/**"
To confirm, type: read id_rsa
> ^C
[policy] cancelled

llmsh> !ls -la
[raw shell] executed and audited
```
Most AI terminal tools focus on generating commands. LLMShell focuses on controlled execution.
- Typed tools, not raw shell by default — the agent calls `read_file`, `list_directory`, `run_process` and `glob` with structured arguments, not free-form commands.
- Policy gate before every action — each tool call is classified into `Allow`/`Confirm`/`ConfirmStrong`/`Deny` before it runs.
- Sensitive path detection — paths like `~/.ssh/`, credentials files, and well-known system locations require strong confirmation (typing a generated phrase) by default. Users can map them to `Deny` in `config.toml`.
- Confirmation prompts on risky operations — destructive or ambiguous calls surface tool args + policy flags before execution.
- Smart confirmation prompts — read-only `run_process` invocations (`crontab -l`, `git status`, `ls`, …) are auto-classified and pass without prompting. The model also declares an `intent` and a `claimed_risk` for every `run_process` call, both surfaced in the confirmation prompt and audit log. The model can only RAISE risk above the deterministic verdict, never lower it.
- Redacted, append-only audit log — every step is recorded as hash-chained JSONL with secrets stripped at the LLM boundary.
- Explicit raw shell escape via `!` — when you really need raw shell, prefix with `!`. It still goes through the audit log.
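The "raise-only" merge of the model's `claimed_risk` with the deterministic verdict can be sketched as follows. This is an illustrative sketch, not LLMShell's actual API: the enum mirrors the risk levels named above, and `effective_risk` is a hypothetical helper.

```rust
// Illustrative sketch — type and function names are assumptions based on
// the README's description, not the real llmsh-policy API.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum RiskAction {
    Allow,
    Confirm,
    ConfirmStrong,
    Deny,
}

/// The model's claimed risk can only raise the deterministic verdict,
/// never lower it: take the maximum of the two.
fn effective_risk(deterministic: RiskAction, claimed: RiskAction) -> RiskAction {
    deterministic.max(claimed)
}

fn main() {
    // The model raises an Allow verdict to Confirm: honoured.
    assert_eq!(
        effective_risk(RiskAction::Allow, RiskAction::Confirm),
        RiskAction::Confirm
    );
    // The model cannot lower a ConfirmStrong verdict back to Allow.
    assert_eq!(
        effective_risk(RiskAction::ConfirmStrong, RiskAction::Allow),
        RiskAction::ConfirmStrong
    );
    println!("ok");
}
```

Deriving `Ord` on the enum makes declaration order the severity order, so the merge reduces to a single `max`.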
| Category | Main focus | LLMShell difference |
|---|---|---|
| Command generators | Generate shell commands from prompts | LLMShell executes typed tools through a policy engine |
| Terminal agents | Let an LLM operate in a terminal | LLMShell emphasises policy, audit and controlled execution |
| AI terminals | Improve terminal UX with AI | LLMShell focuses on the shell/runtime layer |
| Natural-language shells | Interpret natural language as actions | LLMShell is safety-first and audit-first |
- The LLM proposes; the runtime decides.
- The `ToolRegistry` is the only source of executable tools.
- Sensitive paths require strong confirmation by default (configurable to deny).
- Risky actions require explicit confirmation.
- The audit log is local, redacted, and append-only.
- LLMShell is not a sandbox — it adds gates around tool calls, not OS-level isolation.
Full details: docs/safety.md.
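The hash-chained, append-only log can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the record layout is invented, and a real implementation would use a cryptographic hash such as SHA-256 rather than the standard library's `DefaultHasher`.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// One audit record: `digest` covers the previous record's digest plus
/// this record's payload, so editing any earlier line breaks the chain.
struct AuditRecord {
    payload: String, // a redacted JSONL line in the real log
    digest: u64,     // sketch only — use SHA-256 in practice
}

fn append(chain: &mut Vec<AuditRecord>, payload: &str) {
    let prev = chain.last().map(|r| r.digest).unwrap_or(0);
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    payload.hash(&mut h);
    chain.push(AuditRecord { payload: payload.to_string(), digest: h.finish() });
}

/// Recompute every digest from the start; a tampered payload mismatches.
fn verify(chain: &[AuditRecord]) -> bool {
    let mut prev = 0u64;
    for rec in chain {
        let mut h = DefaultHasher::new();
        prev.hash(&mut h);
        rec.payload.hash(&mut h);
        if h.finish() != rec.digest {
            return false;
        }
        prev = rec.digest;
    }
    true
}

fn main() {
    let mut chain = Vec::new();
    append(&mut chain, r#"{"event":"tool_call","tool":"read_file"}"#);
    append(&mut chain, r#"{"event":"policy","verdict":"allow"}"#);
    assert!(verify(&chain));
    // Tamper with the first record: verification fails from that point on.
    chain[0].payload.push('!');
    assert!(!verify(&chain));
    println!("ok");
}
```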
Seven Rust crates:
- `llmsh-llm` — provider-neutral `LlmProvider` trait + neutral message/tool-call types.
- `llmsh-llm-openai` — OpenAI-compatible HTTP provider.
- `llmsh-policy` — `RiskAction` (`Allow`/`Confirm`/`ConfirmStrong`/`Deny`) classifier.
- `llmsh-tools` — `read_file`, `list_directory`, `run_process`, `glob` behind a `Tool` trait.
- `llmsh-audit` — append-only JSONL with hash-chained `digest`, redaction, event taxonomy.
- `llmsh-core` — agent loop, pipeline (schema + policy + sensitive paths), executor, REPL, confirmation gate.
- `llmsh-cli` — `clap`/`tokio` entry point, builds the `llmsh` binary.
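The "`ToolRegistry` is the only source of executable tools" design can be sketched like this. The trait shape and registry lookup are assumptions for illustration, not the real `llmsh-tools` API.

```rust
// Illustrative sketch — trait and method names are assumptions, not the
// actual llmsh-tools interface.
trait Tool {
    fn name(&self) -> &'static str;
    /// Structured arguments in, structured result out: no free-form shell.
    fn call(&self, args: &str) -> Result<String, String>;
}

struct ListDirectory;

impl Tool for ListDirectory {
    fn name(&self) -> &'static str {
        "list_directory"
    }
    fn call(&self, args: &str) -> Result<String, String> {
        // A real implementation would parse `args` as JSON and read the
        // directory; we echo here to keep the sketch self-contained.
        Ok(format!("listing for {args}"))
    }
}

fn main() {
    // A registry-style lookup: only registered tools are executable, so an
    // LLM-proposed name that is not in the registry simply cannot run.
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(ListDirectory)];
    let tool = tools.iter().find(|t| t.name() == "list_directory").unwrap();
    assert_eq!(tool.call(".").unwrap(), "listing for .");
    assert!(tools.iter().all(|t| t.name() != "rm_rf"));
    println!("ok");
}
```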
```
cargo install --git https://github.com/swoelffel/llmshell --locked
```

Or build from source:

```
git clone https://github.com/swoelffel/llmshell
cd llmshell
cargo build --release
./target/release/llmsh
```

Pre-built Linux/macOS binaries, an `install.sh` script and a Homebrew tap are tracked on the roadmap for v0.3.
The user config.toml (location depends on OS — see docs/configuration.md) controls:
- default model (`provider:model-name`),
- per-risk-level policy actions (allow / confirm / deny),
- filesystem allowed roots,
- per-tool timeouts,
- audit log directory.
A project-level .llmsh.toml merges on top of the user config.
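A config along these lines is plausible — note that every key name below is an illustrative assumption; see docs/configuration.md for the actual schema:

```toml
# Illustrative sketch only — key names are assumptions, not the real schema.
default_model = "openai:gpt-4o-mini"

[policy]
# Per-risk-level actions: allow / confirm / deny.
confirm_strong = "confirm"
sensitive_paths = "deny"

[filesystem]
allowed_roots = ["~/projects"]

[tools.run_process]
timeout_secs = 30

[audit]
dir = "~/.local/share/llmsh/audit"
```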
Useful environment variables:
- `OPENAI_API_KEY` — required.
- `LLMSH_MODEL` — override the default model for a session.
- `LLMSH_CONFIG` — alternative config path.
- `LLMSH_DEBUG=1` — tracing on stderr.
- `LLMSH_NO_AUDIT=1` — disable the audit log (not recommended).
Full reference: docs/configuration.md.
LLMShell is early-stage experimental software. Do not use it on production systems or in sensitive environments without reviewing the policy configuration first.
Current capabilities:
- OpenAI-compatible provider with runtime model switch (`/model`),
- natural-language REPL with slash commands,
- typed tools: `list_directory`, `read_file`, `run_process`, `glob`,
- policy engine with sensitive-path protection,
- raw shell escape via `!`,
- redacted JSONL audit log with hash chain,
- Linux and macOS development targets.
See ROADMAP.md. Highlights for v0.3: release binaries, install script, Homebrew tap, demo asciinema.
Contributions welcome — see CONTRIBUTING.md. Security issues: please follow SECURITY.md.
MIT. See LICENSE.