A TDD-engineered, AI-augmented OS execution framework — embeddable Rust library for safe, auditable, policy-driven command dispatch with optional local LLM assistance.
LogicShell is a library-first Rust framework that sits between a host application (or operator CLI) and the OS process layer. It provides:
- Process dispatcher — spawn external programs with explicit argv, environment, cwd, and stdin modes; capture bounded stdout; propagate structured exit status.
- Pre-execution hooks — run configurable shell scripts before every dispatch, with per-hook timeouts and fail-fast semantics.
- Append-only audit log — every dispatch writes an NDJSON record (timestamp, cwd, argv, safety decision, optional note) that survives process restarts.
- Configuration discovery — TOML config file resolved via `LOGICSHELL_CONFIG`, project walk-up, XDG, or built-in defaults, with strict unknown-key rejection.
- Safety policy engine — `strict`/`balanced`/`loose` modes with deny/allow prefix lists, high-risk regex patterns, sudo heuristics, and a four-category risk taxonomy (destructive filesystem, privilege elevation, network, package).
- Local LLM bridge (Phases 8–10) — Ollama-backed natural-language-to-command translation, gated behind safety policy and explicit user confirmation. AI-generated commands always require confirmation before dispatch.
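As a rough illustration of the prefix-list half of the policy, here is a simplified sketch — not the crate's actual engine, which also applies regex patterns, sudo heuristics, and risk scoring:

```rust
// Simplified sketch of deny/allow prefix matching (assumption: the real
// engine layers regex patterns, sudo heuristics, and mode defaults on top).
fn prefix_decision(cmd: &str, deny: &[&str], allow: &[&str]) -> &'static str {
    if deny.iter().any(|p| cmd.starts_with(p)) {
        "deny" // deny list always wins
    } else if allow.iter().any(|p| cmd.starts_with(p)) {
        "allow" // explicit allow-list hit
    } else {
        "confirm" // assumed balanced-mode fallback for unlisted commands
    }
}
```

Note that prefix matching is deliberately broad: `rm -rf /` in the deny list also catches `rm -rf /home`.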
LogicShell is not a POSIX shell replacement. It is an embeddable dispatcher + policy + optional-AI stack that host applications link as a crate.
| Milestone | Phases | Status |
|---|---|---|
| M1 — Dispatcher, config, CI | 1–5 | ✅ Complete |
| M2 — Safety engine, audit, hooks | 6–7 | ✅ Complete |
| M3 — LLM bridge, Ollama | 8–10 | ✅ Complete |
| M4 — Ratatui TUI | 11–14 | 📋 Planned |
| M5 — Remote LLM providers | 15–17 | 📋 Planned |
| M6 — Plugin system | 18–20 | 📋 Planned |
Current: 506 tests · 96%+ line coverage · `cargo clippy -D warnings` clean
```text
Host application / CLI
        │
        ▼
LogicShell facade ──── AuditSink (NDJSON, append-only)
        │
        ├─► HookRunner (pre_exec hooks, per-hook timeout)
        │
        ├─► SafetyPolicyEngine ──────────► AuditSink
        │
        ├─► ProcessDispatcher (tokio::process, stdout cap)
        │
        └─► LlmBridge [Phases 8–10]
                │
                └─► OllamaLlmClient (HTTP, mockable trait)
```
Crate layout:
- `logicshell-core/` — dispatcher, config, safety, audit, hooks (no HTTP)
- `logicshell-llm/` — LLM context, prompt composer, Ollama client (behind `ollama` feature)
Key design decisions:
- `SafetyPolicyEngine` and `PromptComposer` are sync/pure — no async in hot policy paths.
- `LlmClient` is an async trait — I/O-bound inference uses Tokio.
- No LLM calls in default `cargo test` (all mocked or gated behind `#[ignore]`).
| Tool | Version | Purpose |
|---|---|---|
| Rust toolchain | 1.75+ (stable) | Build and test |
| cargo-tarpaulin | latest | Coverage reports |
| Ollama (optional) | any | Local LLM features (Phases 8–10) |
Install Rust via rustup. The repository pins the toolchain:
```sh
# toolchain is automatically selected from rust-toolchain.toml
rustup show
```

Install tarpaulin once:

```sh
cargo install cargo-tarpaulin
```

Clone and build:

```sh
git clone https://github.com/logicshell/logicshell
cd logicshell
cargo build --workspace
```

No environment variables, daemons, or network access are required for the core build.
```sh
# Format check
cargo fmt --check

# Lint (warnings are errors)
cargo clippy --workspace --all-features -- -D warnings

# Full test suite (no network, no Ollama)
cargo test --workspace

# Coverage report (HTML + XML, threshold ≥ 90%)
cargo tarpaulin --workspace

# Open coverage report
xdg-open target/coverage/tarpaulin-report.html  # Linux
open target/coverage/tarpaulin-report.html      # macOS
```

Tests that require a live Ollama daemon are marked `#[ignore]` and excluded from default CI:

```sh
# Run Ollama-backed tests explicitly (requires local daemon)
cargo test --workspace -- --ignored
```

A runnable example exercises every implemented phase end-to-end:

```sh
cargo run --example demo
```

Expected output:

```text
[Phase 3–4: Config schema + discovery] config parsed + discovered OK
[Phase 5: Dispatcher] echo, nonzero exit, stdin, cwd, truncation, env OK
[Phase 6: HookRunner] success, nonzero exit, timeout OK
[Phase 6: AuditSink] 3 NDJSON records, flush-on-drop, disabled no-op OK
[Phase 6: LogicShell façade] hook ran, audit written, façade.audit() appended, failing hook aborted OK
[Phase 7: SafetyPolicyEngine] ls allow, rm -rf / deny, curl|bash strict-deny/balanced-confirm, sudo rm strict-confirm/loose-allow, dispatch blocked OK
✓ All features verified OK
```
LogicShell reads `.logicshell.toml` using the following search order (first match wins):

1. `LOGICSHELL_CONFIG` environment variable — must be an absolute path.
2. Walk up from the current working directory for `.logicshell.toml`.
3. `$XDG_CONFIG_HOME/logicshell/config.toml` (falls back to `~/.config/…`).
4. Built-in defaults — no file required.
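The walk-up step (step 2) can be sketched as follows — an illustrative stand-in, not the crate's actual discovery code, which also checks `LOGICSHELL_CONFIG` and XDG paths:

```rust
use std::path::{Path, PathBuf};

// Sketch of directory walk-up: check each ancestor directory for the
// config file, nearest directory first (first match wins).
fn walk_up(start: &Path, file_name: &str) -> Option<PathBuf> {
    let mut dir = Some(start);
    while let Some(d) = dir {
        let candidate = d.join(file_name);
        if candidate.is_file() {
            return Some(candidate);
        }
        dir = d.parent();
    }
    None // caller falls through to the next discovery source
}
```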
```toml
schema_version = 1            # forward-compatible migrations
safety_mode = "balanced"      # strict | balanced | loose

[llm]
enabled = false               # master switch; no LLM calls when false
provider = "ollama"           # ollama (MVP); openai/anthropic post-MVP
base_url = "http://127.0.0.1:11434"
model = "llama3"              # required when enabled = true
timeout_secs = 60
allow_remote = false          # must be false in MVP

[llm.invocation]
nl_session = false            # explicit natural-language mode
assist_on_not_found = false   # suggest on exit code 127
max_context_chars = 8000      # combined prompt context cap

[safety]
deny_prefixes = ["rm -rf /", "mkfs", "dd if="]
allow_prefixes = []
high_risk_patterns = [
    "rm\\s+-[rf]*r",
    "sudo\\s+",
    "curl.*\\|\\s*bash",
    "wget.*\\|\\s*sh",
]

[audit]
enabled = true
path = "/var/log/logicshell-audit.log"  # omit → OS temp dir

[[hooks.pre_exec]]
command = ["notify-send", "dispatching"]
timeout_ms = 5000             # hook killed after this many ms

[limits]
max_stdout_capture_bytes = 1048576  # 1 MiB
max_llm_payload_bytes = 256000
```

Unknown keys are rejected at parse time — the framework fails fast with file path and line number.
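The `[limits]` stdout cap corresponds to a bounded capture along these lines — a sketch only; the real dispatcher presumably enforces the cap while streaming rather than after the fact:

```rust
// Sketch of bounded stdout capture: keep at most `cap` bytes and report
// whether truncation occurred (hypothetical helper, not the crate's API).
fn capture_bounded(raw: &[u8], cap: usize) -> (Vec<u8>, bool) {
    if raw.len() > cap {
        (raw[..cap].to_vec(), true) // truncated at the configured limit
    } else {
        (raw.to_vec(), false)
    }
}
```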
Add `logicshell-core` to `Cargo.toml`:

```toml
[dependencies]
logicshell-core = { path = "path/to/logicshell-core" }  # crates.io once published
tokio = { version = "1", features = ["full"] }
```

Minimal host integration:
```rust
use logicshell_core::{
    LogicShell,
    config::Config,
    audit::{AuditRecord, AuditDecision},
    Decision, SafetyPolicyEngine,
};

#[tokio::main]
async fn main() {
    // Load config from .logicshell.toml (walk-up) or use defaults
    let config = logicshell_core::discover(std::env::current_dir().unwrap().as_path())
        .unwrap_or_default();
    let ls = LogicShell::with_config(config);

    // Query the safety engine directly (sync, pure)
    let (assessment, decision) = ls.evaluate_safety(&["rm", "-rf", "/"]);
    assert_eq!(decision, Decision::Deny);
    println!("risk score: {}, level: {:?}", assessment.score, assessment.level);

    // Dispatch a command; safety check + pre-exec hooks run automatically
    let exit_code = ls.dispatch(&["git", "status"]).await.expect("dispatch failed");
    println!("exit: {exit_code}");

    // Write a manual audit record (e.g. after a user-denied command)
    let record = AuditRecord::new(
        std::env::current_dir().unwrap().to_string_lossy(),
        vec!["rm".into(), "-rf".into(), "/".into()],
        AuditDecision::Deny,
    ).with_note("blocked by operator");
    ls.audit(&record).expect("audit failed");
}
```

Each dispatch writes one JSON line to the configured file:
```json
{"timestamp_secs":1714000000,"cwd":"/home/user/project","argv":["git","push"],"decision":"allow"}
{"timestamp_secs":1714000001,"cwd":"/","argv":["rm","-rf","/"],"decision":"deny","note":"blocked by policy stub"}
```

Fields:
| Field | Type | Description |
|---|---|---|
| `timestamp_secs` | `u64` | Unix timestamp (seconds) |
| `cwd` | `string` | Working directory at dispatch time |
| `argv` | `string[]` | Full command argv |
| `decision` | `"allow" \| "deny" \| "confirm"` | Safety policy outcome |
| `note` | `string?` | Optional annotation (omitted when absent) |
The file is opened with `O_APPEND`; records survive process restarts and are flushed on `AuditSink` drop.
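The append-only write path can be sketched with the standard library alone — an assumption-laden simplification; the real `AuditSink` presumably serializes records via serde and buffers until drop:

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

// Sketch of the append-only NDJSON write: open with append mode
// (O_APPEND on POSIX) so each record lands at the current end of file
// even across process restarts.
fn append_ndjson(path: &Path, json_line: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new()
        .create(true)
        .append(true)
        .open(path)?;
    writeln!(file, "{json_line}") // one record per line
}
```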
Wrap deployment pipelines in LogicShell to capture every spawned command with timestamps and decisions. Replay the NDJSON log for incident forensics.
```toml
safety_mode = "strict"

[audit]
enabled = true
path = "/var/log/deploy-audit.log"

[[hooks.pre_exec]]
command = ["slack-notify", "deploying to prod"]
timeout_ms = 3000
```

Enable `llm.invocation.assist_on_not_found = true` to have LogicShell query a local Ollama model when a command returns exit code 127. The suggested correction is presented for confirmation before running — never auto-executed. AI-generated commands always receive a raised safety floor (Allow → Confirm).
```toml
[llm]
enabled = true
model = "llama3"

[llm.invocation]
assist_on_not_found = true
```

Link `logicshell-core` in a Rust CLI to enforce deny/allow lists before spawning any subprocess. Set `safety_mode = "strict"` to require explicit confirmation for all high-risk patterns.
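The raised safety floor mentioned above (Allow → Confirm for AI-generated commands) amounts to a small mapping — sketched here with a hypothetical `Decision` enum; the crate's actual `ProposedCommand`/`CommandSource` types may differ in shape:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum Decision { Allow, Confirm, Deny }

// Sketch of the raised safety floor: AI-generated output is never
// auto-run, so an Allow is promoted to Confirm; Confirm and Deny
// already meet or exceed the floor.
fn apply_ai_floor(base: Decision, ai_generated: bool) -> Decision {
    match (base, ai_generated) {
        (Decision::Allow, true) => Decision::Confirm,
        (other, _) => other,
    }
}
```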
Run arbitrary scripts before every dispatch — health checks, secret injection, notification webhooks — with hard timeouts so a slow hook never hangs the pipeline.
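The hard-timeout semantics can be sketched with blocking std APIs — a hypothetical illustration, not the crate's `HookRunner`, which is tokio-based:

```rust
use std::process::Command;
use std::time::{Duration, Instant};

// Hypothetical sketch of per-hook timeout semantics: poll the child
// until the deadline, then kill it so a slow hook cannot hang the
// pipeline. Nonzero exit is surfaced to the caller for fail-fast.
fn run_hook(argv: &[&str], timeout: Duration) -> Result<i32, String> {
    let mut child = Command::new(argv[0])
        .args(&argv[1..])
        .spawn()
        .map_err(|e| e.to_string())?;
    let deadline = Instant::now() + timeout;
    loop {
        if let Some(status) = child.try_wait().map_err(|e| e.to_string())? {
            return Ok(status.code().unwrap_or(-1));
        }
        if Instant::now() >= deadline {
            let _ = child.kill();
            return Err("hook timed out".into());
        }
        std::thread::sleep(Duration::from_millis(10));
    }
}
```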
- `SystemContextProvider` — reads OS family, architecture, abbreviated PATH, cwd.
- `PromptComposer` — pure, sync, templates via `include_str!`, enforces `max_context_chars`.
- `LlmClient` async trait + `LlmRequest`/`LlmResponse` types.
- `OllamaLlmClient` behind the `ollama` feature flag using `reqwest`.
- Health probe (`GET /api/tags`) with graceful degradation matrix.
- Full mockito test suite; zero real network in default `cargo test`.
- `LlmBridge<C>` generic orchestrator: context → composer → client → parser → `ProposedCommand`.
- `ProposedCommand` with `source: CommandSource::AiGenerated` raises the risk floor (Allow → Confirm).
- `parser::parse_command_response` — strips code fences, POSIX shell tokenizer.
- NL session mode (`translate_nl`) and assist-on-127 mode.
- Graceful degradation: `LlmError::Http` propagated for caller fallback.
- 96%+ coverage; 506 tests total.
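The fence-stripping step of response parsing can be sketched as follows — a simplified stand-in for the crate's `parse_command_response`, which additionally tokenizes the result with a POSIX shell tokenizer:

```rust
// Hypothetical sketch of fence stripping: keep only the lines between
// the first pair of triple-backtick markers; fall back to the trimmed
// input when no fence is present.
fn strip_code_fence(response: &str) -> String {
    let fence = "`".repeat(3); // built at runtime to avoid a literal fence here
    let mut inside = false;
    let mut kept = Vec::new();
    for line in response.lines() {
        if line.trim_start().starts_with(fence.as_str()) {
            if inside {
                break; // closing marker: stop collecting
            }
            inside = true; // opening marker (any language tag is dropped)
            continue;
        }
        if inside {
            kept.push(line);
        }
    }
    if kept.is_empty() {
        response.trim().to_string()
    } else {
        kept.join("\n").trim().to_string()
    }
}
```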
See LLM_GUIDE.md for running Ollama alongside LogicShell.
See CONTRIBUTING.md for the TDD workflow, PR checklist, and LLM backend extension guide.
Quick start:
```sh
# All checks must pass before opening a PR
cargo fmt --check
cargo clippy --workspace --all-features -- -D warnings
cargo test --workspace
cargo tarpaulin --workspace   # must remain ≥ 90%
```

Commit style: conventional commits (`feat:`, `fix:`, `phase N:`, `test:`, `docs:`).
```text
logicshell/
├── logicshell-core/                  # Core library (no HTTP)
│   ├── src/
│   │   ├── lib.rs                    # Public façade: LogicShell struct
│   │   ├── dispatcher.rs             # Async process dispatcher (Phase 5)
│   │   ├── audit.rs                  # Append-only NDJSON audit sink (Phase 6)
│   │   ├── hooks.rs                  # Pre-exec hook runner (Phase 6)
│   │   ├── safety.rs                 # Safety policy engine (Phase 7)
│   │   ├── error.rs                  # LogicShellError enum (Phase 2)
│   │   └── config/
│   │       ├── mod.rs                # load() + validate()
│   │       ├── schema.rs             # Serde config types (Phase 3)
│   │       └── discovery.rs          # Config discovery (Phase 4)
│   ├── tests/
│   │   ├── dispatcher_integration.rs
│   │   ├── hooks_audit_integration.rs
│   │   └── e2e.rs                    # Full-stack end-to-end tests
│   └── examples/
│       └── demo.rs                   # Runnable feature demonstration (Phases 3–7)
├── logicshell-llm/                   # LLM bridge (Phases 8–10)
│   ├── src/
│   │   ├── lib.rs                    # Re-exports for all public types
│   │   ├── client.rs                 # LlmClient trait + LlmRequest/LlmResponse (Phase 8)
│   │   ├── context.rs                # SystemContextProvider + snapshot (Phase 8)
│   │   ├── prompt.rs                 # PromptComposer + templates (Phase 8)
│   │   ├── error.rs                  # LlmError enum
│   │   ├── ollama.rs                 # OllamaLlmClient (Phase 9, `ollama` feature)
│   │   ├── parser.rs                 # LLM response → argv tokenizer (Phase 10)
│   │   ├── proposed.rs               # ProposedCommand + CommandSource + safety floor (Phase 10)
│   │   ├── bridge.rs                 # LlmBridge orchestrator (Phase 10)
│   │   └── templates/
│   │       ├── nl_to_command.txt
│   │       └── assist_on_127.txt
│   ├── tests/
│   │   ├── phase8_integration.rs
│   │   ├── phase9_integration.rs     # (requires `ollama` feature)
│   │   └── phase10_integration.rs
│   └── examples/
│       ├── phase8.rs                 # Phase 8 demo
│       ├── phase9.rs                 # Phase 9 demo (requires `ollama` feature)
│       └── phase10.rs                # Phase 10 demo
├── LLM_GUIDE.md                      # Running Ollama + LogicShell together
├── tarpaulin.toml                    # Coverage config (gate: ≥ 90%)
├── rust-toolchain.toml               # Pinned stable channel
└── Cargo.toml                        # Workspace root
```
Licensed under either of
- MIT License (LICENSE-MIT)
- Apache License, Version 2.0 (LICENSE-APACHE)
at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this project shall be dual-licensed as above, without any additional terms or conditions.
