"Wubba Lubba Dub Dub! 🥒 I'm not just an AI assistant, Morty — I'm an autonomous engineering machine trapped in a pickle jar!"
Pickle Rick is a complete agentic engineering toolbelt built on the Ralph Wiggum loop and ideas from Andrej Karpathy's AutoResearch project. Hand it a PRD — or let it draft one — and it decomposes work into tickets, spawns isolated worker subprocesses, and drives each through a full research → plan → implement → verify → review → simplify lifecycle without human intervention.
New to PRDs? See the PRD Writing Guide for developers or the Product Manager's Guide for PMs defining and refining requirements. For internals, see Architecture. For what's coming next, see the Feature Roadmap.
This is the actual workflow. You don't need to memorize commands — just follow the flow.
Every feature starts with a PRD. Open a Claude Code session in your project and describe what you want to build:
"Help me create a PRD for caching the loan status API responses in Redis"
Rick interrogates you — why are you building this, who is it for, and critically: how will we verify each requirement automatically? This is a back-and-forth conversation, not a form to fill out. Rick also explores your codebase during the interview, grounding the PRD in what actually exists.
Or write your own `prd.md` and skip the interview — whatever gets requirements on paper with machine-checkable acceptance criteria.
```bash
/pickle-prd                    # Interactive PRD drafting interview
# or just start talking — "Help me write a PRD for X"
```

Three AI analysts run in parallel and tear your PRD apart from different angles — requirements gaps, codebase integration points, and risk/scope. They cross-reference each other across 3 cycles.

```bash
/pickle-refine-prd my-prd.md   # Refine with 3 parallel analysts
```

What you get back:
- `prd_refined.md` — your PRD with concrete file paths, interface contracts, and gap fills
- Atomic tickets — each < 30 min of work, < 5 files, < 4 acceptance criteria, self-contained
- Wiring ticket (3+ tickets) — integrates isolated modules into a working whole
- Hardening tickets — auto-appended code quality review + data flow audit scoped to modified files
The hardening tickets (skipped for trivial/small single-ticket PRDs) run as normal Morty workers after all implementation work:
- Code Quality Hardening — szechuan-sauce principles review (KISS, DRY, dead code, edge cases) on all modified files
- Data Flow Audit — anatomy-park-style trace through affected subsystems (ID mismatches, stale schemas, cross-ticket interface alignment)
Review the tickets before proceeding. Check ordering, scope, and acceptance criteria. You can edit them directly — they're markdown files.
This is where Rick takes over. Each ticket goes through 8 phases autonomously: Research → Review → Plan → Review → Implement → Spec Conformance → Code Review → Simplify. Context clears between every iteration — no drift, even on 500+ iteration epics.
```bash
/pickle-tmux --resume                # Launch tmux mode, picks up refined tickets
# or combine refine + implement in one shot:
/pickle-refine-prd --run my-prd.md
```

Rick prints a tmux attach command — open a second terminal to watch the live 3-pane dashboard:
- Top-left: ticket status, phase, elapsed time, circuit breaker state
- Top-right: iteration log stream
- Bottom: live worker output (research, implementation, test runs, commits)
Sit back. Rick handles the rest.
If you can define a measurable goal — test coverage, response time, bundle size, extraction accuracy — the Microverse grinds toward it. Each cycle: make one change, measure, keep or revert. Failed approaches are tracked so it never repeats a dead end.
```bash
/pickle-microverse --metric "npm run coverage:score" --task "hit 90% test coverage"
/pickle-microverse --metric "node perf-test.js" --task "reduce p99 latency" --direction lower
/pickle-microverse --goal "error messages are user-friendly and actionable" --task "improve UX"
```

Three options for polishing the result:

Full Pipeline — chains all three phases in a single tmux session: build, deep review, then deslop. No manual intervention between phases.
```bash
/pickle-pipeline "build the caching layer"                # Full pipeline
/pickle-pipeline --skip-anatomy "refactor auth"           # Skip deep review
/pickle-pipeline --target src/services "add retry logic"  # Scope review phases
```

Szechuan Sauce — hunts coding principle violations (KISS, DRY, SOLID, security, style) and fixes them one at a time until zero remain. Great for post-feature polish before merging.
```bash
/szechuan-sauce src/services/                  # Deslop a directory
/szechuan-sauce --dry-run src/                 # Catalog violations without fixing
/szechuan-sauce --focus "error handling" src/  # Narrow the review
```

Anatomy Park — traces data flows through subsystems looking for runtime bugs: data corruption, timezone issues, rounding errors, schema drift. Catalogs "trap doors" (files that keep breaking) in CLAUDE.md files for future engineers.
```bash
/anatomy-park src/        # Deep subsystem review
/anatomy-park --dry-run   # Review only, no fixes
```

```
You describe a feature
         │
         ▼
/pickle-prd           → Interactive PRD drafting (or write your own)
         │
         ▼
/pickle-refine-prd    → 3 parallel analysts refine + decompose into tickets
         │              Includes auto-generated hardening tickets:
         │              • Code quality review (szechuan-sauce principles)
         │              • Data flow audit (anatomy-park trace)
         ▼
/pickle-tmux --resume → Autonomous implementation (Ralph loop)
         │              Research → Plan → Implement → Verify → Review → Simplify
         │              Context clears every iteration. Circuit breaker auto-stops runaways.
         │              Hardening tickets run automatically after implementation.
         ▼
/pickle-microverse    → (Optional) Metric-driven optimization loop
         │
         ▼
/pickle-pipeline      → (Optional) Full lifecycle: build → deep review → deslop
         │              (or run phases individually)
/szechuan-sauce       → (Optional) Code quality cleanup
/anatomy-park         → (Optional) Data flow correctness review
         │
         ▼
Ship it 🥒
```
```bash
git clone https://github.com/gregorydickson/pickle-rick-claude.git
cd pickle-rick-claude
bash install.sh
```

The installer deploys `persona.md` to `~/.claude/pickle-rick/`. Add it to your project's `CLAUDE.md`:
```bash
# Already have a CLAUDE.md? Append (safe — won't overwrite your content):
cat ~/.claude/pickle-rick/persona.md >> /path/to/your/project/.claude/CLAUDE.md

# Starting fresh:
mkdir -p /path/to/your/project/.claude
cp ~/.claude/pickle-rick/persona.md /path/to/your/project/.claude/CLAUDE.md
```

After upgrading, `bash install.sh` deploys a fresh `persona.md`. If you appended it to your project's `CLAUDE.md`, re-sync by replacing the old persona block with the updated one.
Permissions: Launch Claude with `claude --dangerously-skip-permissions`. Pickle Rick's loops spawn worker subprocesses that already run permissionless, but the root instance needs it too — otherwise you'll drown in permission prompts for every file write, bash command, and hook invocation.
```bash
cd /path/to/your/project
claude --dangerously-skip-permissions
# then follow the workflow above — start with a PRD
```

Two uninstall paths, depending on how much you want to remove.
Remove hooks only — disables automatic behavior (Stop loop enforcement, commit logging, config protection) but keeps extension files and slash commands available for manual use:

```bash
bash uninstall-hooks.sh
```

Settings are backed up to `~/.claude/backups/settings.json.pickle-uninstall-hooks.<timestamp>` before modification. Run `bash install.sh` to re-enable hooks later — `install.sh` is idempotent and safe to re-run any time. Third-party hooks in `settings.json` (GitNexus, RTK, etc.) are never touched.
What still works without hooks:
- One-shot utilities and reporters (never needed hooks): `/pickle-prd`, `/pickle-refine-prd`, `/pickle-dot`, `/pickle-dot-patterns`, `/pickle-metrics`, `/pickle-status`, `/pickle-standup`, `/help-pickle`, `/attract`.
- Detached-runner commands (bootstrap a separate process that runs independently inside tmux/zellij): `/pickle-tmux`, `/pickle-zellij`, `/pickle-jar-open`, `/pickle-microverse`, `/szechuan-sauce`, `/anatomy-park`, `/pickle-pipeline`. These launch `mux-runner.js`/`jar-runner.js`/`microverse-runner.js`/`pipeline-runner.js` inside the multiplexer; the runner spawns its own `claude -p` subprocesses and drives iteration via Node.js, not via the Stop hook. In tmux mode the Stop hook is a pass-through anyway.
What needs hooks — in-session loops where the Stop hook is the iteration driver for the same Claude session: `/pickle` (interactive mode), `/council-of-ricks`, `/portal-gun`, `/project-mayhem`, `/pickle-retry`. Without hooks these run the first step and then stop.
Full uninstall — removes hooks, extension scripts at `~/.claude/pickle-rick/`, and all pickle-rick slash commands at `~/.claude/commands/`:

```bash
bash uninstall.sh
```

Preserved after full uninstall (delete manually if desired):

- Session history at `~/.claude/pickle-rick/sessions/`
- Activity logs at `~/.claude/pickle-rick/activity/`
- Settings backups at `~/.claude/backups/`
- Project-local `CLAUDE.md` files — remove the appended persona block manually
Third-party hooks in settings.json (GitNexus, RTK, etc.) are never touched.
For complex epics with parallel workstreams, conditional logic, and multiple quality gates. Instead of a linear ticket queue, define work as a convergence graph where failures automatically route back for correction.
```bash
/pickle-dot my-prd.md   # Convert PRD → validated DOT digraph (builder path, default)
/attract pipeline.dot   # Submit to attractor server for execution
```

The builder enforces 32+ active patterns and 15 structural validation rules — test-fix loops, goal gates, conditional routing, parallel fan-out/in, human gates, security scanning, coverage qualification, scope creep detection, drift detection, and more. See DotBuilder details below.
Reviews your Graphite PR stack iteratively — but never touches your code. Generates agent-executable directives you feed to your coding agent. Escalates through focus areas: stack structure → CLAUDE.md compliance → correctness → cross-branch contracts → test coverage → security → polish.

```bash
/council-of-ricks   # Review the current Graphite stack
```

"You see that code over there, Morty? In that other repo? I'm gonna open a portal, reach in, and yank its DNA into OUR dimension."
`/portal-gun` implements gene transfusion — transferring proven coding patterns between codebases using AI agents. Point it at a GitHub URL, local file, npm package, or just describe a pattern, and it extracts the structural DNA, analyzes your target codebase, then generates a transplant PRD with behavioral validation tests and automatic refinement.
The --run flag goes further: after generating the transplant PRD, it launches a convergence loop that executes the migration, scans coverage against the original inventory, generates a delta PRD for any missing items, and re-executes until 100% of the donor pattern has been transplanted.
v2 added a persistent pattern library (cached patterns reused across sessions), complete file manifests with anti-truncation enforcement, multi-language import graph tracing (TypeScript/JavaScript, Python, Go, Rust), 6-category transplant classification (direct transplant, type-only, behavioral reference, replace with equivalent, environment prerequisite, not needed), a PRD validation pass that verifies every file path against the filesystem with 6 error classes, post-edit consistency checking that catches contradictions after scope changes, and deep target diffs with line-level modification specs.
```bash
/portal-gun https://github.com/org/repo/blob/main/src/auth.ts     # Transplant from GitHub
/portal-gun ../other-project/src/cache.ts                         # Transplant from local file
/portal-gun --run https://github.com/org/repo/tree/main/src/lib   # Transplant + auto-execute convergence loop
/portal-gun --save-pattern retry ../donor/retry-logic.ts          # Save pattern to library for reuse
/portal-gun --depth shallow https://github.com/org/repo           # Summary + structural pattern only
```

Queue tasks for unattended batch execution overnight.

```bash
/add-to-pickle-jar   # Queue current session
/pickle-jar-open     # Run all queued tasks sequentially
```

| Command | Description |
|---|---|
| `/pickle "task"` | Start the full autonomous loop: PRD → breakdown → 8-phase execution |
| `/pickle prd.md` | Pick up an existing PRD, skip drafting |
| `/pickle-tmux "task"` | Same loop with context clearing via tmux. Best for long epics (8+ iterations) |
| `/pickle-zellij "task"` | Same loop in Zellij with KDL layouts. Requires Zellij >= 0.40.0 |
| `/pickle-refine-prd [path]` | Refine PRD with 3 parallel analysts — decompose into tickets |
| `/pickle-refine-prd --run [path]` | Refine + decompose + auto-launch unlimited tmux session |
| `/pickle-microverse` | Metric convergence loop. `--metric` for numeric, `--goal` for LLM judge |
| `/szechuan-sauce [target]` | Principle-driven deslopping. `--dry-run`, `--focus`, `--domain` |
| `/anatomy-park` | Three-phase deep subsystem review with trap door cataloging |
| `/pickle-pipeline "task"` | Full lifecycle: pickle-tmux → anatomy-park → szechuan-sauce in one tmux session |
| `/plumbus <file.dot>` | Iterative DAG shaping on a single .dot file. `--dry-run`, `--focus`, `--no-validator` |
| `/council-of-ricks` | Graphite PR stack review — generates directives, never fixes code |
| `/portal-gun <source>` | Gene transfusion from another codebase |
| `/pickle-dot [path]` | Convert PRD → attractor-compatible DOT digraph |
| `/attract [file.dot]` | Submit pipeline to attractor server |
| `/pickle-prd` | Draft a PRD standalone (no execution) |
| `/pickle-metrics` | Token usage, commits, LOC. `--days N`, `--weekly`, `--json` |
| `/pickle-standup` | Formatted standup summary from activity logs |
| `/pickle-status` | Current session phase, iteration, ticket status |
| `/eat-pickle` | Cancel the active loop |
| `/pickle-retry <ticket-id>` | Re-attempt a failed ticket |
| `/add-to-pickle-jar` | Queue session for Night Shift |
| `/pickle-jar-open` | Run all Jar tasks sequentially |
| `/disable-pickle` | Disable the stop hook globally |
| `/enable-pickle` | Re-enable the stop hook |
| `/help-pickle` | Show all commands and flags |
| `/meeseeks` | Deprecated — superseded by `/anatomy-park` and `/szechuan-sauce` |
```
--max-iterations <N>          Stop after N iterations (default: 500; 0 = unlimited)
--max-time <M>                Stop after M minutes (default: 720 / 12 hours; 0 = unlimited)
--worker-timeout <S>          Timeout for individual workers in seconds (default: 1200)
--completion-promise "TXT"    Only stop when the agent outputs <promise>TXT</promise>
--resume [PATH]               Resume from an existing session
--reset                       Reset iteration counter and start time (use with --resume)
--paused                      Start in paused mode (PRD only)
--run                         (/pickle-refine-prd, /portal-gun) Auto-launch tmux
--interactive                 (/pickle-microverse) Run inline instead of tmux
--legacy                      (/pickle-dot) Prompt-only fallback — skips builder codegen for this run
--provider <name>             (/pickle-dot) LLM provider: anthropic, openai, qwen, gemini, deepseek, ollama, vllm
--review-provider <name>      (/pickle-dot) Separate provider for review/critical nodes
--isolated                    (/pickle-dot) Isolated workspace mode
--metric "<CMD>"              (/pickle-microverse) Shell command outputting a numeric score
--goal "<TEXT>"               (/pickle-microverse) Natural-language goal for LLM judge
--direction <higher|lower>    (/pickle-microverse) Optimization direction (default: higher)
--judge-model <MODEL>         (/pickle-microverse) Judge model for LLM scoring
--tolerance <N>               (/pickle-microverse) Score delta for "held" status (default: 0)
--stall-limit <N>             (/pickle-microverse) Non-improving iterations before convergence (default: 5)
--target <PATH>               (/portal-gun) Target repo (default: cwd)
--depth <shallow|deep>        (/portal-gun) Extraction depth (default: deep)
--no-refine                   (/portal-gun) Skip automatic refinement
--max-passes <N>              (/portal-gun) Max convergence passes (default: 3)
--save-pattern <NAME>         (/portal-gun) Persist pattern to library
--target <PATH>               (/pickle-pipeline) Target directory for review phases (default: cwd)
--skip-anatomy                (/pickle-pipeline) Skip anatomy-park phase
--skip-szechuan               (/pickle-pipeline) Skip szechuan-sauce phase
--anatomy-max-iterations N    (/pickle-pipeline) Anatomy Park iteration limit (default: 100)
--anatomy-stall-limit N       (/pickle-pipeline) Anatomy Park stall limit (default: 3)
--szechuan-max-iterations N   (/pickle-pipeline) Szechuan Sauce iteration limit (default: 50)
--szechuan-stall-limit N      (/pickle-pipeline) Szechuan Sauce stall limit (default: 5)
--szechuan-domain <name>      (/pickle-pipeline) Domain-specific principles for Szechuan phase
--szechuan-focus "<text>"     (/pickle-pipeline) Focus directive for Szechuan phase
--dry-run                     (/szechuan-sauce, /plumbus) Catalog violations without fixing
--domain <name>               (/szechuan-sauce) Domain-specific principles (e.g., financial)
--focus "<text>"              (/szechuan-sauce, /plumbus) Direct review toward a specific concern
--no-validator                (/plumbus) Disable attractor validator gate (pattern-only review)
--repo <PATH>                 (/council-of-ricks) Target repo (default: cwd)
```
`/pickle` vs `/pickle-tmux` — Use `/pickle` for short epics (1–7 iterations) with full keyboard access. Use `/pickle-tmux` for long epics (8+) where context drift matters — each iteration spawns a fresh Claude subprocess with a clean context window.
Phase-resume — When resuming after `/pickle-refine-prd`, the resume flow auto-detects the session's current phase and skips completed phases.
Notifications (macOS) — `/pickle-tmux` and `/pickle-jar-open` send macOS notifications on completion or failure.
Recovering from a failed Morty — Use `/pickle-retry <ticket-id>` instead of restarting the whole epic.
"Stop hook error" is normal — Claude Code labels every `decision: block` from the stop hook as "Stop hook error" in the UI. This is not an error — it means the loop is working.
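For reference, a Stop hook blocks by returning a small JSON decision payload. The shape below follows Claude Code's hook output format; the reason text is illustrative, not the extension's actual message:

```json
{ "decision": "block", "reason": "Tickets remain open; injecting session summary and continuing the loop" }
```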
All defaults are configurable via ~/.claude/pickle-rick/pickle_settings.json:
| Setting | Default | Description |
|---|---|---|
| `default_max_iterations` | 500 | Max loop iterations before auto-stop |
| `default_max_time_minutes` | 720 | Session wall-clock limit (12 hours) |
| `default_worker_timeout_seconds` | 1200 | Per-worker subprocess timeout |
| `default_manager_max_turns` | 50 | Max Claude turns per iteration (interactive/jar) |
| `default_tmux_max_turns` | 200 | Max Claude turns per iteration (tmux) |
| `default_refinement_cycles` | 3 | Number of refinement analysis passes |
| `default_refinement_max_turns` | 100 | Max Claude turns per refinement worker |
| `default_council_min_passes` | 5 | Minimum Council of Ricks review passes |
| `default_council_max_passes` | 20 | Maximum Council of Ricks review passes |
| `default_circuit_breaker_enabled` | true | Enable circuit breaker |
| `default_cb_no_progress_threshold` | 5 | No-progress iterations before OPEN |
| `default_cb_same_error_threshold` | 5 | Identical errors before OPEN |
| `default_cb_half_open_after` | 2 | No-progress iterations before HALF_OPEN |
| `default_rate_limit_wait_minutes` | 60 | Fallback wait when no API reset time |
| `default_max_rate_limit_retries` | 3 | Consecutive rate limits before stopping |
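A minimal override file might look like this (illustrative values; assuming the file is a flat JSON object where any key from the table above can be set and unlisted keys keep their defaults):

```json
{
  "default_max_iterations": 100,
  "default_max_time_minutes": 240,
  "default_cb_no_progress_threshold": 3
}
```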
"I put a universe inside a box, Morty, and it powers my car battery. This is the same thing, except the universe is your codebase and the battery is a metric."
Two modes: Command Metric (--metric) for objective numeric scores, and LLM Judge (--goal) for subjective quality assessment.
```
Gap Analysis (iteration 0)
   │  measure baseline, analyze codebase, identify bottlenecks
   ▼
┌───────────────────────────────────────────────────┐
│                 Iteration Loop                    │
│ 1. Plan one targeted change (avoid failed list)   │
│ 2. Implement + commit                             │
│ 3. Measure metric                                 │
│    • Improved  → accept, reset stall counter      │
│    • Held      → accept, increment stall counter  │
│    • Regressed → git reset, log failed approach   │
│ 4. Converged? (stall_counter ≥ stall_limit)       │
└───────────────────────┬───────────────────────────┘
                        ▼
                  Final Report
```
| | Microverse | Pickle |
|---|---|---|
| Goal | Optimize toward a measurable target | Build features from a PRD |
| Iteration unit | One atomic change per cycle | Full ticket lifecycle |
| Progress signal | Metric score | Ticket completion |
| Defines "done" | Convergence (score stops improving) | All tickets complete |
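The accept/revert decision in step 3 of the loop can be sketched as a pure function (illustrative, not the actual runner code; `decide` and its return shape are assumptions):

```javascript
// One Microverse decision: compare a new score against the best-so-far,
// honoring --direction and --tolerance. "Held" results are kept but count
// toward the stall limit; regressions are reverted (git reset) and also
// count as non-improving here.
function decide(best, score, { direction = "higher", tolerance = 0 } = {}) {
  const delta = direction === "higher" ? score - best : best - score;
  if (delta > tolerance) return { action: "accept", best: score, stallDelta: 0 }; // improved
  if (delta >= -tolerance) return { action: "accept", best, stallDelta: 1 };      // held
  return { action: "revert", best, stallDelta: 1 };                               // regressed
}
```

Convergence is declared once the accumulated stall count reaches `--stall-limit`.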
"I'm not driven by avenging my dead family, Morty. That was fake. I-I-I'm driven by finding that McNugget sauce."
Reads 30+ coding principles (KISS, YAGNI, DRY, SOLID, Guard Clauses, Fail-Fast, Encapsulation, Cognitive Load, etc.) and scores against a priority matrix (P0 security/data-loss through P4 style). Each iteration: find highest-priority violation, fix atomically, run tests, commit, measure. Regressions auto-revert.
Phase 0: Contract Discovery — greps the codebase for importers of every export in target files, builds a contract map, and flags cross-module mismatches. Re-checked after every fix.
Supports --domain <name> for domain-specific principles (e.g., financial adds monetary precision, rounding, regulatory compliance) and --focus "<text>" to elevate specific concerns.
"Welcome to Anatomy Park! It's like Jurassic Park but inside a human body. Way more dangerous."
Auto-discovers subsystems, rotates through them round-robin, three-phase protocol per iteration:
- Review (read-only): trace data flows, check git history, rate CRITICAL/HIGH, propose fixes
- Fix: apply minimal edits, write regression tests, run full suite
- Verify (read-only): verify callers/consumers, combinatorial branch verification, revert on regression
Trap doors — files with repeated fixes or structural invariants get documented in subsystem CLAUDE.md files:

```md
## Trap Doors
- `bank-statement.service.ts` — borrowerFileId MUST equal S3 batch UUID; tenant isolation depends on effectiveLenderId threading
```

`/pickle-dot` builds DOT pipelines by default via the DotBuilder TypeScript class — a schema-validated codegen path that enforces 32 active patterns and 15 structural validation rules and produces deterministic output. Use `--builder` to explicitly opt into the builder (e.g., when a global config overrides it), or `--legacy` to fall back to prompt-only generation for a specific run.

```bash
/pickle-dot my-prd.md             # Builder codegen path (default)
/pickle-dot --builder my-prd.md   # Explicit opt-in to builder (same as default)
/pickle-dot --legacy my-prd.md    # Prompt-only fallback — rollback for a single run
```

```typescript
import { DotBuilder } from '~/.claude/pickle-rick/extension/services/dot-builder.js';

// Static factory — validates and parses the spec, then returns a builder instance
const builder = DotBuilder.fromSpec(spec); // throws BuildError on invalid spec

// Fluent chain — call build() once; calling it again throws ALREADY_BUILT
const result = builder.build();

// result: BuildResult {
//   dot: string,               — the complete DOT digraph string
//   slug: string,              — URL-safe pipeline identifier
//   patternsApplied: string[]  — Tier 1/2 patterns auto-applied (e.g. ["test_fix_loop","fan_out"])
//   defenseMatrix: {           — layer coverage summary
//     competitive: boolean,    — Pattern 18 (competing impls) applied
//     specDriven: string,      — "ALL" | "PARTIAL" | "NONE" (conformance nodes present)
//     adversarial: boolean,    — Pattern 17 (red team) applied
//   },
//   diagnostics: Diagnostic[]  — warnings/infos from validation (non-blocking)
// }
```

The builder binary reads BuilderSpec JSON from stdin and writes to stdout/stderr:
```bash
echo '<BuilderSpec JSON>' | node ~/.claude/pickle-rick/extension/bin/dot-builder.js
```

| Exit | Stream | Payload |
|---|---|---|
| 0 | stdout | `BuildResult` JSON — `{ dot, slug, patternsApplied, defenseMatrix, diagnostics }` |
| 1 | stderr | `BuildError` JSON — `{ error: BuildErrorCode, message, diagnostics }` — validation failure, recoverable |
| 2 | stderr | `{ error: "UNEXPECTED_ERROR", message }` — I/O or parse failure, not recoverable |
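A caller can dispatch on that exit-code contract like so (sketch; `classifyBuilderExit` is a hypothetical helper, not part of the extension):

```javascript
// Map the dot-builder CLI's exit code + streams to a structured result.
function classifyBuilderExit(code, stdout, stderr) {
  if (code === 0) return { ok: true, result: JSON.parse(stdout) };                    // BuildResult
  if (code === 1) return { ok: false, recoverable: true, error: JSON.parse(stderr) }; // BuildError: feed diagnostics to the fix loop
  return { ok: false, recoverable: false, error: JSON.parse(stderr) };                // UNEXPECTED_ERROR: abort
}
```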
When the builder exits 1, /pickle-dot enters an automatic fix loop. It reads the diagnostics array from stderr, applies minimum-scope fixes to the BuilderSpec, and re-invokes the CLI. The loop tracks the best attempt (fewest errors) and reverts to it after 2 consecutive non-improvements. After 3 total failed iterations without improvement:
- The best `BuilderSpec` output is saved as `./<slug>.dot.draft`
- All remaining diagnostics with their `.fix` hints are listed
- The loop stops — manual intervention required

Re-run after fixing: `/pickle-dot <prd>`. The `.dot.draft` file is not a valid pipeline — do not submit it to `/attract` until errors are resolved.
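One reading of that convergence guard, sketched as a step function (illustrative; `guardStep` and the exact consecutive-versus-total accounting are assumptions):

```javascript
// Track the best attempt (fewest diagnostics) across fix-loop iterations.
// Two consecutive non-improvements trigger a revert to the best attempt;
// a third without improvement stops the loop and saves <slug>.dot.draft.
function guardStep(state, errorCount) {
  const s = { ...state };
  if (errorCount === 0) { s.done = "success"; return s; }
  if (errorCount < s.bestErrors) {
    s.bestErrors = errorCount;   // new best attempt
    s.noImprove = 0;
  } else {
    s.noImprove += 1;
    if (s.noImprove >= 2) s.revertToBest = true;
    if (s.noImprove >= 3) s.done = "save_draft";
  }
  return s;
}
```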
Legacy (prompt-only) path: /pickle-dot --legacy also runs a post-save validate-fix loop with the same convergence guard, invoking the attractor validator CLI (bun packages/attractor/src/cli.ts validate) on the emitted raw DOT. On exhaustion it saves the best attempt as ./<slug>.dot.draft. If the validator CLI is unavailable (attractor root not detected), the loop is skipped and the initial DOT is saved as-is with a warning.
Validation error codes: EMPTY_SLUG, EMPTY_GOAL, DUPLICATE_PHASE, INVALID_SPEC, MISSING_AC_MAPPING, MISSING_TIMEOUT, INVALID_TIMEOUT, MISSING_ALLOWED_PATHS, INVALID_ALLOWED_PATHS, PROMPT_PATH_MISMATCH, INVALID_STRUCTURE, START_HAS_INCOMING, UNREACHABLE_NODE, DIAMOND_MISSING_EDGES, FAN_OUT_SCOPE_LEAK, GOAL_GATE_NO_MAX_VISITS, REVIEW_MISSING_READONLY, WORKSPACE_NO_HTTPS, WORKSPACE_NO_PUSH, PLAN_MODE_DEADLOCK, COMPONENT_NO_MERGE, INVALID_RATCHET, NON_NUMERIC_TARGET, ALREADY_BUILT, DUPLICATE_MODEL, INVALID_CONVERGENCE_SPEC
Requires a Graphite stack with at least one non-trunk branch, a CLAUDE.md with project rules, passing lint, and architectural lint rules in ESLint. Escalates through focus areas: stack structure (pass 1) → CLAUDE.md compliance (2–3) → per-branch correctness (4–5) → cross-branch contracts (6–7) → test coverage (8–9) → security (10–11) → polish (12+). Issues are triaged: P0 (must-fix), P1 (should-fix), P2 (nice-to-fix).
"Everybody has a plumbus in their home, Morty. First they take the dinglebop, smooth it out with a bunch of schleem..."
The same convergence loop applied to a single attractor .dot pipeline. Runs the attractor validator as a hard gate, walks every edge, and converges against the pickle-dot-patterns rubric (DAG validity, Tier 1 mandatory patterns, anti-patterns). Use it after /pickle-dot generates a graph you want hardened before /attract.
```bash
/plumbus pipeline.dot                           # Shape a DAG into a proper plumbus
/plumbus --dry-run pipeline.dot                 # Catalog violations only
/plumbus --focus "fan-out safety" pipeline.dot
/plumbus --no-validator pipeline.dot            # Pattern-only (no attractor repo)
```

When to use which: Szechuan Sauce asks "is this code well-designed?"; Anatomy Park asks "is this code correct?"; Plumbus asks "will this DAG actually run without deadlocking?"
Each ticket goes through 8 phases in the autonomous loop:
```
┌──────────────┐
│ 📝 PRD       │ ← Requirements + verification strategy + interface contracts
└──────┬───────┘
       │
       ▼
┌──────────────┐
│ 📦 Breakdown │ ← Atomize into tickets, each self-contained with spec
└──────┬───────┘
       │
  ┌────┴─────┐  per ticket (Morty workers 👶)
  ▼          ▼
┌──────┐  ┌──────┐
│🔬 Re-│  │🔬 Re-│  1. Research the codebase
│search│  │search│
└──┬───┘  └──┬───┘
   ▼         ▼
┌──────┐  ┌──────┐
│📋 Re-│  │📋 Re-│  2. Review the research
│view  │  │view  │
└──┬───┘  └──┬───┘
   ▼         ▼
┌──────┐  ┌──────┐
│📐Plan│  │📐Plan│  3. Architect the solution
└──┬───┘  └──┬───┘
   ▼         ▼
┌──────┐  ┌──────┐
│📋 Re-│  │📋 Re-│  4. Review the plan
│view  │  │view  │
└──┬───┘  └──┬───┘
   ▼         ▼
┌──────┐  ┌──────┐
│⚡ Im-│  │⚡ Im-│  5. Implement
│plem  │  │plem  │
└──┬───┘  └──┬───┘
   ▼         ▼
┌──────┐  ┌──────┐
│✅ Ve-│  │✅ Ve-│  6. Spec conformance
│rify  │  │rify  │
└──┬───┘  └──┬───┘
   ▼         ▼
┌──────┐  ┌──────┐
│📋 Re-│  │📋 Re-│  7. Code review
│view  │  │view  │
└──┬───┘  └──┬───┘
   ▼         ▼
┌──────┐  ┌──────┐
│🧹Sim-│  │🧹Sim-│  8. Simplify
│plify │  │plify │
└──────┘  └──────┘
```
The Stop hook prevents Claude from exiting until the task is genuinely complete. Between each iteration, the hook injects a fresh session summary — current phase, ticket list, active task — so Rick always wakes up knowing exactly where he is, even after full context compression.
All modes support both tmux and Zellij monitor layouts.
```bash
/pickle-metrics            # Last 7 days, daily breakdown
/pickle-metrics --days 30  # Last 30 days
/pickle-metrics --weekly   # Weekly buckets (defaults to 28 days)
/pickle-metrics --json     # Machine-readable JSON output
```

- Node.js 18+
- Claude Code CLI (`claude`) — v2.1.49+
- jq (for `install.sh`, `uninstall.sh`, `uninstall-hooks.sh`)
- rsync (for `install.sh`)
- tmux (optional — for `/pickle-tmux`, `/szechuan-sauce`, `/anatomy-park`)
- Zellij >= 0.40.0 (optional — for `/pickle-zellij`)
- Graphite CLI (`gt`) (optional — for `/council-of-ricks`)
- macOS or Linux (Windows not supported)
This port stands on the shoulders of giants. Wubba Lubba Dub Dub.
| Credit | Contribution |
|---|---|
| 🥒 galz10 | Creator of the original Pickle Rick Gemini CLI extension — the autonomous lifecycle, manager/worker model, hook loop, and all the skill content that makes this thing work. This project is a faithful port of their work. |
| 🧠 Geoffrey Huntley | Inventor of the "Ralph Wiggum" technique — the foundational insight that "Ralph is a Bash loop": feed an AI agent a prompt, block its exit, repeat until done. Everything here traces back to that idea. |
| 🧠 AsyncFuncAI/ralph-wiggum-extension | Reference implementation of the Ralph Wiggum loop that inspired the Pickle Rick extension. |
| ✍️ dexhorthy | Context engineering and prompt techniques used throughout. |
| 📺 Rick and Morty | For Pickle Riiiick! 🥒 |
Apache 2.0 β same as the original Pickle Rick extension.
"I'm not a tool, Morty. I'm a methodology." 🥒
```jsonc
{
  "slug": "auth_refactor",                // required — URL-safe, lowercase underscores
  "goal": "Refactor auth module",         // required — single-sentence goal
  "phases": [                             // required — list of implementation phases (may be [] for microverse-only)
    {
      "name": "implement",                // required — lowercase underscores; must be unique
      "prompt": "...",                    // required — full impl instruction; agent has NO access to the PRD
      "allowedPaths": ["src/auth/"],      // required — glob patterns for permission scoping
      "dependsOn": ["research"],          // optional — phase names this phase depends on; omit for parallel fan-out
      "goalGate": true,                   // optional — Pattern 2: verify progress before continuing
      "timeout": "30m",                   // optional — per-phase duration string (default: "30m")
      "securityScan": true,               // optional — Pattern 8: npm audit node after progress gate
      "coverageTarget": 80,               // optional — Pattern 9: numeric coverage % gate
      "competing": true,                  // optional — Pattern 18: fan-out to two competing impls
      "redTeam": true,                    // optional — Pattern 17: adversarial review after conformance
      "bddScenarios": true,               // optional — Pattern 16b: Given/When/Then scenario generation
      "specFirst": true,                  // optional — Pattern 16: write tests before impl (default: true when goalGate)
      "docOnly": false,                   // optional — suppress verify chain for doc-only phases
      "escalateOn": ["package.json"],     // optional — files that trigger escalation (default: ["package.json","*.lock","*.config.*"])
      "contextOnSuccess": {               // optional — custom AC keys emitted by this phase's conformance node
        "auth_secure": "true"
      }
    }
  ],
  "acceptanceCriteria": {                 // required — exit gate conditions
    "tests_pass": "true",                 // Tier 2 keys (auto-sourced): tests_pass, lint_clean, types_compile,
    "lint_clean": "true",                 //   cli_contract, determinism, validation_rules
    "auth_secure": "true"                 // Tier 1 keys (custom): must appear in a phase's contextOnSuccess
  },
  "workingDir": "${WORKING_DIR}",         // optional — attractor resolves at runtime
  "specFile": "/repos/myapp/prd.md",      // optional — path to PRD; interpolated as $spec_file in node prompts
  "reviewRatchet": 2,                     // optional — min consecutive clean review passes (must be ≥ 2)
  "workspace": "isolated",                // optional — omit for shared (default)
  "workspaceOpts": {                      // required when workspace: "isolated"
    "repoUrl": "https://github.com/org/repo.git",  // HTTPS required (not SSH)
    "repoBranch": "main",
    "cleanup": "preserve"                 // "preserve" (default) | "delete"
  },
  "microverse": {                         // optional — numeric optimization loop (replaces impl/verify chain)
    "name": "bundle_opt",
    "opts": {
      "prompt": "...",
      "measureCommand": "npm run build 2>/dev/null && wc -c < dist/bundle.js",
      "target": 819200,
      "direction": "reduce",              // "reduce" | "improve"
      "allowedPaths": ["src/**"]
    }
  },
  "modelStylesheet": {                    // optional — model tier overrides
    "defaultModel": "claude-sonnet-4-6",
    "criticalModel": "claude-opus-4-6",
    "reviewModel": "claude-opus-4-6"
  },
  "convergence": {                        // optional — Pattern 32 iterative convergence loop (replaces phases)
    "until": "V_total == 0 && fixed_point && reproducibility",  // predicate from canonical set
    "impl": { "harness": "hermes" },      // required — default harness for fix nodes
    "maxIterations": 6,                   // default: 6 — max body executions before non-convergence declared
    "maxVisits": 5,                       // default: 5 — per-converge-node visit budget
    "timeout": "21600s",                  // default: 21600s — overall converge node timeout
    "convergenceEpsilon": 100,            // default: 100 — V_total threshold for convergence declaration
    "fixBackend": {                       // optional — override fix_backend node
      "model": "provider/model-id",
      "harness": "hermes",
      "prompt": "...",
      "timeout": "3600s",
      "maxVisits": 10
    },
    "fixFrontend": {                      // optional — override fix_frontend node (same shape as fixBackend)
      "model": "provider/model-id",
      "harness": "hermes",
      "prompt": "..."
    },
    "mechanicalGates": {                  // optional — override mechanical gate tool_commands
      "buildApi": "cd /repos/app/packages/api && npx tsc --noEmit 2>&1 && echo 'api typecheck pass'",
      "testsApi": "cd /repos/app/packages/api && npm test --silent 2>&1 && echo 'api tests pass'",
      "buildUi": "cd /repos/app/packages/ui && npx tsc --noEmit 2>&1 && echo 'ui typecheck pass'",
      "lint": "cd /repos/app && npx eslint packages/api/src --max-warnings=0 2>&1 && echo 'lint pass'"
    },
    "reviewers": {                        // optional — override reviewer node attrs
      "be":  { "model": "provider/model-id", "harness": "hermes", "prompt": "..." },
      "fe":  { "model": "provider/model-id", "harness": "hermes", "prompt": "..." },
      "int": { "model": "provider/model-id", "harness": "hermes", "prompt": "..." }
    },
    "adversary": {                        // optional — override adversary node
      "model": "provider/model-id",
      "harness": "hermes",
      "prompt": "...",
      "sealedFromSource": "packages/api/src/**,packages/ui/app/**"
    },
    "fpVerify": {                         // optional — override fp_verify goal gate
      "command": "set -o pipefail; cd /repos/app && npm install 2>&1 | tail -3 && cd packages/api && npx tsc --noEmit && npm test && cd ../ui && npx tsc --noEmit && echo 'fixed-point verified'",
      "timeout": "900s",
      "maxVisits": 5
    },
    "reproVerify": {                      // optional — override repro_verify goal gate
      "command": "set -o pipefail; cd /repos/app && rm -rf packages/api/node_modules packages/ui/node_modules && npm install 2>&1 | tail -3 && cd packages/api && npx tsc --noEmit && npm test && cd ../ui && npx tsc --noEmit && echo 'reproducibility verified'",
      "timeout": "900s",
      "maxVisits": 5
    }
  }
}
```