Stage 1 of the Stage 5b LLM-fallback plan — the unblock path for MCP-capable callers (Claude Code, Cursor, Zed, Continue.dev, Claude Desktop, Cline, JetBrains AI Assistant, anything stdio-MCP-capable). Depends on the Stage 0 tripwire + audit pipeline landing first.
Goal
When a deterministic scraper hits a tripwire, let an MCP-capable caller's own LLM attach `chrome-devtools-mcp` (or equivalent) to the daemon's Chrome port, drive the browser through the unknown step, and resume the scraper — all at zero cost to AgentKeys (it runs on the caller's existing LLM quota) with no new API key on our side. This preserves Stage 5b's "no second API key" rule.
Scope
- MCP handshake capability negotiation — `crates/agentkeys-mcp/src/lib.rs`:
  - On MCP client connect, the daemon asks "can you attach `chrome-devtools-mcp` (or an equivalent browser MCP) to http://localhost:9222?".
  - If the client answers yes, Stage 1 becomes the caller's preferred fallback tier; otherwise the default is Stage 0 `none_configured` (from the sibling issue).
  - Advertise a `browser-control-capable` capability flag in the daemon's MCP handshake.
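The tier-selection decision above can be sketched as follows. The type and function names are illustrative assumptions, not the real `agentkeys-mcp` API (which lives in Rust); TypeScript is used here for brevity:

```typescript
// Hypothetical sketch of fallback-tier selection from the MCP handshake.
// Names are assumptions, not the real agentkeys-mcp API.
type FallbackTier = "caller-driven" | "none_configured";

interface ClientCapabilities {
  // true when the client confirmed it can attach chrome-devtools-mcp
  // (or an equivalent browser MCP) to http://localhost:9222
  browserControlCapable: boolean;
}

// Default to Stage 0 behavior unless the client opted in during the handshake.
function selectFallbackTier(caps: ClientCapabilities | undefined): FallbackTier {
  return caps?.browserControlCapable ? "caller-driven" : "none_configured";
}
```

This keeps Stage 0 as the unconditional default, so a client that never answers the capability question behaves exactly like a non-MCP caller.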
- New MCP tool `agentkeys.resume_provision({ token, user_action_summary })`:
  - The caller invokes it after finishing the manual step via its own `chrome-devtools-mcp`.
  - The daemon verifies Chrome is on a known-next-state URL (guards against arbitrary caller claims).
  - The daemon re-spawns the scraper with a `--resume-from-step <step>` flag; the scraper picks up and completes.
  - Returns the normal success envelope: `{"type":"success","api_key":"sk-..."}`.
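The known-next-state URL guard can be sketched as below. The state shape and field names are assumptions about what Stage 0 persists, not the daemon's real schema:

```typescript
// Hedged sketch of the resume_provision URL guard. Field names are illustrative.
interface FallbackState {
  token: string;
  step: string;         // step to pass back via --resume-from-step
  expectedUrl: string;  // URL Chrome must be on before re-spawning the scraper
}

interface ResumeError {
  type: "error";
  message: string;
}

// Returns null when validation passes (the daemon may re-spawn the scraper),
// or a refusal it can surface to the MCP client.
function validateResume(state: FallbackState, currentChromeUrl: string): ResumeError | null {
  if (!currentChromeUrl.startsWith(state.expectedUrl)) {
    return {
      type: "error",
      message: `Chrome is on ${currentChromeUrl}, expected ${state.expectedUrl}; refusing to re-spawn`,
    };
  }
  return null;
}
```

The same guard is what the state-validation acceptance check exercises: an unexpected URL yields a refusal, never a re-spawn.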
- Scraper resume flag — `openrouter-cdp.ts` + `openai-cdp.ts`:
  - Accept a `--resume-from-step <step>` argument.
  - Skip past the named step using the serialized state from `~/.agentkeys/fallback/<token>.json` (persisted by Stage 0).
  - Validate Chrome is on the expected URL before proceeding; abort with a clear error if the caller's LLM left it elsewhere.
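A minimal sketch of the flag parsing and state loading, assuming the Stage 0 state file is plain JSON (the helper names are hypothetical):

```typescript
import * as fs from "node:fs";
import * as os from "node:os";
import * as path from "node:path";

// Pull the step name out of argv; returns null when the flag is absent
// or has no value after it.
function parseResumeStep(argv: string[]): string | null {
  const i = argv.indexOf("--resume-from-step");
  return i >= 0 && argv[i + 1] !== undefined ? argv[i + 1] : null;
}

// Load the serialized state Stage 0 persisted for this fallback token.
function loadFallbackState(token: string): unknown {
  const file = path.join(os.homedir(), ".agentkeys", "fallback", `${token}.json`);
  return JSON.parse(fs.readFileSync(file, "utf8"));
}
```

The scraper would then compare the current Chrome URL against the expected one from the loaded state before skipping ahead to the named step.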
- Chrome port exposure — no daemon changes needed. The Chrome endpoint is already exposed on http://localhost:9222 via `scripts/reset-chrome-for-recording.sh`. Stage 1 just documents that callers attach their own `chrome-devtools-mcp` to the same port (the same pattern our repo's `.mcp.json` uses for dev-time work). No proxying, no new daemon surface.
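For reference, a caller-side MCP config attaching `chrome-devtools-mcp` to the daemon's port might look like the snippet below. The exact flag name should be checked against the `chrome-devtools-mcp` version in use; `--browserUrl` is an assumption here:

```json
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest", "--browserUrl", "http://localhost:9222"]
    }
  }
}
```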
- Zero new dependencies. Stage 1 adds no npm or Cargo packages on our side; the caller brings its own LLM client + SDK.
- Integration guides — a new `docs/integration-guides/` directory with one short doc per MCP client we validate:
  - `claude-code.md` (reference implementation)
  - `cursor.md` (proves the flow isn't Anthropic-specific)
  - at least one more (Zed or Continue.dev) to confirm MCP generality
- Update `~/.claude/skills/agentkeys-workflow-collection/SKILL.md` to reference the resume handshake.
Acceptance
Verify against at least two MCP clients to prove the `chrome-devtools-mcp` handoff is client-agnostic:
- Reference client 1, Claude Code: `provision_key(service=openai, fallback_preference="caller-driven")`. Hit the synthetic tripwire (from Stage 0's test harness) → `NeedsFallback { token, tier: "caller-driven", chrome_url: "http://localhost:9222" }`. Claude Code's `chrome-devtools-mcp` clicks the injected button. `resume_provision(token)` → scraper resumes → `{"type":"success","api_key":"sk-proj-..."}`.
- Reference client 2, Cursor (or Zed): repeat the same flow, confirming the handoff doesn't depend on Anthropic-specific plumbing. Document the setup in `docs/integration-guides/cursor.md`.
- Audit trail in `~/.agentkeys/audit/<ts>.jsonl` captures the tripwire event, the caller's LLM action summary, the resume event, and success. End-to-end overhead is ≤30s vs the happy path.
- Non-MCP caller regression: plain curl against the daemon hits the same synthetic tripwire and receives `NeedsFallback { tier: "none_configured" }`, confirming Stage 0 fallback still kicks in correctly for unsupported clients.
- Capability-flag negotiation: an MCP client that advertises `browser-control-capable=false` is routed to Stage 0 behavior, not Stage 1.
- State validation: calling `resume_provision` with Chrome parked on an unexpected URL makes the daemon refuse with a clear error and not re-spawn the scraper.
Dependencies
- Blocked on Stage 0's tripwire event + `~/.agentkeys/fallback/<token>.json` persistence landing first (sibling issue).
Out of scope
- … the `/agentkeys-ship-scraper` skill). The audit trail from Stage 1 sessions feeds it.