An open-source security playbook for AI agents. Structured, OWASP-grounded procedures that enable agents to perform security engineering tasks — from code review to AI agent security audits.
- Why Use This?
- What This Is
- Quick Start
- Skills Catalog
- Agents
- Example Output
- Plays
- Architecture
- OWASP Foundation
- Related Projects
- Contributing
## Why Use This?

Without a playbook, asking an AI agent to "review my code for security" gives you a surface-level checklist. With these plays, the agent follows a structured, OWASP-grounded procedure — systematically testing every vulnerability class and producing findings with CWE mappings, OpenCRE cross-references, evidence snippets, and specific remediation code.
- Consistent methodology — Every assessment follows a documented procedure, not ad-hoc prompting. Results are reproducible across runs and reviewers.
- Structured, actionable output — Findings include severity, CWE, evidence, and remediation steps with code examples. No vague warnings.
- Cross-standard traceability — Findings link to CWE, ASVS, WSTG, and NIST 800-53 via OpenCRE for compliance mapping.
- 15 security skills — From dependency CVE scanning to prompt injection testing to multi-agent threat modeling. Install as a Claude Code plugin or use standalone.
- Works beyond Claude Code — Skills are Claude Code plugins; plays are standalone procedures any AI agent can follow.
## What This Is

This is not a framework or a library. There is no code to import.
Each play is a step-by-step security procedure with checklists, decision criteria, and output templates. An AI agent follows the procedure to produce consistent, evidence-based findings. Think of it like a SOC analyst's playbook — but written for AI agents to execute.
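As a rough illustration, a play file might be organized along these lines (a hypothetical skeleton — the actual plays define their own structure and templates):

```markdown
# Play: <task name>

## Trigger
When should this play run? (e.g. the user asks for a dependency CVE scan)

## Procedure
1. Scope the target and enumerate entry points
2. Work through the checklist below, class by class
3. For each hit, capture an evidence snippet and map it to CWE / OpenCRE

## Checklist
- [ ] Vulnerability class 1 — decision criteria for pass/fail
- [ ] Vulnerability class 2 — ...

## Output
Emit one finding per issue: severity, CWE, evidence, remediation
```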
## Quick Start

### With Claude Code (recommended)
Step 1 — Register the plugin marketplace:
```
/plugin marketplace add OWASP/secure-agent-playbook
```
Step 2 — Install a skill set:
```
/plugin install code-security-skills@agent-security-playbook
/plugin install ai-security-skills@agent-security-playbook
```
Step 3 — Use the skills by mentioning the task in conversation:
- "Review this code for security issues"
- "Scan my dependencies for CVEs"
- "Audit this MCP server configuration"
- "Test this chatbot for prompt injection"
Claude will automatically activate the relevant skill based on context. See Skills Catalog for all available skills and Example Output for what the results look like.
Organization plugin — For Claude organization admins installing via Organization Plugins:
- Go to the latest release
- Download `secure-agent-playbook.zip` from the release assets
- Upload the zip at Organization settings > Plugins > Add plugin
Note: Do not use GitHub's "Download ZIP" button — it nests files in a subdirectory that the plugin validator rejects. Always use the release asset zip.
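The release asset works because its files sit at the archive root. If you ever need to build an equivalent zip from a local clone, a stdlib sketch like this would do it (the function name is mine, and whether the plugin validator accepts such a zip is an assumption):

```python
import os
import zipfile

def make_plugin_zip(repo_dir: str, out_path: str) -> None:
    """Zip repo_dir so files land at the archive root (no nesting)."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, dirs, files in os.walk(repo_dir):
            dirs[:] = [d for d in dirs if d != ".git"]  # skip VCS metadata
            for name in files:
                full = os.path.join(root, name)
                # Writing with an arcname relative to repo_dir keeps e.g.
                # .claude-plugin/ at the top of the archive, unlike GitHub's
                # "Download ZIP", which nests everything under <repo>-<branch>/.
                zf.write(full, os.path.relpath(full, repo_dir))
```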
Local development — To test from a local clone instead of GitHub:
```
/plugin install /path/to/agent-security-playbook
```
### Without Claude Code
Reference plays directly as procedures for any AI agent or manual use:
- Point your agent at a play: "Follow the procedure in `plays/tier4-ai-security/agent-security-audit.md`"
- Or use the plays as checklists for manual security reviews
## Skills Catalog

### Code Security Skills

| Skill | What It Does | Say This | OWASP Ref |
|---|---|---|---|
| `code-review-security` | Security code review mapped to Top 10 + ASVS | "Review this code for security issues" | Top 10, ASVS |
| `sca-audit` | Dependency CVE scanning with reachability analysis | "Scan my dependencies for CVEs" | A06:2021 |
| `secrets-scan` | Detect hardcoded credentials and API keys | "Scan for hardcoded secrets" | CWE-798 |
| `api-security-review` | API review against OWASP API Top 10 (2023) | "Review this API for security" | API Top 10 |
| `web-security-review` | Web app review against OWASP Top 10 (2021) | "Review this web app for OWASP Top 10" | Top 10 |
| `iac-security-review` | IaC security (Terraform, K8s, CloudFormation) | "Review this Terraform for security" | CIS Benchmarks |
| `securability-engineering` | Generate inherently securable code (FIASSE) | "Generate secure code for..." | FIASSE |
| `securability-engineering-review` | Assess code securability (0-10 SSEM scoring) | "Assess the securability of this code" | FIASSE/SSEM |
| `security-guidance` | Auto-triggered ASVS guidance for security-sensitive code | (auto-triggered) | ASVS 5.0 |
### AI Security Skills

| Skill | What It Does | Say This | OWASP Ref |
|---|---|---|---|
| `agent-security-audit` | Audit agent permissions, injection surfaces, data exfil paths | "Audit this agent's security" | LLM Top 10 |
| `llm-risk-assess` | LLM app assessment against LLM Top 10 2025 | "Assess LLM risks for this app" | LLM Top 10 |
| `agentic-ai-risk-assess` | Agentic app assessment against Top 10 Agentic 2026 | "Assess agentic AI risks" | Agentic Top 10 |
| `mcp-server-review` | MCP server security review | "Review this MCP server" | LLM Top 10 |
| `prompt-injection-test` | Prompt injection testing (Arcanum PI Taxonomy) | "Test for prompt injection" | LLM01 |
| `multi-agentic-threat-model` | CSA MAESTRO 7-layer threat modeling | "Model threats for this multi-agent system" | CSA MAESTRO |
## Agents

Agents are autonomous security specialists that invoke skills and produce structured reports. Each agent has a focused system prompt, scoped tool access, and preloaded skills. Use them individually or as a coordinated team.
| Agent | Focus | Skills Invoked |
|---|---|---|
| `code-security-reviewer` | Code vulnerabilities, secrets, web security | `code-review-security`, `secrets-scan`, `web-security-review` |
| `dependency-auditor` | Supply chain and dependency CVE risks | `sca-audit` |
| `api-security-reviewer` | API security against OWASP API Top 10 | `api-security-review` |
| `ai-security-assessor` | Agent configs, MCP servers, LLM app risks | `agent-security-audit`, `mcp-server-review`, `llm-risk-assess`, `prompt-injection-test` |
| `security-team-lead` | Coordinates specialists, consolidates report | `securability-engineering-review` |
Standalone usage — invoke any agent directly:
- "Use code-security-reviewer to review src/"
- "Use dependency-auditor to scan this project"
Team assessment — with agent teams enabled, the team lead dispatches specialists in parallel and consolidates findings into a single report:
- "Run a full security assessment of this project"
The team lead scopes the target, dispatches relevant specialists (skipping those whose focus area isn't present), deduplicates findings, identifies cross-domain risk chains, and produces a unified report using `templates/report.md`.
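The deduplication step can be pictured with a small sketch — keying on (location, CWE) here is my own illustrative choice, not the team lead's actual logic:

```python
# Minimal sketch of merging specialist reports; the real consolidation
# procedure is defined by the play, not by this code.
def consolidate(reports):
    merged = {}
    for report in reports:
        for f in report:
            key = (f["location"], f["cwe"])
            # Keep the first copy of a duplicate; track who reported it.
            merged.setdefault(key, {**f, "reported_by": []})
            merged[key]["reported_by"].append(f["agent"])
    return list(merged.values())

code_review = [{"agent": "code-security-reviewer",
                "location": "src/auth/login.js:42", "cwe": "CWE-89"}]
api_review = [{"agent": "api-security-reviewer",
               "location": "src/auth/login.js:42", "cwe": "CWE-89"}]

unified = consolidate([code_review, api_review])
print(len(unified))  # → 1: both specialists flagged the same issue
```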
Note: Agent teams requires `CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1`; see the Claude Code docs for setup. Individual agents work without this flag.
## Example Output

Running "Review `src/auth/` for security issues" on a Node.js/Express codebase produces findings like this:
Security Code Review — src/auth/
Scope: Node.js/Express, server-side, processes user credentials
Findings: CRITICAL 1 | HIGH 2 | MEDIUM 1 | LOW 0
- CWE: CWE-89
- OpenCRE: CRE-764-507 — Injection Prevention
- OWASP Ref: A03:2021 Injection
- Location: `src/auth/login.js:42`
- Impact: Attacker can extract the entire users table, bypass authentication, or execute arbitrary SQL via crafted `id` parameter
- Evidence:
  ```js
  // src/auth/login.js:42
  const user = await db.query("SELECT * FROM users WHERE id=" + req.params.id);
  ```
- Remediation: Use parameterized queries:
  ```js
  const user = await db.query("SELECT * FROM users WHERE id = $1", [req.params.id]);
  ```
- Confidence: HIGH
Every skill produces structured findings with severity, CWE, evidence, and remediation code. See `examples/` for complete sample assessment reports.
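Conceptually, every finding carries the same record shape. A sketch in plain Python (field names mirror the example above; the authoritative schema lives in `templates/finding.md`):

```python
# Illustrative only — see templates/finding.md for the real format.
finding = {
    "severity": "CRITICAL",
    "cwe": "CWE-89",
    "opencre": "CRE-764-507",          # Injection Prevention
    "owasp_ref": "A03:2021 Injection",
    "location": "src/auth/login.js:42",
    "evidence": '"SELECT * FROM users WHERE id=" + req.params.id',
    "remediation": '"SELECT * FROM users WHERE id = $1", [req.params.id]',
    "confidence": "HIGH",
}
```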
## Plays

### Tier 4: AI Security

The differentiator — security procedures purpose-built for the AI agent era.

| Play | What It Does |
|---|---|
| `agent-security-audit` | Audit agent permissions, prompt injection surfaces, data exfiltration paths, guardrails |
| `llm-risk-assess` | Assess LLM applications against OWASP Top 10 for LLM Applications |
| `mcp-server-review` | Review MCP server implementations for overpermissioning, injection, data exposure |
| `prompt-injection-testing` | Test LLM apps against 18 attack techniques, 20 evasions, 13 intents |
### Tier 1: Code Security

Immediate, practical value for any codebase.

| Play | What It Does |
|---|---|
| `sca-audit` | Scan dependencies for known CVEs with reachability analysis |
| `code-review-security` | Systematic security code review mapped to OWASP Top 10 and ASVS |
| `secrets-scan` | Detect hardcoded credentials, API keys, and tokens |
| `api-security-review` | Review APIs against OWASP API Security Top 10 |
| `securability-engineering-review` | Assess code against FIASSE/SSEM securable attributes: Maintainability, Trustworthiness, Reliability, and Transparency |
- Tier 2: Threat modeling, ASVS verification, infrastructure hardening
- Tier 3: WSTG testing checklist, DAST orchestration, attack surface mapping
- Tier 5: SAMM maturity assessment, compliance mapping, aggregate reporting
## Architecture

Three-layer design:
- `agents/` — Autonomous security specialists with focused system prompts. Each agent invokes one or more skills, operates in an isolated context, and produces structured reports. Can work solo or as a coordinated team.
- `skills/` — Self-contained `SKILL.md` files following the Agent Skills spec. Installable as a Claude Code plugin via `.claude-plugin/plugin.json`. Each skill summarizes a procedure and references its corresponding play.
- `plays/` — Full reference procedures with detailed checklists, tables, decision criteria, and examples. Skills reference these for comprehensive coverage.
Agents orchestrate, skills execute, plays provide the full procedure. Contributors edit plays. This means the playbook works with any AI agent (just point it at a play), while Claude Code users get plugin-based installation with agents and skills.
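Pieced together from the paths mentioned in this README (so a sketch, not an authoritative listing), the repository shape is roughly:

```
agents/                      # autonomous specialists: focused prompts, scoped tools
skills/                      # self-contained SKILL.md files (Claude Code plugins)
plays/                       # full reference procedures, grouped by tier
  tier4-ai-security/
    agent-security-audit.md
templates/                   # finding.md and report.md output formats
data/asvs/                   # ASVS reference data (OWASP Agent Skills Project)
examples/                    # complete sample assessment reports
.claude-plugin/plugin.json   # plugin manifest
```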
## OWASP Foundation

All plays reference OWASP standards and datasets:
- OWASP Top 10 — Web application risks
- OWASP API Security Top 10 — API-specific risks
- OWASP Top 10 for LLM Applications — AI/LLM risks
- OWASP ASVS — Security verification requirements
- OWASP WSTG — Testing methodology
- OWASP SAMM — Security program maturity model
- OWASP Cheat Sheet Series — Developer security guidance
## Related Projects

| Project | Relationship |
|---|---|
| OWASP Agent Skills Project | Proactive ASVS 5.0 guidance for AI coding agents — helps agents write secure code. We use their ASVS reference data in `data/asvs/`. Complementary: they guide code generation, we find vulnerabilities in existing code. |
| Arcanum PI Taxonomy | Prompt injection attack classification by Jason Haddix. Our `prompt-injection-testing` play is built on this taxonomy. CC BY 4.0. |
| OpenCRE | Cross-standard requirement mappings (CWE, ASVS, WSTG, NIST 800-53). We use OpenCRE links in findings for multi-framework traceability. |
## Contributing

See CONTRIBUTING.md for detailed guidelines.
New plays should:
- Solve one well-defined security task
- Include clear trigger conditions (when should this play run?)
- Follow a structured procedure with checkpoints
- Produce findings using the `templates/finding.md` format
- Reference OWASP standards where applicable
- Prefer existing tools (semgrep, trivy, osv-scanner, trufflehog) over reimplementing detection
This project is open source. See LICENSE for details.