Autonomous AI security platform for enterprise codebases.
Project Aura detects vulnerabilities, generates production-ready patches, validates fixes in isolated sandboxes, and queues them for human approval -- all autonomously. Unlike traditional security scanners that stop at detection, Aura provides end-to-end remediation through a multi-agent AI system built for regulated industries.
Documentation | Quick Start | Architecture | Contributing
Traditional security tools identify problems. Aura solves them.
| Traditional Scanner | Aura |
|---|---|
| SQL Injection found in `user_service.py:47` | SQL Injection found in `user_service.py:47` |
| Severity: Critical | Severity: Critical |
| Recommendation: Sanitize user input | Patch generated: parameterized query via ORM |
| Then what? | Sandbox validation: PASSED (all tests green) |
| 1. Engineer researches the fix | Awaiting human approval for deployment... |
| 2. Engineer writes the patch | |
| 3. Engineer tests the patch | |
| 4. Engineer deploys the fix | |
| ...45 days later | |
Key differentiators:
- Full remediation, not just detection -- generates context-aware patches using deep codebase understanding
- Hybrid GraphRAG architecture -- combines structural analysis (call graphs, dependencies) with semantic search for comprehensive code context
- Multi-agent collaboration -- specialized Coder, Reviewer, and Validator agents with checks and balances
- Human-in-the-loop governance -- configurable approval workflows with industry-specific policy presets
- Sandbox-first validation -- every patch is tested in network-isolated environments before reaching human reviewers
- Constitutional AI guardrails -- 16-principle critique-revision pipeline ensures safe, compliant agent behavior
- Aerospace / safety-critical verification -- Deterministic Verification Envelope (ADR-085) provides DO-178C-aligned output verification via N-of-M consensus, MC/DC structural coverage, and Z3 formal verification with DAL-A/DAL-B policy profiles
- Continuous model assurance -- Automated provenance verification, frozen reference oracle, anti-Goodhart controls, shadow deployment, and one-click rollback for every model swap (ADR-088)
A look inside the Aura Platform.
Aura's security remediation pipeline is a layered architecture in which specialized AI agents collaborate through structured workflows — from vulnerability detection through GraphRAG context retrieval, patch generation, sandbox verification, and HITL approval.
```mermaid
flowchart TD
    Detect([Vulnerability Detection])
    subgraph GraphRAG[Hybrid GraphRAG]
        direction TB
        Neptune[Neptune Graph<br/>Structure]
        OpenSearch[OpenSearch<br/>Semantics]
        BM25[BM25<br/>Keywords]
        Fusion[Three-Way Fusion · RRF]
        Neptune --> Fusion
        OpenSearch --> Fusion
        BM25 --> Fusion
    end
    subgraph Agents[Multi-Agent System]
        direction LR
        Orchestrator[Orchestrator] --> Coder[Coder] --> Reviewer[Reviewer] --> Validator[Validator]
    end
    subgraph CAI[Constitutional AI]
        direction LR
        Critique[Critique<br/>16 principles] --> Revision[Revision]
    end
    subgraph Sandbox[Sandbox Validation]
        direction LR
        Syntax[Syntax] --- UnitTests[Unit Tests] --- Security[Security] --- Performance[Performance]
    end
    subgraph HITL[HITL Approval]
        direction LR
        Policy[Policy Engine] --> Review[Human Review] --> Deploy[Deploy]
    end
    Detect --> GraphRAG --> Agents --> CAI --> Sandbox --> HITL
```
| Component | Purpose | Technology |
|---|---|---|
| Agent Orchestrator | Coordinates remediation workflows | Python, FastAPI |
| Hybrid GraphRAG | Deep code understanding via graph + vector retrieval | Neptune (Gremlin), OpenSearch (k-NN) |
| Multi-Agent System | Specialized agents for code gen, review, validation | LLMs via AWS Bedrock |
| Constitutional AI | Principled safety guardrails for agent outputs | Critique-revision pipeline |
| Sandbox Network | Isolated environments for patch validation | ECS Fargate, network isolation |
| HITL Workflows | Configurable human approval gates | 4 autonomy levels, 7 policy presets |
| Layer | Technologies |
|---|---|
| Backend | Python 3.11+, FastAPI |
| LLM Integration | AWS Bedrock (Claude 3.5 Sonnet, Claude 3.5 Haiku) |
| Graph Database | AWS Neptune (Gremlin query language) |
| Vector Search | AWS OpenSearch with k-NN |
| Container Orchestration | AWS EKS with EC2 Managed Node Groups |
| Infrastructure as Code | AWS CloudFormation (183 templates) |
| Frontend | React 18, TypeScript, Tailwind CSS, Next.js 14 |
| Service Discovery | dnsmasq (3-tier architecture with DNSSEC) |
Aura's retrieval architecture combines three methods for comprehensive code understanding:
- Graph traversal (Neptune) -- call graphs, dependency chains, inheritance hierarchies
- Semantic search (OpenSearch k-NN) -- embedding-based similarity for related patterns
- Keyword search (BM25) -- exact identifier and function name matching
Results are fused using Reciprocal Rank Fusion (RRF), achieving 22-25% improvement in retrieval accuracy compared to single-method approaches.
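The fusion step can be sketched in a few lines. This is a minimal, generic Reciprocal Rank Fusion implementation, not Aura's actual code; the document IDs and the smoothing constant `k=60` (the value commonly used in the RRF literature) are illustrative assumptions.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked lists of document IDs (best first) into one.

    Each document scores sum(1 / (k + rank)) across the lists that return it,
    so items ranked highly by multiple retrievers rise to the top.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical results from the three retrievers:
graph = ["user_service.py", "db/orm.py", "auth.py"]          # Neptune traversal
semantic = ["db/orm.py", "user_service.py", "utils.py"]      # OpenSearch k-NN
keyword = ["user_service.py", "auth.py", "db/orm.py"]        # BM25

fused = reciprocal_rank_fusion([graph, semantic, keyword])
# "user_service.py" ranks first: two retrievers placed it at rank 1.
```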
Specialized agents collaborate through structured message passing:
| Agent | Responsibility |
|---|---|
| Orchestrator | Coordinates workflow phases, manages state and checkpoints |
| Coder | Generates context-aware patches using LLM capabilities |
| Reviewer | Validates patches against OWASP Top 10 and security policies |
| Validator | Executes 5-layer validation pipeline in sandboxed environments |
| Monitor | Tracks health, performance, token costs, and security metrics |
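The hand-off between agents can be illustrated with a toy message-passing loop. All names here (`Message`, the handler functions, the payload keys) are assumptions for illustration, not Aura's actual interfaces; the real agents call LLMs rather than these stand-in functions.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Message:
    phase: str                                 # e.g. "generate", "review", "validate"
    payload: dict = field(default_factory=dict)

# Stand-in handlers: each agent consumes a message and emits the next phase's message.
def coder(msg: Message) -> Message:
    patch = f"parameterize query at {msg.payload['file']}:{msg.payload['line']}"
    return Message("review", {**msg.payload, "patch": patch})

def reviewer(msg: Message) -> Message:
    ok = "parameterize" in msg.payload["patch"]   # stand-in for OWASP policy checks
    return Message("validate", {**msg.payload, "review_passed": ok})

def validator(msg: Message) -> Message:
    status = "PASSED" if msg.payload["review_passed"] else "REJECTED"
    return Message("done", {**msg.payload, "status": status})

PIPELINE: dict[str, Callable[[Message], Message]] = {
    "generate": coder, "review": reviewer, "validate": validator,
}

def orchestrate(finding: dict) -> Message:
    """The Orchestrator's role: advance the message through phases until done."""
    msg = Message("generate", finding)
    while msg.phase in PIPELINE:
        msg = PIPELINE[msg.phase](msg)
    return msg

result = orchestrate({"file": "user_service.py", "line": 47})
```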
Organizations configure HITL requirements based on compliance needs:
| Autonomy Level | Behavior | Use Case |
|---|---|---|
| FULL_HITL | All changes require approval | CMMC Level 3, maximum oversight |
| HITL_FINAL | Approve deployment only | SOX compliance |
| HITL_CRITICAL | Approve critical/high severity only | Balanced automation |
| FULL_AUTONOMOUS | No approval required | Development environments |
7 industry presets: defense_contractor, financial_services, healthcare, fintech_startup, enterprise_standard, internal_tools, fully_autonomous
Guardrails that always require humans: Production deployments, credential modifications, access control changes, database migrations, infrastructure changes.
All agent outputs pass through a critique-revision pipeline based on Anthropic's Constitutional AI research:
- 16 principles across Safety, Compliance, Anti-Sycophancy, Transparency, Helpfulness, and Code Quality
- Cost-optimized: Haiku for critique, Sonnet for revision (~85% cost savings)
- Constructive engagement: Issues are revised, not just blocked
- Trust Center dashboard: Real-time metrics for critique accuracy, revision convergence, and cache hit rate
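The critique-revision control flow is straightforward to sketch. In this toy version, `critique_fn` and `revise_fn` stand in for the Haiku critique and Sonnet revision model calls; the function names, the convergence loop bound, and the hard-coded secret example are all illustrative assumptions.

```python
def constitutional_filter(output: str, critique_fn, revise_fn, max_rounds: int = 3) -> str:
    """Run output through critique-revision until no issues remain (or give up)."""
    for _ in range(max_rounds):
        issues = critique_fn(output)        # e.g. Haiku judging against the 16 principles
        if not issues:
            return output                   # converged: no principle violations
        output = revise_fn(output, issues)  # e.g. Sonnet rewriting to address them
    return output                           # best effort after max_rounds

# Toy critique: flag a known hard-coded credential; toy revision: swap in an env lookup.
def toy_critique(text):
    return ["hardcoded-secret"] if '"hunter2"' in text else []

def toy_revise(text, issues):
    return text.replace('password="hunter2"', 'password=os.environ["DB_PASSWORD"]')

patched = constitutional_filter('conn(password="hunter2")', toy_critique, toy_revise)
# The revised output passes a second critique round, so the loop converges.
```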
| Technique | Impact |
|---|---|
| Chain of Draft prompting | 92% token reduction vs. Chain of Thought |
| Semantic caching (OpenSearch k-NN) | 68% cache hit rate |
| Self-reflection (Reflexion-style) | 30% reduction in false positives |
| Selective decoding (JEPA) | 2.85x inference efficiency for routing tasks |
| Titan Neural Memory | Continuous learning from remediation outcomes |
| Recursive context scaling (RLM) | 100x context window expansion for large codebases |
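To make the semantic-caching row concrete: a semantic cache returns a stored answer when a new query's embedding is close enough to a cached one, rather than requiring an exact string match. The sketch below uses a plain cosine-similarity scan over toy 3-dimensional vectors; the real system uses OpenSearch k-NN over LLM embeddings, and the class name and threshold are assumptions.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[list[float], str]] = []

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))

    def get(self, embedding):
        """Return a cached answer if any stored query embedding is similar enough."""
        best = max(self.entries, key=lambda e: cosine(e[0], embedding), default=None)
        if best and cosine(best[0], embedding) >= self.threshold:
            return best[1]
        return None

cache = SemanticCache(threshold=0.9)
cache.put([1.0, 0.0, 0.1], "use a parameterized query")
hit = cache.get([0.98, 0.05, 0.12])   # near-duplicate query embedding: cache hit
miss = cache.get([0.0, 1.0, 0.0])     # unrelated query: falls through to the LLM
```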
- Python 3.11+
- AWS CLI configured with appropriate credentials
- Podman (recommended) or Docker
```bash
git clone https://github.com/aenealabs/aura.git
cd aura
pip install -r requirements.txt
pytest tests/

# One-command clean-account deploy (bootstrap + state machine)
ALERT_EMAIL=ops@example.com ./deploy/deploy.sh deploy dev
```

The single command runs bootstrap (24 layer CodeBuild projects + private ECR base images + deployment-pipeline state machine) and then triggers the state machine to deploy every layer in dependency order.
See the Installation Guide for SaaS, Kubernetes, and Podman deployment options, and the Deployment Guide for the canonical AWS infrastructure deploy reference.
| Option | Setup Time | Best For |
|---|---|---|
| Cloud (SaaS) | Same day | Teams wanting immediate value without infrastructure overhead |
| Self-Hosted (Kubernetes) | 1-2 weeks | Organizations with data residency requirements or existing EKS clusters |
| Self-Hosted (Podman) | 1 day | Air-gapped deployments, small teams, proof-of-concept |
All deployment options support AWS GovCloud regions for government workloads.
| Control | Implementation |
|---|---|
| Encryption at rest | AES-256 via AWS KMS customer-managed keys |
| Encryption in transit | TLS 1.3 for all communications |
| Network isolation | VPC endpoints, no public internet exposure |
| Secrets management | AWS Secrets Manager, no hardcoded credentials |
| Container security | Private ECR base images, vulnerability scanning |
| Framework | Status |
|---|---|
| NIST 800-53 | Technical controls implemented |
| SOX | Controls implemented |
| GovCloud Ready | 100% (all deployed services compatible) |
| CMMC Level 2 | Infrastructure complete, organizational controls pending |
| FedRAMP High | Authorization path available |
- Input validation (SQL injection, XSS, SSRF, prompt injection detection)
- Secrets detection (30+ secret types with entropy-based detection)
- Security audit logging with CloudWatch and DynamoDB persistence
- Real-time threat intelligence with daily blocklist updates
- Semantic guardrails engine (6-layer threat detection)
- Agent capability governance (4-tier tool classification, runtime enforcement)
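The entropy-based secrets detection mentioned above relies on a standard observation: random key material has much higher Shannon entropy than ordinary identifiers. The sketch below is a generic illustration of that idea, not Aura's detector; the length and entropy thresholds and the key-shaped example string are assumptions.

```python
import math

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    if not s:
        return 0.0
    freq = {c: s.count(c) / len(s) for c in set(s)}
    return -sum(p * math.log2(p) for p in freq.values())

def looks_like_secret(token: str, min_len: int = 20, threshold: float = 4.0) -> bool:
    # Long, high-entropy tokens (API keys, random passwords) stand out from
    # ordinary identifiers, which reuse a small alphabet of common characters.
    return len(token) >= min_len and shannon_entropy(token) >= threshold

secretish = "AKIA9f3KxT7qLm2VwZ8pR5nYdB1c"   # made-up, key-shaped string
ordinary = "update_user_password_handler"     # same length, low entropy
```

Real detectors combine this with per-secret-type patterns (the "30+ secret types" above), since entropy alone flags things like hashes and misses short structured keys.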
- Platform Overview -- What is Aura, key benefits, use cases
- Quick Start Guide -- Get running in 5 minutes
- System Requirements -- Prerequisites
- Installation Guide -- SaaS, Kubernetes, Podman deployment
- First Project Walkthrough -- Connect a repository and run your first scan
- Autonomous Security Intelligence -- LLM-powered remediation
- Hybrid GraphRAG -- Neptune + OpenSearch architecture
- Multi-Agent System -- Agent orchestration
- HITL Workflows -- Autonomy levels and policy presets
- Sandbox Security -- Isolated validation model
- System Architecture -- Technical design and deployment topology
- API Reference -- REST, GraphQL, and webhooks
- Troubleshooting -- Common issues and solutions
- Monitoring and Operations -- Observability, logging, scaling
- FAQ -- Frequently asked questions
The project maintains Architecture Decision Records documenting rationale for significant design choices. Key ADRs include:
- ADR-024: Titan Neural Memory Architecture
- ADR-034: Context Engineering Framework
- ADR-063: Constitutional AI Integration
- ADR-065: Semantic Guardrails Engine
- ADR-083: Runtime Agent Security Platform
- ADR-085: Deterministic Verification Envelope (DO-178C, N-of-M consensus, MC/DC, Z3 formal verification)
- ADR-087: Hyperscale Agent Orchestration
- ADR-088: Continuous Model Assurance (provenance, frozen oracle, anti-Goodhart, rollback)
| Metric | Value |
|---|---|
| Lines of Code | 375,000+ |
| Test Suite | 24,800+ tests (0 failures) |
| Architecture Decision Records | 89 ADRs |
| CloudFormation Templates | 183 (24 CodeBuild + 159 infrastructure) |
| Infrastructure Phases | 9 of 9 complete |
Aura's security posture has been assessed against known agentic AI attack vectors including command injection, prompt injection, dependency confusion, and agent execution escape. Key architectural controls:
- Command Execution: Allowlist-based filtering with `shell=False` enforcement via `SecureCommandExecutor` -- eliminates shell metacharacter and Unicode bypass attacks
- Prompt Injection Defense: 6-layer Semantic Guardrails Engine (Unicode normalization, pattern matching, embedding similarity, LLM-as-judge, session tracking, decision engine) applied to both user input and ingested repository content
- Supply Chain: Purpose-built dependency confusion detector (typosquatting, namespace hijacking, version confusion), SBOM attestation with Sigstore signing, private ECR base images
- Agent Isolation: 4-tier tool classification with default-deny policy, container + network sandboxing with eBPF escape detection, restricted Python execution namespace
- Agent Governance: Configurable HITL autonomy levels (0-5), time-bounded dynamic grants, shadow agent detection with behavioral baselines and quarantine
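The command-execution control can be illustrated with a minimal allowlist wrapper. This is a generic sketch of the `shell=False` + allowlist pattern, not `SecureCommandExecutor` itself, whose interface is not shown in this README; the allowlist contents and timeout are assumptions.

```python
import shutil
import subprocess

ALLOWED_COMMANDS = {"git", "pytest", "pip"}   # illustrative allowlist

def run_secure(argv: list[str]) -> subprocess.CompletedProcess:
    """Run an allowlisted command with shell=False.

    Because argv is passed directly to exec (no shell), metacharacters such as
    ';', '|', or '$(...)' in arguments are inert data, not shell syntax.
    """
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"command not allowlisted: {argv[:1]}")
    binary = shutil.which(argv[0])
    if binary is None:
        raise FileNotFoundError(argv[0])
    return subprocess.run([binary, *argv[1:]], shell=False,
                          capture_output=True, text=True, timeout=60)
```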
For details, see Security Architecture.
We welcome contributions. See CONTRIBUTING.md for:
- Issue management and labels
- Pull request process and review standards
- Commit message format (Conventional Commits)
- Branch naming conventions
- Release process
If you discover a security vulnerability, please report it responsibly. Do not open a public GitHub issue for security vulnerabilities.
Email: security@aenealabs.com
We will acknowledge receipt within 48 hours and provide a detailed response within 5 business days.
Project Aura is licensed under the Business Source License 1.1.
The BSL allows you to use the source code for non-production purposes. Production use requires a commercial license from Aenea Labs. The license converts to open source (Apache 2.0) after the change date specified in the LICENSE file.
Aenea Labs builds autonomous AI solutions.
Project Aura is designed for security teams that need to remediate vulnerabilities at scale while maintaining full compliance audit trails. The platform is purpose-built for regulated industries — defense, financial services, healthcare, and government contracting — where AI-generated code, AI agents, and software supply chains all require continuous security oversight.










