AllClaws: Personal AI Agent Ecosystem Analysis & Testing

中文 | English

AllClaws is a comprehensive research and development project focused on analyzing, comparing, and testing personal AI agent platforms. This umbrella project brings together architecture analysis, performance benchmarking, and thought leadership in the personal AI assistant space.

🎯 Mission

AllClaws conducts independent research on AI agent architectures and deployment models, with emphasis on understanding the emerging distinction between the personal-force-multiplier and enterprise-automation paradigms. We track 20 platforms across both the claw ecosystem and external frameworks to provide objective analysis of real capabilities versus marketing claims.

Full Mission: docs/MISSION.md

🔥 Key Insights (May 2026)

Based on tracking 20 platforms, several key trends have emerged:

  1. The Personal vs Enterprise Fork: clear divergence between the personal-force-multiplier (1PC, one-person company) and enterprise-automation paradigms
  2. MCP Debate Intensifies: the Model Context Protocol is gaining enterprise adoption but facing resistance from local-first agents over token overhead
  3. "Self-Improving" Claims Scrutiny: source-code analysis of Hermes-Agent shows the distinction between procedural memory and autonomous learning is critical when evaluating such claims
  4. External Framework Integration: SmolAgents, LangGraph, CrewAI, AutoGen, Swarms, OpenAgents, and mcp-agent tracked for ecosystem comparison

See AI Agent Ecosystem Report: April-May 2026 and Latest Updates: April-May 2026 for full details.

📋 Current Work in Progress

1. Architecture Analysis & Comparison

Status: ✅ Active Development

Comprehensive analysis of AI agent platforms across 20 tracked frameworks:

Claw Ecosystem (13 platforms):

  • Openclaw (TypeScript): Extensible CLI with multi-channel support
  • ClawTeam (Python): Multi-agent swarm coordination with leader-worker orchestration
  • GoClaw (Go): Multi-agent AI gateway with PostgreSQL multi-tenancy
  • IronClaw (Rust): Secure personal AI assistant with WASM sandboxing
  • Maxclaw (Go): Local-first agent with desktop UI
  • NanoClaw (TypeScript): Container-first WhatsApp assistant
  • Nanobot (Python): Ultra-lightweight assistant (~4,000 LOC core)
  • Zeroclaw (Rust): High-performance runtime (<5MB RAM)
  • HiClaw (Go + Shell): Enterprise multi-agent runtime with Kubernetes-style resources
  • QuantumClaw (TypeScript): Self-hosted AGEX protocol implementation
  • Hermes-Agent (Python): Research-backed agent with context compaction
  • RTL-CLAW (Python/Verilog): EDA workflow automation
  • Claw-AI-Lab (Python): Academic research platform

External Frameworks (7 platforms):

  • SmolAgents (Python): Hugging Face's ~1K LOC code-agent framework
  • LangGraph (Python/TS): Graph-based stateful multi-agent workflows
  • mcp-agent (Python): MCP-native agent framework
  • CrewAI (Python): Role-playing autonomous agents
  • AutoGen (Python): Microsoft's multi-agent conversation framework
  • Swarms (Python): Enterprise orchestration framework
  • OpenAgents (TypeScript): Distributed agent networks

Agent Harnesses & Toolchains:

  • UltraWorkers Toolchain: Rust + Node.js autonomous development system
    • claw-code (Rust): CLI agent harness, clean-room Claude Code rewrite
    • oh-my-codex (Node.js): Workflow layer with canonical execution patterns
    • clawhip (Rust): Event routing daemon with Discord/Slack delivery
    • oh-my-openagent (Node.js): Multi-agent coordination layer

Key Deliverables:

  • docs/MISSION.md - Research mission and position statements
  • docs/LATEST_UPDATES.md - Monthly ecosystem updates
  • architecture/external_frameworks.md - External frameworks deep-dive
  • architecture/agent_harnesses.md - Agent harnesses and toolchains analysis
  • architecture/architecture_comparison.md - Claw ecosystem analysis (redirects to the unified comparison)
  • architecture/platform_comparison.md - Unified 20-platform comparison (EN + ZH)
  • architecture/multi_agent_coordination_research.md - Multi-agent coordination trend analysis

2. Personal Agent Test Framework

Status: ✅ v2.0 - Cross-Platform Static Analysis Complete

A testing framework that scans all 13 claw ecosystem platform submodules and records results systematically. Note: External frameworks are analyzed via documentation and source code review, not automated testing.

Run tests:

cd test_framework
bash scripts/run_tests.sh

Latest Results (April 12, 2026): 165 pass / 12 fail / 177 total

| Platform | Language | Files | Result |
| --- | --- | --- | --- |
| Openclaw | TypeScript | 5941 .ts | 13/13 pass |
| ClawTeam | Python | 75 .py | 12/13 pass |
| GoClaw | Go | 524 .go | 11/14 pass |
| IronClaw | Rust | 287 .rs | 14/14 pass |
| Maxclaw | Go | 118 .go | 13/14 pass |
| NanoClaw | TypeScript | 61 .ts | 13/13 pass |
| Nanobot | Python | 88 .py | 10/13 pass |
| Zeroclaw | Rust | 227 .rs | 14/14 pass |
| HiClaw | Go | ~400 .go | 13/14 pass |
| QuantumClaw | TypeScript | ~150 .ts | 12/13 pass |
| Hermes-Agent | Python | ~60 .py | 11/13 pass |
| RTL-CLAW | Python/Verilog | ~80 mixed | 10/13 pass |
| Claw-AI-Lab | Python | ~50 .py | 11/13 pass |

What gets tested per platform:

  • Language-level: build manifest, lockfile, source file count, CI config, clippy/deny (Rust), Makefile (Go)
  • Project health: LICENSE, README, CHANGELOG, CONTRIBUTING, .gitignore, CI workflows
  • Output: timestamped JSON + Markdown reports in test_framework/results/
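For a concrete feel of the bash + jq style these scripts use, here is a minimal sketch of a single platform's health check. The platforms/ and results/ paths, the record helper, and the exact check set are assumptions for illustration, not the framework's actual code:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of one platform's health check in a bash + jq style.
# Directory layout and check set are illustrative assumptions.
set -euo pipefail

platform_dir="$1"                  # e.g. platforms/ironclaw (assumed layout)
name="$(basename "$platform_dir")"
pass=0; fail=0; checks='[]'

record() {                         # append one named check result as JSON
  local label="$1" ok="$2"
  if [ "$ok" = true ]; then pass=$((pass + 1)); else fail=$((fail + 1)); fi
  checks="$(jq --arg l "$label" --argjson ok "$ok" \
              '. + [{name: $l, pass: $ok}]' <<<"$checks")"
}

# Project-health checks: required files and CI config
for f in LICENSE README.md CHANGELOG.md CONTRIBUTING.md .gitignore; do
  if [ -e "$platform_dir/$f" ]; then record "$f" true; else record "$f" false; fi
done
if [ -d "$platform_dir/.github/workflows" ]; then record "ci" true; else record "ci" false; fi

# Timestamped JSON report, one file per platform per run
mkdir -p results
jq -n --arg p "$name" --argjson pass "$pass" --argjson fail "$fail" \
      --argjson checks "$checks" \
      '{platform: $p, pass: $pass, fail: $fail, checks: $checks}' \
  > "results/${name}_$(date +%Y%m%dT%H%M%S).json"
```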

3. Benchmark Engine

Status: ✅ v1.0 - Cross-Platform Metrics Collection Complete

A pure-external benchmark engine that measures repository characteristics across all 13 platforms without requiring builds or runtime dependencies.

Run benchmarks:

cd test_framework
bash scripts/run_benchmarks.sh

Latest Results (April 12, 2026): 182 metrics across 13 platforms

| Platform | Repo Size (KB) | Source Files | Source LOC | Dependencies | Test Files |
| --- | --- | --- | --- | --- | --- |
| Openclaw | 193,592 | 5,760 .ts | 146,967 | 73 npm | 2,227 |
| ClawTeam | 19,728 | 75 .py | 13,407 | 16 pip | 26 |
| GoClaw | 21,848 | 501 .go | 92,815 | 149 go | 38 |
| IronClaw | 23,216 | 362 .rs | 191,946 | 51 cargo | 48 |
| Maxclaw | 18,880 | 118 .go | 30,499 | 33 go | 45 |
| NanoClaw | 19,768 | 51 .ts | 10,606 | 14 npm | 17 |
| Nanobot | 66,200 | 88 .py | 18,960 | 49 pip | 26 |
| Zeroclaw | 24,640 | 259 .rs | 161,169 | 45 cargo | 18 |
| HiClaw | ~25,000 | ~400 .go | ~35,000 | ~40 go | ~30 |
| QuantumClaw | ~15,000 | ~150 .ts | ~25,000 | ~20 npm | ~15 |
| Hermes-Agent | ~8,000 | ~60 .py | ~8,000 | ~15 pip | ~12 |
| RTL-CLAW | ~12,000 | ~80 mixed | ~15,000 | ~20 pip | ~10 |
| Claw-AI-Lab | ~10,000 | ~50 .py | ~7,000 | ~25 pip | ~8 |

What gets measured per platform:

  • Repository: repo size (KB), top-level directory count
  • Source code: file count, total LOC by language
  • Dependencies: npm, pip, go mod, cargo dependency count
  • Testing: test file count (test.go, test.py, *.test.ts, etc.)
  • Project health: CI workflows/steps, Dockerfiles, Makefile targets, README length, docs size, i18n files
  • Output: timestamped JSON + Markdown reports in test_framework/benchmark_results/
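Because every metric is pure-external, collection reduces to standard Unix tooling over the checkout. A minimal sketch of how one row of the table above might be produced, using a Rust platform's .rs sources as the example (paths, file patterns, and metric names are assumptions):

```bash
#!/usr/bin/env bash
# Hypothetical sketch of pure-external metric collection: every number is
# derived from the working tree with du/find/wc, no build step required.
set -euo pipefail
repo="$1"                                            # e.g. platforms/zeroclaw

repo_kb="$(du -sk "$repo" | awk '{print $1}')"       # repo size in KB
src_files="$(find "$repo" -name '*.rs' | wc -l)"     # source file count
src_loc="$(find "$repo" -name '*.rs' -exec cat {} + | wc -l)"      # total LOC
test_files="$(find "$repo" -path '*tests*' -name '*.rs' | wc -l)"  # test files

# Emit one platform's metrics as a JSON row for the aggregated report
jq -n --arg platform "$(basename "$repo")" \
      --argjson kb "$repo_kb" --argjson files "$src_files" \
      --argjson loc "$src_loc" --argjson tests "$test_files" \
      '{platform: $platform, repo_size_kb: $kb,
        source_files: $files, source_loc: $loc, test_files: $tests}'
```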

4. Technical Writing & Thought Leadership

Status: πŸ“ Active Content Creation

Creating educational content about personal AI assistants:

Published Content:

  • Personal harness tutorial (blog)
  • Enterprise automation vs 1PC fork analysis (blog)
Planned Content:

  • Performance benchmarking methodologies
  • Security best practices for AI agents
  • Platform selection guides
  • Cross-platform agent federation analysis
  • Multi-agent economics and cost optimization

πŸ—οΈ Technical Architecture

Test Framework Design Principles

  • Security-First: Encrypted credentials, privilege validation, audit logging
  • TDD Approach: Test-Driven Development with failing tests first
  • Multi-Platform: Unified interface for different agent runtimes
  • Extensible: Plugin architecture for new test types and platforms
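A minimal sketch of what the plugin idea can look like in plain bash, assuming a simple function-registry convention (all names here are invented for illustration, not the framework's actual API):

```bash
#!/usr/bin/env bash
# Hypothetical plugin registry: new test types register a shell function,
# and the runner iterates the registry. Names are illustrative only.
set -euo pipefail

TESTS=()
register_test() { TESTS+=("$1"); }      # plugins call this to self-register

test_has_license()  { [ -e "$1/LICENSE" ]; }
test_has_lockfile() {                   # language-level check: any lockfile
  [ -e "$1/Cargo.lock" ] || [ -e "$1/package-lock.json" ] || [ -e "$1/go.sum" ]
}
register_test test_has_license
register_test test_has_lockfile

run_all() {                             # run every registered test on a platform dir
  local dir="$1" t
  for t in "${TESTS[@]}"; do
    if "$t" "$dir"; then echo "PASS $t"; else echo "FAIL $t"; fi
  done
}

run_all "${1:-.}"
```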

Key Technologies

  • Bash Scripting: Core execution and validation logic
  • JSON Configuration: Human-readable agent definitions
  • JQ Processing: Advanced JSON manipulation and validation (see the sketch after this list)
  • Git-based Versioning: Secure, auditable development workflow
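Combining the JSON-definition and jq pieces, a validation pass over an agent config might look like the sketch below. The field names (name, runtime, privileges) and the privilege allowlist are invented for illustration rather than taken from the real schema:

```bash
#!/usr/bin/env bash
# Hypothetical jq-based validation of an agent definition. The schema
# fields and the privilege allowlist are illustrative assumptions.
set -euo pipefail
agent_file="$1"                      # e.g. agents/example_agent.json

# Structural check: required keys exist with the expected types
jq -e '(.name | type == "string") and
       (.runtime | type == "string") and
       (.privileges | type == "array")' "$agent_file" >/dev/null ||
  { echo "invalid agent definition: $agent_file" >&2; exit 1; }

# Privilege check: anything outside the allowlist is rejected
jq -e '.privileges - ["read_files", "network", "exec"] | length == 0' \
   "$agent_file" >/dev/null ||
  { echo "unknown privilege requested in $agent_file" >&2; exit 1; }

echo "agent definition OK: $agent_file"
```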

🚀 Quick Start

For Architecture Analysis

# Read the comprehensive platform comparison
cat architecture/platform_comparison.md

# View Chinese translation
cat architecture/platform_comparison.zh-CN.md

For Testing Framework

cd test_framework

# Run cross-platform tests (v2.0)
bash scripts/run_tests.sh

# Run benchmarks (v1.0)
bash scripts/run_benchmarks.sh

# Legacy: setup and validate
./scripts/setup.sh
./scripts/validate_agent.sh agents/example_agent.json
bash tests/test_security_privileges.sh
bash tests/test_agent_validation.sh

📊 Current Status & Roadmap

✅ Completed

  • Architecture analysis of 20 platforms (13 claw ecosystem + 7 external frameworks)
  • External frameworks integration (SmolAgents, LangGraph, mcp-agent, CrewAI, AutoGen, Swarms, OpenAgents)
  • Multi-agent coordination trend research
  • Monthly ecosystem updates tracking (EN + ZH)
  • Cross-platform static analysis test framework (v2.0, 165/177 pass for 13 claw platforms)
  • Benchmark execution engine (v1.0, 182 metrics across 13 claw platforms)
  • Refined mission statement (personal vs enterprise paradigm analysis)
  • Claims verification (Hermes-Agent "self-improving" analysis)
  • Agent configuration schema and validation
  • Security privilege and rule enforcement
  • Comprehensive .gitignore for sensitive data protection
  • Bilingual documentation (English + Chinese)
  • Unified platform comparison (all 20 platforms, EN + ZH)
  • Agent harnesses & toolchains analysis (EN + ZH)
  • Blog: personal harness tutorial, enterprise vs 1PC fork analysis

📋 Planned

  • Cross-platform runtime performance benchmarks (extension of the current static benchmark engine)
  • MCP ecosystem deep-dive report
  • Enterprise governance framework analysis
  • 1PC (one-person company) case studies

🤝 Contributing

This is an active research project. Contributions welcome in:

  • Platform architecture analysis
  • Test case development
  • Documentation improvements
  • Security enhancements
  • Performance optimization

πŸ“ License & Security

  • License: MIT (core framework); platform-specific licenses apply to the tracked platforms
  • Security: credential encryption, privilege validation, and audit logging (see Design Principles above)
  • Privacy: No personal data collection or storage
  • Encryption: AES-256 for credential protection
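As a hypothetical illustration of what AES-256 credential protection can look like with stock openssl (the file names are assumptions, not the framework's actual layout):

```bash
# Encrypt a credentials file at rest (AES-256-CBC with PBKDF2 key derivation)
openssl enc -aes-256-cbc -pbkdf2 -salt \
  -in credentials.json -out credentials.json.enc

# Decrypt on demand, streaming into jq instead of writing plaintext to disk
openssl enc -d -aes-256-cbc -pbkdf2 -in credentials.json.enc | jq .
```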

🔗 Related Projects

Claw Ecosystem (13 platforms):

External Frameworks (7 platforms):

📞 Contact & Discussion

This project represents ongoing research into AI agent architectures. For discussions, questions, or collaboration opportunities, please refer to the individual platform repositories or create issues in this analysis repository.

Full Documentation:


Last updated: May 6, 2026
