Quick Start · Features · Integrations · SDK · CLI · Architecture · Docs · Contributing
81% fewer planning steps · 72% fewer tokens · on real GDPVal knowledge-work tasks,
on top of what a SOTA self-improving Hermes agent already learns on its own.
See the benchmark →
Reflexio is an AI agent self-improvement harness that enables your AI agents to continuously learn from real user interactions. It turns user corrections into persisted behavioral improvements for agents and captures successful execution paths for reuse.
What one user teaches, every user benefits from.
As your agent is used more, it becomes smarter, faster, and more effective at solving domain-specific tasks. The moat for AI agents is what your agent learns from every interaction it handles.
Our vision is that AI systems should get better with every interaction.
Benchmarked on GDPVal: on 4 of 5 real knowledge-work tasks from OpenAI's public GDPVal benchmark, Reflexio cuts planning steps by a median 81% and token usage by 72% for a Hermes agent running
minimax/MiniMax-M2.7, measured against a warm baseline: the same agent re-running the task after it has already learned from itself. In other words, Reflexio's savings come on top of what a SOTA self-improving agent has already learned on its own. See the full writeup → benchmark/gdpval/RESULTS.md.
flowchart LR
A[AI Agent] -->|conversations| B[Reflexio]
G[Human Expert] -->|ideal responses| B
B --> C[User Profiles]
B --> D[Playbook Extraction]
D --> E[Playbook Aggregation]
B --> F[Success Evaluation]
Publish conversations from your agent, and Reflexio closes the self-improvement loop:
- Never Repeat the Same Mistake: Transforms user corrections and interaction signals into improved decision-making processes — so agents adapt their behavior and avoid repeating the same mistakes.
- Lock In What Works: Persists successful strategies and workflows so your agent reuses proven paths instead of starting from scratch.
- Transfer Learning Across Users: What one user teaches, every user benefits from — corrections and successful strategies from one individual propagate to improve the agent for everyone, no retraining required.
- Learn from Human Experts: Publish expert-provided ideal responses alongside agent responses — Reflexio automatically extracts actionable playbooks from the differences.
For developers: See developer.md for project structure, environment setup, testing, and coding guidelines.
- Demo
- Quick Start
- Features
- Integrations
- SDK Usage
- Architecture
- Documentation
- Contributing
- Star History
- License
| Tool | Description |
|---|---|
| uv | Python package manager |
| Node.js >= 18 | Frontend runtime |
Option A — Install from PyPI (fastest, for users):
pip install reflexio-ai
# start/stop services. data saved under ~/.reflexio
reflexio services start # API (8081), Docs (8082), SQLite storage
reflexio services stop # Stop all services
Option B — Clone from source (for contributors):
# clone the repo
git clone https://github.com/ReflexioAI/reflexio.git
cd reflexio
# configure: copy env template, then set at least one LLM API key (OpenAI, Anthropic, etc.)
cp .env.example .env
# install dependencies
uv sync # Python (includes workspace packages)
npm --prefix docs install # API docs
# start/stop services. data saved under ~/.reflexio
uv run reflexio services start # API (8081), Docs (8082), SQLite storage
uv run reflexio services stop # Stop all services
Alternative:
python -m reflexio.cli services start (or ./run_services.sh)
Once running, open http://localhost:8082 to interactively browse and try out the API.
Reflexio ships a first-class CLI — the fastest way to see the loop end-to-end with no code. Publish a real multi-turn conversation where the user corrects the agent (that's the signal Reflexio learns from), then search for what was extracted:
uv run reflexio publish --user-id alice --wait --data '{
"interactions": [
{"role": "user", "content": "Deploy the new service."},
{"role": "assistant", "content": "Starting deployment to us-east-1..."},
{"role": "user", "content": "Wait — we never deploy production to us-east-1. Always use us-west-2."},
{"role": "assistant", "content": "Understood. Switching to us-west-2."}
]
}'
# Search the extracted profiles and playbooks
uv run reflexio search "deployment region"
One conversation, two artifacts: a user profile (production region is us-west-2) and an agent playbook (confirm region before deploying). See the CLI reference for all input modes (inline JSON, --file, --stdin) and the full command list.
import reflexio
client = reflexio.ReflexioClient(
url_endpoint="http://localhost:8081/"
)
# Publish a multi-turn conversation where the user corrects the agent —
# Reflexio extracts a profile ("prod region = us-west-2") and a playbook
# ("confirm region before deploying").
client.publish_interaction(
user_id="alice",
interactions=[
{"role": "user", "content": "Deploy the new service."},
{"role": "assistant", "content": "Starting deployment to us-east-1..."},
{"role": "user", "content": "Wait — we never deploy production to us-east-1. Always use us-west-2."},
{"role": "assistant", "content": "Understood. Switching to us-west-2."},
],
)
Reflexio will automatically generate profiles and extract playbooks in the background.
- Extracts behavioral profiles from conversations using configurable extractors
- Supports versioning (current → pending → archived) with upgrade/downgrade workflows
- Multiple extractors run in parallel with independent windows and strides
Read more about user profiles →
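The windows and strides mentioned above can be pictured with a small sketch. This is illustrative only, not the Reflexio API: it shows how two extractors with independent window/stride settings would each walk the same conversation history (the names `iter_windows`, `window`, and `stride` are ours).

```python
# Illustrative sketch (not the Reflexio API): how an extractor with an
# independent window and stride walks a conversation history.
# `window` = how many interactions each extraction pass sees,
# `stride` = how far the window advances between passes.

def iter_windows(interactions, window, stride):
    """Yield successive slices of the interaction list."""
    for start in range(0, max(len(interactions) - window + 1, 1), stride):
        yield interactions[start:start + window]

history = [f"turn-{i}" for i in range(6)]

# A coarse extractor: window=4, stride=2 -> overlapping passes
print(list(iter_windows(history, window=4, stride=2)))
# -> [['turn-0', 'turn-1', 'turn-2', 'turn-3'], ['turn-2', 'turn-3', 'turn-4', 'turn-5']]

# A fine-grained extractor: window=2, stride=2 -> disjoint passes
print(list(iter_windows(history, window=2, stride=2)))
# -> [['turn-0', 'turn-1'], ['turn-2', 'turn-3'], ['turn-4', 'turn-5']]
```

Because each extractor owns its window and stride, they can scan the same history at different granularities in parallel without coordinating.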
- Extracts playbooks from user behavior patterns
- Clusters similar entries and aggregates with LLM (with change detection to skip unchanged clusters)
- Approval workflow: review and approve/reject agent playbooks
Read more about agent playbooks →
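The change detection mentioned above can be sketched as follows. This is not the Reflexio internals, just one common way to skip re-aggregating a cluster whose member entries have not changed: fingerprint the cluster's contents and compare against the fingerprint stored from the previous run (all names here are ours).

```python
# Illustrative sketch (not the Reflexio internals): skip re-running the LLM
# aggregator on a cluster whose entries are unchanged since the last pass.
import hashlib

def cluster_fingerprint(entries):
    """Order-insensitive hash of the cluster's entry texts."""
    joined = "\n".join(sorted(entries))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()

stored = {}  # cluster_id -> fingerprint from the previous run

def needs_aggregation(cluster_id, entries):
    fp = cluster_fingerprint(entries)
    if stored.get(cluster_id) == fp:
        return False          # unchanged: reuse the existing playbook
    stored[cluster_id] = fp   # changed or new: re-run the LLM aggregator
    return True

print(needs_aggregation("deploy", ["confirm region", "use us-west-2"]))  # True (new cluster)
print(needs_aggregation("deploy", ["use us-west-2", "confirm region"]))  # False (same entries)
```

Sorting before hashing makes the fingerprint insensitive to entry order, so reshuffled-but-identical clusters are correctly skipped.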
- Publish human-expert ideal responses alongside agent responses via the expert_content field
- Reflexio automatically compares agent vs. expert responses, focusing on substantive differences (missing info, incorrect approach, reasoning gaps) while ignoring stylistic ones
- Generates actionable playbooks as trigger/instruction/pitfall SOPs that teach the agent what to do differently
Read more about interactions & expert content →
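As a rough sketch of the shape this takes, the snippet below pairs an agent response with an expert's ideal response. The exact placement of the expert_content field within the payload is an assumption on our part; see the interactions & expert-content docs for the authoritative schema.

```python
# Hedged sketch: an assistant turn carrying both the agent's actual response
# and an expert's ideal response. Field placement is an assumption; consult
# the interactions & expert-content documentation for the real schema.
interaction = {
    "role": "assistant",
    "content": "Starting deployment to us-east-1...",  # what the agent said
    "expert_content": (
        "Before deploying, confirm the target region; "
        "production deploys always go to us-west-2."   # what an expert would say
    ),
}
# Reflexio diffs the two responses and turns substantive gaps
# (missing confirmation step, wrong region) into playbook entries.
print(sorted(interaction))
```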
- Session-level evaluation triggered automatically (10 min after last request)
- Shadow comparison mode: A/B test regular vs shadow agent responses
- Tool usage analysis for blocking issue detection
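The "10 min after last request" trigger above is a debounce: each new request in a session resets the clock, so evaluation fires exactly once, after the session goes quiet. Here is an illustrative sketch of that pattern (not the Reflexio scheduler; the class name is ours, and a sub-second delay stands in for the 10-minute one so the sketch runs quickly).

```python
# Illustrative sketch (not the Reflexio scheduler): debounced session
# evaluation via a resettable timer per session.
import threading

class SessionEvaluationScheduler:
    def __init__(self, delay, evaluate):
        self.delay = delay          # Reflexio uses 10 minutes; shortened here
        self.evaluate = evaluate
        self._timers = {}

    def on_request(self, session_id):
        # Each new request cancels the pending evaluation and restarts the clock.
        if session_id in self._timers:
            self._timers[session_id].cancel()
        timer = threading.Timer(self.delay, self.evaluate, args=(session_id,))
        self._timers[session_id] = timer
        timer.start()

evaluated = []
sched = SessionEvaluationScheduler(delay=0.05, evaluate=evaluated.append)
sched.on_request("session-abc")
sched.on_request("session-abc")   # resets the timer: only one evaluation fires
threading.Event().wait(0.2)       # let the timer expire
print(evaluated)
```

The same debounce idea generalizes: any per-session deferred job (summarization, billing rollup) can reuse the cancel-and-restart timer.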
- Hybrid search (vector + full-text) across profiles and playbooks
- LLM-powered query rewriting for improved recall
- Unified search across all entity types in parallel
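To make "vector + full-text" concrete, here is one common way the two ranked result lists of a hybrid search can be merged: reciprocal rank fusion (RRF). This is an illustrative sketch, not Reflexio's actual fusion logic; the function and document IDs are ours.

```python
# Illustrative sketch (not the Reflexio implementation): merging vector and
# full-text rankings with reciprocal rank fusion (RRF).
def rrf(rankings, k=60):
    """rankings: list of ranked id-lists, best first. Returns fused order."""
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked):
            # Documents that rank highly in either channel accumulate score;
            # k damps the influence of any single top position.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["playbook-7", "profile-2", "playbook-3"]
fulltext_hits = ["profile-2", "playbook-3", "playbook-9"]
print(rrf([vector_hits, fulltext_hits]))
# -> ['profile-2', 'playbook-3', 'playbook-7', 'playbook-9']
```

profile-2 wins because it appears near the top of both channels, which is exactly the behavior hybrid search wants: agreement between lexical and semantic relevance.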
- Fast at scale: unified search across ~3,000 indexed rows (profile + user playbook + agent playbook, ~1,000 rows each, queried in parallel) runs at ~57 ms p50 / ~73 ms p95 — measured at the service layer with local SQLite on an Apple Silicon MacBook, 30 trials × 20 fixed queries. See the full benchmark report or reproduce with reflexio.benchmarks.retrieval_latency.
- OpenAI, Anthropic, Google Gemini, OpenRouter, Azure, MiniMax, and custom endpoints
- Powered by LiteLLM — configure your preferred provider via API keys or custom endpoints
For detailed API documentation, see the full API reference.
Install the client:
pip install reflexio-client
import reflexio
client = reflexio.ReflexioClient(
url_endpoint="http://localhost:8081/"
)
# Publish interactions
client.publish_interaction(
user_id="user-123",
interactions=[
{"role": "user", "content": "..."},
{"role": "assistant", "content": "..."},
],
agent_version="v1", # optional: track agent versions
session_id="session-abc", # optional: group requests into sessions
)
# Search profiles
profiles = client.search_profiles(
reflexio.SearchUserProfileRequest(query="deployment region preference")
)
# Search agent playbooks
playbooks = client.get_agent_playbooks(
reflexio.GetAgentPlaybooksRequest(agent_version="v1")
)
# Update org configuration
client.set_config(reflexio.SetConfigRequest(
config=reflexio.Config(
api_key_config=reflexio.APIKeyConfig(openai="sk-..."),
profile_extractor_configs=[...],
playbook_configs=[reflexio.PlaybookConfig(...)],
)
))
Reflexio integrates with popular AI agent frameworks out of the box:
- Claude Code -- Hook into Claude Code sessions to automatically capture corrections and preferences.
- LangChain -- Drop-in callbacks for LangChain chains and agents.
- OpenClaw -- Native integration with the OpenClaw agent framework.
Client (SDK / Web UI)
→ FastAPI Backend
→ Reflexio Orchestrator
→ GenerationService
├─ ProfileGenerationService → Extractor(s) → Deduplicator → Storage
├─ PlaybookGenerationService → Extractor(s) → Deduplicator → Storage
└─ GroupEvaluationScheduler → Evaluator(s) → Storage (deferred 10 min)
See developer.md for project structure, supported LLM providers, and development setup.
For comprehensive guides, examples, and API reference, visit the Reflexio Documentation.
We welcome contributions! Please see developer.md for guidelines.
This project is licensed under the Apache License 2.0.


