An AI-powered Motoko programming assistant that brings intelligent code suggestions, context-aware completions, and instant documentation access directly into your IDE. Built with Retrieval-Augmented Generation (RAG), ChromaDB vector store, and LLM integration (Gemini/OpenAI/Claude).
- Smart Context Retrieval - Search through 40+ Motoko code samples and official documentation
- AI Code Generation - Generate Motoko code with LLM assistance (Gemini/OpenAI/Claude)
- RAG-Powered - Combines vector similarity search with intelligent code generation
- IDE Integration - Works seamlessly with Cursor, Claude Desktop, and other MCP-compatible editors
- User Authentication - Secure API key management for multi-user environments
- Production Ready - Hosted backend available at https://icp-coder.q3labs.io
Get up and running in 3 minutes using our hosted backend.
Visit our Swagger UI to register and generate your API key:
- Open: https://icp-coder.q3labs.io/swagger/index.html
- Register via the `/api/v1/auth/register` endpoint
- Login via the `/api/v1/auth/login` endpoint
- Generate your API key from the `/api/v1/keys` endpoint
- Save your API key - you'll need it in the next step
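The Swagger UI documents the exact request schemas for these endpoints. As a rough illustration, registration is a plain JSON POST; note that the field names below are assumptions for illustration only, so verify them against the Swagger page:

```json
{
  "username": "alice",
  "password": "a-strong-password"
}
```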
Add this configuration to your Cursor MCP settings file (~/.cursor/mcp.json):
```json
{
  "mcpServers": {
    "icp-coder": {
      "command": "npx",
      "args": ["-y", "@q3labs/icp-coder"],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "https://icp-coder.q3labs.io"
      }
    }
  }
}
```

Replace `your-api-key-here` with the API key from Step 1.
Completely restart Cursor (not just reload) for the changes to take effect.
Once configured, you'll have access to:

- `get_motoko_context` - Retrieves relevant Motoko code snippets and documentation
  - Search through curated examples and official docs
  - Get contextual code samples for your queries
- `generate_motoko_code` - Generates complete Motoko code
  - AI-powered code generation using RAG context
  - Supports custom temperature and token limits
Example queries:
- "How do I create a stable variable in Motoko?"
- "Generate a canister for user profile management with CRUD operations"
- "Show me examples of using HashMap in Motoko"
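Under the hood, your editor invokes these tools over MCP's standard JSON-RPC transport (`tools/call`). As a sketch, a call to `generate_motoko_code` looks roughly like this; the argument names (`prompt`, `temperature`, `max_tokens`) are illustrative assumptions, and the authoritative schema is the one the server advertises via `tools/list`:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "generate_motoko_code",
    "arguments": {
      "prompt": "Generate a counter canister",
      "temperature": 0.7,
      "max_tokens": 2000
    }
  }
}
```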
If tools don't appear after restarting Cursor, try a global installation:

```shell
npm install -g @q3labs/icp-coder
```

Update the config to use the global command:
```json
{
  "mcpServers": {
    "icp-coder": {
      "command": "icp-coder",
      "args": [],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "https://icp-coder.q3labs.io"
      }
    }
  }
}
```

The MCP server requires Node.js 22+. Check your version:
```shell
node --version
```

```
┌──────────────┐
│    Cursor    │  Your IDE with MCP support
└──────┬───────┘
       │ MCP Protocol
┌──────▼──────────────┐
│ @q3labs/icp-coder   │  MCP Server (npm package)
└──────┬──────────────┘
       │ HTTPS/REST API
┌──────▼──────────────┐
│  Backend Server     │  Go API + Python RAG Pipeline
│  ChromaDB Store     │  Vector embeddings + LLM
└─────────────────────┘
```
Workflow:
- Query - You ask a question about Motoko in Cursor
- Context Retrieval - MCP server searches ChromaDB for relevant code samples
- LLM Generation - Retrieved context is combined with your prompt via LLM
- Response - Smart, context-aware code suggestions returned to your IDE
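The retrieve-then-generate loop above can be sketched in a few lines. This is a simplified illustration of the data flow only: the real backend uses ChromaDB embeddings and a hosted LLM, while here both are replaced with toy stand-ins:

```python
# Simplified sketch of the RAG workflow: retrieve relevant snippets,
# then combine them with the user's question into an LLM prompt.
# The vector store and embeddings are stubbed for illustration.

def embed(text: str) -> set[str]:
    """Toy 'embedding': the set of lowercased words in the text."""
    return set(text.lower().split())

def similarity(a: set[str], b: set[str]) -> float:
    """Jaccard overlap stands in for cosine similarity on real vectors."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Stand-in for the ChromaDB collection of Motoko samples and docs.
KNOWLEDGE_BASE = [
    "stable var counter : Nat = 0; // stable variables survive upgrades",
    'import HashMap "mo:base/HashMap"; // hash map usage example',
    "actor Counter { public func inc() : async Nat { /* ... */ } }",
]

def retrieve_context(query: str, top_k: int = 2) -> list[str]:
    """Rank knowledge-base snippets by similarity to the query."""
    q = embed(query)
    ranked = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: similarity(q, embed(doc)),
                    reverse=True)
    return ranked[:top_k]

def generate(query: str) -> str:
    """Combine retrieved snippets with the user prompt for the LLM."""
    context = "\n".join(retrieve_context(query))
    # A real implementation would send this prompt to Gemini/OpenAI/Claude.
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = generate("How do I create a stable variable in Motoko?")
```

The key design point is that the LLM never sees the whole knowledge base, only the top-ranked snippets, which keeps prompts small and answers grounded in real Motoko examples.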
ICP Coder is built around a Model Context Protocol (MCP) server that streams Motoko-specific context directly into your IDE. The service:
- Serves Motoko knowledge over MCP protocol
- Retrieves embeddings from ChromaDB populated with documentation and sample projects
- Orchestrates LLM providers (Gemini/OpenAI/Claude) with retrieved snippets for smart code generation
Want to run your own backend? Follow these instructions to set up the full stack locally.
- Node.js 22+ - MCP server (for production usage via npm)
- Go 1.24+ - Backend API server (for local setup)
- Python 3.11+ - RAG pipeline and embedding generation
- Docker & Docker Compose - Containerized deployment (recommended)
- Make - Build automation
You'll need at least one LLM provider API key:
- Google Gemini (recommended)
- OpenAI (alternative)
- Claude (alternative)
- ~10GB of free storage for the full dataset and embeddings
Clone the repository and set up the backend:
```shell
git clone https://github.com/Quantum3-Labs/icp-coder.git
cd icp-coder/backend
```

Create your environment file:

```shell
cp .env.example .env
```

Edit `.env` and configure:
- Your LLM API key (Gemini/OpenAI/Claude - choose one)
- Database settings
- `PUBLIC_BACKEND_URL` (use `http://localhost:8080` for local)
Important: Only set one LLM provider and its key at a time.
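A minimal `.env` might look like the sketch below. `PUBLIC_BACKEND_URL` comes from the steps above, but the provider key name is an illustrative assumption - check `.env.example` for the authoritative variable names:

```env
# Illustrative sketch - see .env.example for the exact variable names.
# Set exactly one LLM provider key.
GEMINI_API_KEY=your-gemini-key
PUBLIC_BACKEND_URL=http://localhost:8080
```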
Start the backend:
```shell
make up
```

The backend will be available at http://localhost:8080.
Once the backend is running:
- Open: http://localhost:8080/swagger/index.html
- Register via `/api/v1/auth/register`
- Login via `/api/v1/auth/login`
- Generate your API key from `/api/v1/keys`
Update your ~/.cursor/mcp.json to point to your local backend:
```json
{
  "mcpServers": {
    "icp-coder": {
      "command": "npx",
      "args": ["-y", "@q3labs/icp-coder"],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "http://localhost:8080"
      }
    }
  }
}
```

Restart Cursor completely.
For active development with live reload:
```shell
cd backend

# Use the development environment
cp .env.dev.example .env.dev
# Edit .env.dev and add your API keys

# Start with live reload (uses Air)
make dev

# View logs
make dev-logs

# Stop
make dev-down
```

Development features:
- Automatic rebuild on code changes using Air
- Debug mode with verbose logging
- Source code mounted as volume for instant changes
- Swagger docs auto-generated on every build
See `backend/Makefile` for all development commands (`make dev-*`).
For MCP server development:
```shell
cd mcp_server
npm install
npm run build
```

Update `~/.cursor/mcp.json` to use the local build:
```json
{
  "mcpServers": {
    "icp-coder": {
      "command": "node",
      "args": ["/absolute/path/to/icp-coder/mcp_server/dist/index.js"],
      "env": {
        "API_KEY": "your-api-key-here",
        "BACKEND_URL": "http://localhost:8080"
      }
    }
  }
}
```

You can also use the backend directly via the REST API:
```shell
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "messages": [
      {"role": "user", "content": "How do I write a counter canister in Motoko?"}
    ]
  }'
```

With optional parameters:
```shell
curl -X POST http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{
    "model": "gemini-2.0-flash-exp",
    "messages": [
      {"role": "user", "content": "How do I write a counter canister in Motoko?"}
    ],
    "temperature": 0.7,
    "max_tokens": 2000,
    "conversation_id": 123
  }'
```

IC-Vibe-Coding-Template-Motoko can be enhanced with ICP Coder's RAG context. Follow the installation instructions in that repository to integrate.
```
icp-coder/
├── backend/                  # Go backend server
│   ├── cmd/
│   │   └── server/
│   │       └── main.go       # Main entry point
│   ├── internal/
│   │   ├── api/
│   │   │   ├── handlers/     # HTTP request handlers
│   │   │   ├── middleware/   # CORS, auth middleware
│   │   │   └── router.go     # API routing
│   │   ├── auth/             # Authentication service
│   │   ├── codegen/          # Code generation with LLM providers
│   │   ├── database/         # Database connection & queries
│   │   └── rag/              # RAG service & Python client
│   ├── scripts/              # Python ingestion scripts
│   ├── docs/                 # Swagger API documentation
│   ├── Dockerfile
│   ├── docker-compose.yml
│   ├── Makefile
│   ├── go.mod
│   └── requirements.txt      # Python dependencies
├── mcp_server/               # MCP (Model Context Protocol) server
│   ├── src/
│   │   ├── tools/
│   │   │   ├── generate-motoko-code.tool.ts
│   │   │   └── get-motoko-context.tool.ts
│   │   └── index.ts          # MCP server entry point
│   ├── package.json          # Published as @q3labs/icp-coder
│   └── tsconfig.json
├── RAG_PIPELINE_DIAGRAM.md
├── RAG_APPROACH_DIAGRAM.md
└── README.md
```
- High-Level Architecture: ARCHITECTURE_DIAGRAM.md
- RAG Pipeline Details: RAG_PIPELINE_DIAGRAM.md
- RAG Approach: RAG_APPROACH_DIAGRAM.md
- API Documentation: https://icp-coder.q3labs.io/swagger/index.html
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License - see LICENSE file for details.
Built with ❤️ by Quantum3 Labs for the Internet Computer ecosystem.