Run powerful AI models entirely in your browser. No servers, no API keys, no data transmission.
CipherDev is a fully client-side AI chat application that runs Large Language Models using WebGPU or WASM. Every conversation stays on your device. IBM Bob certified for zero data transmission.
- All LLM inference runs locally in your browser
- No backend servers for chat processing
- No API keys required
- No telemetry or analytics
- IBM Bob Privacy Audit certified
- WebGPU support for blazing-fast inference
- Automatic WASM fallback for compatibility
- Smart device tier detection
- Optimized model recommendations
- TinyLlama 1.1B (650MB) - Fast, lightweight
- Gemma 2 2B IT (1.4GB) - Balanced performance
- Llama 3.2 3B (1.9GB) - High quality
- Phi-3.5-mini (2.2GB) - Advanced reasoning
- Real-time speech-to-text with Whisper AI
- Voice Activity Detection (VAD) for optimal quality
- AI-powered meeting summaries
- Export to Markdown or JSON
- 100% local processing - no audio leaves your device
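As a sketch of the export step, a transcript could be rendered to Markdown entirely in memory like this (the `TranscriptSegment` shape and `toMarkdown` helper are illustrative, not the app's actual types):

```typescript
// Illustrative transcript shape; the app's real types may differ.
interface TranscriptSegment {
  start: number;   // seconds from recording start
  speaker: string;
  text: string;
}

// Format seconds as mm:ss for readable timestamps.
function formatTime(seconds: number): string {
  const m = Math.floor(seconds / 60);
  const s = Math.floor(seconds % 60);
  return `${String(m).padStart(2, "0")}:${String(s).padStart(2, "0")}`;
}

// Render a transcript as a Markdown document, with no file or network I/O.
function toMarkdown(title: string, segments: TranscriptSegment[]): string {
  const lines = [`# ${title}`, ""];
  for (const seg of segments) {
    lines.push(`- **[${formatTime(seg.start)}] ${seg.speaker}:** ${seg.text}`);
  }
  return lines.join("\n");
}
```

The resulting string can be handed to a Blob download link, so the export never leaves the browser.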
- Real-time health/disaster risk evaluation
- Temperature, heart rate, and condition analysis
- Watsonx AI integration (optional)
- Privacy-preserving local processing
- Node.js 18+ and npm
- Chrome 113+ or Edge 113+ (for WebGPU)
- 4GB+ RAM recommended
```bash
# Clone the repository
git clone https://github.com/yourusername/chipherdev.git
cd chipherdev

# Install dependencies
npm install

# Start development server
npm run dev
```

Open https://devcipher.vercel.app/ in your browser.

To build and run a production bundle:

```bash
npm run build
npm start
```
**Note about Build Warnings**: During a production build you may see warnings about the error pages (404/500). These are non-fatal warnings from Next.js static generation and do not affect application functionality. All main pages build successfully.
```
chipherdev/
├── app/                  # Next.js 14 App Router
│   ├── (app)/            # App routes with layout
│   │   ├── chat/         # Chat interface
│   │   ├── models/       # Model selection
│   │   ├── audit/        # Privacy audit
│   │   └── settings/     # Settings page
│   ├── api/              # API routes
│   │   └── check-risk/   # Health risk assessment
│   ├── layout.tsx        # Root layout
│   ├── page.tsx          # Landing page
│   └── globals.css       # Global styles
├── components/           # React components
│   ├── ui/               # UI primitives
│   ├── layout/           # Layout components
│   └── check-risk/       # Risk assessment UI
├── features/             # Feature modules
│   ├── hardware/         # Device detection
│   ├── llm/              # LLM engines
│   ├── audit/            # Privacy audit
│   └── conversation/     # Export utilities
├── store/                # Zustand state management
│   ├── slices/           # State slices
│   └── useAppStore.ts    # Combined store
├── lib/                  # Utilities
├── bob_sessions/         # Privacy proof screenshots
└── public/               # Static assets
```
| Layer | Technology |
|---|---|
| Framework | Next.js 14 (App Router) |
| Language | TypeScript 5 (strict mode) |
| UI | React 18, Tailwind CSS 3.4 |
| LLM Engine | @mlc-ai/web-llm (WebGPU) |
| Fallback | @xenova/transformers (WASM) |
| State | Zustand 5 |
| Icons | Lucide React |
CipherDev automatically detects your device capabilities:
- WebGPU availability
- GPU name and memory
- RAM and CPU cores
- Device tier classification (High/Mid/Low/Minimal)
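A minimal capability probe might look like the following. The `nav` parameter stands in for the browser's `navigator` object so the logic stays testable outside a browser; the real app presumably reads `navigator.gpu`, `navigator.deviceMemory`, and `navigator.hardwareConcurrency` directly, and its actual detection code is not shown here.

```typescript
// Subset of the browser navigator object that this sketch inspects.
interface NavigatorLike {
  gpu?: unknown;               // present when WebGPU is available
  deviceMemory?: number;       // GB; browsers cap this value at 8
  hardwareConcurrency?: number;
}

interface DeviceCapabilities {
  hasWebGPU: boolean;
  ramGB: number;
  cpuCores: number;
}

function detectCapabilities(nav?: NavigatorLike): DeviceCapabilities {
  return {
    hasWebGPU: Boolean(nav?.gpu),
    ramGB: nav?.deviceMemory ?? 4,            // conservative default
    cpuCores: nav?.hardwareConcurrency ?? 2,  // conservative default
  };
}
```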
Models are downloaded from HuggingFace and cached locally:
- Quantized models (4-bit) for efficiency
- Progressive loading with status updates
- IndexedDB caching for instant reloads
All chat processing happens in your browser:
- WebGPU acceleration when available
- WASM fallback for universal support
- Streaming responses for real-time feedback
- No data ever sent to external servers
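Streaming UIs typically accumulate text deltas as they arrive. A minimal sketch of that loop (the chunk shape mirrors the OpenAI-style streaming format that @mlc-ai/web-llm exposes, but the helpers themselves are illustrative):

```typescript
// OpenAI-style streaming chunk: each chunk carries an optional text delta.
interface StreamChunk {
  choices: { delta: { content?: string } }[];
}

// Pull the text delta (if any) out of one streamed chunk.
function deltaOf(chunk: StreamChunk): string {
  return chunk.choices[0]?.delta.content ?? "";
}

// Consume an async stream of chunks into the full reply string.
async function collectStream(stream: AsyncIterable<StreamChunk>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += deltaOf(chunk); // in the UI, this is where you would re-render
  }
  return text;
}
```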
IBM Bob verifies zero data transmission:
- Network request analysis
- Storage inspection
- System verification
- Visual proof via screenshots
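One way to make "zero external requests" checkable in code is to wrap a fetch-like function and record every outbound URL; an audit can then assert the log stays empty during a chat session. This is purely illustrative and is not IBM Bob's actual methodology:

```typescript
// Wrap a fetch-like function so every outbound URL is recorded.
type FetchLike = (url: string) => Promise<unknown>;

function auditFetch(base: FetchLike): { fetch: FetchLike; log: string[] } {
  const log: string[] = [];
  const wrapped: FetchLike = (url) => {
    log.push(url);      // record before forwarding to the real fetch
    return base(url);
  };
  return { fetch: wrapped, log };
}
```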
✅ Run AI models locally in your browser
✅ Download model weights from HuggingFace
✅ Store models in browser cache (IndexedDB)
✅ Export conversations as local files

❌ Send your messages to any server
❌ Collect analytics or telemetry
❌ Require API keys or accounts
❌ Track your usage
❌ Share data with third parties
| Device Tier | GPU | RAM | Recommended Model | Speed |
|---|---|---|---|---|
| High | WebGPU, 2GB+ VRAM | 8GB+ | Llama 3.2 3B | ~30 tokens/s |
| Mid | WebGPU | 4GB+ | Gemma 2 2B | ~20 tokens/s |
| Low | WebGPU | 4GB+ | TinyLlama 1.1B | ~15 tokens/s |
| Minimal | WASM only | 2GB+ | TinyLlama 1.1B | ~5 tokens/s |
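The table above suggests a classification along these lines. The thresholds and the VRAM-based split between the Mid and Low tiers are illustrative guesses; the app's real heuristics may differ:

```typescript
type Tier = "High" | "Mid" | "Low" | "Minimal";

interface Caps {
  hasWebGPU: boolean;
  vramGB: number; // 0 when unknown
  ramGB: number;
}

// Map detected capabilities to a tier and a recommended model.
function classifyTier(caps: Caps): { tier: Tier; model: string } {
  if (!caps.hasWebGPU) return { tier: "Minimal", model: "TinyLlama 1.1B" };
  if (caps.vramGB >= 2 && caps.ramGB >= 8) return { tier: "High", model: "Llama 3.2 3B" };
  if (caps.vramGB >= 1 && caps.ramGB >= 4) return { tier: "Mid", model: "Gemma 2 2B" };
  return { tier: "Low", model: "TinyLlama 1.1B" };
}
```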
CipherDev includes a health/disaster risk assessment feature:
```
POST /api/check-risk
```

Request body:

```json
{
  "age": 35,
  "location": "Mumbai",
  "healthCondition": "diabetic",
  "temperature": 38.5,
  "humidity": 80,
  "heartRate": 95
}
```

Response:

```json
{
  "risk": "medium",
  "action": "medical_kit",
  "reason": "Elevated temperature with pre-existing condition"
}
```

Set environment variables to use IBM Watsonx AI:
```bash
WATSONX_API_KEY=your_api_key
WATSONX_PROJECT_ID=your_project_id
WATSONX_URL=https://us-south.ml.cloud.ibm.com
```
#### How to Get Watsonx API Credentials
**Step-by-step guide:**
1. **Sign up for IBM Cloud** (if you don't have an account):
- Visit [https://cloud.ibm.com/registration](https://cloud.ibm.com/registration)
- Create a free IBM Cloud account
2. **Create a Watsonx.ai instance**:
- Go to [IBM Cloud Catalog](https://cloud.ibm.com/catalog)
- Search for "watsonx.ai"
- Click on "watsonx.ai" service
- Select your region (e.g., Dallas, Frankfurt, Tokyo)
- Choose a pricing plan (Lite plan available for free)
- Click "Create"
3. **Get your API Key**:
- Go to [IBM Cloud API Keys](https://cloud.ibm.com/iam/apikeys)
- Click "Create an IBM Cloud API key"
- Give it a name (e.g., "CipherDev Watsonx Key")
- Click "Create"
- **Important**: Copy and save the API key immediately (you won't be able to see it again)
4. **Get your Project ID**:
- Go to [Watsonx Projects](https://dataplatform.cloud.ibm.com/projects)
- Create a new project or select an existing one
- Click on the "Manage" tab
- Copy the "Project ID" from the project details
5. **Find your Watsonx URL**:
- Based on your region:
- **US South (Dallas)**: `https://us-south.ml.cloud.ibm.com`
- **EU (Frankfurt)**: `https://eu-de.ml.cloud.ibm.com`
- **JP (Tokyo)**: `https://jp-tok.ml.cloud.ibm.com`
#### Configure Your Environment
Create a `.env.local` file in your project root:
```bash
# Copy from example
cp .env.local.example .env.local
```

Edit `.env.local` and add your credentials:

```bash
WATSONX_API_KEY=your_actual_api_key_here
WATSONX_PROJECT_ID=your_actual_project_id_here
WATSONX_URL=https://us-south.ml.cloud.ibm.com
```

Then restart the development server:

```bash
npm run dev
```

**Note**: Without these credentials, the app falls back to a simple rule-based risk assessment system (no external API calls).
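A sketch of what such a rule-based fallback could look like, with field names taken from the API example above; the thresholds and the `assessRiskLocally` helper are illustrative, not the actual implementation:

```typescript
interface RiskInput {
  age: number;
  temperature: number;     // °C
  heartRate: number;       // bpm
  healthCondition: string; // "none" when no pre-existing condition
}

interface RiskResult {
  risk: "low" | "medium" | "high";
  action: string;
}

// Pure threshold rules: no network calls, fully local.
function assessRiskLocally(input: RiskInput): RiskResult {
  const feverish = input.temperature >= 38;
  const tachycardic = input.heartRate > 100;
  const vulnerable = input.age >= 65 || input.healthCondition !== "none";

  if (feverish && tachycardic) return { risk: "high", action: "seek_medical_help" };
  if (feverish && vulnerable) return { risk: "medium", action: "medical_kit" };
  return { risk: "low", action: "monitor" };
}
```

With these particular rules, the sample request shown earlier (age 35, diabetic, 38.5 °C, 95 bpm) would come out as `medium` / `medical_kit`, matching the sample response.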
## 🧪 Development
### Run Tests

```bash
npm test
```

### Lint and Type-Check

```bash
npm run lint
npm run type-check
```

CipherDev is certified by IBM Bob for zero data transmission. See `bob_sessions/` for proof screenshots:
- Landing Page - Feature showcase
- Hardware Detection - Device capabilities
- Model Loading - Download progress
- Chat Session - Live conversation
- Audit Page - Privacy certification
- Network DevTools - Zero external requests
Contributions are welcome! Please read our contributing guidelines before submitting PRs.
- @mlc-ai/web-llm - WebGPU inference engine
- @xenova/transformers - WASM fallback
- HuggingFace - Model hosting
- IBM Bob - Privacy audit certification
- Issues: GitHub Issues
- Discussions: GitHub Discussions
Made with ❤️ and chaos, also with IBM Bob 🤖
CipherDev - AI that respects your privacy
MIT License