lethiess/chatty
Chatty

Chatty is a local and private chatbot based on Open WebUI and Ollama. It runs entirely on your machine and requires no internet connection once the Docker images and Ollama models have been pulled.

RAG (Retrieval Augmented Generation) enhances responses with contextual information. Embeddings are generated using an Ollama model, and vectors are stored in Qdrant. Uploaded files are persisted in RustFS (S3-compatible storage), while chat history and settings are stored in PostgreSQL.

For optimal performance, run Ollama directly on the host machine rather than in Docker, though both options are supported.

Architecture

graph TB
    subgraph Host["Host Machine"]
        Browser["Web Browser"]
        Ollama["Ollama<br/>(Preferred)"]
    end
    
    subgraph Docker["Docker Environment"]
        subgraph External["chatty-external Network"]
            Traefik["Traefik<br/>Reverse Proxy"]
            OllamaPull["ollama-pull<br/>(Init Container)"]
        end
        
        subgraph Internal["chatty-internal Network<br/>(Isolated)"]
            OpenWebUI["Open WebUI"]
            Qdrant["Qdrant<br/>Vector DB"]
            Postgres["PostgreSQL<br/>Database"]
            RustFS["RustFS<br/>S3 Storage"]
            Tika["Apache Tika<br/>Content Extraction"]
            OllamaDocker["Ollama<br/>(Optional Profile)"]
        end
    end
    
    Browser -->|HTTP| Traefik
    Traefik -->|chatty.local| OpenWebUI
    Traefik -->|qdrant.chatty.local| Qdrant
    Traefik -->|rustfs.chatty.local| RustFS
    
    OpenWebUI -->|Embeddings/Inference| Ollama
    OpenWebUI -->|Vector Search| Qdrant
    OpenWebUI -->|Database| Postgres
    OpenWebUI -->|File Storage| RustFS
    OpenWebUI -->|Content Extraction| Tika
    OpenWebUI -.->|Embedding/Inference| OllamaDocker
    
    OllamaPull -.->|Pull Models| Ollama
    
    style Ollama fill:#90EE90
    style OllamaDocker fill:#FFE4B5
    style External fill:#E3F2FD
    style Internal fill:#FFF3E0

Quick Start

1. Prerequisites

  • Docker and Docker Compose installed
  • Ollama running on your host machine
  • /etc/hosts updated with local domains:
    127.0.0.1 chatty.local
    127.0.0.1 traefik.chatty.local
    127.0.0.1 rustfs.chatty.local
    127.0.0.1 qdrant.chatty.local
    

2. Configuration

  1. Copy the example environment file and configure secrets:
    cp .env.example .env
  2. Edit .env with your database and S3 credentials
  3. (Optional) Customize RAG settings in .env.owui (no secrets required)
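The authoritative variable names are in .env.example; as a rough sketch, the secrets file typically looks something like the following (all names and values below are placeholders, not the repository's actual keys):

```shell
# Hypothetical .env sketch -- check .env.example for the real variable names.

# PostgreSQL credentials used for chat history and settings
POSTGRES_USER=chatty
POSTGRES_PASSWORD=change-me
POSTGRES_DB=chatty

# S3 credentials for RustFS file storage
S3_ACCESS_KEY=change-me
S3_SECRET_KEY=change-me
```

Keep this file out of version control; only .env.example belongs in the repository.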

3. Start Services

docker compose up -d

Access Open WebUI at http://chatty.local

4. Load Models in Ollama

On your host machine:

# Pull chat models, e.g. gpt-oss:20b, mistral:latest, gemma3:latest
ollama pull gpt-oss:20b
ollama pull mistral:latest
ollama pull gemma3:latest

# List installed models
ollama list

The ollama-pull container will automatically pull the embedding model configured in .env.owui.
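As a sketch of what the embedding configuration in .env.owui might contain (the variable names below follow Open WebUI's documented RAG settings, and the model is just an example -- verify both against the file itself):

```shell
# Assumed Open WebUI RAG settings -- confirm the exact keys in .env.owui.
RAG_EMBEDDING_ENGINE=ollama
RAG_EMBEDDING_MODEL=nomic-embed-text:latest
VECTOR_DB=qdrant
QDRANT_URI=http://qdrant:6333
```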

Configuration Files

  • .env: Secrets (database, S3 credentials) - DO NOT COMMIT
  • .env.owui: Open WebUI & RAG configuration - safe to commit
  • .env.example: Template for .env

Services

| Service | URL | Purpose |
| --- | --- | --- |
| Open WebUI | http://chatty.local | Chatty user interface |
| Traefik | http://traefik.chatty.local | Reverse proxy dashboard |
| RustFS | http://rustfs.chatty.local | S3-compatible storage |
| Qdrant | http://qdrant.chatty.local | Vector database |
| PostgreSQL | postgres.chatty:5432 (from containers) or localhost:5432 | Database |

Optional: Dockerized Ollama

To run Ollama in Docker instead of on the host:

docker compose --profile ollama up -d

Note: Host-based Ollama is recommended for better GPU/hardware utilization.
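Compose profiles are what keep this container from starting by default; a minimal sketch of how the optional service is likely declared (the service name, image tag, and network attachment are assumptions, not the repository's actual compose file):

```yaml
# Sketch: a service guarded by a Compose profile only starts when that
# profile is requested, e.g. via `docker compose --profile ollama up -d`.
services:
  ollama:
    image: ollama/ollama:latest
    profiles: ["ollama"]
    networks:
      - chatty-internal
```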

Docker Network Architecture

  • chatty-internal: Isolated network for backend services (no internet access)
  • chatty-external: Public network for services requiring host/internet connectivity
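In Compose terms, the isolation comes from marking the backend network as internal, which blocks outbound traffic from containers attached to it. A sketch (the network names match the list above; the rest is an assumption about the compose file):

```yaml
# Sketch of the two-network layout. `internal: true` prevents containers on
# chatty-internal from reaching the internet; chatty-external is a normal
# bridge network with host/internet connectivity.
networks:
  chatty-internal:
    internal: true
  chatty-external:
```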
