This project uses a custom image named `rag-chainlit` built from `python:3.12-slim`.

- `python:3.12-slim` is the base image.
- `rag-chainlit:latest` is your project image layered on top of that base.
- A container is a running (or stopped) instance of an image.

```
docker build -f Dockerfile.chainlit -t rag-chainlit .
```

Rebuild after any code or dependency changes.
```
# Images
docker images

# Running containers
docker ps

# All containers (running + stopped)
docker ps -a
```

The agent supports two LLM providers, controlled by the `LLM_PROVIDER` environment variable:
| Provider | `LLM_PROVIDER` | Model env var | Default model | Requires |
|---|---|---|---|---|
| Ollama (local) | `ollama` (default) | `OLLAMA_MODEL` | `qwen3:14b` | Ollama running on host |
| OpenAI (cloud) | `openai` | `OPENAI_MODEL` | `gpt-5.2` | `OPENAI_API_KEY` |
When `LLM_PROVIDER` is omitted or set to `ollama`, behavior is unchanged: the agent
connects to your local Ollama instance. Set it to `openai` to use OpenAI's API instead
(no Ollama needed).
| Variable | Default | Description |
|---|---|---|
| `LLM_PROVIDER` | `ollama` | Which LLM backend: `ollama` or `openai` |
| `OLLAMA_MODEL` | `qwen3:14b` | Model tag when using Ollama |
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama server URL |
| `OPENAI_API_KEY` | (none) | Required when `LLM_PROVIDER=openai` |
| `OPENAI_MODEL` | `gpt-5.2` | Model name when using OpenAI |
| `SMASH_API_BASE_URL` | `https://server.cetacean-tuna.ts.net` | Remote Smash data API |
| `SMASH_DB_PATH` | (host default) | Path to `smash.db` inside the container (set to `/data/smash.db` with the volume mount) |
| `DISABLE_HIGH_INTENSITY` | `false` | Set to `true` to hide the expensive analytics tool |
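As a sanity check, the table's defaults can be collapsed into a single loader. This is a minimal sketch, not the project's actual code — the `load_settings` helper and the key names are illustrative; only the environment variable names and defaults come from the table:

```python
import os

def load_settings() -> dict:
    """Read the environment variables from the table above, applying the documented defaults."""
    settings = {
        "llm_provider": os.environ.get("LLM_PROVIDER", "ollama"),
        "ollama_model": os.environ.get("OLLAMA_MODEL", "qwen3:14b"),
        "ollama_base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        "openai_api_key": os.environ.get("OPENAI_API_KEY"),  # no default
        "openai_model": os.environ.get("OPENAI_MODEL", "gpt-5.2"),
        "smash_api_base_url": os.environ.get(
            "SMASH_API_BASE_URL", "https://server.cetacean-tuna.ts.net"),
        "smash_db_path": os.environ.get("SMASH_DB_PATH"),  # host default when unset
        "disable_high_intensity": os.environ.get("DISABLE_HIGH_INTENSITY", "false").lower() == "true",
    }
    # The table marks OPENAI_API_KEY as required only for the openai provider
    if settings["llm_provider"] == "openai" and not settings["openai_api_key"]:
        raise RuntimeError("OPENAI_API_KEY is required when LLM_PROVIDER=openai")
    return settings
```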
On startup the UI shows two buttons: API Agent and SQL Agent. Pick the mode you want before chatting.
```
docker run --rm -it \
  -p 8000:8000 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -e OLLAMA_MODEL=qwen3:14b \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  --name rag-chainlit \
  rag-chainlit
```

To also enable the SQL Agent, mount the database read-only:

```
docker run --rm -it \
  -p 8000:8000 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -e OLLAMA_MODEL=qwen3:14b \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  -e SMASH_DB_PATH=/data/smash.db \
  -v ~/code-repos/smashDA/.cache/startgg/smash.db:/data/smash.db:ro \
  --name rag-chainlit \
  rag-chainlit
```

- `-v ...smash.db:/data/smash.db:ro` mounts the SQLite database read-only into the container.
- `-e SMASH_DB_PATH=/data/smash.db` tells the SQL agent where to find it inside the container.
- Without the volume mount, choosing "SQL Agent" will fail.
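The `:ro` mount can be mirrored at the SQLite level as well. This is a sketch of how a SQL agent might open the mounted file read-only — the `open_smash_db` helper is illustrative, not the project's actual code:

```python
import sqlite3

def open_smash_db(path: str) -> sqlite3.Connection:
    """Open smash.db read-only, mirroring the :ro bind mount.

    `path` would come from SMASH_DB_PATH (e.g. /data/smash.db inside the container).
    """
    # mode=ro makes SQLite itself refuse writes, even if the file were writable
    return sqlite3.connect(f"file:{path}?mode=ro", uri=True)
```

With this, an accidental `INSERT` or `UPDATE` fails with `sqlite3.OperationalError` instead of silently mutating the cached database.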
No Ollama required. The container calls the OpenAI API directly.
```
docker run --rm -it \
  -p 8000:8000 \
  -e LLM_PROVIDER=openai \
  -e OPENAI_API_KEY=sk-... \
  -e OPENAI_MODEL=gpt-5.2 \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  --name rag-chainlit \
  rag-chainlit
```

To also enable the SQL Agent, mount the database read-only:

```
docker run --rm -it \
  -p 8000:8000 \
  -e LLM_PROVIDER=openai \
  -e OPENAI_API_KEY=sk-... \
  -e OPENAI_MODEL=gpt-5.2 \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  -e SMASH_DB_PATH=/data/smash.db \
  -v ~/code-repos/smashDA/.cache/startgg/smash.db:/data/smash.db:ro \
  --name rag-chainlit \
  rag-chainlit
```

- Open `http://localhost:8000`.
- `--rm` auto-deletes the container when it exits.
- Change the Ollama model with `-e OLLAMA_MODEL=qwen3:8b`.
- Change the OpenAI model with `-e OPENAI_MODEL=gpt-5.2`.
- Switch provider with `-e LLM_PROVIDER=openai` (or `ollama`).
You can also run agents directly from the command line with the `--provider` flag:

```
# Ollama (default)
python agent.py --query "Who are the top players in GA?"

# OpenAI
python agent.py --query "Who are the top players in GA?" --provider openai --model gpt-5.2

# SQL agent with OpenAI
python sql_agent.py --query "Show me GA players with high win rates" --provider openai
```

The `--provider` flag accepts `ollama` (default) or `openai`. When using `openai`, the
`OPENAI_API_KEY` environment variable must be set. The `--model` flag overrides the
default for that provider.
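An `argparse` definition consistent with the usage above might look like this — the actual flag setup in `agent.py` may differ; `build_parser` is an illustrative sketch:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI flags matching the usage shown above; defaults mirror the provider table."""
    parser = argparse.ArgumentParser(description="Run an agent query from the command line")
    parser.add_argument("--query", required=True, help="Question to send to the agent")
    parser.add_argument("--provider", choices=["ollama", "openai"], default="ollama",
                        help="LLM backend (default: ollama)")
    parser.add_argument("--model", default=None,
                        help="Override the provider's default model (e.g. qwen3:14b or gpt-5.2)")
    return parser
```

Using `choices=` means a typo like `--provider opena` fails fast with a usage message instead of reaching the API client.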
```
# Logs
docker logs rag-chainlit

# Follow logs live
docker logs -f rag-chainlit

# Enter shell in running container
docker exec -it rag-chainlit sh

# Stop container
docker stop rag-chainlit

# Remove stopped containers
docker container prune

# Remove dangling/unused images
docker image prune -a
```

If build/run says it cannot connect to `/var/run/docker.sock`, start the Docker daemon:
```
sudo systemctl start docker
sudo systemctl enable docker
```

If the Chainlit UI opens but replies with `Agent error: [Errno 111] Connection refused`,
the container usually cannot reach Ollama. (This does not apply when using the OpenAI
provider since it talks to OpenAI's servers directly.)
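The reachability check can also be scripted. A small helper — the name `ollama_reachable` is illustrative — that returns `False` in exactly the Errno 111 case:

```python
import urllib.error
import urllib.request

def ollama_reachable(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if Ollama answers /api/tags at base_url."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout all mean "not reachable"
        return False
```

Call it with the same URL the container uses (e.g. `http://host.docker.internal:11434`) to distinguish "Ollama is down" from "Ollama is up but unreachable from the container".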
First, check Ollama on the host:

```
curl -sS http://localhost:11434/api/tags
```

If this fails, Ollama is not running on the host.
Then test reachability from inside a container:

```
docker run --rm --add-host=host.docker.internal:host-gateway \
  python:3.12-slim python - <<'PY'
import urllib.request
print(urllib.request.urlopen("http://host.docker.internal:11434/api/tags", timeout=3).read()[:200])
PY
```

If this fails, host Ollama is likely bound only to loopback and not reachable from the Docker bridge.
Run the container with host networking and point Ollama to localhost:

```
docker run --rm -it \
  --network host \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  -e OLLAMA_MODEL=qwen3:14b \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  -e SMASH_DB_PATH=/data/smash.db \
  -v ~/code-repos/smashDA/.cache/startgg/smash.db:/data/smash.db:ro \
  --name rag-chainlit \
  rag-chainlit
```

Then open `http://localhost:8000`.