Docker Quick Notes (This Project)

This project uses a custom image named rag-chainlit built from python:3.12-slim.

Mental model

  • python:3.12-slim is the base image.
  • rag-chainlit:latest is your project image layered on top of that base.
  • A container is a running (or stopped) instance of an image.

Build

docker build -f Dockerfile.chainlit -t rag-chainlit .

Rebuild after any code or dependency changes.

Inspect what you have

# Images
docker images

# Running containers
docker ps

# All containers (running + stopped)
docker ps -a

LLM Providers

The agent supports two LLM providers, controlled by the LLM_PROVIDER environment variable:

Provider         LLM_PROVIDER        Model env var   Default model   Requires
Ollama (local)   ollama (default)    OLLAMA_MODEL    qwen3:14b       Ollama running on host
OpenAI (cloud)   openai              OPENAI_MODEL    gpt-5.2         OPENAI_API_KEY

When LLM_PROVIDER is omitted or set to ollama, the agent connects to your local Ollama instance, exactly as before. Set it to openai to use OpenAI's API instead (no Ollama needed).
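The dispatch can be sketched roughly like this. This is a minimal illustration of the documented behavior, not the project's actual code; the function name pick_provider is hypothetical:

```python
import os

def pick_provider() -> str:
    """Resolve the LLM backend from the environment (illustrative sketch).

    Mirrors the documented behavior: a missing LLM_PROVIDER or the value
    'ollama' means the local Ollama instance; 'openai' means OpenAI's API,
    which additionally requires OPENAI_API_KEY.
    """
    provider = os.environ.get("LLM_PROVIDER", "ollama").lower()
    if provider not in ("ollama", "openai"):
        raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
    if provider == "openai" and not os.environ.get("OPENAI_API_KEY"):
        raise RuntimeError("LLM_PROVIDER=openai requires OPENAI_API_KEY")
    return provider
```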


Environment Variables Reference

Variable                 Default                               Description
LLM_PROVIDER             ollama                                Which LLM backend: ollama or openai
OLLAMA_MODEL             qwen3:14b                             Model tag when using Ollama
OLLAMA_BASE_URL          http://localhost:11434                Ollama server URL
OPENAI_API_KEY           (none)                                Required when LLM_PROVIDER=openai
OPENAI_MODEL             gpt-5.2                               Model name when using OpenAI
SMASH_API_BASE_URL       https://server.cetacean-tuna.ts.net   Remote Smash data API
SMASH_DB_PATH            (host default)                        Path to smash.db inside the container (set to /data/smash.db with a volume mount)
DISABLE_HIGH_INTENSITY   false                                 Set to true to hide the expensive analytics tool
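As a rough illustration of how these variables resolve to settings with their documented defaults (a hypothetical helper; the real code may structure this differently):

```python
import os

def load_settings() -> dict:
    """Collect the documented environment variables with their defaults
    (illustrative only)."""
    return {
        "llm_provider": os.environ.get("LLM_PROVIDER", "ollama"),
        "ollama_model": os.environ.get("OLLAMA_MODEL", "qwen3:14b"),
        "ollama_base_url": os.environ.get("OLLAMA_BASE_URL", "http://localhost:11434"),
        "openai_model": os.environ.get("OPENAI_MODEL", "gpt-5.2"),
        "smash_db_path": os.environ.get("SMASH_DB_PATH"),  # None unless set
        # The "true"/"false" string becomes a boolean flag.
        "disable_high_intensity":
            os.environ.get("DISABLE_HIGH_INTENSITY", "false").lower() == "true",
    }
```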

Run Chainlit

On startup the UI shows two buttons — API Agent and SQL Agent. Pick which mode you want before chatting.

Ollama — API Agent only (no database needed)

docker run --rm -it \
  -p 8000:8000 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -e OLLAMA_MODEL=qwen3:14b \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  --name rag-chainlit \
  rag-chainlit

Ollama — With SQL Agent (requires database mount)

docker run --rm -it \
  -p 8000:8000 \
  --add-host=host.docker.internal:host-gateway \
  -e OLLAMA_BASE_URL=http://host.docker.internal:11434 \
  -e OLLAMA_MODEL=qwen3:14b \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  -e SMASH_DB_PATH=/data/smash.db \
  -v ~/code-repos/smashDA/.cache/startgg/smash.db:/data/smash.db:ro \
  --name rag-chainlit \
  rag-chainlit

  • -v ...smash.db:/data/smash.db:ro mounts the SQLite database read-only into the container.
  • -e SMASH_DB_PATH=/data/smash.db tells the SQL agent where to find it inside the container.
  • Without the volume mount, choosing "SQL Agent" will fail.

OpenAI — API Agent only (no Ollama needed)

The container calls the OpenAI API directly; no local Ollama is required.

docker run --rm -it \
  -p 8000:8000 \
  -e LLM_PROVIDER=openai \
  -e OPENAI_API_KEY=sk-... \
  -e OPENAI_MODEL=gpt-5.2 \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  --name rag-chainlit \
  rag-chainlit

OpenAI — With SQL Agent (no Ollama needed)

docker run --rm -it \
  -p 8000:8000 \
  -e LLM_PROVIDER=openai \
  -e OPENAI_API_KEY=sk-... \
  -e OPENAI_MODEL=gpt-5.2 \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  -e SMASH_DB_PATH=/data/smash.db \
  -v ~/code-repos/smashDA/.cache/startgg/smash.db:/data/smash.db:ro \
  --name rag-chainlit \
  rag-chainlit

Common options

  • Open http://localhost:8000.
  • --rm auto-deletes the container when it exits.
  • Change Ollama model with -e OLLAMA_MODEL=qwen3:8b.
  • Change OpenAI model with -e OPENAI_MODEL=gpt-5.2.
  • Switch provider with -e LLM_PROVIDER=openai (or ollama).

CLI Usage (inside container or locally)

You can also run agents directly from the command line with the --provider flag:

# Ollama (default)
python agent.py --query "Who are the top players in GA?"

# OpenAI
python agent.py --query "Who are the top players in GA?" --provider openai --model gpt-5.2

# SQL agent with OpenAI
python sql_agent.py --query "Show me GA players with high win rates" --provider openai

The --provider flag accepts ollama (default) or openai. When using openai, the OPENAI_API_KEY environment variable must be set. The --model flag overrides the default for that provider.
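The flag parsing described above might look something like this (a hypothetical sketch of the CLI surface; flag names come from the docs, the internals are illustrative):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the documented CLI flags for agent.py / sql_agent.py."""
    p = argparse.ArgumentParser(description="Run the agent from the CLI")
    p.add_argument("--query", required=True,
                   help="Natural-language question for the agent")
    p.add_argument("--provider", choices=["ollama", "openai"], default="ollama",
                   help="LLM backend (default: ollama)")
    p.add_argument("--model", default=None,
                   help="Override the provider's default model")
    return p
```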


Useful container commands

# Logs
docker logs rag-chainlit

# Follow logs live
docker logs -f rag-chainlit

# Enter shell in running container
docker exec -it rag-chainlit sh

# Stop container
docker stop rag-chainlit

Cleanup

# Remove stopped containers
docker container prune

# Remove all unused images (-a includes non-dangling images with no container)
docker image prune -a

Common issue

If build/run fails because it cannot connect to /var/run/docker.sock, start the Docker daemon:

sudo systemctl start docker
sudo systemctl enable docker

Troubleshooting Connection refused

If Chainlit UI opens but replies with Agent error: [Errno 111] Connection refused, the container usually cannot reach Ollama. (This does not apply when using the OpenAI provider since it talks to OpenAI's servers directly.)

1) Verify Ollama on host

curl -sS http://localhost:11434/api/tags

If this fails, Ollama is not running on the host.

2) Verify container -> host Ollama path

docker run --rm --add-host=host.docker.internal:host-gateway \
  python:3.12-slim python - <<'PY'
import urllib.request
print(urllib.request.urlopen("http://host.docker.internal:11434/api/tags", timeout=3).read()[:200])
PY

If this fails, the host's Ollama is likely bound only to loopback (127.0.0.1) and is not reachable from the Docker bridge network.

3) Linux fallback (recommended if step 2 fails)

Run container with host networking and point Ollama to localhost:

docker run --rm -it \
  --network host \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  -e OLLAMA_MODEL=qwen3:14b \
  -e SMASH_API_BASE_URL=https://server.cetacean-tuna.ts.net \
  -e SMASH_DB_PATH=/data/smash.db \
  -v ~/code-repos/smashDA/.cache/startgg/smash.db:/data/smash.db:ro \
  --name rag-chainlit \
  rag-chainlit

Then open http://localhost:8000.