A maintainable, version-trackable setup for running multiple containerized services on a single Ubuntu VPS.
Stack: Docker Compose, Caddy (reverse proxy + automatic TLS), GitHub Actions (CI/CD), GitHub Container Registry (ghcr.io)
You have a handful of projects — a SaaS app, an internal tool, a marketing site — and you want to run them all on one affordable VPS. But managing multiple services by hand leads to:
- Snowflake servers with undocumented manual changes
- No reproducibility — if the VPS dies, you're rebuilding from memory
- Messy TLS certificate management with certbot cron jobs
- Services interfering with each other's dependencies
- No clear deployment pipeline
Solo-Stack gives you a single Git repository that defines your entire server. Every piece of config is a file you can review, version, and redeploy. Each project is an isolated Docker Compose stack with its own dependencies. A single Caddy instance handles TLS and routing.
```
                  Internet
                      |
                +-----+-----+
                |   Caddy   |  <- automatic Let's Encrypt TLS
                | :80  :443 |
                +-----+-----+
                      |
      +---------------+---------------+
      |               |               |
+-----+-----+   +-----+-----+   +-----+-----+
| saas-app  |   | internal  |   | marketing |
|   :8000   |   |   :3000   |   |    :80    |
+-----+-----+   +-----+-----+   +-----------+
      |               |
+-----+-----+   +-----+-----+
|   mysql   |   |   redis   |  <- isolated backend networks,
|   redis   |   +-----------+     not reachable by other projects
|   meili   |
+-----------+
```
Key principles:
- The `proxy` network is the only shared Docker network. Only web-facing app containers join it.
- Each project's backing services live on a private `backend` network, unreachable by Caddy or other projects.
- Every piece of config is a file in Git. No GUI-driven state, no manual server changes.
- Each project deploys independently via its own GitHub Actions workflow.
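In compose terms, the split boils down to this shape (service names are placeholders; complete project examples appear further down in this README):

```yaml
services:
  app:                  # web-facing: Caddy can reach it via the proxy network
    networks: [proxy, backend]
  db:                   # backing service: invisible outside this project
    networks: [backend]

networks:
  proxy:
    external: true      # the one shared network, created once by bootstrap
  backend: {}           # private network, created per project by Compose
```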
```
solo-stack/
├── README.md
├── .gitignore
├── caddy/
│   ├── docker-compose.yml              # Caddy reverse proxy
│   └── Caddyfile                       # Route definitions (edit per project)
├── backups/
│   ├── docker-compose.yml              # offen/docker-volume-backup
│   └── .env.example                    # S3 credentials, schedule, retention
├── scripts/
│   ├── bootstrap.sh                    # One-time VPS provisioning
│   └── deploy.sh                       # Generic deploy helper
└── .github/
    └── workflows/
        ├── deploy-caddy.yml            # Reload Caddy on config push
        ├── deploy-template.yml         # For custom apps you build
        └── deploy-thirdparty-template.yml  # For cloned third-party repos
```

Project directories (e.g., `saas-app/`, `internal-tool/`) are created by you when you add services. See Adding a New Project below.
SSH into a fresh Ubuntu 24.04 VPS as root and run the bootstrap script:

```sh
curl -fsSL https://raw.githubusercontent.com/reneweiser/solo-stack/main/scripts/bootstrap.sh | bash
```

This installs Docker, creates a `deploy` user with SSH access (copies root's `authorized_keys`), configures Docker log rotation, sets up UFW firewall rules, clones the repo into `/opt/solo-stack`, and creates the shared `proxy` network.
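If you prefer to review before piping to bash, the script's effect can be outlined roughly as follows (a simplified sketch, not the actual script — `scripts/bootstrap.sh` in the repo is authoritative):

```sh
# Rough outline of what bootstrap.sh does (sketch; details differ)
curl -fsSL https://get.docker.com | sh           # install Docker Engine + Compose plugin
useradd -m -s /bin/bash -G docker deploy         # deploy user with Docker access
install -d -m 700 -o deploy -g deploy /home/deploy/.ssh
cp /root/.ssh/authorized_keys /home/deploy/.ssh/
chown deploy:deploy /home/deploy/.ssh/authorized_keys
ufw allow 22/tcp 80/tcp 443/tcp 443/udp          # 443/udp for HTTP/3
git clone https://github.com/reneweiser/solo-stack.git /opt/solo-stack
docker network create proxy                      # the single shared network
```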
After bootstrap, verify the deploy user's SSH key is set up for your CI/CD:

```sh
cat /home/deploy/.ssh/authorized_keys  # should contain your CI public key
```

Next, configure and start Caddy:

```sh
cd /opt/solo-stack/caddy
# Edit Caddyfile — uncomment and update domains for your projects
docker compose up -d
```

Create a project directory, add a `docker-compose.yml` (see examples below), and start it:
```sh
cd /opt/solo-stack
mkdir saas-app && cd saas-app
# Add docker-compose.yml and .env
docker compose up -d
```

Set up backups:

```sh
cd /opt/solo-stack/backups
cp .env.example .env
# Fill in S3 credentials and schedule
docker compose up -d
```

For apps you build and push to a container registry (GHCR, Docker Hub, etc.):
- Create a directory: `mkdir /opt/solo-stack/my-project`
- Add a `docker-compose.yml` — web-facing services join the `proxy` network, backing services stay on a private `backend` network
- Add the route to `caddy/Caddyfile`
- Copy `.github/workflows/deploy-template.yml` to `.github/workflows/deploy-my-project.yml` and replace `PROJECT_NAME` with your directory name
- Commit, push, done
Many open-source projects ship their own docker-compose.yml in a repo you clone (e.g., Zammad, Plausible, Gitea). You don't build anything — you just configure and run.
The key trick: use `docker-compose.override.yml` to connect them to your Caddy proxy network without editing their compose file. Docker Compose automatically merges both files, so `git pull` to get upstream updates stays clean.
- Clone the project into your solo-stack directory on the VPS:

  ```sh
  cd /opt/solo-stack
  git clone https://github.com/zammad/zammad-docker-compose.git zammad
  ```

- Create a `docker-compose.override.yml` to connect the web-facing service to the proxy network and remove the exposed host port (Caddy handles that):

  ```yaml
  # /opt/solo-stack/zammad/docker-compose.override.yml
  services:
    zammad-nginx:
      ports: !reset []
      networks:
        - default
        - proxy

  networks:
    proxy:
      external: true
  ```

  Find the web-facing service by looking for the one with `ports:` in their compose file. That's the service Caddy should route to.

- Configure the project — copy their `.env.example` to `.env` and fill in values as their docs describe.

- Add the route to `caddy/Caddyfile`:

  ```
  support.example.com {
      reverse_proxy zammad-zammad-nginx-1:8080
  }
  ```

  The container name follows the pattern `{directory}-{service}-{n}`. The internal port is whatever the service listens on (check their compose file — Zammad's nginx uses `8080`).

- Start it:

  ```sh
  cd /opt/solo-stack/zammad
  docker compose up -d
  ```

- For CI/CD, copy `.github/workflows/deploy-thirdparty-template.yml` and replace `PROJECT_NAME` with your directory name. This gives you a manual trigger button in GitHub Actions and an optional weekly auto-update schedule.
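The shape of that workflow might look something like this (a sketch only — the committed template is authoritative; the SSH step assumes `appleboy/ssh-action`, and the cron schedule is an example):

```yaml
on:
  workflow_dispatch: {}          # manual "Run workflow" button in the Actions tab
  schedule:
    - cron: "0 4 * * 1"          # optional weekly auto-update (Mondays 04:00 UTC)
jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: appleboy/ssh-action@v1
        with:
          host: ${{ secrets.VPS_HOST }}
          username: deploy
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            set -euo pipefail
            cd /opt/solo-stack/PROJECT_NAME
            git pull --ff-only             # never rewrite local history
            docker compose pull
            docker compose up -d --remove-orphans
```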
To update a third-party project manually:

```sh
cd /opt/solo-stack/zammad
git pull              # get upstream compose changes
docker compose pull   # pull new images
docker compose up -d  # recreate with new images
docker image prune -f # clean up old images
```

Your `docker-compose.override.yml` is untracked by the upstream repo, so it survives `git pull` without conflicts.
These are complete, copy-paste examples for common project types. Create the directory and save the compose file to get started.
`saas-app/docker-compose.yml`

```yaml
services:
  app:
    image: ghcr.io/your-org/saas-app:latest
    restart: unless-stopped
    env_file: .env
    depends_on:
      mysql:
        condition: service_healthy
      redis:
        condition: service_healthy
      meilisearch:
        condition: service_started
    networks:
      - proxy
      - backend

  mysql:
    image: mysql:8.4
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - mysql_data:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend
    labels:
      # Back up MySQL via docker-volume-backup pre-hook
      - docker-volume-backup.archive-pre=/bin/sh -c 'mysqldump -u root -p"$$MYSQL_ROOT_PASSWORD" --all-databases > /backup/dump.sql'

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "-a", "${REDIS_PASSWORD}", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend

  meilisearch:
    image: getmeili/meilisearch:v1.12
    restart: unless-stopped
    environment:
      MEILI_MASTER_KEY: ${MEILI_MASTER_KEY}
      MEILI_ENV: production
      MEILI_DB_PATH: /meili_data
    volumes:
      - meili_data:/meili_data
    networks:
      - backend

networks:
  proxy:
    external: true
  backend:

volumes:
  mysql_data:
  redis_data:
  meili_data:
```

`saas-app/.env.example`
```
# App
APP_ENV=production
APP_URL=https://saas.example.com

# MySQL
MYSQL_ROOT_PASSWORD=CHANGE_ME
MYSQL_DATABASE=saas
MYSQL_USER=saas
MYSQL_PASSWORD=CHANGE_ME
DATABASE_URL=mysql://saas:CHANGE_ME@mysql:3306/saas

# Redis
REDIS_PASSWORD=CHANGE_ME
REDIS_URL=redis://:CHANGE_ME@redis:6379

# Meilisearch
MEILI_MASTER_KEY=CHANGE_ME
MEILI_URL=http://meilisearch:7700
```

`internal-tool/docker-compose.yml`
```yaml
services:
  app:
    image: ghcr.io/your-org/internal-tool:latest
    restart: unless-stopped
    env_file: .env
    depends_on:
      redis:
        condition: service_healthy
    networks:
      - proxy
      - backend

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --appendonly yes
    volumes:
      - redis_data:/data
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend

networks:
  proxy:
    external: true
  backend:

volumes:
  redis_data:
```

`marketing-site/docker-compose.yml`
```yaml
services:
  web:
    image: ghcr.io/your-org/marketing-site:latest
    restart: unless-stopped
    networks:
      - proxy

networks:
  proxy:
    external: true
```

Backups use `offen/docker-volume-backup` — a lightweight container that backs up Docker volumes on a cron schedule.
It supports:
- Scheduled backups via cron expression
- S3-compatible upload (AWS, Backblaze B2, MinIO, Wasabi)
- Pre-backup hooks via container labels (e.g., `mysqldump` before archiving)
- Stop-during-backup labels for data consistency
- Retention pruning to auto-delete old backups
- Notifications via webhook on success/failure
To opt a service into backups, add labels to its container in its own compose file:
```yaml
labels:
  # Run a command before backup (e.g., database dump)
  - docker-volume-backup.archive-pre=/bin/sh -c 'mysqldump -u root -p"$$MYSQL_ROOT_PASSWORD" --all-databases > /backup/dump.sql'
  # Stop this container during backup for data consistency
  - docker-volume-backup.stop-during-backup=true
```

Then mount the relevant volumes in `backups/docker-compose.yml`. Remember to declare volumes from other compose projects as `external: true` — see the comments in the backup compose file.

Important: Configure `NOTIFICATION_URLS` in your `.env` so you know when backups fail. Silent backup failures are only discovered when you need to restore.
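Putting it together, the backup service's side might look like this sketch (bucket, schedule, and volume names are placeholders — the committed `backups/docker-compose.yml` is authoritative):

```yaml
# backups/docker-compose.yml (sketch)
services:
  backup:
    image: offen/docker-volume-backup:v2
    restart: unless-stopped
    env_file: .env          # BACKUP_CRON_EXPRESSION, S3 credentials, retention, NOTIFICATION_URLS
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro  # needed to act on container labels
      - saas_mysql_data:/backup/saas-mysql:ro         # mount each volume read-only under /backup

volumes:
  saas_mysql_data:
    external: true
    name: saas-app_mysql_data   # Compose prefixes volumes with the project directory name
```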
If the VPS dies, restore to a new server:
```sh
# 1. Run bootstrap on the new VPS
curl -fsSL https://raw.githubusercontent.com/reneweiser/solo-stack/main/scripts/bootstrap.sh | bash

# 2. Install the AWS CLI (or an S3-compatible tool)
apt install -y awscli

# 3. Download the latest backup archive
aws s3 cp s3://your-backup-bucket/solo-stack/backup-LATEST.tar.gz /tmp/restore.tar.gz

# 4. Extract and restore volumes
cd /tmp && tar -xzf restore.tar.gz
# For each volume, create it and copy data in:
docker volume create saas-app_mysql_data
docker run --rm -v saas-app_mysql_data:/restore -v /tmp/backup:/backup alpine \
  sh -c 'cp -a /backup/saas-mysql/. /restore/'

# 5. Copy .env files and start services
cd /opt/solo-stack/saas-app && docker compose up -d
```

Adapt the volume names and paths to your setup. Test this procedure periodically — untested backups are not backups.
Each project gets its own GitHub Actions workflow. Two templates are provided:
- `deploy-template.yml` — for custom apps you build and push to GHCR. Triggers on pushes to the project directory.
- `deploy-thirdparty-template.yml` — for third-party projects you clone. Triggers manually or on a weekly schedule to pull upstream updates.
Required GitHub secrets:
| Secret | Description |
|---|---|
| `VPS_HOST` | VPS IP or hostname |
| `VPS_SSH_KEY` | SSH private key for the deploy user |
Rule: `.env` files never enter Git. Only `.env.example` (with placeholder values) is committed. On the server, `.env` files are created once and updated manually or via CI secrets.
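A minimal `.gitignore` enforcing this rule might look like the following (the entries are illustrative; adapt them to the projects you actually run):

```gitignore
# Real secrets never enter Git; only .env.example files are committed
**/.env

# Third-party clones (e.g. zammad/) live only on the server, not in this repo
zammad/
```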
Solo-Stack does not include a monitoring stack — you should add one. A lightweight option:
- Run Uptime Kuma as another compose project on the proxy network
- Configure it to check each service's health endpoint or TCP port
- Set up notifications (email, Slack, Telegram) so you know when something is down
At minimum, use an external uptime monitor (e.g., UptimeRobot, Healthchecks.io) to check your public-facing URLs.
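An Uptime Kuma project following the same pattern might look like this sketch (the directory name and data path are assumptions; it still needs a matching route in `caddy/Caddyfile`):

```yaml
# monitoring/docker-compose.yml (sketch)
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    volumes:
      - kuma_data:/app/data   # Uptime Kuma stores its config and history here
    networks:
      - proxy                 # so Caddy can route e.g. status.example.com to it

networks:
  proxy:
    external: true

volumes:
  kuma_data:
```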
If a deploy goes wrong, roll back to the previous image using the SHA tag:
```sh
cd /opt/solo-stack/saas-app

# Find the previous image SHA: the build workflow tags every image with the
# commit SHA, so check the GitHub Actions run history for the last good build.

# Pin the known-good image temporarily in docker-compose.yml:
#   image: ghcr.io/your-org/saas-app:abc123def

# Then pull and deploy it
docker compose pull app
docker compose up -d --no-deps app
```

The deploy workflows tag every image with both `:latest` and `:$GITHUB_SHA`. Use the SHA tag to pin to a known-good version.
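One way to avoid hand-editing the compose file during an incident is to parameterize the tag up front (a design choice, not what the stock examples above do; `APP_TAG` is a name chosen here for illustration):

```sh
# In docker-compose.yml, reference a variable with a default:
#   image: ghcr.io/your-org/saas-app:${APP_TAG:-latest}
# Rolling back then becomes a one-liner:
APP_TAG=abc123def docker compose up -d --no-deps app
```

Note that the pin only lasts as long as the variable is set (or stored in the project's `.env`); a later plain `docker compose up -d` falls back to `:latest`.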
```sh
# Start a project
cd /opt/solo-stack/saas-app && docker compose up -d

# Stop a project (keeps volumes/data)
docker compose down

# Update a single service image
docker compose pull app && docker compose up -d --no-deps app

# View running containers
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Follow logs
docker compose logs -f app

# Clean up unused images (only removes images unused for 24h+)
docker image prune -f --filter "until=24h"

# Check disk usage
docker system df
```

Reviewed by a 3-expert panel (2 cycles):
| Reviewer | Background | Cognitive Style | Focus |
|---|---|---|---|
| Ines | Senior Infrastructure Architect | Analytical / Rigorous | Correctness and Internal Logic |
| Marco | DevOps Product Engineer | Creative / Lateral | Completeness, Gaps, and Alternatives |
| Suki | Production SRE | Adversarial / Skeptical | Practical Viability and Failure Modes |
All 3 reviewers returned REVISE. Key changes applied:
- Bootstrap: Added SSH key setup for deploy user, Docker log rotation, UFW 443/udp for HTTP/3, fixed clone/ownership flow (Ines, Marco, Suki — consensus)
- CI/CD: Added Caddy config validation before reload, safer `git pull --ff-only` for third-party updates, `--filter "until=24h"` on image prune (Marco, Suki — consensus)
- Backups: Added external volume declaration example, restore procedure in README, stronger notification guidance (Marco, Suki)
- README: Added Monitoring, Rollback, and Restore sections (Marco, Suki — consensus)
Ines: APPROVE, Marco: REVISE, Suki: REVISE. Key changes applied:
- CI/CD: Added `set -euo pipefail` to all SSH script blocks for error propagation (Marco, Suki, Ines — consensus)
- Bootstrap: Fixed `/opt/solo-stack` directory creation before clone, added git `safe.directory` config (Suki)
- Deploy: Added post-deploy container health check in deploy template (Marco, Suki — majority)
- Reverted: `NOTIFICATION_URLS` overcorrection — kept commented with stronger warning (Ines — overcorrection flag)
- `--remove-orphans` flag on all deploy commands (Ines, Minor, Medium confidence) — only applied to third-party template where it's most relevant
- Per-project scoped image prune — consensus that global prune is correct for single-VPS setups