Summary
Zeroboot's current deployment tooling (`deploy/deploy.sh` + systemd) targets bare-metal
or standalone VM hosts. There is no documented path for running Zeroboot inside a
Kubernetes cluster, which is where most production AI workloads live today.
The gap
To run Zeroboot in k8s, the following is needed but undocumented:
- Which EC2/cloud instance types expose `/dev/kvm` — e.g. AWS `t3` (burstable) does
  NOT, while `c5`/`m5`/`c6i` (Nitro non-burstable) do. This is non-obvious and a common
  blocker.
- A KVM device plugin setup — pods need `/dev/kvm` access without `privileged: true`.
  The kubevirt device plugin pattern works but isn't mentioned anywhere.
- An official container image — the deploy script copies a compiled binary over SSH.
  There's no Dockerfile or published image, so teams must build their own, bundling
  Firecracker + the zeroboot binary + `vmlinux` + `rootfs.ext4`.
- PersistentVolume guidance — the template snapshot (`/var/lib/zeroboot`) must survive
  pod restarts. Without a PVC, every restart triggers a 15s re-snapshot.
- Helm chart or reference manifests — Deployment, Service, PVC, and node
affinity/toleration patterns for pinning Zeroboot to KVM-capable nodes.
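To make the manifest points concrete, here is a minimal sketch of what a reference Deployment might look like. It assumes the kubevirt KVM device plugin is already running as a DaemonSet (its extended resource name is `devices.kubevirt.io/kvm`); the image name, node label, and PVC name are all hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zeroboot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zeroboot
  template:
    metadata:
      labels:
        app: zeroboot
    spec:
      # Pin to KVM-capable nodes (label is illustrative).
      nodeSelector:
        zeroboot.io/kvm: "true"
      containers:
      - name: zeroboot
        image: zeroboot:latest  # hypothetical: no official image exists yet
        resources:
          limits:
            # Extended resource exposed by the kubevirt device plugin;
            # grants /dev/kvm access without privileged: true.
            devices.kubevirt.io/kvm: "1"
        volumeMounts:
        - name: snapshots
          # Template snapshot dir must survive pod restarts,
          # otherwise every restart triggers the 15s re-snapshot.
          mountPath: /var/lib/zeroboot
      volumes:
      - name: snapshots
        persistentVolumeClaim:
          claimName: zeroboot-snapshots
```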
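As a first sanity check (a sketch, not part of Zeroboot's tooling), the KVM prerequisite from the first point can be verified directly on a candidate node or from a debug pod:

```shell
# Verify the node exposes the KVM hypervisor character device.
# Burstable instance types (e.g. AWS t3) will fail this check.
if [ -c /dev/kvm ]; then
  echo "KVM available: $(ls -l /dev/kvm)"
else
  echo "no /dev/kvm on this node; use a non-burstable type (c5/m5/c6i on AWS)" >&2
fi
```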
Why this matters
Container orchestration (Kubernetes, ECS) is the standard runtime for production AI agent
systems. Teams evaluating Zeroboot as a sandbox backend for agent workloads will hit all
of the blockers above on day one. A k8s deployment guide (or even a reference Helm chart)
would significantly lower the adoption barrier.
Suggested additions
- `docs/KUBERNETES.md` — step-by-step guide covering node requirements, device plugin,
  PVC setup, and reference manifests
- `Dockerfile` — builds an image with Firecracker + zeroboot binary + kernel + a base rootfs
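To illustrate the Dockerfile suggestion, a hedged sketch follows. It assumes the Firecracker and zeroboot binaries, plus `vmlinux` and `rootfs.ext4`, are pre-built in the build context; the base image and install paths are assumptions, not Zeroboot's actual layout:

```dockerfile
# Sketch only: no official image exists yet.
FROM debian:bookworm-slim

# Firecracker VMM and the zeroboot binary (pre-built, copied from build context)
COPY firecracker zeroboot /usr/local/bin/

# Guest kernel and root filesystem bundled with the image
COPY vmlinux rootfs.ext4 /opt/zeroboot/

ENTRYPOINT ["/usr/local/bin/zeroboot"]
```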