Deploy applications to cloud platforms with a single command. carburetor handles cloning your source code, running the build pipeline, and shipping the artifact to the cloud — no manual steps.
```sh
carburetor deploy
```
| Dimension | Supported |
|---|---|
| App types | React, Node.js, Docker, Custom script |
| Cloud platforms | AWS (S3/EC2), GCP, Azure, Lambda |
| VCS providers | GitHub, GitLab |
| Executors | Local, Jenkins |
- Bun v1.0 or later
- Node.js 20+ (fallback if Bun is unavailable)
- Git
Install Bun:

```sh
curl -fsSL https://bun.sh/install | bash
```

Clone the repository and install dependencies:

```sh
git clone https://github.com/your-org/carburetor.git
cd carburetor
bun install
```

Run directly:

```sh
bun run src/index.ts deploy
```

Or build a standalone binary:

```sh
bun run build
```

Produces a signed, ready-to-run ./carburetor binary in the project root.
Move it somewhere on your $PATH:

```sh
mv carburetor /usr/local/bin/carburetor
```

Then use it from anywhere:

```sh
carburetor deploy
```

To build for every platform at once:

```sh
bun run build:all
```

Produces binaries for every platform under dist/:

```
dist/
  macos-arm64/carburetor
  macos-x64/carburetor
  linux-arm64/carburetor
  linux-x64/carburetor
  windows-x64/carburetor.exe
```
macOS binaries are automatically ad-hoc signed so Gatekeeper doesn't block them. Linux and Windows binaries require no signing. Ship the folder matching the customer's platform.
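You can check the ad-hoc signature locally. A minimal sketch, assuming the binary was produced by `bun run build:all` into `dist/`; it skips gracefully on platforms without `codesign`:

```shell
# Verify the ad-hoc signature when codesign and the binary are present;
# otherwise skip, so this is safe to run on any platform.
if command -v codesign >/dev/null 2>&1 && [ -x dist/macos-arm64/carburetor ]; then
  codesign --verify --verbose dist/macos-arm64/carburetor
else
  echo "skipping signature check (codesign or binary not available)"
fi
```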
- Bump the version in `package.json`:

  ```json
  {
    "version": "1.2.0"
  }
  ```

- Rebuild the binaries:

  ```sh
  bun run build:all
  ```

The version is read directly from package.json at build time — no other files need updating. Verify with:

```sh
./dist/macos-arm64/carburetor --version
```

Copy the example config and fill in your values:
```sh
cp carburetor.example.yml carburetor.yml
```

`carburetor.yml`:

```yaml
project:
  type: react                      # react | node | docker | custom
build:
  # script: "npm run build:prod"   # optional — overrides default build steps
  outputDir: dist                  # optional — defaults per project type
target:
  platform: aws                    # aws | gcp | azure | lambda
  region: us-east-1
  environment: production
  resourceId: my-s3-bucket-name    # S3 bucket, instance ID, function name, etc.
vcs:
  provider: github                 # github | gitlab
  repoUrl: "https://github.com/your-org/your-repo"
  branch: main
executor:
  type: local                      # local | jenkins
```

Credentials are never stored in `carburetor.yml`. Set them as environment variables (see below).
VCS token:

```sh
export carburetor_VCS_TOKEN=ghp_your_token_here
```

AWS:

```sh
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...   # optional, for temporary credentials
```

GCP:

```sh
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
```

Azure:

```sh
export AZURE_CLIENT_ID=...
export AZURE_CLIENT_SECRET=...
export AZURE_TENANT_ID=...
export AZURE_SUBSCRIPTION_ID=...
```

Usage:

```sh
carburetor deploy
```

Options:

```
-c, --config <path>       Path to carburetor.yml (default: ./carburetor.yml)
-i, --interactive         Launch interactive setup wizard (no config file needed)
-t, --target <platform>   Override target platform (aws|gcp|azure|lambda)
-e, --env <name>          Override environment name
    --dry-run             Validate config and credentials without deploying
    --json                Emit newline-delimited JSON events (for CI/scripting)
-v, --verbose             Show full step output
```
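The `--json` stream emits one JSON object per line, which is convenient for CI gating. A sketch under assumptions: the `step` and `status` field names are hypothetical (check the actual event schema before relying on them), and a `printf` sample stream stands in for a real `carburetor deploy --json` run:

```shell
# A sample event stream standing in for: carburetor deploy --json > deploy-events.ndjson
# (the step/status field names are assumptions, not the documented schema)
printf '%s\n' \
  '{"step":"build","status":"ok"}' \
  '{"step":"ship","status":"failed"}' > deploy-events.ndjson

# Flag any event that reports a failure
if grep -q '"status":"failed"' deploy-events.ndjson; then
  echo "deploy reported a failure:" >&2
  grep '"status":"failed"' deploy-events.ndjson >&2
fi
```

In a real pipeline you would `exit 1` inside the `if` branch to fail the job.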
Examples:

```sh
# Deploy using default config
carburetor deploy

# Interactive wizard — no carburetor.yml needed
carburetor deploy --interactive

# Interactive wizard + dry-run (validate credentials without deploying)
carburetor deploy --interactive --dry-run
```
```sh
# Override platform at runtime
carburetor deploy --target lambda

# Validate only — no deploy
carburetor deploy --dry-run

# Use a custom config path
carburetor deploy --config ./config/prod.yml

# JSON output for CI pipelines
carburetor deploy --json
```

Deploy a single Docker image directly to an EC2 instance — no VCS token required. Point carburetor at a Dockerfile; it ships the file to EC2, builds the image there, and starts the container on port 80.
Pipeline (runs on every carburetor deploy invocation):
Copy Dockerfile locally → Transfer to EC2 → Install Docker (if absent) → Build image on EC2 → Free port 80 → Start container
Re-running the command always gets the latest code (build runs with --no-cache) and replaces the existing container automatically.
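On the EC2 host, those steps correspond roughly to the docker commands below. This is a sketch based on the pipeline description, not the exact implementation; `carburetor-app` is the container name the pipeline uses, and `DRY_RUN=echo` prints each command instead of running it:

```shell
DRY_RUN=echo   # remove this line to run against a real Docker daemon
$DRY_RUN docker build --no-cache -t carburetor-app -f Dockerfile .
$DRY_RUN docker rm -f carburetor-app    # free port 80 / drop the old container
$DRY_RUN docker run -d --name carburetor-app -p 80:80 carburetor-app
```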
All git cloning and build logic lives inside your Dockerfile — carburetor just ships it. A typical React app Dockerfile looks like:
```dockerfile
FROM node:20-alpine AS builder
RUN apk add --no-cache git
ARG GIT_REPO_URL=https://github.com/your-org/your-react-app.git
ARG GIT_BRANCH=main
WORKDIR /app
RUN git clone --depth 1 --branch ${GIT_BRANCH} ${GIT_REPO_URL} .
RUN npm ci && npm run build

FROM nginx:1.27-alpine
COPY --from=builder /app/out /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Only the target section is required — no vcs section needed for Docker deployments:
```yaml
project:
  type: docker
build:
  dockerfilePath: ./Dockerfile   # relative to this file, or use the --dockerfile flag
target:
  platform: aws
  region: us-east-1
  environment: production
  resourceId: i-0123456789abcdef0   # your EC2 instance ID
```

Set credentials as environment variables:

```sh
export AWS_ACCESS_KEY_ID=AKIA...
export AWS_SECRET_ACCESS_KEY=...
export carburetor_EC2_SSH_KEY_PATH=/path/to/your-key.pem
export carburetor_EC2_SSH_USER=ec2-user   # optional; defaults to ec2-user
```

Deploy:

```sh
# From config file
carburetor deploy

# Or pass the Dockerfile path directly (no project.type needed in config)
carburetor deploy --dockerfile ./Dockerfile

# Using the included React sample
carburetor deploy --dockerfile ./examples/react-app/Dockerfile

# Dry-run — validate credentials without deploying
carburetor deploy --dockerfile ./Dockerfile --dry-run
```

New flag:

```
--dockerfile <path>   Path to Dockerfile (enables Docker EC2 deploy mode, always serves on port 80)
```
The container always binds to port 80 on the EC2 host. Ensure your EC2 security group allows inbound TCP on port 80.
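Opening the port can be scripted with the AWS CLI. A sketch; the security group ID is a placeholder, and `DRY_RUN=echo` prints the command for review instead of applying it:

```shell
DRY_RUN=echo   # remove this line to actually call the AWS CLI
$DRY_RUN aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0
```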
Example output:

```
→ [Prepare Dockerfile] $ cp /abs/Dockerfile artifact.tar.gz
✓ Prepare Dockerfile (5ms)
→ [Transfer artifact to EC2]
✓ Transfer artifact to EC2
→ [Install Docker on EC2]
✓ Install Docker on EC2
→ [Build Docker image on EC2]
✓ Build Docker image on EC2
→ [Free port 80 and remove old container]
✓ Free port 80 and remove old container
→ [Start container]
✓ Start container

✓ Deployed successfully
  Endpoint: http://ec2-12-34-56-78.compute-1.amazonaws.com
  Total time: 85.2s
```
examples/react-app/Dockerfile clones a React app from a Git repository, builds it, and serves it via nginx. Edit GIT_REPO_URL to point at your own repository before deploying:
```dockerfile
# Stage 1: clone + build
FROM node:20-alpine AS builder
RUN apk add --no-cache git
ARG GIT_REPO_URL=https://github.com/your-org/your-react-app.git
ARG GIT_BRANCH=main
WORKDIR /app
RUN git clone --depth 1 --branch ${GIT_BRANCH} ${GIT_REPO_URL} .
RUN npm ci && npm run build

# Stage 2: serve
FROM nginx:1.27-alpine
COPY --from=builder /app/out /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Deploy it to EC2 with one command:

```sh
carburetor deploy --dockerfile ./examples/react-app/Dockerfile
```

The wizard guides you through every deployment decision step by step — no carburetor.yml required.
Run it when deploying to a new environment for the first time or for ad-hoc deployments.
```
┌ carburetor — Interactive Deployment Wizard
│
◆ What type of project are you deploying?
│  ● React App   ○ Other (experimental)
│
◆ Repository URL         https://github.com/acme/my-app
◆ Branch to deploy       main
◆ Version control        ● GitHub
◆ GitHub Token           •••••••••••••••••• (masked)
│
◆ Cloud platform         ● AWS
◆ Service type           ● EC2 Instance
◆ AWS region             us-east-1
◆ Environment            production
◆ EC2 Instance ID        i-0abc123def456
│
◆ AWS_ACCESS_KEY_ID      •••••••••••••••••• (masked)
◆ AWS_SECRET_ACCESS_KEY  •••••••••••••••••• (masked)
◆ SSH key                ● Paste inline   ○ Path to file
◆ SSH username           ec2-user
◆ Deploy directory       /var/www/app
│
┌─── Deployment Summary ──────────────────
│ Project : react        │ Repo    : acme/my-app
│ Cloud   : aws          │ Service : ec2
│ Region  : us-east-1    │ Env     : production
└─────────────────────────────────────────
◆ Proceed with deployment?  Yes / No
```
- All secret fields are masked with `•` characters and never written to disk.
- Press Ctrl-C at any prompt to cancel without triggering a deployment.
- Combine with `--dry-run` to validate credentials before your first real deploy.
Check that your VCS and cloud credentials are valid before deploying:

```sh
carburetor validate
```

```
Validating credentials...
✓ VCS credentials valid
✓ Cloud credentials valid
✓ All checks passed — ready to deploy.
```

Print the installed version:

```sh
carburetor version
```

A standard deploy runs these stages:

- Validate — checks VCS and cloud credentials
- Plan — detects project type, builds a pipeline of steps
- Fetch — clones the configured repo and branch into a temp directory
- Build — runs the pipeline steps locally (install deps → build → package artifact)
- Ship — uploads the artifact to the configured cloud platform
- Report — prints the live endpoint URL on success
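For a React project, those stages correspond roughly to the manual flow below. A sketch only: the repository URL, output directory, and bucket name are placeholders, and `DRY_RUN=echo` prints each command instead of executing it:

```shell
DRY_RUN=echo   # remove this line to execute for real
$DRY_RUN git clone --depth 1 --branch main https://github.com/your-org/your-repo tmp-src  # Fetch
$DRY_RUN npm --prefix tmp-src ci                                                          # Build: install deps
$DRY_RUN npm --prefix tmp-src run build                                                   # Build
$DRY_RUN tar -czf artifact.tar.gz -C tmp-src/dist .                                       # Package the artifact
$DRY_RUN aws s3 cp artifact.tar.gz s3://my-s3-bucket-name/deploys/                        # Ship
```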
A Docker EC2 deploy runs these stages instead:

- Validate — checks AWS and SSH credentials (no VCS token needed)
- Copy — copies your Dockerfile locally as the artifact
- Transfer — SCPs the Dockerfile to the EC2 instance
- Install — installs Docker on EC2 if not already present (idempotent)
- Build — runs `docker build --no-cache` on EC2; your Dockerfile handles all git cloning and compilation
- Free — stops nginx and removes any existing `carburetor-app` container to clear port 80
- Start — starts the new container on port 80
- Report — prints the accessible EC2 endpoint
Example output:

```
→ Validating credentials...
→ Building deployment pipeline...
→ Running 3 pipeline step(s) locally...
✓ Deployed successfully
  Endpoint: https://my-bucket.s3.us-east-1.amazonaws.com/deploys/1234/artifact.tar.gz
  Total time: 42.3s
```
Run the full suite:

```sh
bun test
```

Runs the full unit test suite across the Manager and Engine layers. Output lists each test name and a pass/fail count; the exit code is 0 when all tests pass.

Run a single test file:

```sh
bun test tests/unit/engines/OrchestratingEngine.test.ts
```

Watch mode:

```sh
bun test --watch
```

Generate coverage:

```sh
bun run test:coverage
```

Prints a per-file coverage table and writes coverage/lcov.info for use with any lcov viewer:

```
File                                           | % Funcs | % Lines | Uncovered Line #s
src/engines/OrchestratingEngine.ts             |  100.00 |  100.00 |
src/engines/ShippingEngine.ts                  |  100.00 |  100.00 |
src/engines/executors/LocalPipelineExecutor.ts |   66.67 |   89.61 | 86-93
src/managers/DeploymentManager.ts              |  100.00 |  100.00 |
```

Enforce thresholds:

```sh
bun run test:coverage:check
```

Runs the suite, generates coverage, then verifies per-layer minimums:
| Layer | Line | Branch |
|---|---|---|
| `src/managers/` | ≥ 90% | ≥ 80% |
| `src/engines/` | ≥ 88% | ≥ 80% |
| Global | ≥ 85% | — |
Exits 0 with ✓ Coverage thresholds met on pass. Exits 1 with a named error message on violation — use as a required CI step to block low-coverage merges.
```
tests/
├── helpers/
│   ├── fixtures.ts              ← shared test-data builders
│   └── mocks.ts                 ← interface mock factories (bun:test)
└── unit/
    ├── client/
    │   └── wizard/
    │       └── WizardSession.test.ts
    ├── managers/
    │   └── DeploymentManager.test.ts
    ├── engines/
    │   ├── OrchestratingEngine.test.ts
    │   ├── ShippingEngine.test.ts
    │   └── DockerOrchestration.test.ts
    └── executors/
        └── LocalPipelineExecutor.test.ts
```
Unit tests mock all external I/O — no real VCS or cloud API calls are made.
```sh
# Install dependencies
bun install

# Run in dev mode (no build step)
bun run dev

# Type check
bunx tsc --noEmit

# Build binary
bun run build
```

Source layout:

```
src/
  client/      DeployCLI — commander-based CLI, argument parsing
  managers/    DeploymentManager — orchestrates the deploy flow
  engines/     OrchestratingEngine, ShippingEngine — business logic
  executors/   LocalPipelineExecutor, JenkinsPipelineExecutor
  access/      VCSAccess, CSPAccess — external resource adapters
  models/      TypeScript types and enums
  config/      ConfigLoader — reads and validates carburetor.yml
  index.ts     Dependency injection wiring and entry point
```