For DevEx and platform teams standardizing Python quality across repositories:
```shell
uvx --from interlocks il doctor
uvx --from interlocks il check
uvx --from 'interlocks>=0.2,<0.3' il ci
```

These three commands cover the full adoption arc: diagnose a project, run the local quality loop, and gate a pull request with a pinned spec. No install required to start.
interlocks gives one local/hook/CI command surface for ruff, basedpyright, coverage.py, mutmut, deptry, import-linter, pip-audit, and lizard, while driving the project's pytest/unittest tests and a pytest-bdd or behave acceptance suite when available. New repositories can start with auto-detected paths and bundled tool defaults; mature repositories can opt into named presets or explicit `[tool.interlocks]` thresholds when they need stronger gates.
Use interlocks when you want one deterministic Python quality loop across a repo or an organization, especially when local checks, CI, and agent-authored PRs are drifting apart.
It is not a hosted dashboard, a polyglot quality platform, or a replacement for project-owned tests. It standardizes the repeatable Python gates so humans and agents review against the same evidence.
```shell
uvx --from interlocks il doctor
uvx --from interlocks il check
```

`doctor` performs static local inspection only: nearest pyproject.toml, detected source/test/features paths, runner, invoker, active preset, resolved gate values, PATH visibility, blockers, warnings, local integration state, and shortest next steps. It does not run tests, typecheck, coverage, mutation, dependency audit, or network checks.
`check` runs the local edit loop: fix, format, typecheck, tests, optional acceptance tests, advisory dependency hygiene, cached CRAP feedback when fresh coverage exists, and the suppressions report. It is the command to run after edits, before pushing.
The core quality tools ship with the CLI. Unpinned uvx follows the latest PyPI release, which is right for exploration.
For frequent local use, install once:
```shell
uv tool install interlocks
il check
```

`pipx install interlocks` is also supported when pipx is your installed-tool manager.
Why a tool install, not a project dep? From 0.2 onward interlocks ships zero runtime dependencies and dispatches every gate (`ruff`, `basedpyright`, `coverage`, `mutmut`, `deptry`, `import-linter`, `lizard`, `pip-audit`) through `uvx`/`uv run --with` at pinned versions baked into the package. Installing it as a project dep used to drag a ~100-package transitive graph into your resolver and clashed with libraries like `litellm`. As a tool, its environment is isolated from yours.
After installing, populate the cache so subsequent runs work offline:
```shell
interlocks warm
```

Wire local integrations with a single command:

```shell
cd your-python-project
interlocks setup
interlocks setup --check
```

`setup` idempotently installs local feedback loops: git pre-commit hook, Claude Code Stop hook, AGENTS.md / CLAUDE.md interlocks block, and the bundled Claude skill at `.claude/skills/interlocks/SKILL.md`. `setup --check` is read-only and exits non-zero when any local integration is missing or stale.
For GitHub Actions CI, run `interlocks setup --ci=github --check` to detect existing wiring, or `interlocks setup --ci=github` to create `.github/workflows/interlocks.yml` only when no existing workflow invokes interlocks.
For repeatable CI, pin or range-pin the package spec:
```shell
uvx --from 'interlocks>=0.2,<0.3' il ci
# or exact-pin:
uvx --from interlocks==0.2.0 il ci
```

If interlocks is installed in the CI environment, the direct command is:

```shell
interlocks ci
```

For GitHub Actions, either run `interlocks setup --ci=github` or copy this workflow:
```yaml
name: interlocks
on:
  pull_request:
  push:
    branches: [main]
jobs:
  interlocks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: 0xjgv/interlocks@v1
```

The reusable action installs interlocks via `uv tool install`, restores the uvx tool cache from `actions/cache@v4`, runs `interlocks warm`, then runs `interlocks ci` with `UV_OFFLINE=1`. It writes a concise `GITHUB_STEP_SUMMARY` when GitHub provides the summary file. The default installs the latest PyPI release; when reproducibility matters, pin the package through the existing `install-command` input:
```yaml
- uses: 0xjgv/interlocks@v1
  with:
    install-command: uv tool install 'interlocks>=0.2,<0.3'
```

The action does not duplicate lint, typecheck, coverage, CRAP, dependency, architecture, acceptance, or mutation logic; the CLI remains the source of truth.
For onboarding interlocks one PR at a time on a legacy codebase, `interlocks check --changed[=<ref>]` scopes file-level gates (fix, format, typecheck, CRAP) to the `.py` files changed vs the base ref:
```shell
interlocks check --changed          # scope vs cfg.changed_ref (default origin/main)
interlocks check --changed=HEAD~1   # scope vs explicit ref
```

Graph-wide gates (deps, behavior-attribution, acceptance) and the test suite are skipped with a banner; running them under `--changed` would re-introduce the pre-existing failures that the flag is meant to filter out. Run `interlocks test` separately when you want the full suite. Override the default base with `[tool.interlocks] changed_ref = "main"` (or any git ref). `pre-commit` and `ci` are unchanged.
When `interlocks check` or `interlocks ci` fails, run the failing gate directly to focus the feedback:

```shell
il lint
il typecheck
il test
il coverage --min=80
il deps
il audit
il arch
il acceptance
```

Pass `--help` to any gate for available flags. `interlocks help` shows the common path, and `interlocks help --advanced` lists every subcommand.
For mature repositories and AI-authored code review, go beyond the local edit loop with `crap`, `mutation`, `trust`, and `evaluate`. Detailed flags are in the Tasks Reference.
When agents write most of the PRs, human review stops being the quality floor. Deterministic gates become the part that scales:
- `crap` catches complex code the agent shipped without matching tests.
- `mutation` catches tests the agent wrote that do not actually test the code.
- `coverage` and complexity trends feed drift telemetry: signal that agent output is regressing before users notice.
- `trust` combines those into one actionable report, so reviewers (human or LLM-based) have a stable ground truth.
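The score behind the `crap` gate follows the classic CRAP formula, comp² × (1 − cov)³ + comp, where comp is cyclomatic complexity and cov is the fraction of the method exercised by tests. A minimal sketch, assuming interlocks uses the standard formula (its exact inputs may differ):

```python
def crap_score(complexity: int, coverage: float) -> float:
    """Classic CRAP metric: comp^2 * (1 - cov)^3 + comp.

    coverage is in [0.0, 1.0]. Fully covered code scores its raw complexity;
    uncovered complex code explodes past any sane crap_max threshold.
    """
    return complexity ** 2 * (1.0 - coverage) ** 3 + complexity

# A complex method with no tests blows past a crap_max of 30 ...
assert crap_score(10, 0.0) == 110.0
# ... while the same method fully covered scores only its complexity.
assert crap_score(10, 1.0) == 10.0
```

This is why the gate catches exactly the agent failure mode above: high complexity with thin tests multiplies, not adds.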
interlocks is complementary to LLM-based reviewers such as CodeRabbit, Greptile, or Diamond. They catch style, design, and intent; interlocks catches what is machine-verifiable: complexity, coverage, mutation survival, dependency hygiene, architectural drift. It runs in seconds, with the same command locally and in CI.
Python quality often accretes as scattered Ruff, pyright, pytest, coverage, deptry, pip-audit, import-linter, and mutmut config. Local checks drift from protected-branch checks, and agent-written tests can look green while missing behavior. interlocks turns that sprawl into one repeatable gate with explicit thresholds and closure commands.
Presets are optional defaults under [tool.interlocks]. Explicit values in the same layer override preset defaults, so you can manually tune thresholds in pyproject.toml after choosing a preset.
```toml
[tool.interlocks]
preset = "baseline"  # "baseline" | "strict" | "legacy"
```

- `baseline` lowers first-adoption friction: advisory CRAP, relaxed thresholds, mutation off in CI, acceptance off in `check`.
- `strict` is for mature repositories: stronger thresholds, blocking CRAP and mutation, mutation in CI, acceptance in `check`, and required Gherkin coverage.
- `legacy` is for ratcheting existing repositories: very permissive thresholds, advisory gates, mutation off in CI.
`agent-safe` is intentionally unsupported. If configured, `interlocks doctor` reports it as an unsupported preset instead of resolving agent-specific defaults.
Nothing is required. interlocks walks up from CWD to the nearest `pyproject.toml` and auto-detects:

- project root: first directory with `pyproject.toml`
- test runner: pytest if pytest config or declared deps are present, otherwise unittest
- test dir: first existing of `tests/`, `test/`, `src/tests/`
- source dir: build-backend declarations, package layouts, `src/<pkg>`, top-level packages, or the project root
- test invoker: `uv run` when `uv.lock` exists, else `python -m`
- features dir: first existing of `tests/features/`, `features/`, `<test_dir>/features/`
Override anything via `[tool.interlocks]` in `pyproject.toml`:
```toml
[tool.interlocks]
preset = "baseline"

# Paths / runners
src_dir = "mypkg"
test_dir = "tests"
test_runner = "pytest"        # "pytest" | "unittest"
test_invoker = "python"       # "python" | "uv"
pytest_args = ["-q", "-x"]

# Thresholds
coverage_min = 80
crap_max = 30.0
complexity_max_ccn = 15
complexity_max_args = 7
complexity_max_loc = 100
mutation_min_coverage = 70.0
mutation_max_runtime = 600
mutation_min_score = 80.0

# Gate behavior
skip = []                     # e.g. ["typecheck"] for explicit project policy
enforce_crap = true
run_mutation_in_ci = false
enforce_mutation = false
mutation_ci_mode = "off"      # "off" | "incremental" | "full"
mutation_since_ref = "origin/main"

# Acceptance
acceptance_runner = "pytest-bdd"  # "pytest-bdd" | "behave" | "off"
features_dir = "tests/features"
run_acceptance_in_check = false
require_acceptance = false    # true → fail stages when no Gherkin scenarios are present

# Evaluation policy / cached evidence
evaluate_dependency_freshness = false
# Explicit freshness check; not part of default PR CI.
dependency_freshness_command = "interlocks deps-freshness"
dependency_freshness_stage = "interlocks nightly"
audit_severity_threshold = "high"  # "low" | "medium" | "high" | "critical"
pr_ci_runtime_budget_seconds = 0
pr_ci_evidence_max_age_hours = 24
ci_evidence_path = ".interlocks/ci.json"
```

Precedence, lowest to highest:
1. Bundled dataclass defaults.
2. Project preset defaults from `[tool.interlocks]`.
3. Project explicit values.
4. CLI flags inside tasks, such as `--min=`, `--max=`, `--max-runtime=`, `--min-score=`, and `--min-coverage=`.
- Run `interlocks help` to see the active preset and resolved values.
- Run `interlocks config` to see every `[tool.interlocks]` key and source.
- Run `interlocks config show ruff|basedpyright|coverage|import-linter` to see whether a tool is using bundled defaults or project-owned native config.
- Run `interlocks presets` to see preset options, their main thresholds, and copyable config.
- Run `interlocks presets set baseline` to set a project preset from the CLI.
When a project has no native config, interlocks supplies bundled defaults for ruff, basedpyright, coverage.py, and import-linter. Those defaults are adoption defaults, not hidden project policy. Inspect them with `interlocks config show ruff`, `interlocks config show basedpyright`, `interlocks config show coverage`, or `interlocks config show import-linter`.
For basedpyright, the bundled config is intentionally an adoption baseline. In bare projects, `il typecheck` passes `--project <bundled pyrightconfig.json>`, so it may report fewer diagnostics than raw basedpyright with no config. Add `[tool.basedpyright]`, `pyrightconfig.json`, or `pyrightconfig.toml` when you want project-owned basedpyright policy. The bundled baseline treats deprecated API usage as an error.
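For example, a hypothetical project-owned block that would supersede the bundled baseline (the keys are standard basedpyright settings; pick your own policy):

```toml
# pyproject.toml — the presence of [tool.basedpyright] makes typecheck use project policy.
[tool.basedpyright]
typeCheckingMode = "strict"
reportDeprecated = "error"      # keep the bundled baseline's deprecation policy
reportUnusedImport = "warning"
```

Once this table exists, `interlocks config show basedpyright` reports the project config as the active source.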
No. A project-owned `[tool.ruff]`, `ruff.toml`, `[tool.basedpyright]`, `pyrightconfig.json`, `[tool.coverage]`, `.coveragerc`, `[tool.importlinter]`, or `.importlinter` replaces the bundled default for that tool. `config show` reports the active source.
Prefer native tool ignores for narrow code-level exceptions. Use presets and thresholds for policy. Use `interlocks check --changed[=<ref>]` to scope first adoption to changed files. Use global `skip` only when you need an explicit gate-level escape hatch: `interlocks check --skip=typecheck`, `INTERLOCKS_SKIP=typecheck interlocks check`, or `[tool.interlocks] skip = ["typecheck"]`. Unknown skip labels exit 2, and skipped gates print warnings.
`interlocks setup` installs local hooks, agent docs, and the bundled Claude skill. It does not silently add CI. For GitHub Actions, run `interlocks setup --ci=github --check` to detect wiring and `interlocks setup --ci=github` to create `.github/workflows/interlocks.yml` only when no existing workflow invokes interlocks.
interlocks installs on Python 3.11, 3.12, and 3.13. Python 3.11 is the floor because `tomllib` is required and the bundled defaults target py311 syntax.
Yes for Python repositories that want deterministic local/CI quality gates. The CLI is self-dogfooded, has a reusable GitHub Action, and keeps hosted state out of the trust path. It is still intentionally narrow: no dashboard, no polyglot orchestration, and no replacement for project-owned tests.
| Stage | When | What runs |
|---|---|---|
| `interlocks check` | Local edit loop | fix -> format -> parallel(typecheck, test, acceptance when opted in) -> deps advisory -> cached CRAP advisory or refresh hint -> suppressions |
| `interlocks pre-commit` | Git pre-commit hook | fix/format staged Python files, re-stage, typecheck, tests when source changed |
| `interlocks ci` | Pull requests and protected branches | format-check, lint, complexity, audit, deps, typecheck, coverage, arch, acceptance -> CRAP -> optional mutation (per `mutation_ci_mode`); writes `.interlocks/ci.json` timing evidence |
| `interlocks nightly` | Scheduled jobs | coverage -> audit (warn-skips on transient pip-audit failures) -> mutation, always blocking on `mutation_min_score` |
| `interlocks post-edit` | Editor/agent hook interface | advisory ruff fix + format on changed Python files |
| `interlocks setup` | Local onboarding | installs/checks hooks, agent docs, and Claude skill; `--ci=github` installs/checks GitHub CI wiring |
| `interlocks clean` | Local cleanup | removes caches, build artifacts, coverage output, mutation state, and `__pycache__/` |
`mutation_ci_mode` picks how `interlocks ci` invokes mutmut:

- `"off"`: skip mutation in CI (default; the legacy flag `run_mutation_in_ci = true` still forces a full run)
- `"incremental"`: mutates only files changed vs `mutation_since_ref` (default `origin/main`); fast PR signal whose runtime scales with the PR diff. Empty diff is a clean skip.
- `"full"`: full mutmut suite
Nightly always runs the full suite + score gate, so PRs trade some signal for speed; the scheduled job catches anything the incremental pass misses.
Correctness:
- `fix`/`format`: ruff lint-fix and format, mutating files.
- `fix-optimize` (alias `unblock`) `[--apply]`: the self-sufficient unblock command. Discovers every fixable ruff rule on the changed file set, picks the highest-value subset under a budget, and writes the full `.lintfix/` artifact set (`plan.json` + `optimize.json`; `metrics.json` with `--metrics`). Non-mutating by default; `--apply` applies the selected rules, verifies, and restores on failure. `--annotate` emits GitHub Actions PR annotations. `.lintfix/replay.json` is auto-discovered when present (`--no-stats` to skip).
- `fix-rule --rule=<CODE> [--apply]`: rule-scoped support fix (e.g. `I001` import sort, `F401` unused import). Plans by default; with `--apply` mutates the tree only when the rule's mode is `auto`, budgets pass, and the verifier (`interlocks ci` by default) succeeds. Escrow-mode rules (e.g. `F401`) always write `.lintfix/escrow/<rule>.patch` for review instead of mutating.
- `lint`/`format-check`: read-only equivalents for CI.
- `typecheck`: basedpyright.
- `test`: pytest or unittest, auto-detected.
- `acceptance`: Gherkin via pytest-bdd or behave. When `require_acceptance = true`, registered public behavior IDs must be covered by runnable scenarios.
Hygiene:
- `audit`: pip-audit CVE scan; `audit_severity_threshold` makes high-severity policy explicit in `evaluate`.
- `deps`: deptry unused, missing, and transitive import checks.
- `deps-freshness`: explicit package-index check for outdated dependencies; not part of default PR CI.
- `arch`: import-linter contracts; the default contract forbids source importing tests.
Advanced gates:
- `coverage --min=N`: coverage.py with fail-under. `--min=N` overrides `coverage_min`. uv-managed projects get coverage.py injected via `uv run --with`; no project dep required.
- `crap --max=N [--changed-only]`: CRAP complexity x coverage gate. Blocking depends on `enforce_crap`.
- `mutation --max-runtime=N [--min-coverage=N] [--min-score=N] [--changed-only]`: mutmut. Advisory unless `enforce_mutation = true` or `--min-score=` is passed.
- `trust [--refresh] [--no-trend]`: actionable trust report combining coverage, CRAP, mutation, suspicious-test AST inspection, recent git diff, and next actions. `--refresh` runs coverage first with `--min=0`.
- `evaluate`: static quality checklist scoring 11 automatable checks (acceptance, unit-tests, coverage, mutation, complexity, deps, deps-freshness, security, audit-severity, pr-speed, ci) for a 0-33 verdict. Reports gap-closure command, task/stage kind, and rationale without running tests, audits, mutation, or package-index lookups. Three checks read explicit policy: `evaluate_dependency_freshness` (deps-freshness), `audit_severity_threshold` (audit-severity), `pr_ci_runtime_budget_seconds` + `.interlocks/ci.json` (pr-speed).
Scaffolding:
- `init`: writes a greenfield `pyproject.toml`, `tests/__init__.py`, and `tests/test_smoke.py`; refuses to overwrite.
- `init-acceptance`: writes a working pytest-bdd example under `tests/features/` and `tests/step_defs/`; refuses to overwrite.
Utility:
- `config`: list every `[tool.interlocks]` key with type, default, description, and current resolved value (read-only). Single source of truth for agents driving setup.
- `doctor`: adoption diagnostic. Exempt from the `pyproject.toml` preflight gate.
- `setup`: install hooks, agent docs, and Claude skill. `setup --check` verifies them read-only.
- `help`: command list plus detected paths, active preset, and thresholds.
- `presets`: show preset options, current values, and copyable config, and set a project preset with `interlocks presets set <preset>`.
- `version`: print the installed interlocks version.
Drop `.feature` files under `tests/features/` and step definitions under `tests/step_defs/`; `interlocks acceptance` runs them via pytest-bdd and shares coverage with `test`. Or run `interlocks init-acceptance` for a working example.
Behavior coverage uses explicit IDs for observable public behavior. For interlocks itself, IDs live in `interlocks/behavior_coverage.py` near public-boundary inventory entries. Downstream projects with no registry keep zero-config behavior.
Mark scenarios with either syntax immediately above Scenario or Scenario Outline:
```gherkin
# req: task-coverage
@req-stage-ci
Scenario: quality gates run
  Given a project
```

Multiple IDs may attach to one scenario. Comments or tags inside scenario steps do not count. Under `require_acceptance = true`, runnable projects fail when a live behavior ID is uncovered, a scenario marker is stale, or duplicate live IDs exist. Remediation names the behavior ID and suggests adding `# req: <id>` or `@req-<id>`.
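A simplified sketch of how such markers can be attributed to the scenario directly below them (not interlocks' actual parser, which also handles staleness and duplicate live IDs):

```python
import re

REQ_COMMENT = re.compile(r"^\s*#\s*req:\s*(\S+)")
REQ_TAG = re.compile(r"^\s*@req-(\S+)")
SCENARIO = re.compile(r"^\s*Scenario(?: Outline)?:")

def scenario_behavior_ids(feature_text: str) -> dict[str, set[str]]:
    """Map each Scenario line to the behavior IDs marked immediately above it."""
    pending: set[str] = set()
    covered: dict[str, set[str]] = {}
    for line in feature_text.splitlines():
        if m := REQ_COMMENT.match(line):
            pending.add(m.group(1))
        elif m := REQ_TAG.match(line):
            pending.add(m.group(1))
        elif SCENARIO.match(line):
            covered[line.strip()] = pending
            pending = set()
        elif line.strip():
            pending = set()  # markers must sit immediately above the Scenario
    return covered

marks = scenario_behavior_ids(
    "# req: task-coverage\n@req-stage-ci\nScenario: quality gates run\n  Given a project\n"
)
assert marks == {"Scenario: quality gates run": {"task-coverage", "stage-ci"}}
```

Both marker syntaxes from the example above collapse into the same ID set for the scenario.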
Advisory trace evidence is separate from behavior markers. Set `INTERLOCKS_ACCEPTANCE_TRACE=1` to request runtime public-symbol evidence; trace failures, missing evidence, or newly untraced symbols are diagnostic-only in this release and do not change `acceptance`, `ci`, or `check` exit codes.
Runner detection order:

1. `acceptance_runner` in config (`"pytest-bdd"`, `"behave"`, or `"off"`).
2. Behave layout: `features_dir/steps/` plus `features_dir/environment.py`.
3. `behave` declared as a dependency but not `pytest-bdd`.
4. Default to pytest-bdd.
Acceptance always runs in `interlocks ci` when a features directory exists. It is opt-in for `interlocks check` via `run_acceptance_in_check = true`. Set `require_acceptance = true` under `[tool.interlocks]` to make missing Gherkin scenarios and missing behavior markers stage failures; the strict preset enables this by default. `check` enforces only when `run_acceptance_in_check = true`.
When the target project has no config for a given tool, interlocks injects its bundled default.
| File | Consumed by | Detected via | Injected flag |
|---|---|---|---|
| `ruff.toml` | `fix`, `format`, `lint`, `format-check` | `[tool.ruff]`, `ruff.toml`, `.ruff.toml` | `--config` |
| `pyrightconfig.json` | `typecheck` | `[tool.basedpyright]`, `pyrightconfig.{json,toml}` | `--project` |
| `coveragerc` | `coverage` | `[tool.coverage.*]`, `.coveragerc` | `--rcfile=` |
| `importlinter_template.ini` | `arch` | `[tool.importlinter]`, `.importlinter`, `setup.cfg` | formatted tempfile plus `--config` |
| `bdd_example.feature` | `init-acceptance` | none | direct copy |
| `bdd_test_example.py` | `init-acceptance` | none | direct copy |
| `bdd_conftest.py` | `init-acceptance` | none | direct copy |
| `agents_block.md` | `setup`, `agents` | existing interlocks doc reference | appended/created |
| `skill/SKILL.md` | `setup`, `setup-skill` | byte match at `.claude/skills/interlocks/SKILL.md` | direct copy |
| `scaffold_pyproject.toml` | `init` | none | read plus `{project_name}` substitution |
| `scaffold_test_example.py` | `init` | none | direct copy |
The bundled `pyrightconfig.json` uses standard mode, suppresses selected noisy diagnostics for first adoption, and sets `reportDeprecated = "error"`.
`interlocks deps` and `interlocks mutation` ship no bundled fallback: deptry applies its built-ins, and mutmut reads the project's `pyproject.toml`.
When interlocks itself crashes, the CLI prints a pre-filled GitHub Issues URL to stderr alongside the canonical Python traceback. The URL opens in your default browser when one is available; the URL on stderr is the contract, and the browser open is a convenience. interlocks never opens a network connection of its own; only your browser does, and only if you choose to follow the link.
What gets captured:
- interlocks version, Python version, platform (`uname -s` / `uname -m`)
- subcommand that crashed (e.g. `check`, `lint`)
- exception type name
- traceback frames inside `interlocks/` (third-party frames collapse to `<external frames: N>`)
- UTC timestamp, CI boolean (from the `CI` env var), 16-hex fingerprint
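The fingerprint is just a stable hash of the crash identity. A plausible sketch (the exact fields interlocks hashes are an assumption here):

```python
import hashlib

def crash_fingerprint(exc_type: str, subcommand: str, frames: list[str]) -> str:
    """16-hex-char stable ID for crash deduplication.

    Hypothetical inputs: the real payload fields may differ, but any stable
    hash over (exception type, subcommand, in-package frames) behaves the
    same way — identical crashes collapse to one fingerprint.
    """
    payload = "\n".join([exc_type, subcommand, *frames])
    return hashlib.sha256(payload.encode()).hexdigest()[:16]

fp = crash_fingerprint("ValueError", "check", ["interlocks/cli.py:42"])
assert len(fp) == 16
assert fp == crash_fingerprint("ValueError", "check", ["interlocks/cli.py:42"])
```

Determinism is what makes the 30-day dedup window below work: the same crash always lands on the same cache filename.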
What does NOT leave the machine — by construction:
- No source code, file contents, or local variables
- No environment variables or `sys.argv` values
- No hostnames, usernames, or absolute paths (paths are scrubbed to `~/...` or project-relative)
- No automatic issue submission: interlocks opens a pre-filled GitHub issue in your browser only after you confirm
How reporting works:
- Interactive terminals ask `Report this crash to the interlocks maintainers? Y/n`.
- Press Enter, `y`, or `yes` to open the pre-filled GitHub issue in your browser.
- Answer `n`/`no`, use a non-interactive shell, or run in CI to keep the report local only.
Where local files live:
- `~/.cache/interlocks/crashes/<fingerprint>.json`: full payload, mode 0600 in a mode 0700 dir
- `~/.cache/interlocks/crashes/dedup.json`: 30-day fingerprint window so repeat crashes do not re-prompt
Inspect cache state any time with `interlocks doctor` (look for the `[crash reports]` row) or directly with `ls ~/.cache/interlocks/crashes/`.
To share a crash manually, attach the payload JSON to your issue or paste relevant fields. The same fields are pre-filled in the URL body the browser opens.
Inspired by Uncle Bob Martin. In the years since Clean Code, the industry has tended to forget the fundamentals: clean code, deterministic gates, fast feedback. Those fundamentals matter more than ever as agents write more of the code.
When a PR is blocked by a single lint family (import sort, EOF newline, etc.) and you want to unblock the engineer without rewriting unrelated legacy code, use `fix-rule` instead of broad `fix`:

```shell
# Plan only — no mutation. Shows files touched, churn, inside/outside-diff lines, risk.
interlocks fix-rule --rule=I001

# Apply iff: rule mode is `auto` + budget passes + final verifier passes.
# Verifier defaults to `interlocks ci`; override with `--verify-cmd="<command>"`.
interlocks fix-rule --rule=I001 --apply

# Escrow-mode rules (F401, UP*, ...) always write `.lintfix/escrow/<rule>.patch`
# for review — never mutate the working tree.
interlocks fix-rule --rule=F401 --apply
```

Defaults:
- Non-mutating. `--apply` is required to change files, and even then escrow-mode rules stay in escrow.
- Base ref `origin/main` (override with `--base=<ref>`).
- Budget profile `unblock` (5 files, 80 changed lines, 10 outside-diff lines, risk ≤ 8). Override with `--budget=renovation` for planned cleanup PRs.
- On verifier failure the original tree is restored and the patch is preserved at `.lintfix/failed.patch` for review.
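The budget gate itself reduces to a few comparisons. A sketch using the documented `unblock` profile numbers (the field names are hypothetical, not interlocks' internals):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Budget:
    """The documented `unblock` profile; `renovation` would raise these limits."""
    max_files: int = 5
    max_changed_lines: int = 80
    max_outside_diff_lines: int = 10
    max_risk: int = 8

def within_budget(files: int, changed_lines: int, outside_diff: int, risk: int,
                  b: Budget = Budget()) -> bool:
    """A planned fix applies only when every dimension fits the budget."""
    return (files <= b.max_files
            and changed_lines <= b.max_changed_lines
            and outside_diff <= b.max_outside_diff_lines
            and risk <= b.max_risk)

assert within_budget(files=3, changed_lines=40, outside_diff=2, risk=4)      # fits
assert not within_budget(files=6, changed_lines=40, outside_diff=2, risk=4)  # too many files
```

Any single blown dimension vetoes the apply, which is what keeps an unblock PR from turning into a renovation PR.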
Rule modes (initial):
| Rule | Mode | Notes |
|---|---|---|
| `I001` | auto | Import sort — low churn, no semantic change. |
| `W292` | auto | EOF newline. |
| `F401` | escrow | Unused import — may have side effects or re-export intent. |
| `UP*` | escrow | Type-annotation / syntax modernization. |
| `SIM*`, `C4*` | advisory | Control-flow / collection rewrites — review noise. |
The full design is in `lint_fix_harness_SPEC.md`. Phase 0 + 1 ship today; planner/optimizer phases are tracked in the spec.
When a PR is blocked by several lint families at once, `fix-optimize` (alias `unblock`) is the one command that just works: it discovers every fixable rule on the changed file set, picks the highest-value subset that fits the budget, and writes the complete `.lintfix/` artifact set in a single run:
```shell
# Preview — no mutation. Writes .lintfix/plan.json + .lintfix/optimize.json.
interlocks unblock

# Apply the selected subset, verify, restore the tree on any failure.
interlocks unblock --apply
```

Defaults:
- Non-mutating. `--apply` is required to change files; escrow-mode rules stay in escrow regardless.
- Base ref `origin/main` (override with `--base=<ref>`), budget profile `unblock` (`--budget=renovation` for planned cleanup).
- `.lintfix/replay.json` is auto-discovered when present to weight rule values from replay history; pass `--no-stats` to ignore it, or `--stats=<path>` to point elsewhere.
- On verifier failure the original tree is restored and the patch is preserved at `.lintfix/failed.patch`.
In CI, a single advisory step covers planning, PR annotations, and metrics:
```yaml
- name: Unblock pass (advisory)
  if: always()
  run: interlocks fix-optimize --base=origin/main --annotate --metrics
```

`--annotate` emits `::notice::` / `::warning::` PR annotations (never `::error::`, never changes the exit code); `--metrics` rolls the artifacts up into `.lintfix/metrics.json`. Both are produced before `--apply` runs, so CI still gets hints and metrics even when an apply fails. `interlocks setup --ci=github` installs this step for you.
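The annotations themselves are plain GitHub Actions workflow commands printed to stdout. A sketch of the format (the message text here is an assumption, not interlocks' exact output):

```python
def notice(path: str, line: int, message: str) -> str:
    """Format an advisory GitHub Actions annotation (::notice::, never ::error::)."""
    return f"::notice file={path},line={line}::{message}"

assert notice("src/app.py", 12, "I001: imports unsorted (fixable)") == \
    "::notice file=src/app.py,line=12::I001: imports unsorted (fixable)"
```

Because notice/warning commands never affect the step's exit code, the step stays advisory by construction.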
Advanced (power-user). The phased commands behind `fix-optimize` stay available for finer control: `fix-plan` (non-mutating plan only), `fix-replay` (replay history into `.lintfix/replay.json`), `fix-annotate` (annotations from an existing JSON), and `fix-metrics` (standalone aggregation). The golden path needs none of them.
interlocks setup is the recommended integration entrypoint and handles hooks, agent docs, and the Claude skill in one command. For cases where you need to install or verify a single integration independently, narrow commands are available:
- `setup-hooks`: writes only the git pre-commit hook (`interlocks pre-commit`) and Claude Code Stop hook (`interlocks post-edit`). Use when you want to manage agent docs and the skill separately.
- `agents`: appends or creates the `AGENTS.md`/`CLAUDE.md` interlocks guidance block only.
- `setup-skill`: installs or refreshes `.claude/skills/interlocks/SKILL.md` only.
Hooks reference the Python that installed interlocks, so rerun `interlocks setup` after switching install locations or interpreters.
`interlocks pre-commit` and `interlocks post-edit` are the stable hook interfaces when integrating with a custom hook manager.
Maintainer-only release details live in `PYPI_RELEASE_CHECKLIST.md`.
Package identity:
- PyPI distribution: `interlocks`
- import package: `interlocks`
- CLI command: `interlocks`
Trusted Publishing setup:
- PyPI: owner `0xjgv`, repo `interlocks`, workflow `release.yml`, environment `pypi`
- TestPyPI: owner `0xjgv`, repo `interlocks`, workflow `release.yml`, environment `testpypi`
- No PyPI API token required.
Release checklist:
1. Set `pyproject.toml` version to the next release.
2. Set `interlocks/__init__.py` `__version__` to the same release.
3. Update `CHANGELOG.md` for the release.
4. Run `uv run interlocks ci`.
5. Run `uv build`.
6. Trigger `release` manually to publish to TestPyPI.
7. Create matching `vX.Y.Z` tag.
8. Push tag.
9. Confirm PyPI release, GitHub release assets, and attestations.
See CHANGELOG.md for release history.