HyperMine is an open-source Bitcoin mining research and control-plane platform for studying hashing performance, telemetry, rollout safety, and miner integration workflows.
Honest disclaimer
This is a research platform. CPU mining is not economically viable. This project exists to study and understand Bitcoin mining mechanics, not to generate profit.
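A back-of-the-envelope expected-value calculation makes the point concrete. The numbers below are illustrative assumptions, not values taken from this repository; the formula is the standard Bitcoin relation between hashrate, difficulty, and block-finding probability:

```python
def expected_btc_per_day(hashrate_hs: float, difficulty: float,
                         block_reward_btc: float) -> float:
    # Probability that a single hash solves a block is 1 / (difficulty * 2**32),
    # by the standard Bitcoin difficulty definition.
    blocks_per_second = hashrate_hs / (difficulty * 2**32)
    return blocks_per_second * 86_400 * block_reward_btc

# A fast CPU (~20 MH/s) against a network difficulty of ~1.45e14:
print(expected_btc_per_day(20e6, 1.45e14, 3.125))  # ≈ 8.7e-12 BTC/day
```

At roughly ten trillionths of a BTC per day, electricity costs exceed expected revenue by many orders of magnitude, which is exactly why this project frames CPU mining as a study tool.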
HyperMine is designed as a modular research environment for exploring how Bitcoin mining systems behave in practice. It includes:
- CPU hashing benchmarks for naive and midstate-reuse paths
- Revenue and profitability estimation helpers
- Telemetry loading, replay, validation, and before/after comparison
- Policy simulation and dry-run control logic
- Rollout safety primitives such as canary apply, rollback, health gates, and preflight checks
- Miner onboarding and handshake discovery flows
- Vendor adapter abstractions for Antminer, WhatsMiner, and safe null adapters
- Signed verified profile workflow for payload overrides
- Persistence and audit logging for proposals, handshakes, and apply actions
- FastAPI endpoints for research, inspection, and control-plane orchestration
At a high level the repository is organized around the following runtime layers:
- `src/hypermine/hash_benchmark.py`: CPU hash benchmark engine for naive and midstate strategies
- `src/hypermine/header_mutation_benchmark.py`: header mutation and Merkle-path benchmark helpers
- `src/hypermine/economics.py`: expected-value BTC/USD output estimation
- `src/hypermine/reporting/`: interval and daily profitability reporting from CSV telemetry and market data
- `src/hypermine/telemetry.py`: telemetry event schema, replay, and aggregate metrics
- `src/hypermine/telemetry_comparison.py`: before/after telemetry window comparison for rollout analysis
- `src/hypermine/policy/`: policy models, rules, simulation, and dry-run proposal helpers
- `src/hypermine/rollout.py`: staged rollout safety checks, telemetry validation, and health gates
- `src/hypermine/adapters/`: vendor abstractions, payload catalogs, and verified override support
- `src/hypermine/inventory.py`: inventory loading and adapter factory selection
- `src/hypermine/onboarding.py`: onboarding confidence scoring, command matrix generation, drift checks, and readiness checklists
- `src/hypermine/approval.py`: signed approval tokens for live control actions
- `src/hypermine/verified_profiles.py`: signed verified profile workflow for payload overrides
- `src/hypermine/persistence.py`: proposal, handshake, and apply audit persistence backends
- `src/hypermine/api/`: FastAPI schemas, auth, service layer, and API app factory
- `scripts/`: CLI wrappers for benchmarks, API launch, simulation, reporting, and approval-token generation
- `benchmarks/`: manifests, templates, sample request bodies, and recorded benchmark outputs
- `docs/`: public English documentation, with historical internal planning material preserved under `docs/internal/`
Directory layout at a glance:

- `src/hypermine/`: core package
- `scripts/`: CLI entry points
- `benchmarks/templates/`: example CSV/JSON inputs
- `benchmarks/results/`: recorded benchmark and sample control-plane outputs
- `docs/`: public English documentation
- `docs/internal/`: historical internal planning and research notes
- `tests/`: pytest suite
For local development with the API and test toolchain:

```bash
python -m pip install --upgrade pip
python -m pip install -e ".[api,dev]"
```

If you only want the base library:

```bash
python -m pip install -e .
```

Run a hash benchmark:

```bash
python scripts/run_hash_benchmark.py --strategy midstate --duration-seconds 10 --processes 4 --difficulty 145.04e12 --block-reward-btc 3.125
```

Start the API server:

```bash
python scripts/run_api.py --host 127.0.0.1 --port 8000
```

Then open the generated OpenAPI documentation at http://127.0.0.1:8000/docs.
To run the test suite:

```bash
pytest tests/
```

Module reference:

- `hypermine.hash_benchmark`: timed hashing benchmarks and benchmark result serialization
- `hypermine.header_mutation_benchmark`: synthetic header mutation strategies for experimentation
- `hypermine.economics`: expected BTC/USD production estimates from hashrate and network assumptions
- `hypermine.performance_analysis`: benchmark-result analysis and recommendation helpers
- `hypermine.manifests`: manifest loading and validation for benchmark campaigns
- `hypermine.telemetry`: event normalization, replay, and aggregate share/power metrics
- `hypermine.telemetry_comparison`: before/after telemetry delta reporting
- `hypermine.reporting.profitability`: interval and daily profitability reporting from CSV templates
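A before/after telemetry comparison ultimately reduces to computing deltas over aligned window aggregates. The helper below is a hypothetical sketch of that idea, not the `hypermine.telemetry_comparison` API:

```python
from statistics import mean

def window_delta(before: list[float], after: list[float]) -> dict:
    # Compare the mean of one metric across two telemetry windows and
    # report the relative change, as a rollout analysis would.
    b, a = mean(before), mean(after)
    return {
        "before_mean": b,
        "after_mean": a,
        "relative_change": (a - b) / b if b else float("nan"),
    }
```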
- `hypermine.policy.models`: dispatch model definitions
- `hypermine.policy.rules`: rule-based mode scoring and candidate evaluation
- `hypermine.policy.simulator`: policy simulation engine
- `hypermine.policy.controller`: dry-run action plan generation
- `hypermine.rollout`: health gates, telemetry validation, and staged rollout checks
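Rule-based mode scoring of this kind typically compares expected revenue against power cost for each candidate mode. The sketch below is illustrative only; the class, rule, and function names are assumptions, not the `hypermine.policy.rules` API:

```python
from dataclasses import dataclass

@dataclass
class Mode:
    name: str
    watts: int
    hashrate_ths: float

def daily_margin_usd(mode: Mode, usd_per_kwh: float, usd_per_ths_day: float) -> float:
    # Illustrative rule: daily hashrate revenue minus daily power cost.
    revenue = mode.hashrate_ths * usd_per_ths_day
    power_cost = mode.watts / 1000 * 24 * usd_per_kwh
    return revenue - power_cost

def best_mode(modes: list[Mode], usd_per_kwh: float, usd_per_ths_day: float) -> Mode:
    # Pick the candidate with the highest daily margin.
    return max(modes, key=lambda m: daily_margin_usd(m, usd_per_kwh, usd_per_ths_day))
```

Note how the preferred mode flips as power price or hashprice moves, which is what makes simulation and dry-run evaluation worthwhile before any live apply.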
- `hypermine.adapters.miner_api`: abstract miner adapter contract and null adapter
- `hypermine.adapters.vendors`: HTTP-oriented Antminer and WhatsMiner adapter implementations
- `hypermine.adapters.payload_catalog`: model-aware payload profiles and catalog discovery
- `hypermine.adapters.override_registry`: verified override registry loading and profile selection
- `hypermine.inventory`: inventory JSON parsing and adapter construction
- `hypermine.onboarding`: confidence scoring, command matrices, firmware drift, and readiness checklist helpers
- `hypermine.approval`: signed approval tokens for live apply
- `hypermine.verified_profiles`: signed verified profile generation and validation
- `hypermine.api.app`: FastAPI app factory and route definitions
- `hypermine.api.service`: request execution layer for simulation, onboarding, preflight, apply, and evidence bundle workflows
- `hypermine.api.auth`: API-key and role guard helpers
- `hypermine.persistence`: JSONL/SQLite/PostgreSQL persistence backends for proposals, handshakes, and apply audits
The repository ships with historical benchmark and sample control-plane artifacts under benchmarks/results/.
Useful reference files include:
- `benchmarks/results/hash-ranked.json`: ranked short benchmark results
- `benchmarks/results/hash-midstate-10s-16p.json`: historical midstate benchmark sample
- `benchmarks/results/current-hash-midstate-10s-20p-2026-03-30.json`: recent local comparison run
- `benchmarks/results/profitability-report-sample.json`: sample profitability payload
- `benchmarks/results/policy-simulation-sample.json`: sample policy simulation output
- `benchmarks/results/dry-run-actions-sample.json`: sample dry-run proposal output
- `benchmarks/results/apply-audit.jsonl`: sample apply audit log
See docs/benchmarks.md for a more detailed interpretation guide.
Simple runnable examples are provided under examples/:
- `examples/run_benchmark.py`
- `examples/run_profitability.py`
- `examples/simulate_policy.py`
These are intentionally small, readable entry points for new contributors.
Please read CONTRIBUTING.md before opening a pull request.
In short:
- install the project in editable mode with dev extras
- run `pytest tests/`
- keep documentation honest about the limits of CPU mining
- treat all control-plane features as research and safety tooling, not profit claims
- Public architecture notes: `docs/architecture.md`
- Benchmark interpretation guide: `docs/benchmarks.md`
- Historical internal notes: `docs/internal/`
This project is released under the MIT License.