diff --git a/AGENTS.md b/AGENTS.md index 21fc361..8f389cd 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -74,3 +74,17 @@ All three tools are pre-configured in `pyproject.toml` and can be run without ex - **ruff**: `uv run ruff check` (excludes `docs/`, configured in `[tool.ruff]`) - **mypy**: `uv run mypy` (targets `src/`, configured in `[tool.mypy]`) - **pytest**: `uv run pytest` (targets `test/`, configured in `[tool.pytest.ini_options]`) + +## Agent skills + +### Issue tracker + +Issues live as GitHub issues on the canonical upstream `toolsforexperiments/labcore` (not `origin`). See `docs/agents/issue-tracker.md`. + +### Triage labels + +Default canonical label names (`needs-triage`, `needs-info`, `ready-for-agent`, `ready-for-human`, `wontfix`). See `docs/agents/triage-labels.md`. + +### Domain docs + +Single-context layout: one `CONTEXT.md` + `docs/adr/` at the repo root. See `docs/agents/domain.md`. diff --git a/CONTEXT.md b/CONTEXT.md new file mode 100644 index 0000000..9a1e640 --- /dev/null +++ b/CONTEXT.md @@ -0,0 +1,68 @@ +# labcore — Domain Context + +The vocabulary used across the codebase and docs. Update entries as terms are +clarified; remove or rewrite entries that go stale. + +## Protocols subsystem + +- **Protocol** — the top-level entity a lab user runs end-to-end (e.g. a qubit + tune-up). A protocol holds a tree of branches and operations that execute in + sequence, with optional conditional branching. Implemented as a subclass of + `ProtocolBase` whose `__init__` builds `self.root_branch`. + +- **Operation** — a single measurement step inside a protocol (e.g. a resonator + spectroscopy, a power Rabi). Each operation follows a fixed lifecycle: + `measure → load_data → analyze → evaluate → correct`. Implemented as a + subclass of `ProtocolOperation`. + +- **Parameter** — a named handle that an operation reads from or writes to. + Sits between operations and two concerns the operation does not want to know + about: + 1. 
**Persistence across processes.** Lab work runs in many processes — a + notebook for ad-hoc operations, a script for a full protocol — and + parameter values must survive process boundaries. Each parameter holds a + `params` proxy to whatever persistence layer is in use (typically the + `instrumentserver` parameter manager, but a config file or any other + store works equally well). + 2. **Hardware translation.** Different platforms speak different languages. + QICK can program a qubit frequency in GHz directly; OPX has to split the + same value into IF + LO and mix. Each platform-specific getter/setter + (`_qick_getter`, `_opx_getter`, `_dummy_getter`) carries whatever + conversion logic that platform needs. + + The analysis layer only sees the resolved value via `param()`; it does not + care how it was produced. Operations register parameters via + `_register_inputs`, `_register_outputs`, and `_register_correction_params`. + +- **Correction parameter** — a parameter that controls a *correction strategy* + rather than hardware state (e.g. a noise tolerance, a step count). Subclass + of `CorrectionParameter`. Excluded from hardware verification; otherwise + identical to `ProtocolParameterBase`. + +- **Check** — a pure, side-effect-free assessment performed during `evaluate()`, + producing a `CheckResult(name, passed, description)`. An operation can + register multiple checks; the default `evaluate()` runs them all and returns + RETRY if any fail. + +- **Correction** — a strategy applied *between retries* when a specific check + fails. One instance per operation, created in `__init__` and reused across + retries so stateful strategies (e.g. stepping through a list of windows) work + correctly. A correction declares which check it is `triggered_by`. + +- **Branch** — a named sequence of operations and conditions inside a protocol. + Implemented as `BranchBase`. The simplest protocol is one root branch + containing a flat list of operations (see `QubitTuneup`). 
+ +- **Platform** — the hardware backend a protocol runs against (`DUMMY`, `QICK`, + `OPX`). Selected globally via the `PLATFORMTYPE` module variable in + `labcore.protocols.base`; parameters and operations dispatch to + platform-specific code (`_dummy_getter`, `_qick_getter`, …) based on it. + +- **Report** — a self-contained HTML document assembled by + `ProtocolBase._assemble_report()` after a protocol runs. Each operation + contributes by appending strings (markdown) and figure paths to + `self.report_output`; figures are embedded as base64 data URIs so the + resulting file stands on its own. The default `correct()` adds a check + table; `_register_success_update` adds parameter-improvement lines. + SuperOperations aggregate their sub-operations' contributions. Saved under + `report_path / "{ProtocolName}_report"`. diff --git a/docs/_static/protocols/qubit_tuneup_report.png b/docs/_static/protocols/qubit_tuneup_report.png new file mode 100644 index 0000000..29871de Binary files /dev/null and b/docs/_static/protocols/qubit_tuneup_report.png differ diff --git a/docs/adr/0001-parameters-abstract-persistence-and-hardware.md b/docs/adr/0001-parameters-abstract-persistence-and-hardware.md new file mode 100644 index 0000000..d55cd89 --- /dev/null +++ b/docs/adr/0001-parameters-abstract-persistence-and-hardware.md @@ -0,0 +1,46 @@ +# Parameters abstract persistence and hardware translation + +Parameters in `labcore.protocols` are dataclass subclasses of +`ProtocolParameterBase` that hold a `params` proxy to a persistence backend +and implement platform-specific getter/setter methods (`_qick_getter`, +`_opx_getter`, `_dummy_getter`). Operations interact with parameters through +a uniform `param() / param(value)` call, never with the backing store or the +target platform directly. 
We chose this shape because parameters sit between +operations and two concerns the operation must not be coupled to: a +persistence layer that survives Python-process boundaries (notebook running +one operation, script running a full protocol — same parameter values), and +hardware platforms that handle parameters in non-equivalent ways (QICK takes +a qubit frequency in GHz directly; OPX has to split it into IF + LO and mix). + +## Considered Options + +- **Flat dict-of-values.** A `dict[str, float]` shared via a module global + or passed into operations. Rejected: provides no place for hardware + translation logic, and forces persistence to be solved in user code. +- **Direct coupling to `instrumentserver`.** Make every parameter call + `instrumentserver.helpers.nestedAttributeFromString` directly, no + abstraction. Rejected: hard-codes one persistence backend; users wanting + config files or other stores would have to fork. Also still leaves the + hardware-translation problem unsolved. +- **Per-operation hardcoding.** Each operation reads/writes hardware in its + own `_measure_*` body. Rejected: parameters are typically reused across + many operations (a `QubitFrequency` shows up in spectroscopy, Rabi, T1, …) + and duplicating the read/write/translate logic per operation is a + maintenance hazard. + +## Consequences + +- **More boilerplate per parameter.** A parameter that needs to support + three platforms is ~30 lines of dataclass + getter/setter pairs even when + the logic is trivial. Mitigated by the "implement only the platforms you + use" pattern — most parameters today implement DUMMY + QICK only and let + the others raise `NotImplementedError`. +- **Persistence backend is swappable.** A toolbox can use the + `instrumentserver` parameter manager (the common choice today), a config + file, or any other store, without changes to operations or to labcore. 
+- **New platforms add zero churn to existing operations.** Adding OPX + support for a parameter is a localized change — implement + `_opx_getter`/`_opx_setter`. Operations and the analysis layer don't move. +- **Analysis layer stays clean.** Analysis only ever calls `param()` and + receives the resolved value; it does not see the platform-specific + conversion logic. diff --git a/docs/agents/domain.md b/docs/agents/domain.md new file mode 100644 index 0000000..2cb2f58 --- /dev/null +++ b/docs/agents/domain.md @@ -0,0 +1,41 @@ +# Domain Docs + +How the engineering skills should consume this repo's domain documentation when exploring the codebase. + +This repo uses a **single-context** layout: one `CONTEXT.md` at the repo root and one `docs/adr/` directory. + +> Note: `labcore` is one of four packages in the [toolsforexperiments ecosystem](https://toolsforexperiments.github.io/guides/software_map.html) — alongside `instrumentserver`, `plottr`, and `CQEDToolbox`. Each package lives in its own git repo with its own single-context setup. Cross-package vocabulary (e.g. how `labcore` relates to `instrumentserver`) belongs as a short "Ecosystem position" section in this repo's eventual `CONTEXT.md`, not as a separate context. + +## Before exploring, read these + +- **`CONTEXT.md`** at the repo root. +- **`docs/adr/`** — read ADRs that touch the area you're about to work in. + +If any of these files don't exist yet, **proceed silently**. Don't flag their absence; don't suggest creating them upfront. The producer skill (`/grill-with-docs`) creates them lazily when terms or decisions actually get resolved. + +## File structure + +``` +/ +├── CONTEXT.md ← domain glossary (sweep, DataDict, DDH5Writer, …) +├── docs/ +│ ├── adr/ ← architectural decisions +│ │ ├── 0001-….md +│ │ └── 0002-….md +│ └── … ← existing Sphinx docs (unrelated; coexists) +└── src/labcore/ +``` + +The existing Sphinx site under `docs/` is unrelated to `CONTEXT.md` and `docs/adr/` — they coexist. 
Sphinx will ignore `docs/adr/` unless you explicitly include it in `conf.py`.

## Use the glossary's vocabulary

When your output names a domain concept (in an issue title, a refactor proposal, a hypothesis, a test name), use the term as defined in `CONTEXT.md`. Don't drift to synonyms the glossary explicitly avoids.

If the concept you need isn't in the glossary yet, that's a signal — either you're inventing language the project doesn't use (reconsider) or there's a real gap (note it for `/grill-with-docs`).

## Flag ADR conflicts

If your output contradicts an existing ADR, surface it explicitly rather than silently overriding:

> _Contradicts ADR-0007 (storage format) — but worth reopening because…_
diff --git a/docs/agents/issue-tracker.md b/docs/agents/issue-tracker.md
new file mode 100644
index 0000000..ceb9c37
--- /dev/null
+++ b/docs/agents/issue-tracker.md
@@ -0,0 +1,22 @@
# Issue tracker: GitHub

Issues and PRDs for this repo live as GitHub issues on the **canonical upstream**: [`toolsforexperiments/labcore`](https://github.com/toolsforexperiments/labcore). Use the `gh` CLI for all operations.

> **Important:** This clone has two remotes — `origin` (your fork) and `upstream` (`toolsforexperiments/labcore`). Issues live on `upstream`, not `origin`. **Always pass `--repo toolsforexperiments/labcore`** to `gh issue` commands so they don't default to `origin`.

## Conventions

- **Create an issue**: `gh issue create --repo toolsforexperiments/labcore --title "..." --body "..."`. Use a heredoc for multi-line bodies.
- **Read an issue**: `gh issue view <number> --repo toolsforexperiments/labcore --comments`.
- **List issues**: `gh issue list --repo toolsforexperiments/labcore --state open --json number,title,body,labels,comments --jq '[.[] | {number, title, body, labels: [.labels[].name], comments: [.comments[].body]}]'` with appropriate `--label` and `--state` filters.
- **Comment on an issue**: `gh issue comment <number> --repo toolsforexperiments/labcore --body "..."`
- **Apply / remove labels**: `gh issue edit <number> --repo toolsforexperiments/labcore --add-label "..."` / `--remove-label "..."`
- **Close**: `gh issue close <number> --repo toolsforexperiments/labcore --comment "..."`

## When a skill says "publish to the issue tracker"

Create a GitHub issue on `toolsforexperiments/labcore`.

## When a skill says "fetch the relevant ticket"

Run `gh issue view <number> --repo toolsforexperiments/labcore --comments`.
diff --git a/docs/agents/triage-labels.md b/docs/agents/triage-labels.md
new file mode 100644
index 0000000..b613c80
--- /dev/null
+++ b/docs/agents/triage-labels.md
@@ -0,0 +1,24 @@
# Triage Labels

The skills speak in terms of five canonical triage roles. This file maps those roles to the actual label strings used in this repo's issue tracker (`toolsforexperiments/labcore` on GitHub).

| Label in mattpocock/skills | Label in our tracker | Meaning                                  |
| -------------------------- | -------------------- | ---------------------------------------- |
| `needs-triage`             | `needs-triage`       | Maintainer needs to evaluate this issue  |
| `needs-info`               | `needs-info`         | Waiting on reporter for more information |
| `ready-for-agent`          | `ready-for-agent`    | Fully specified, ready for an AFK agent  |
| `ready-for-human`          | `ready-for-human`    | Requires human implementation            |
| `wontfix`                  | `wontfix`            | Will not be actioned                     |

When a skill mentions a role (e.g. "apply the AFK-ready triage label"), use the corresponding label string from this table.

Of these, only `wontfix` currently exists on `toolsforexperiments/labcore`. The other four will be created on the upstream the first time the `triage` skill applies them.
Create them ahead of time with: + +```bash +gh label create needs-triage --repo toolsforexperiments/labcore --description "Maintainer needs to evaluate this issue" +gh label create needs-info --repo toolsforexperiments/labcore --description "Waiting on reporter for more information" +gh label create ready-for-agent --repo toolsforexperiments/labcore --description "Fully specified, ready for an AFK agent" +gh label create ready-for-human --repo toolsforexperiments/labcore --description "Requires human implementation" +``` + +Edit the right-hand column of the table above if you ever decide to remap to existing labels (e.g. reuse `question` as `needs-info`). diff --git a/docs/api/index.md b/docs/api/index.md index 03b86e7..1144046 100644 --- a/docs/api/index.md +++ b/docs/api/index.md @@ -10,5 +10,6 @@ Complete API documentation for Labcore, generated from docstrings. labcore.data labcore.measurement labcore.analysis + labcore.protocols labcore.utils ``` \ No newline at end of file diff --git a/docs/conf.py b/docs/conf.py index aa4a7ab..377a14d 100644 --- a/docs/conf.py +++ b/docs/conf.py @@ -13,8 +13,8 @@ # https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information project = 'Labcore' -copyright = '2025-2026, Marcos Frenkel, Wolfgang Pfaff, Cynthia Nolan, Oliver Wolff' -author = 'Marcos Frenkel, Wolfgang Pfaff, Cynthia Nolan, Oliver Wolff' +copyright = '2025-2026, Tools for Experiments' +author = 'Tools for Experiments' # -- General configuration --------------------------------------------------- # https://www.sphinx-doc.org/en/master/usage/configuration.html#general-configuration diff --git a/docs/user_guide/index.md b/docs/user_guide/index.md index f94d747..f3e5c9e 100644 --- a/docs/user_guide/index.md +++ b/docs/user_guide/index.md @@ -5,5 +5,6 @@ This user guide is organized by different topics, each having their own guides. 
```{toctree}
measurement/index
data/index
+protocols/index
instruments/index
```
\ No newline at end of file
diff --git a/docs/user_guide/protocols/index.md b/docs/user_guide/protocols/index.md
new file mode 100644
index 0000000..4181a61
--- /dev/null
+++ b/docs/user_guide/protocols/index.md
@@ -0,0 +1,114 @@
# Protocols

A **protocol** ties several experiments together to achieve a complete goal that no
single one of them can — *calibrating a qubit*, rather than just finding
its frequency. Each experiment is wrapped as an **operation**: a
self-contained unit that measures, analyzes, and defines for itself what
counts as success (usually by nailing down some number), and that can
propose corrections when it fails.
The protocol runs its operations in sequence (more complex protocols can have more complex execution flows),
lets each one retry itself with adjusted settings if needed, and records the whole run as a self-contained HTML report.
The result is the calibrated system, with a report that shows how you got there.

A protocol is built out of three concepts, one per sub-page:

- {doc}`parameters` — the named handles operations read from and write to
- {doc}`operations` — a single experiment, including its checks and corrections
- {doc}`protocols` — composing operations into a runnable protocol

## How protocols are organized

Every protocol is a tree of branches and operations.

```
Protocol
└── Branch                     a named sequence of items
    ├── Operation              a single measurement step
    │   ├── Parameters         named handles for inputs and outputs
    │   ├── Checks             pure assessments after analysis
    │   └── Corrections        strategies applied between retries
    └── Condition (optional)   routes execution to one of two branches
```

The simplest shape — and the one most protocols use — is a single root
branch with a flat list of operations. See {doc}`protocols` for
super-operations, conditions, and the assembled report.
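The structure above can be sketched in miniature with plain Python. Everything below is an illustrative stand-in, not labcore's real API: `ToyProtocol`, `ToyBranch`, and `ToyOperation` play the roles of `ProtocolBase`, `BranchBase`, and `ProtocolOperation`, and the attempt ceiling mirrors the framework's `DEFAULT_MAX_ATTEMPTS` backstop.

```python
# Toy sketch of the protocol tree and retry loop. Not labcore code.
from enum import Enum, auto


class Status(Enum):
    SUCCESS = auto()
    RETRY = auto()
    FAILURE = auto()


class ToyOperation:
    """One measurement step; succeeds after a fixed number of retries."""

    def __init__(self, name: str, fails_before_success: int = 0):
        self.name = name
        self._failures_left = fails_before_success

    def execute(self) -> Status:
        # Stands in for measure -> load_data -> analyze -> evaluate -> correct.
        if self._failures_left > 0:
            self._failures_left -= 1
            return Status.RETRY
        return Status.SUCCESS


class ToyBranch:
    """A named, ordered sequence of operations."""

    def __init__(self, name: str, operations: list):
        self.name = name
        self.operations = operations


class ToyProtocol:
    """Walks the root branch, letting each operation retry until it settles."""

    MAX_ATTEMPTS = 100  # hard ceiling, mirroring DEFAULT_MAX_ATTEMPTS

    def __init__(self, root_branch: ToyBranch):
        self.root_branch = root_branch

    def execute(self) -> bool:
        for op in self.root_branch.operations:
            for _ in range(self.MAX_ATTEMPTS):
                status = op.execute()
                if status is not Status.RETRY:
                    break
            if status is Status.FAILURE:
                return False
        return True


protocol = ToyProtocol(ToyBranch("root", [
    ToyOperation("resonator_spec"),
    ToyOperation("power_rabi", fails_before_success=2),
]))
assert protocol.execute()  # power_rabi retries twice, then succeeds
```

The real protocol layer adds conditions, reporting, and failure escalation on top of this skeleton, but the walk-and-retry shape is the same.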
+ +## The lifecycle of an operation + +Every operation runs the same five steps in order, on every attempt: + +``` + ◀── platform-specific ──▶ ◀───── platform-agnostic ──────▶ + + measure ──▶ load_data ──▶ analyze ──▶ evaluate ──▶ correct + │ │ │ │ │ + write pull and compute check parameter + hardware normalize (fitting, results writes; + / save shape and statistics) (pure apply any + raw data names assessment) correction + across + platforms +``` + +- `measure` — performs the measurement (or generates fake data on `DUMMY`) and saves the raw data to disk. +- `load_data` — reads the raw data back into memory and normalizes its shape and field names so the rest of the lifecycle is platform-agnostic. +- `analyze` — runs fits and statistics over the loaded data and attaches the results to the operation. +- `evaluate` — returns named check results and an overall status; pure assessment, no side effects. +- `correct` — the only place parameters get written: fitted outputs on success, a correction strategy on retry. + +See {doc}`operations` for how each step is implemented and customized. + +## Run a protocol in 10 lines + +```python +from labcore.protocols import select_platform, ProtocolBase, BranchBase +from labcore.testing.protocol_dummy.gaussian_with_correction import ( + GaussianWithCorrectionOperation, +) + +select_platform("DUMMY") + +class HelloProtocol(ProtocolBase): + def __init__(self): + super().__init__() + self.root_branch = BranchBase("hello") + self.root_branch.extend([GaussianWithCorrectionOperation()]) + +HelloProtocol().execute() +``` + +This protocol has one operation. The operation runs a noisy Gaussian fit +and assesses its own signal-to-noise ratio. The first attempt fails, a +**correction** fires that lowers the simulated noise level, and the operation +retries. After two corrections the SNR check passes, the fit succeeds, and +the protocol writes an HTML report to the current directory. 
+ +A few things to notice: + +- {py:func}`select_platform ` is required + before any protocol can be instantiated. It tells parameters and operations + which hardware backend to dispatch to. `"DUMMY"` is the in-memory backend + used for testing. +- The protocol is just a class with a `root_branch`. The branch holds a + flat list of operations. +- The correction strategy lives **inside** the operation. The protocol does + not know or care that this particular operation retries itself. + +:::{note} +At the moment, protocols only support the `DUMMY`, `QICK`, and `OPX` +platforms. Adding a new platform is a small change — if you need one, +please [open an issue on GitHub](https://github.com/toolsforexperiments/labcore/issues). +::: + +## Where to read next + +Read in order: {doc}`parameters` → {doc}`operations` → {doc}`protocols`. +Each page builds on the previous one. + +```{toctree} +:hidden: + +parameters +operations +protocols +``` diff --git a/docs/user_guide/protocols/operations.md b/docs/user_guide/protocols/operations.md new file mode 100644 index 0000000..604fbf3 --- /dev/null +++ b/docs/user_guide/protocols/operations.md @@ -0,0 +1,479 @@ +# Operations + +An **operation** is a single measurement step inside a protocol — a +resonator spectroscopy, a Rabi calibration, a T1 fit. Every operation +follows the same five-step lifecycle on every attempt and shares the same +hooks for declaring inputs and outputs, assessing results, and reacting to +failure. Most of writing a custom operation is filling in a handful of +methods on a subclass of +{py:class}`ProtocolOperation `. + +This page assumes you have read {doc}`parameters`. 
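Before walking through those steps one by one, here is the overall shape of such a subclass. This is a hypothetical skeleton with stubbed bodies; the stub base class stands in for `ProtocolOperation` only so the outline is self-contained, and the method names are the ones this page covers.

```python
# Hypothetical skeleton of a custom operation. Bodies are stubbed out.
from pathlib import Path


class StubProtocolOperation:
    """Stand-in for ProtocolOperation so this skeleton runs on its own."""

    def __init__(self):
        self.independents: dict = {}
        self.dependents: dict = {}


class ResonatorSpectroscopy(StubProtocolOperation):  # hypothetical operation
    def __init__(self):
        super().__init__()
        # 1. declare parameters: self._register_inputs(...),
        #    self._register_outputs(...), self._register_correction_params(...)
        # 2. register checks and their corrections: self._register_check(...)
        # 3. register success updates: self._register_success_update(...)

    def _measure_dummy(self) -> Path:     # platform-specific: run and save
        ...

    def _load_data_dummy(self) -> None:   # platform-specific: normalize shape
        ...

    def analyze(self) -> None:            # platform-agnostic: fits, statistics
        ...

    # evaluate() and correct() usually stay inherited: the defaults run the
    # registered checks and apply the registered corrections and updates.


op = ResonatorSpectroscopy()
```

Each of the sections below fills in one part of this outline.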
+ +## The lifecycle of an operation + +``` + ◀── platform-specific ──▶ ◀───── platform-agnostic ──────▶ + + measure ──▶ load_data ──▶ analyze ──▶ evaluate ──▶ correct + │ │ │ │ │ + write pull and compute check parameter + hardware normalize (fitting, results writes; + / save shape and statistics) (pure apply any + raw data names assessment) correction + across + platforms +``` + +The split between platform-specific and platform-agnostic steps is +deliberate: `analyze`, `evaluate`, and `correct` should run identically no +matter which backend produced the data. Whatever per-platform quirks exist +in field names, units, or array shapes have to be reconciled by +`load_data` so that everything downstream sees a single canonical shape. + +- **`measure`** performs the measurement (or generates fake data on + `DUMMY`) and saves it to disk via the standard sweep + DDH5 machinery. + Dispatches to `_measure_dummy` / `_measure_qick` / `_measure_opx`. + Returns the path the data was written to. +- **`load_data`** reads that path back into memory and **normalizes the + data so that downstream steps see the same shape and variable names + regardless of platform**. Different backends can save data with + different field names or slightly different shapes; reconciling those + differences here is what lets `analyze` be platform-agnostic. Stores + the result on the operation as `independents` and `dependents` + dictionaries. Dispatches to `_load_data_dummy` / `_load_data_qick` / + `_load_data_opx`. +- **`analyze`** is platform-agnostic. Run your fits, compute summary + statistics, attach results to `self`. Do **not** mutate parameters here. +- **`evaluate`** is **pure assessment**. It returns named check results + and an overall status (`SUCCESS` / `RETRY` / `FAILURE`). No side + effects. By default this just runs every check registered with + `_register_check`. +- **`correct`** is the **only** place an operation modifies parameters. 
+ On `SUCCESS` it writes any computed outputs back. On `RETRY` it applies + a correction strategy for the failed check. On `FAILURE` it usually + does nothing — the operation has already given up. + +### Running an operation on its own + +While developing a new operation it is often easier to exercise it +standalone than to wrap it in a +{py:class}`ProtocolBase ` subclass. +Every operation has its own `execute()` that runs the full lifecycle once +and returns the +{py:class}`EvaluateResult `: + +```python +from labcore.protocols import select_platform + +select_platform("DUMMY") + +op = MyOperation() +result = op.execute() + +result.status # SUCCESS / RETRY / FAILURE for this attempt +result.checks # CheckResult list from evaluate() +op.report_output # markdown strings and figure paths the operation produced +op.figure_paths # figures attached during analyze +op.improvements # ParamImprovements from registered success updates +``` + +A few things to keep in mind: + +- `op.execute()` runs **one attempt**. The retry-on-`RETRY` loop lives in + the protocol layer — to exercise corrections end-to-end you either call + `op.execute()` again while `result.status == OperationStatus.RETRY`, or + wrap the operation in a small one-operation protocol like the runnable + example at the bottom of this page. +- The HTML report is **not** assembled — that happens only inside + `ProtocolBase.execute()`. For development you typically just inspect + `result.status` and `op.report_output` directly. +- {py:func}`select_platform ` still + has to be called first, exactly as it does for a protocol. 
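The retry loop the protocol layer normally runs can be written out by hand. A minimal sketch: the toy operation below is a stand-in so the loop runs standalone; with labcore you would drive a real operation's `execute()` and compare against the real `OperationStatus`.

```python
# Hand-driven retry loop. OperationStatus/Result/NoisyOperation are toy
# stand-ins for the labcore types of the same role.
from enum import Enum, auto


class OperationStatus(Enum):
    SUCCESS = auto()
    RETRY = auto()
    FAILURE = auto()


class Result:
    """Minimal stand-in for EvaluateResult: just carries a status."""

    def __init__(self, status):
        self.status = status


class NoisyOperation:
    """Toy operation: fails its check twice, then passes."""

    def __init__(self):
        self._attempts = 0

    def execute(self) -> Result:
        self._attempts += 1
        if self._attempts <= 2:
            return Result(OperationStatus.RETRY)  # correction fired, retry
        return Result(OperationStatus.SUCCESS)


op = NoisyOperation()
result = op.execute()
while result.status is OperationStatus.RETRY:  # the protocol layer's job,
    result = op.execute()                      # done by hand here

assert result.status is OperationStatus.SUCCESS
assert op._attempts == 3  # two corrected retries plus one success
```

In a real development session you would also cap the loop (as the protocol layer does) so an exhausted correction cannot spin forever.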
+ +## Registering inputs, outputs, and platform code + +Operations declare their inputs and outputs with three registration calls +inside `__init__`: + +```python +self._register_inputs( + center=GaussianCenter(params), + sigma=GaussianSigma(params), + offset=GaussianOffset(params), +) +self._register_outputs(amplitude=GaussianAmplitude(params)) +self._register_correction_params( + noise_reduction_factor=GaussianNoiseReductionFactor(params), +) +``` + +Each call does two things: it stores the parameter in a dictionary +(`input_params`, `output_params`, `correction_params`) and exposes it as +an attribute on the operation. After the calls above, `self.center()`, +`self.amplitude()`, and `self.noise_reduction_factor()` all work. Inputs +get verified before the protocol runs; outputs are written by `correct()` +on success; correction parameters skip the hardware verification check. + +Platform-specific work — measurement and data loading — is split exactly +the way parameter getters and setters are: + +```python +def _measure_dummy(self) -> Path: + # generate fake data and run a sweep into a DDH5 file + ... + +def _measure_qick(self) -> Path: + # write QICK pulse sequence, run, save + ... + +def _load_data_dummy(self) -> None: + data = datadict_from_hdf5(self.data_loc / "data.ddh5") + self.independents["x_values"] = data["x"]["values"] + self.dependents["y_values"] = data["y"]["values"] +``` + +The base class's `measure()` and `load_data()` dispatch to the right +method based on the platform selected with +{py:func}`select_platform `. You only +implement the platforms you actually run on; the others raise +`NotImplementedError` if invoked. 
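The dispatch described above can be sketched as follows. This is a hedged illustration, not labcore's actual implementation: a stub base class resolves `_measure_<platform>` by name and raises `NotImplementedError` when the subclass left that platform unimplemented.

```python
# Illustrative platform dispatch. StubOperation stands in for the base class;
# the real framework selects the platform via select_platform().
_PLATFORM = "dummy"


class StubOperation:
    """Stand-in base class: dispatches measure() to _measure_<platform>."""

    def measure(self):
        method = getattr(self, f"_measure_{_PLATFORM}", None)
        if method is None:
            raise NotImplementedError(
                f"{type(self).__name__} has no implementation for {_PLATFORM!r}"
            )
        return method()


class DummyOnlyOperation(StubOperation):
    """Implements only the DUMMY platform; QICK/OPX would raise."""

    def _measure_dummy(self):
        return "fake-data.ddh5"


op = DummyOnlyOperation()
assert op.measure() == "fake-data.ddh5"

_PLATFORM = "qick"
try:
    op.measure()
except NotImplementedError:
    pass  # unimplemented platform fails loudly, as in the real framework
_PLATFORM = "dummy"
```

The same name-based split applies to `load_data()` and to parameter getters and setters.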
+ +:::{note} +The leading underscore on methods like `_register_inputs`, +`_register_check`, `_measure_dummy`, and `_load_data_dummy` is the Python +convention for *"internal — don't call from outside the class."* It is a +signal to whoever is **using** an operation: instantiate it, hand it to a +protocol, and let the framework call these for you. Whoever is **writing** +an operation absolutely does use them — in `__init__` and in overrides. +The same convention applies everywhere on this page (`_register_outputs`, +`_register_correction_params`, `_register_check`, +`_register_success_update`, `_measure_*`, `_load_data_*`, …). +::: + +## Correcting itself + +### Checks: assessing the result + +A **check** is a pure function that returns a +{py:class}`CheckResult ` — a name, a +boolean `passed`, and a one-line description that ends up in the report: + +```python +def _check_snr(self) -> CheckResult: + return CheckResult( + name="snr", + passed=self.snr >= self.SNR_THRESHOLD, + description=f"SNR={self.snr:.2f}, threshold={self.SNR_THRESHOLD}", + ) +``` + +Register the check inside `__init__`: + +```python +self._register_check( + name="snr", + check_func=self._check_snr, + correction=self._noise_reduction, +) +``` + +The `correction` argument is the strategy to apply when this specific check +fails — covered next. Pass `None` if there is no correction (the operation +fails immediately when this check fails) or a list to declare a fallback +chain. + +The default {py:meth}`evaluate ` +runs every registered check and returns `SUCCESS` if all pass, `RETRY` if +any fail. You only need to override `evaluate` for non-trivial logic that +cannot be expressed as a simple AND of independent checks. + +### Corrections: doing something between retries + +A **correction** object represents a strategy applied between retries when a specific +check fails. 
It is a subclass of
{py:class}`Correction `:

```python
from labcore.protocols import Correction


class _ReduceNoiseLevelCorrection(Correction):
    name = "reduce_noise_level"
    description = "Divide measurement noise std by the noise_reduction_factor parameter"

    def __init__(self, operation, max_applications: int = 3):
        self.operation = operation
        self.max_applications = max_applications
        self._applications = 0

    def can_apply(self) -> bool:
        return self._applications < self.max_applications

    def apply(self) -> None:
        factor = self.operation.noise_reduction_factor()
        self.operation._noise_std /= factor
        self._applications += 1
```

A correction has four pieces:

- **Class-level metadata** — `name` and `description`. Both end up in the
  protocol's report and identify the strategy.
- **`__init__`** — usually takes a reference to the operation (so the
  correction can read or write its parameters), any configuration values
  it needs (a maximum number of applications, a list of frequency windows
  to scan, etc.), and any internal state used to track progress (a
  counter, an index, …).
- **`can_apply() -> bool`** — defines the **fail state** for the
  correction strategy. This is the mechanism that guarantees an operation
  does not retry forever: when `can_apply()` returns `False`, the default
  `correct()` escalates the operation to `FAILURE` and the protocol moves
  on (or stops). Every correction **must** have a meaningful exit
  condition encoded here — a counter, an end-of-list check, an
  out-of-range guard, anything that bounds the work. A correction that
  always returns `True` will keep an operation retrying until the
  protocol's hard ceiling on attempts (`DEFAULT_MAX_ATTEMPTS = 100`)
  finally stops it, which is a backstop, not a design.
- **`apply() -> None`** — performs the correction. Called between
  attempts, before the next `measure` runs.
This is where the actual + mutation happens — write a hardware parameter, advance an internal + pointer, increase an averaging count, etc. + +A subtle but important constraint: **the correction is one instance per +operation**, created in the operation's `__init__` and reused across every +retry. That is what lets stateful strategies work — `_applications` in +the example above counts across attempts. If a fresh correction were +built per retry, the counter would always be zero and `can_apply()` could +never return `False`. + +The mapping between a check and its correction is set up at registration: + +```python +self._noise_reduction = _ReduceNoiseLevelCorrection(self, max_applications=3) +self._register_check("snr", self._check_snr, correction=self._noise_reduction) +``` + +#### Fallback chains + +`correction` accepts a list. The default `correct()` walks the list in +order and uses the first one whose `can_apply()` returns `True`. This is +how to express "first try a frequency-window scan; if that runs out, fall +back to a wide sweep": + +```python +self._register_check( + "peak_exists", + self._check_peak, + correction=[self._frequency_sweep, self._wide_sweep_fallback], +) +``` + +If every correction in the chain reports exhausted, the operation moves to +`FAILURE`. + +## Writing back on success + +Most operations need to write a fitted output back to a parameter when the +checks all pass. Register a *success update* in `__init__`: + +```python +self._register_success_update( + param=self.amplitude, + value_func=lambda: self.fit_result.params["A"].value, +) +``` + +`value_func` is called lazily — at `correct()` time — so it can safely +reference attributes that were only set during `analyze` (like +`self.fit_result`). On every successful run the default `correct()` calls +each registered `value_func`, writes the result to the matching parameter, +records a {py:class}`ParamImprovement `, +and appends a "*old → new*" line to the report. 
Multiple success updates
are applied in registration order.

If your only success-time work is writing a value back, that is all you
need. You do not have to override `correct()` at all.

### When to override `correct()`

Override `correct()` when you want to do something the registration API
cannot express — usually custom report messages or work that depends on
cross-check state. **Always call `super().correct(result)` first** so the
default check table, correction routing, and registered success updates
still run:

```python
def correct(self, result: EvaluateResult) -> EvaluateResult:
    result = super().correct(result)
    if result.status == OperationStatus.SUCCESS:
        # fit_result was attached during analyze(); the registered success
        # update has already written it back by the time we get here
        new_amplitude = self.fit_result.params["A"].value
        self.report_output.append(
            f"Fit **SUCCESSFUL** (SNR={self.snr:.3f}). "
            f"{self.amplitude.name} updated to {new_amplitude:.3f}\n"
        )
    return result
```

The base implementation also escalates `RETRY` to `FAILURE` when a
correction is exhausted, so the returned `result.status` may differ from
the input status — always inspect the returned value, not the original.

## Adding to the report from an operation

Each operation accumulates a list of report fragments in
`self.report_output`. The protocol's final HTML report concatenates these
in order, embedding figure paths as base64 images.

You can append two kinds of items:

- **Markdown strings**, formatted with backticks, bold, lists, and so on.
  These are rendered as-is.
- **`pathlib.Path` objects** pointing at image files (typically the
  `figure_paths` accumulated during `analyze`). These are read and
  embedded as data URIs so the final report HTML stands on its own.

Most of the time you will not have to touch this directly:

- The default `correct()` already appends a check-results table on every
  attempt and a parameter-improvement line for each registered success
  update.
- Whatever figure paths you append to `self.figure_paths` during
  `analyze` get attached to the report by the default check-table block.
+ +You only need to write to `self.report_output` for messages the framework +does not produce on its own — for example, a one-line summary of the SNR +result tailored to your operation. The pattern in +`GaussianWithCorrectionOperation.correct()` (linked at the bottom) +is the simple case. + +For a richer real-world example, see `T1Operation.correct()` in +[`CQEDToolbox/.../single_qubit/t1.py`](https://github.com/toolsforexperiments/CQEDToolbox/blob/main/src/cqedtoolbox/protocols/operations/single_qubit/t1.py#L421-L448). +It builds a full per-attempt section: a Markdown header with the data +path and SNR threshold, then one sub-section per fit component +(real / imaginary / magnitude) with the corresponding figure embedded +inline and the lmfit fit report dumped in a code block — all written by +`append`-ing strings and `Path`s to `self.report_output` before calling +`super().correct(result)` to attach the check table. + +## Putting it all together + +Here is a complete, runnable operation that uses every concept introduced +above — a registered output, a registered check, a registered success +update, platform-specific `measure` and `load_data`, and a +platform-agnostic `analyze`. 
Copy it into a script, run it, and the +protocol will execute end-to-end on the `DUMMY` platform: + +```python +from pathlib import Path + +import matplotlib.pyplot as plt +import numpy as np + +from labcore.analysis import DatasetAnalysis +from labcore.analysis.fitfuncs.generic import Gaussian +from labcore.data.datadict_storage import datadict_from_hdf5 +from labcore.measurement.record import dependent, independent, recording +from labcore.measurement.storage import run_and_save_sweep +from labcore.measurement.sweep import Sweep +from labcore.protocols import ( + BranchBase, CheckResult, ProtocolBase, ProtocolOperation, select_platform, +) +from labcore.testing.protocol_dummy.parameters import GaussianAmplitude + +plt.switch_backend("agg") + + +class MinimalGaussianFit(ProtocolOperation): + SNR_THRESHOLD = 2.0 + + def __init__(self, params=None): + super().__init__() + self.amplitude: GaussianAmplitude + self._register_outputs(amplitude=GaussianAmplitude(params)) + + self._register_check("snr", self._check_snr, correction=None) + self._register_success_update( + param=self.amplitude, + value_func=lambda: self.fit_result.params["A"].value, + ) + + self.fit_result = None + self.snr = None + + def _measure_dummy(self) -> Path: + x = np.linspace(-10, 10, 100) + + @recording(independent("x"), dependent("y")) + def measure(xv): + y_clean = 10.0 * np.exp(-((xv - 0.5) ** 2) / 8.0) + return xv, y_clean + np.random.normal(0, 0.3) + + loc, _ = run_and_save_sweep(Sweep(x, measure), "data", self.name) + return Path(loc) + + def _load_data_dummy(self) -> None: + data = datadict_from_hdf5(self.data_loc / "data.ddh5") + self.independents["x_values"] = data["x"]["values"] + self.dependents["y_values"] = data["y"]["values"] + + def analyze(self) -> None: + with DatasetAnalysis(self.data_loc, self.name) as ds: + x = np.asarray(self.independents["x_values"]) + y = np.asarray(self.dependents["y_values"]) + self.fit_result = Gaussian(x, y).run() + residuals = y - 
self.fit_result.eval() + amp = self.fit_result.params["A"].value + self.snr = float(np.abs(amp / (4 * np.std(residuals)))) + ds.add(snr=self.snr) + + def _check_snr(self) -> CheckResult: + return CheckResult( + name="snr", + passed=self.snr >= self.SNR_THRESHOLD, + description=f"SNR={self.snr:.2f}, threshold={self.SNR_THRESHOLD}", + ) + + +class MinimalProtocol(ProtocolBase): + def __init__(self): + super().__init__() + self.root_branch = BranchBase("minimal") + self.root_branch.extend([MinimalGaussianFit()]) + + +select_platform("DUMMY") +MinimalProtocol().execute() +``` + +Two things to notice: + +- `evaluate` and `correct` are not overridden. The base class runs every + registered check, marks the operation `RETRY` if any fail, and on + `SUCCESS` calls each registered `value_func` and writes the result to + the corresponding parameter — exactly what we want for an operation + this simple. +- No correction is registered, so any failed check immediately fails the + operation. The next step up is a stateful correction strategy. + +For an operation that adds corrections and overrides `correct()` for a +tailored report, see +{py:class}`GaussianWithCorrectionOperation ` +— full source at +[`src/labcore/testing/protocol_dummy/gaussian_with_correction.py`](https://github.com/toolsforexperiments/labcore/blob/main/src/labcore/testing/protocol_dummy/gaussian_with_correction.py). +That file maps onto the sections of this page like so: + +| Section above | Where it appears | +|---|---| +| Registering inputs / outputs / correction params | top of `__init__` | +| Registering a check + correction | `_register_check` call in `__init__` | +| Correction subclass | `_ReduceNoiseLevelCorrection` | +| Platform code | `_measure_dummy`, `_load_data_dummy` | +| Analyze | `analyze()` | +| Override of `correct()` | bottom of the class | + +## Where to read next + +{doc}`protocols` — wrapping operations into a +{py:class}`ProtocolBase ` and running +them. 
diff --git a/docs/user_guide/protocols/parameters.md b/docs/user_guide/protocols/parameters.md
new file mode 100644
index 0000000..401b1c1
--- /dev/null
+++ b/docs/user_guide/protocols/parameters.md
@@ -0,0 +1,220 @@
# Parameters

A **parameter** is a named handle that an operation reads from or writes to.
On the surface it looks like a single getter/setter pair:

```python
qubit_frequency()       # read
qubit_frequency(5.2e9)  # write
```

Underneath, it's an abstraction layer that solves two problems an operation
should not have to think about: where the value lives between Python
processes (for example, running a protocol first in a notebook and later in
a script), and how each hardware platform actually programs it.

## Why parameters?

### Persistence across processes

Lab work runs in many processes — a notebook for ad-hoc operations, a
script for a full protocol, a dashboard for live monitoring. They all need
to see the same parameter values. Parameters do not store values in
themselves; they hold a `params` proxy to whatever persistence backend the
user wants. The common choice today is the parameter manager from
[`instrumentserver`](https://toolsforexperiments.github.io/instrumentserver/first_steps/overview.html#parameter-manager),
but a config file or any other store works equally well — the labcore-side
API does not change.

### Hardware translation

Different platforms speak different languages. A QICK FPGA can program a
qubit frequency in GHz directly. An OPX has to split the same value into an
intermediate frequency and a local-oscillator frequency, then mix them.
Each platform-specific getter/setter on the parameter holds whatever
conversion logic that platform needs. Operations never see this — they
just call `qubit_frequency()` and get the actual frequency back.
+ +## The shape of a parameter + +A parameter is a {py:class}`dataclass ` subclass of +{py:class}`ProtocolParameterBase ` +with three fields and one platform-specific getter/setter pair per backend: + +| Field | What it is | +|---|------------------------------------------------------------------------------------------------------------------------------------| +| `name` | The parameter's display name. Used in reports and logs. | +| `description` | Plain-English description of the value. | +| `params` | The hardware/persistence handle. `None` on `DUMMY`; on real hardware it's typically an `instrumentserver` parameter-manager proxy. | + +The class implements `_dummy_getter` / `_dummy_setter`, +`_qick_getter` / `_qick_setter`, and `_opx_getter` / `_opx_setter`. The +right pair is dispatched inside `__call__` based on which platform was +selected with +{py:func}`select_platform `. + +## Writing a parameter + +Suppose your toolbox stores qubit frequencies in an `instrumentserver` +parameter manager exposed as `params.qubit.f()`. Here is what a +`QubitFrequency` parameter looks like: + +```python +from dataclasses import dataclass, field +from labcore.protocols import ProtocolParameterBase + + +@dataclass +class QubitFrequency(ProtocolParameterBase): + name: str = field(default="qubit_frequency", init=False) + description: str = field( + default="Intermediate frequency of the qubit", init=False, + ) + + def _dummy_getter(self): + return self.params.qubit.f() + + def _dummy_setter(self, value): + self.params.qubit.f(value) + + def _qick_getter(self): + return self.params.qubit.freq() + + def _qick_setter(self, value): + self.params.qubit.freq(value) +``` + +The `name` and `description` fields are declared with `init=False` so the +caller does not have to repeat them — every `QubitFrequency` instance has +the same identity. 
Only `params` (the hardware handle) is supplied at +construction time: + +```python +from labcore.protocols import select_platform + +select_platform("QICK") +freq = QubitFrequency(params=my_instrument_server_proxy) + +freq() # → 5.2e9 (reads via _qick_getter) +freq(5.21e9) # writes via _qick_setter +``` + +:::{note} +This example writes the same value through both `DUMMY` and `QICK` because the QICK +takes a frequency in GHz directly. An OPX getter/setter would do more work: +it would split the requested frequency into IF + LO, write the LO to the +microwave source, and write the IF to the OPX channel. That conversion is +exactly the kind of platform-specific logic the parameter abstraction is +there to hold. +::: + +## You only implement the platforms you use + +The base class raises `NotImplementedError` for every platform, so a +parameter only needs to implement the platforms it will actually run on. A +parameter can support `DUMMY` and `QICK` only; or `QICK` only; or even +`DUMMY` only for things that have no hardware analogue (a pure +configuration knob, say). Calling a parameter under an unimplemented +platform raises immediately, which surfaces missing support fast rather +than silently falling through. + +This is the common pattern in real toolboxes — see for example +`SaturationSpecDriveGain` in `CQEDToolbox`, which is QICK-only. + +## Reusing a parameter across operations + +A parameter class is defined once and instantiated wherever it is needed. +The same `QubitFrequency` shows up as an input to a spectroscopy operation +and an output of a Rabi calibration: + +```python +class ResonatorSpectroscopy(ProtocolOperation): + def __init__(self, params): + super().__init__() + self._register_inputs(qubit_frequency=QubitFrequency(params)) + # ... + +class PiSpectroscopy(ProtocolOperation): + def __init__(self, params): + super().__init__() + self._register_outputs(qubit_frequency=QubitFrequency(params)) + # ... 
```

Because both instances point at the same persistence backend through
`params`, a write performed by `PiSpectroscopy` is visible to every later
operation that reads `QubitFrequency`. See {doc}`operations` for the
`_register_inputs` / `_register_outputs` API.

## Real-world parameters: instrumentserver-backed

Real toolbox parameters are usually a little more elaborate than the
example above. The `instrumentserver` helper
{py:func}`nestedAttributeFromString `
lets the getter/setter resolve a dotted attribute path on the proxy, which
is convenient when the parameter manager organizes values under a
per-qubit subtree:

```python
from dataclasses import dataclass, field

from instrumentserver.helpers import nestedAttributeFromString
from labcore.protocols import ProtocolParameterBase


@dataclass
class QubitFrequency(ProtocolParameterBase):
    name: str = field(default="qubit_frequency", init=False)
    description: str = field(default="Intermediate frequency of the qubit", init=False)

    def _qick_getter(self):
        active_qubit = nestedAttributeFromString(self.params, "active.qubit")()
        return nestedAttributeFromString(self.params, f"{active_qubit}.qubit.freq")()

    def _qick_setter(self, value):
        active_qubit = nestedAttributeFromString(self.params, "active.qubit")()
        nestedAttributeFromString(self.params, f"{active_qubit}.qubit.freq")(value)
```

The labcore-side API has not changed — the operation still just calls
`qubit_frequency()` — but the getter now resolves an "active qubit"
indirection and looks up a per-qubit attribute path. For a full catalogue
of this style of parameter, see
[`CQEDToolbox/protocols/parameters.py`](https://github.com/toolsforexperiments/CQEDToolbox/blob/main/src/cqedtoolbox/protocols/parameters.py).
That toolbox is a working real-world example built on labcore but is not
itself documented yet.

## Correction parameters

Some parameters control a *correction strategy* rather than hardware state
— for example, a noise tolerance threshold or the number of frequency
windows to scan through.
These are declared as
{py:class}`CorrectionParameter `
subclasses instead. Apart from that, they look identical to a regular
parameter:

```python
from dataclasses import dataclass, field

from labcore.protocols import CorrectionParameter


@dataclass
class GaussianNoiseReductionFactor(CorrectionParameter):
    name: str = field(default="gaussian_noise_reduction_factor", init=False)
    description: str = field(
        default="Factor by which the measurement noise std is divided each correction step",
        init=False,
    )

    def _dummy_getter(self):
        return self._value  # in-memory storage, no hardware

    def _dummy_setter(self, v):
        self._value = v
```

Operations register correction parameters via `_register_correction_params`;
they are excluded from the protocol's pre-execution hardware-parameter
verification because there is no hardware to verify against. See
{doc}`operations` for how corrections use these parameters.

## Where to read next

{doc}`operations` — how an operation declares its parameters and runs the
five-step lifecycle.
diff --git a/docs/user_guide/protocols/protocols.md b/docs/user_guide/protocols/protocols.md
new file mode 100644
index 0000000..b4d2a29
--- /dev/null
+++ b/docs/user_guide/protocols/protocols.md
@@ -0,0 +1,252 @@
# Protocols

A **protocol** is a tree of operations and (optional) conditions executed
in sequence. The simplest shape is one root branch with a flat list of
operations — that is what most real protocols use. Branches and conditions
are there for the smaller number of cases where the flow needs to be
dynamic.

This page assumes you have read {doc}`parameters` and {doc}`operations`.
+ +## Picking a platform + +Call {py:func}`select_platform ` once +at the top of your script or notebook, before instantiating any +{py:class}`ProtocolBase `: + +```python +from labcore.protocols import select_platform + +select_platform("DUMMY") # in-memory, for tests and examples +# or +select_platform("QICK") # real RFSoC hardware +# or +select_platform("OPX") # Quantum Machines OPX +``` + +This is the global signal that tells parameters and operations which +platform-specific getter/setter to dispatch to. Instantiating a protocol +without first calling +{py:func}`select_platform ` raises +`ValueError("Please choose a platform")`. + +You only need to call this once per process. A notebook running +exploratory operations, a script running a full protocol, and a unit test +all pick their own platform at startup and stick with it. + +## A simple protocol — the flat case + +A protocol is a class that subclasses +{py:class}`ProtocolBase `, sets a +`root_branch`, and pushes operations onto it. 
Here is +[`QubitTuneup`](https://github.com/toolsforexperiments/CQEDToolbox/blob/main/src/cqedtoolbox/protocols/qubit_tuneup.py) +from `CQEDToolbox`, which is exactly the flat case: + +```python +from pathlib import Path + +from labcore.protocols.base import ProtocolBase, BranchBase +from cqedtoolbox.protocols.operations import ( + ResonatorSpectroscopy, ResonatorSpectroscopyVsGain, + SaturationSpectroscopy, PowerRabi, PiSpectroscopy, + ResonatorSpectroscopyAfterPi, ReadoutCalibration, + T1Operation, T2EOperation, T2ROperation, +) + + +class QubitTuneup(ProtocolBase): + + def __init__(self, params, report_path: Path = Path(".")): + super().__init__(report_path) + + self.root_branch = BranchBase("QubitTuneup") + self.root_branch.extend([ + ResonatorSpectroscopy(params), + ResonatorSpectroscopyVsGain(params), + SaturationSpectroscopy(params), + PowerRabi(params), + PiSpectroscopy(params), + ResonatorSpectroscopyAfterPi(params), + T1Operation(params), + T2ROperation(params), + T2EOperation(params), + ReadoutCalibration(params), + ]) +``` + +A few things worth pointing out: + +- The protocol's name is `self.__class__.__name__` by default — no need to + set it explicitly. It shows up in logs and as the title of the report. +- `params` flows down to every operation. It is the persistence handle + the parameters proxy through (typically an `instrumentserver` + parameter-manager proxy on real hardware; `None` on `DUMMY`). See + {doc}`parameters`. +- `BranchBase.extend([...])` adds a list of operations in one call; + `BranchBase.append(op)` adds them one at a time. Both return the branch + so you can chain. + +To run it: + +```python +qt = QubitTuneup(params=my_proxy, report_path=Path("./reports")) +qt.execute() +``` + +## Running and inspecting a protocol + +`execute()` walks the root branch, runs each operation through its full +lifecycle, and assembles a final HTML report. 
Three outputs are worth +checking: + +```python +qt.execute() + +qt.success # True / False / None + # None means execute() was not called +qt.executed_items # list of operations and conditions that actually ran + # (with their report_output filled in) +``` + +Before any operation runs, the protocol calls `verify_all_parameters()`, +which asks every input parameter to read from its persistence backend. If +any read raises (a missing parameter, an unset value), the protocol logs +the failure and exits with `success = False` without ever calling +`measure`. Correction parameters are skipped — there is no hardware to +verify them against. + +If a particular operation's `correct()` returns `FAILURE`, the protocol +stops at that operation, sets `success = False`, and assembles a report +that includes everything that ran up to the failure. + +## The protocol report + +At the end of `execute()`, the protocol writes a self-contained HTML +report to: + +``` +/_report/ +``` + +The report has a table of contents linking to one section per operation +or condition that ran, in execution order. Inside each section you will +find: + +- The operation's `report_output` rendered as Markdown +- Any figures the operation appended to its `figure_paths`, embedded + inline as base64 data URIs (so the file stands on its own and is + emailable) +- The check-results table the default `correct()` writes on every attempt +- Any "*old → new*" lines from registered success updates +- "ATTEMPT N" headers when an operation retried + +```{image} ../../_static/protocols/qubit_tuneup_report.png +:alt: A QubitTuneup protocol report +:align: center +``` + +:::{warning} +Re-running a protocol **overwrites** the previous report directory. Copy +or rename `/_report` before re-running if you +want to keep a prior run. 
+::: + +## Super-operations: a retry boundary around several operations + +A +{py:class}`SuperOperationBase ` +is a composite operation: a sequence of several operations that the +protocol treats as a single unit. The whole group shares one retry +boundary — if any sub-operation fails, the *super*-operation is what +retries, not the individual sub-operation. + +```python +from labcore.protocols import SuperOperationBase + +class CalibrationSuite(SuperOperationBase): + def __init__(self, params): + super().__init__() + self.operations = [ + ResonatorSpectroscopy(params), + PowerRabi(params), + PiSpectroscopy(params), + ] + + def evaluate(self) -> EvaluateResult: + # called after all sub-operations have run + # decide whether the calibration as a whole was good enough + ... +``` + +A super-operation participates in a protocol the same way a regular +operation does — push it onto a branch alongside individual operations: + +```python +self.root_branch.extend([ + CalibrationSuite(params), + T1Operation(params), +]) +``` + +Two things to keep in mind: + +- A super-operation does **not** have its own `measure` / `load_data` / + `analyze`. The sub-operations handle their own measurements; the super + only sees the aggregate when its `evaluate` and `correct` run. +- Conditions are not allowed inside a super-operation. Use a regular + branch if you need branching at that level. + +The dummy package ships +[`DummySuperOperation`](https://github.com/toolsforexperiments/labcore/blob/main/src/labcore/testing/protocol_dummy/dummy_protocol.py) +as a runnable reference. + +## Branches and conditions + +For most protocols the root branch with `extend([...])` is all you need. +Branches become useful when you need conditional routing — different +sequences of operations depending on something measured earlier in the +run. 
+ +A {py:class}`Condition ` is a node in +the branch tree that evaluates a callable at runtime and routes execution +into one of two branches: + +```python +from labcore.protocols.base import Condition, BranchBase + +high_snr_branch = BranchBase("HighSNR") +high_snr_branch.append(PiSpectroscopy(params)) + +low_snr_branch = BranchBase("LowSNR") +low_snr_branch.append(PowerRabi(params)) +low_snr_branch.append(PiSpectroscopy(params)) + +snr_check = Condition( + condition=lambda: my_snr_param() > 5.0, + true_branch=high_snr_branch, + false_branch=low_snr_branch, + name="SNR Check", +) + +self.root_branch.extend([ + ResonatorSpectroscopy(params), + snr_check, +]) +``` + +When the protocol reaches `snr_check`, it calls the lambda, picks one of +the two branches, and walks into it. The unchosen branch is *not* +executed but is still validated by `verify_all_parameters` at startup — +parameter problems in either branch surface before the run begins. + +The chosen branch's name and the condition outcome show up in the report +as their own section, so it is easy to see which path was taken. + +## Where to read next + +- {mod}`labcore.testing.protocol_dummy` is a runnable catalogue of small + example operations and the `DummySuperOperation` protocol. +- [`CQEDToolbox/protocols/`](https://github.com/toolsforexperiments/CQEDToolbox/tree/main/src/cqedtoolbox/protocols) + is the largest real-world toolbox built on labcore. It is currently + undocumented but is a good source for full-shape parameter and + operation examples. diff --git a/notes/protocol_corrections_architecture.md b/notes/protocol_corrections_architecture.md new file mode 100644 index 0000000..655428b --- /dev/null +++ b/notes/protocol_corrections_architecture.md @@ -0,0 +1,340 @@ +# Protocol Corrections Architecture + +## Background + +The protocol system (`src/labcore/protocols/base.py`) orchestrates multi-step lab +measurements. 
Each `ProtocolOperation` runs a fixed workflow:

```
measure() → load_data() → analyze() → evaluate() → correct()
```

Before this change, `evaluate()` did two things: assessed results **and** mutated
hardware parameters. The retry mechanism was blunt — just re-run the same operation
with the same settings.

## What Changed

### 1. Separated concerns across `evaluate()` and `correct()`

| Method | Responsibility |
|---|---|
| `evaluate()` | **Pure assessment.** Returns named check results + overall status. No side effects. |
| `correct()` | **Only place parameters are changed.** Applies found values on success, corrective actions on retry. |

`correct()` is always called inside `execute()` after `evaluate()`. Its return value
(an `EvaluateResult`) is what the protocol executor sees.

### 2. New types

#### `CheckResult`
```python
@dataclass
class CheckResult:
    name: str         # e.g. "snr_check", "peak_exists"
    passed: bool
    description: str  # e.g. "SNR=1.5, threshold=2.0"
```

#### `EvaluateResult`
```python
@dataclass
class EvaluateResult:
    status: OperationStatus  # SUCCESS / RETRY / FAILURE
    checks: list[CheckResult] = field(default_factory=list)  # named check outcomes
```
Return type for both `evaluate()` and `correct()`. (A bare `= []` default would
make the `dataclass` machinery raise on the mutable default, hence
`field(default_factory=list)`.)

#### `Correction`
```python
class Correction:
    name: str = ""
    description: str = ""
    triggered_by: str = ""  # name of the CheckResult that triggers this

    def can_apply(self) -> bool:
        """Return False when strategy is exhausted → correct() escalates to FAILURE."""
        return True

    def apply(self) -> None:
        """Apply the correction in-place. Called before the next retry attempt."""
        raise NotImplementedError
```

Subclass this for each corrective strategy. One **instance per operation**, created
in `__init__` and reused across retries so stateful strategies (e.g. stepping
through a frequency list) work correctly.
+ +**Example:** +```python +class FrequencySweepCorrection(Correction): + name = "scan_next_frequency_window" + description = "Step through candidate frequency windows until a peak is found" + triggered_by = "peak_exists" + + def __init__(self, freq_center_param, windows: list[float]): + self.freq_center_param = freq_center_param + self.windows = windows + self._idx = 0 + + def can_apply(self) -> bool: + return self._idx < len(self.windows) + + def apply(self) -> None: + self.freq_center_param(self.windows[self._idx]) + self._idx += 1 +``` + +#### `CorrectionParameter` +```python +class CorrectionParameter(ProtocolParameterBase): + is_correction: ClassVar[bool] = True + # Skips hardware params validation in __post_init__ + # Otherwise identical to ProtocolParameterBase — same callable interface, + # same platform-specific getter/setter pattern for unit differences. +``` + +Used for parameters that control correction strategy (window sizes, step counts, +noise tolerances) rather than actual hardware state. Subclass exactly like +`ProtocolParameterBase`. + +--- + +## Registration API + +Operations can use a registration-based path (covers most cases) or override +`evaluate()` / `correct()` directly for complex logic. 
+ +### Registering checks + +```python +# In __init__: +self._register_check( + name="snr_check", + check_func=self._check_snr, + correction=self._snr_correction, # single Correction, or list[Correction], or None +) +self._register_check( + name="peak_exists", + check_func=self._check_peak, + correction=[self._freq_correction, self._fallback_correction], # fallback chain +) +``` + +The `correction` argument accepts: +- `None` — no correction; failed check → immediate FAILURE +- A single `Correction` instance — normalized to a list of one internally +- A `list[Correction]` — tried in order on each retry; first where `can_apply()` is True is used + +**Default `evaluate()`** runs all registered checks: +- All pass → `EvaluateResult(SUCCESS, checks)` +- Any fail → `EvaluateResult(RETRY, checks)` + +**Default `correct()`**: +- Appends a check summary table to `report_output` +- On RETRY: for each failed check, finds the **first** registered `Correction` where `can_apply()` is True: + - No corrections registered → returns `EvaluateResult(FAILURE, checks)` + - All corrections exhausted → returns `EvaluateResult(FAILURE, checks)` + - Otherwise → calls `apply()`, logs the correction +- On SUCCESS: applies all registered success updates (see below) +- On FAILURE: no-op + +### Registering success updates + +```python +# In __init__: +self._register_success_update( + param=self.frequency, + value_func=lambda: self.peak_freq, # called lazily at correct() time +) +``` + +On SUCCESS, `correct()` calls each registered `value_func`, writes the result to `param`, +records a `ParamImprovement`, and appends a line to `report_output`. Multiple updates are +applied in registration order. + +`value_func` is called lazily so it can safely reference attributes set during `analyze()` +(e.g. `self.fit_result`). + +`self.improvements` is reset to `[]` at the start of each `execute()` call, so it always +reflects only the current attempt. 
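The retry routing that the default `correct()` performs can be sketched in isolation. The snippet below uses minimal stand-ins for the types described above — it is an illustrative sketch of the described behavior, not the real base-class code, and `route_retry` is an invented name:

```python
from dataclasses import dataclass


@dataclass
class CheckResult:
    name: str
    passed: bool
    description: str = ""


class Correction:
    def can_apply(self) -> bool:
        return True

    def apply(self) -> None:
        raise NotImplementedError


def route_retry(checks, registry):
    """Retry routing: for each failed check, apply the first correction
    in its registered list whose can_apply() is still True.

    registry maps check name -> list of Correction instances.
    Returns "RETRY" if every failed check found a usable correction,
    "FAILURE" as soon as one check has none left.
    """
    for check in checks:
        if check.passed:
            continue
        usable = next(
            (c for c in registry.get(check.name, []) if c.can_apply()),
            None,
        )
        if usable is None:  # none registered, or all exhausted
            return "FAILURE"
        usable.apply()      # mutate state before the next attempt
    return "RETRY"
```

With a fallback chain of two one-shot corrections, the first two retries each consume one correction and the third escalates to failure — the list-walk is what makes chains like `[self._freq_correction, self._fallback_correction]` behave as "first try A; when A is exhausted, try B".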
+ +### Registering correction parameters + +```python +# In __init__: +self._register_correction_params( + window_size=WindowSizeParam(params), + max_steps=MaxStepsParam(params), +) +``` + +Stored in `self.correction_params`. Excluded from `verify_all_parameters()` (no +hardware to check). Accessible as attributes: `self.window_size()`. + +--- + +## Complete operation pattern + +```python +class FindResonatorOperation(ProtocolOperation): + SNR_THRESHOLD = 2.0 + + def __init__(self, params=None): + super().__init__() + self._register_inputs(center=ResonatorCenter(params)) + self._register_outputs(frequency=ResonatorFrequency(params)) + + # Correction strategies — persist across retries + self._freq_sweep = FrequencySweepCorrection( + freq_center_param=self.center, + windows=[5.0e9, 5.5e9, 6.0e9, 6.5e9], + ) + self._fallback_sweep = WideSweepCorrection(self.center) + self._increase_avg = IncreaseAveragingCorrection(self.averages) + + # Register checks → corrections (list = fallback chain) + self._register_check("peak_exists", self._check_peak, + [self._freq_sweep, self._fallback_sweep]) + self._register_check("snr_check", self._check_snr, self._increase_avg) + + # On success, write the found frequency automatically + self._register_success_update(self.frequency, lambda: self.peak_freq) + + # Correction strategy parameters (platform-aware knobs) + self._register_correction_params( + window_size=FrequencyWindowSize(params), + ) + + self.peak_freq: float | None = None + self.snr: float | None = None + + # --- platform-specific measurement (implement for QICK / OPX) --- + def _measure_dummy(self) -> Path: ... + def _load_data_dummy(self) -> None: ... + + def analyze(self) -> None: + # detect peaks, compute SNR — no param mutations here + ... 
+ + # --- checks (pure assessment) --- + def _check_peak(self) -> CheckResult: + passed = self.peak_freq is not None + return CheckResult("peak_exists", passed, + f"{'peak at ' + str(self.peak_freq) if passed else 'no peak detected'}") + + def _check_snr(self) -> CheckResult: + snr = self.snr or 0.0 + passed = snr >= self.SNR_THRESHOLD + return CheckResult("snr_check", passed, + f"SNR={snr:.2f}, threshold={self.SNR_THRESHOLD}") + + # No correct() override needed — base class handles: + # RETRY → applies first applicable correction per failed check + # SUCCESS → writes self.frequency via _register_success_update + # + # Override correct() only for custom report messages or additional logic. +``` + +If extra reporting is needed on SUCCESS, override `correct()` and call `super()` first: + +```python +def correct(self, result: EvaluateResult) -> EvaluateResult: + result = super().correct(result) # check table + corrections + success updates + if result.status == OperationStatus.SUCCESS: + self.report_output.append( + f"Resonator found at {self.peak_freq:.3e} Hz (SNR={self.snr:.2f})\n" + ) + return result +``` + +--- + +## `SuperOperationBase` changes + +- Sub-operations call their own `correct()` internally (inside `execute()`). +- `SuperOperationBase.execute()` now returns `EvaluateResult`. +- `SuperOperationBase` has its own `correct()` — default is a no-op. Override for + super-level parameter changes. 
+ +--- + +## Exported symbols (`protocols/__init__.py`) + +New exports added: +- `CheckResult` +- `Correction` +- `CorrectionParameter` +- `EvaluateResult` + +--- + +## Dummy package additions + +| File | Addition | +|---|---| +| `parameters.py` | `_DummyCorrectionParameterBase(CorrectionParameter)` — in-memory correction params | +| All 6 operation files | `evaluate()` returns `EvaluateResult`; parameter updates moved to `correct()` | +| `dummy_protocol.py` | `DummySuperOperation.evaluate()` returns `EvaluateResult` | + +--- + +## `_DummyCorrectionParameterBase` pattern + +```python +@dataclass +class _DummyCorrectionParameterBase(CorrectionParameter): + def __post_init__(self): + super().__post_init__() + self._value: float = 0.0 + + def _dummy_getter(self) -> float: + return self._value + + def _dummy_setter(self, v: float) -> None: + self._value = v + +# Concrete correction parameter: +@dataclass +class ResonatorWindowSize(_DummyCorrectionParameterBase): + name: str = field(default="resonator_window_size", init=False) + description: str = field(default="Frequency search window width (Hz)", init=False) +``` + +--- + +## What is NOT yet done + +- No new `CorrectionParameter` subclasses in the dummy package (the base class is + there; concrete examples should be added alongside real operations). +- The `_assemble_report()` HTML does not yet have a dedicated "Correction + Parameters" section — check tables appear in `report_output` via the default + `correct()`, but `correction_params` values are not rendered separately. +- Dummy operations have not yet been updated to use `_register_success_update` — + they still override `correct()` manually. That update is deferred. 
+ +--- + +## Files changed + +### Initial corrections architecture +``` +src/labcore/protocols/base.py +src/labcore/protocols/__init__.py +src/labcore/testing/protocol_dummy/parameters.py +src/labcore/testing/protocol_dummy/gaussian.py +src/labcore/testing/protocol_dummy/cosine.py +src/labcore/testing/protocol_dummy/linear.py +src/labcore/testing/protocol_dummy/exponential.py +src/labcore/testing/protocol_dummy/exponential_decay.py +src/labcore/testing/protocol_dummy/exponentially_decaying_sine.py +src/labcore/testing/protocol_dummy/dummy_protocol.py +test/pytest/test_protocols.py +test/pytest/test_protocols_realistic.py +``` + +### Gap fixes (registration-based success updates + fallback corrections) +``` +src/labcore/protocols/base.py +test/pytest/test_protocols.py +``` diff --git a/notes/protocols_user_guide_plan.md b/notes/protocols_user_guide_plan.md new file mode 100644 index 0000000..89302b2 --- /dev/null +++ b/notes/protocols_user_guide_plan.md @@ -0,0 +1,339 @@ +# Plan — Protocols User Guide + +Working plan for the `protocol_guide` branch. Scope: write the user-facing +documentation for the `labcore.protocols` subsystem, including the +corrections feature that just landed in PR #105. + +## 1. Goals + +- Teach an **operation author** (a physicist writing measurement code) how to + use `labcore.protocols`: parameters, operations, the lifecycle, checks, + corrections, and how to assemble operations into a protocol. +- Give a "10 lines and it runs" first-impression that's true to the real API. +- Surface the corrections feature (the headline of PR #105) prominently — + it's tightly coupled to operations, so it lives on the operations page. +- Not in scope: framework-internals reference, contributor docs, end-user + GUI manual. + +Audience precedence: operation author > lab user running existing protocols +> framework contributor (already served by `notes/protocol_corrections_architecture.md`). + +## 2. 
File structure + +Replace the empty top-level `docs/user_guide/protocols.md` with a directory: + +``` +docs/user_guide/ +├── index.md (already in repo — update toctree) +└── protocols/ + ├── index.md (intro) + ├── parameters.md + ├── operations.md + └── building_protocols.md +``` + +Update `docs/user_guide/index.md` to point its toctree at `protocols/index` +(matches the existing `measurement/index`, `data/index`, `instruments/index` +pattern). + +Asset path for the report screenshot: `docs/_static/protocols/qubit_tuneup_report.png`. +The doc will reference it via a standard image directive; the file can be +added later without re-touching the docs. + +## 3. Code changes alongside the docs + +- **Add `select_platform()` helper** in `src/labcore/protocols/__init__.py`. + Wraps the global `PLATFORMTYPE` assignment so the public API is + `from labcore.protocols import select_platform; select_platform("DUMMY")` + instead of `proto_base.PLATFORMTYPE = ...`. Used by every snippet in the + doc. Five-minute change; removes a wart from the marketing snippet. + +No other code changes in scope. The TODO about `self.condition: str` and the +TODO about Conditions are out of scope for this branch. + +## 4. Per-page outlines + +### 4.1 `protocols/index.md` — Introduction + +Shape: snippet within the first scroll, then brief diagrams. + +``` +# Protocols + +(1 paragraph) What is a protocol? — defines it as a runnable sequence of +measurement steps, mentions QubitTuneup as a real example. 
+ +## Run a protocol in 10 lines +(snippet — see §5) + +## How protocols are organized +(brief ASCII tree diagram: Protocol → Branch → Operation → {Parameters, + Corrections}; 2–3 sentences naming each) +→ See parameters.md, building_protocols.md + +## The lifecycle of an operation +(brief ASCII lifecycle diagram: measure → load_data → analyze → evaluate → + correct; 2–3 sentences) +→ See operations.md + +## Where to read next +parameters → operations → building_protocols +``` + +### 4.2 `protocols/parameters.md` + +``` +# Parameters + +## Why parameters? +Two problems they solve: +- Persistence across processes (notebook + script + protocol runner share + the same values via a pluggable backend; instrumentserver parameter + manager is the common one, but config files / other stores work too) +- Hardware translation (QICK takes a frequency in GHz directly; OPX has to + split it into IF + LO; the parameter is where that conversion lives) + +Analysis layer never touches this — it just calls `param()`. + +## The shape of a parameter +- Dataclass subclass of `ProtocolParameterBase` +- Fields: name, description, params (hardware handle, typed Any) +- Called QCoDeS-style: `param()` / `param(value)` +- Platform dispatch in `__call__` based on PLATFORMTYPE + +## Writing a parameter +Walkthrough: write `QubitFrequency` from scratch — DUMMY + QICK only. +Use the simple `self.params.qubit.f()` style, NOT `nestedAttributeFromString`. +Sidebar: "QICK takes the GHz value directly. A future OPX getter would +split into IF + LO and mix here." This grounds the §"Why parameters?" claim. + +## You only implement the platforms you use +The base class raises `NotImplementedError` per platform. Many real +parameters support DUMMY + QICK only; some are QICK-only +(`SaturationSpecDriveGain`); flux/`ECParam`/`ELParam`/`EJParam` are DUMMY-only. + +## Reusing a parameter across operations +Short: same `QubitFrequency` wired into two different operations' +`_register_inputs(...)`. 
+ +## Real-world parameters: persistence backends +~10 lines. Show one snippet using `instrumentserver.helpers.nestedAttributeFromString` +inline so the doc is self-contained. Mention by name: +- The `instrumentserver` parameter manager as the common backend +- `CQEDToolbox/protocols/parameters.py` as a real-world catalogue (note: + CQEDToolbox is currently undocumented) + +## Correction parameters +Brief: `CorrectionParameter` subclass. Skips hardware verification. Example: +`GaussianNoiseReductionFactor`. Operations register them via +`_register_correction_params(...)`. + +## Where to read next +operations.md +``` + +### 4.3 `protocols/operations.md` + +Outline C: topical body, then a "putting it all together" appendix with the +full `gaussian_with_correction.py` inline. Single page, length is fine. + +``` +# Operations + +(1 paragraph) What an operation is; pointer back to the lifecycle diagram +on the index page. + +## The lifecycle of an operation +Reproduce the ASCII lifecycle diagram. 1–2 paragraphs per step: +- measure (writes raw data; platform-specific) +- load_data (pulls it back; platform-specific) +- analyze (computation, fitting, attaches results to self; no parameter + writes) +- evaluate (pure assessment; returns EvaluateResult with check results) +- correct (the only place parameters are written) + +## A minimal operation +A stripped-down GaussianFit (no corrections): measure + analyze + one +check, no Correction registered. Smallest thing that runs. 
+ +## Registering inputs, outputs, and platform code +- `_register_inputs(...)` / `_register_outputs(...)` / `_register_correction_params(...)` +- `_measure_dummy` / `_measure_qick` / `_measure_opx` +- `_load_data_dummy` / `_load_data_qick` / `_load_data_opx` +- Platform dispatch is the same as for parameters + +## Checks: assessing the result +- `_register_check(name, check_func, correction)` +- `CheckResult(name, passed, description)` +- Default `evaluate()`: all pass → SUCCESS, any fail → RETRY + +## Corrections: doing something between retries +- `Correction` subclass: `name`, `description`, `triggered_by`, + `can_apply()`, `apply()` +- One instance per operation, persists across retries (state lives in the + correction) +- Walk through `_ReduceNoiseLevelCorrection` from gaussian_with_correction +- Fallback chain: pass `list[Correction]`; first applicable one used + +## Writing back on success +- `_register_success_update(param=..., value_func=lambda: ...)` +- Lazy: value_func runs at correct() time +- When this is enough, you don't override correct() at all + +## When to override correct() +- Custom report messages +- Logic that doesn't fit success-update or correction +- Always call `super().correct(result)` first + +## Adding to the report from an operation +- `self.report_output.append(...)` for markdown strings and figure paths +- Default `correct()` already adds a check table on RETRY/FAILURE and + parameter-improvement lines on SUCCESS +- Show the SNR-on-success/failure pattern from `gaussian_with_correction` + +## Putting it all together +Full `gaussian_with_correction.py` inline (~150 lines). Light annotations +calling out which earlier section each piece corresponds to. + +## Where to read next +building_protocols.md +``` + +### 4.4 `protocols/building_protocols.md` + +``` +# Building Protocols + +(1 paragraph) A protocol is a tree of operations + branches. Simplest case +is one root branch with a flat list. 
Branches and conditions are there +when the flow needs to be dynamic. + +## Picking a platform +- `select_platform("DUMMY")` / `("QICK")` / `("OPX")` +- Required before instantiating any Protocol +- Top of script / notebook +- (uses the helper added alongside this doc) + +## A simple protocol — the flat case +Walkthrough of `QubitTuneup` from CQEDToolbox: +- Subclass `ProtocolBase` +- Set `self.root_branch = BranchBase("name")` +- `self.root_branch.extend([Op(params), Op(params), ...])` +- `params` flows down to every operation + +## Running and inspecting a protocol +- `protocol.execute()` — runs the tree +- `protocol.success` — True / False / None +- `verify_all_parameters()` runs before execute and bails if any + parameter is missing or invalid + +## The protocol report +- Auto-assembled HTML at the end of `execute()` +- `report_path` argument → where it lands; default cwd +- Self-contained: figures embedded as base64 data URIs (one file, mailable) +- TOC + per-operation sections + condition routing + retry attempts visible +- Show a screenshot: `docs/_static/protocols/qubit_tuneup_report.png` + (placeholder — real asset added later) +- :::{warning} + Re-running a protocol **overwrites** the previous report directory. + Copy or rename `/_report` before re-running + if you want to keep a prior run. 
+ ::: + +## Super-operations: a retry boundary around several operations +- `SuperOperationBase` — composite operation that groups N operations under + one retry boundary +- Sub-operations have their own measure/load_data/analyze; the super does + not +- Use case: a calibration suite where the full sequence should retry as a + unit +- One small worked example; mention `DummySuperOperation` as a runnable + reference and the `CalibrationSuite` from the docstring + +## Branches and conditions +- `BranchBase`: `extend([...])` for a sequence +- `Condition(condition=callable, true_branch=..., false_branch=...)` for + dynamic routing +- Show the SNR-based routing example from the Condition docstring +- (No mention of the `self.condition: str` field — it's being phased out) + +## Where to read next +- The dummy package (`labcore.testing.protocol_dummy`) — runnable catalogue +- `CQEDToolbox/protocols/` — real-world reference (currently undocumented) +``` + +## 5. The 10-line snippet (intro page) + +Shape B: one-operation protocol, mirrors `QubitTuneup`. Final form: + +```python +from labcore.protocols import select_platform, ProtocolBase, BranchBase +from labcore.testing.protocol_dummy.gaussian_with_correction import ( + GaussianWithCorrectionOperation, +) + +select_platform("DUMMY") + +class HelloProtocol(ProtocolBase): + def __init__(self): + super().__init__() + self.root_branch = BranchBase("hello") + self.root_branch.extend([GaussianWithCorrectionOperation()]) + +HelloProtocol().execute() +``` + +10 functional lines. Demonstrates: platform selection, ProtocolBase subclass, +root branch, an operation with a registered correction (which fires twice +before the SNR check passes — visible in logs / report). + +Annotation in the doc highlights: +- `select_platform` is required before any Protocol can be instantiated +- The operation contains a correction strategy → see operations.md +- The HTML report lands in cwd → see building_protocols.md + +## 6. 
Domain artifacts (already in place) + +- `CONTEXT.md` — populated with: Protocol, Operation, Parameter, Correction + Parameter, Check, Correction, Branch, Platform, Report +- `docs/adr/0001-parameters-abstract-persistence-and-hardware.md` — + records the rationale for the parameter abstraction (persistence + + hardware translation), three rejected alternatives, and consequences + +## 7. Silent omissions + +- The `self.condition: str` legacy field on `ProtocolOperation` (being + phased out per TODO at base.py:758) +- OPX getter/setter implementations (no real OPX hardware to validate + against yet) +- Framework-internals: `_RegisteredCheck`, `_RegisteredSuccessUpdate`, + `_flatten_branch_for_execution`, `_collect_all_operations_from_branch`, + `_assemble_report` internals +- The `qick_path` field on dummy parameters (looks like a leak; not user-facing) + +## 8. Implementation order + +1. **`select_platform()` helper** in `protocols/__init__.py` — first, so the + snippet works. +2. **`docs/user_guide/index.md` toctree update** — point at `protocols/index`. +3. **`docs/user_guide/protocols/index.md`** — write the intro, validate the + snippet runs end-to-end against the new helper. +4. **`docs/user_guide/protocols/parameters.md`**. +5. **`docs/user_guide/protocols/operations.md`**. +6. **`docs/user_guide/protocols/building_protocols.md`**. +7. **Delete the empty `docs/user_guide/protocols.md`** placeholder file. +8. **Local doc build** to confirm everything renders, ASCII diagrams hold up, + internal links resolve. + +## 9. Open items (handled later) + +- Screenshot of an actual report HTML at + `docs/_static/protocols/qubit_tuneup_report.png` — author runs + `QubitTuneup` once and saves a screenshot. +- Decide whether to publish CQEDToolbox docs (out of scope for this + branch). +- The `condition: str` cleanup and the `Condition` API stabilization (out + of scope). +- A `select_platform`-style change for `report_path` ergonomics (e.g. 
+ timestamped report dirs) — out of scope, but the warning admonition + in §4.4 documents the current behavior honestly. diff --git a/notes/to_records_mismatch.md b/notes/to_records_mismatch.md new file mode 100644 index 0000000..97b83ca --- /dev/null +++ b/notes/to_records_mismatch.md @@ -0,0 +1,18 @@ +# `to_records` silent mismatch behavior + +## Location +`src/labcore/data/datadict.py` — `DataDictBase.to_records()` (~line 164) + +## Issue +When fields have mismatched outer dimensions (e.g. `x=[1,2,3]`, `z=[10,20]`), +`to_records` does not raise. Instead it falls back to `nrecs=1` and wraps all +arrays in an extra outer dimension, treating everything as a single nested record. + +## Why it matters +`add_data()` calls `to_records` before `validate()`, so the mismatch is never +caught. A `ValueError` is only raised if you set `values` directly on the dict +and then call `validate()`. + +## Options +- Add an explicit length check in `to_records` and raise `ValueError` on mismatch. +- Document the behavior in the docstring so callers know to use `validate()` directly. diff --git a/src/labcore/protocols/__init__.py b/src/labcore/protocols/__init__.py index 29390b2..4285d23 100644 --- a/src/labcore/protocols/__init__.py +++ b/src/labcore/protocols/__init__.py @@ -40,3 +40,6 @@ from labcore.protocols.base import ( SuperOperationBase as SuperOperationBase, ) +from labcore.protocols.base import ( + select_platform as select_platform, +) diff --git a/src/labcore/protocols/base.py b/src/labcore/protocols/base.py index c102c19..66f5272 100644 --- a/src/labcore/protocols/base.py +++ b/src/labcore/protocols/base.py @@ -27,6 +27,25 @@ class PlatformTypes(Enum): PLATFORMTYPE: PlatformTypes | None = None +def select_platform(platform: PlatformTypes | str) -> None: + """Select the hardware platform for subsequent protocol execution. + + Must be called once before instantiating any ``ProtocolBase`` subclass. 
+ Accepts either a ``PlatformTypes`` member or its name as a string + (case-insensitive). + """ + global PLATFORMTYPE + if isinstance(platform, str): + try: + platform = PlatformTypes[platform.upper()] + except KeyError as err: + valid = ", ".join(p.name for p in PlatformTypes) + raise ValueError( + f"Unknown platform {platform!r}. Valid options: {valid}." + ) from err + PLATFORMTYPE = platform + + @dataclass class ProtocolParameterBase: """ diff --git a/src/labcore/testing/protocol_dummy/gaussian_with_correction.py b/src/labcore/testing/protocol_dummy/gaussian_with_correction.py new file mode 100644 index 0000000..53a7bd0 --- /dev/null +++ b/src/labcore/testing/protocol_dummy/gaussian_with_correction.py @@ -0,0 +1,245 @@ +""" +GaussianWithCorrectionOperation — demonstrates the Correction mechanism. + +When the SNR check fails, a _ReduceNoiseLevelCorrection is applied before the +next attempt. Each application divides the measurement noise std by +`noise_reduction_factor` (a CorrectionParameter, default 3.0): + + noise_std: 5.0 → 1.67 → 0.56 + +With amplitude ≈ 10 and SNR_THRESHOLD = 2: + + SNR ≈ amplitude / (4 * noise_std) + 5.0 → ~0.5 FAIL + 1.67 → ~1.5 FAIL + 0.56 → ~4.5 PASS + +If the correction is exhausted (can_apply() returns False) before SNR passes, +correct() escalates the status to FAILURE and the protocol stops. 
+""" + +from __future__ import annotations + +import logging +from pathlib import Path +from typing import Any, cast + +import matplotlib.pyplot as plt +import numpy as np + +from labcore.analysis import DatasetAnalysis +from labcore.analysis.fit import FitResult +from labcore.analysis.fitfuncs.generic import Gaussian +from labcore.data.datadict_storage import datadict_from_hdf5 +from labcore.measurement.record import dependent, independent, recording +from labcore.measurement.storage import run_and_save_sweep +from labcore.measurement.sweep import Sweep +from labcore.protocols.base import ( + CheckResult, + Correction, + EvaluateResult, + OperationStatus, + ParamImprovement, + ProtocolOperation, +) +from labcore.testing.protocol_dummy.parameters import ( + GaussianAmplitude, + GaussianCenter, + GaussianNoiseReductionFactor, + GaussianOffset, + GaussianSigma, +) + +plt.switch_backend("agg") + +logger = logging.getLogger(__name__) + + +class _ReduceNoiseLevelCorrection(Correction): + """ + Divides the operation's noise std by noise_reduction_factor on each application. + + Demonstrates a stateful Correction: _applications persists across retries + so the correction knows when it has been exhausted. 
+ """ + + name = "reduce_noise_level" + description = "Divide measurement noise std by the noise_reduction_factor parameter" + triggered_by = "snr_check" + + def __init__( + self, operation: "GaussianWithCorrectionOperation", max_applications: int = 3 + ) -> None: + self.operation = operation + self.max_applications = max_applications + self._applications = 0 + + def can_apply(self) -> bool: + return self._applications < self.max_applications + + def apply(self) -> None: + factor = self.operation.noise_reduction_factor() + self.operation._noise_std /= factor + self._applications += 1 + logger.info( + f"[_ReduceNoiseLevelCorrection] noise_std → {self.operation._noise_std:.3f} " + f"(application {self._applications}/{self.max_applications})" + ) + + +class GaussianWithCorrectionOperation(ProtocolOperation): + """ + Gaussian fit operation that uses the registered-check + Correction system. + + Starts with high measurement noise (SNR guaranteed to fail). Each failed + snr_check triggers _ReduceNoiseLevelCorrection, which divides the noise std + by noise_reduction_factor. After enough corrections the SNR passes and the + fitted amplitude is written to the output parameter. + + Args: + params: Instrument params (None for DUMMY platform). + max_corrections: Maximum number of noise-reduction steps before the + correction is exhausted and the operation fails permanently. 
+ """ + + SNR_THRESHOLD = 2 + + # Type annotations for dynamically registered parameters + amplitude: GaussianAmplitude + noise_reduction_factor: GaussianNoiseReductionFactor + + def __init__(self, params: Any = None, max_corrections: int = 3) -> None: + super().__init__() + + self._register_inputs( + center=GaussianCenter(params), + sigma=GaussianSigma(params), + offset=GaussianOffset(params), + ) + self._register_outputs(amplitude=GaussianAmplitude(params)) + + # CorrectionParameter: how aggressively noise is reduced each step + self._register_correction_params( + noise_reduction_factor=GaussianNoiseReductionFactor(params) + ) + self.noise_reduction_factor(3.0) # set initial value + + # Internal noise level — starts high to guarantee first attempt fails + self._noise_std: float = 5.0 + + # The stateful correction strategy + self._noise_reduction = _ReduceNoiseLevelCorrection( + self, max_applications=max_corrections + ) + + # Register the check → correction mapping + self._register_check( + name="snr_check", + check_func=self._check_snr, + correction=self._noise_reduction, + ) + + self.independents = {"x_values": []} + self.dependents = {"y_values": []} + self.fit_result: FitResult | None = None + self.snr: float | None = None + + # ------------------------------------------------------------------ checks + + def _check_snr(self) -> CheckResult: + snr = self.snr if self.snr is not None else 0.0 + return CheckResult( + name="snr_check", + passed=snr >= self.SNR_THRESHOLD, + description=f"SNR={snr:.3f}, threshold={self.SNR_THRESHOLD}", + ) + + # ------------------------------------------------------- platform-specific + + def _measure_dummy(self) -> Path: + true_amplitude = 10.0 + true_center = 0.5 + true_sigma = 2.0 + noise_std = self._noise_std + + x_values = np.linspace(-10, 10, 100) + + @recording(independent("x"), dependent("y")) + def measure_gaussian(x_val: float) -> tuple[float, float]: + y_clean = true_amplitude * np.exp( + -((x_val - true_center) ** 2) / 
(2 * true_sigma**2) + ) + return x_val, y_clean + np.random.normal(0, noise_std) + + loc, _ = run_and_save_sweep( + Sweep(x_values, measure_gaussian), "data", self.name + ) + return Path(loc) + + def _load_data_dummy(self) -> None: + assert self.data_loc is not None + path = self.data_loc / "data.ddh5" + if not path.exists(): + raise FileNotFoundError(f"File {path} does not exist") + data = datadict_from_hdf5(path) + self.independents["x_values"] = data["x"]["values"] + self.dependents["y_values"] = data["y"]["values"] + + def analyze(self) -> None: + assert self.data_loc is not None + with DatasetAnalysis(self.data_loc, self.name) as ds: + x = np.asarray(self.independents["x_values"]) + y = np.asarray(self.dependents["y_values"]) + + fit = Gaussian(x, y) + self.fit_result = cast(FitResult, fit.run()) + fit_curve = self.fit_result.eval() + residuals = y - fit_curve + + amplitude = self.fit_result.params["A"].value + noise = np.std(residuals) + self.snr = float(np.abs(amplitude / (4 * noise))) + + fig, ax = plt.subplots() + ax.set_title(f"Gaussian fit (noise_std={self._noise_std:.2f})") + ax.plot(x, y, "o", markersize=3, label="data") + ax.plot(x, fit_curve, "-", linewidth=2, label="fit") + ax.legend() + + ds.add(fit_curve=fit_curve, fit_result=self.fit_result, snr=self.snr) + ds.add_figure(self.name, fig=fig) + self.figure_paths.append( + ds._new_file_path(ds.savefolders[1], self.name, suffix="png") + ) + + # ----------------------------------------------------------------- correct + + def correct(self, result: EvaluateResult) -> EvaluateResult: + """ + On SUCCESS: write the fitted amplitude to the output parameter. + On RETRY: the base class routes to _ReduceNoiseLevelCorrection + (which divides self._noise_std by noise_reduction_factor). + If the correction is exhausted, the base class escalates to FAILURE. 
+ """ + # Base handles: check table in report, correction routing, exhaustion + result = super().correct(result) + + if result.status == OperationStatus.SUCCESS: + assert self.fit_result is not None + old = self.amplitude() + new = float(self.fit_result.params["A"].value) + logger.info(f"Updating {self.amplitude.name}: {old} → {new:.3f}") + self.amplitude(new) + self.improvements = [ParamImprovement(old, new, self.amplitude)] + self.report_output.append( + f"Fit **SUCCESSFUL** (SNR={self.snr:.3f}). " + f"{self.amplitude.name}: {old} → {new:.3f}\n" + ) + else: + snr_str = f"{self.snr:.3f}" if self.snr is not None else "N/A" + self.report_output.append( + f"Fit **UNSUCCESSFUL** (SNR={snr_str}). " + f"noise_std={self._noise_std:.3f}\n" + ) + + return result diff --git a/src/labcore/testing/protocol_dummy/parameters.py b/src/labcore/testing/protocol_dummy/parameters.py index 98253b9..8b17d18 100644 --- a/src/labcore/testing/protocol_dummy/parameters.py +++ b/src/labcore/testing/protocol_dummy/parameters.py @@ -1,6 +1,6 @@ from dataclasses import dataclass, field -from labcore.protocols.base import ProtocolParameterBase +from labcore.protocols.base import CorrectionParameter, ProtocolParameterBase @dataclass @@ -23,6 +23,21 @@ def _dummy_setter(self, v: float) -> None: self._value = v +@dataclass +class _DummyCorrectionParameterBase(CorrectionParameter): + """In-memory correction parameter for the dummy package.""" + + def __post_init__(self) -> None: + super().__post_init__() + self._value: float = 0.0 + + def _dummy_getter(self) -> float: + return self._value + + def _dummy_setter(self, v: float) -> None: + self._value = v + + # --------------------------------------------------------------------------- # Gaussian parameters: A * exp(-((x - x0)^2) / (2 * sigma^2)) # --------------------------------------------------------------------------- @@ -56,6 +71,15 @@ class GaussianAmplitude(_DummyParameterBase): qick_path: str = field(default="", init=False) +@dataclass 
+class GaussianNoiseReductionFactor(_DummyCorrectionParameterBase): + name: str = field(default="gaussian_noise_reduction_factor", init=False) + description: str = field( + default="Factor by which the measurement noise std is divided each correction step", + init=False, + ) + + # --------------------------------------------------------------------------- # Cosine parameters: A * cos(2*pi*f*x + phi) + of # ---------------------------------------------------------------------------