
Upgrades from your work #2

Open
grace-shane wants to merge 56 commits into
just-shane:master from
grace-shane:master

Conversation

@grace-shane
Collaborator

made a few upgrades after getting the bearer token from CC

grace-shane and others added 30 commits April 7, 2026 12:18
… tracking (#13)

* Implement web dashboard for tool library display

* fix: PlexClient takes api_secret, env-var credentials, G5 test env

Addresses item 1 from BRIEFING.md:

- PlexClient.__init__ now accepts api_secret and sets the
  X-Plex-Connect-Api-Secret header when provided
- API_KEY and API_SECRET are read from PLEX_API_KEY and
  PLEX_API_SECRET environment variables (no more hardcoded key)
- TENANT_ID switched to G5 (b406c8c4-...) — the tenant we
  actually have access to; Grace UUID kept inline as a comment
- USE_TEST flipped to True — all dev work goes against
  test.connect.plex.com per BRIEFING
- __main__ hard-fails with a clear message if either
  credential env var is missing

NOTE: the previously committed key (k3SmLW3y...) is still in
git history and should be rotated in the Plex Developer Portal
before production deployment.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
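
The conditional-header rule described above can be sketched as follows. Only the X-Plex-Connect-Api-Secret and X-Plex-Connect-Tenant-Id header names come from this commit; the class name and the api-key header name are illustrative assumptions:

```python
class PlexClientSketch:
    """Illustrative header construction: the secret and tenant headers
    are attached only when the corresponding value is provided.
    The api-key header name here is an assumption, not from the commit."""

    def __init__(self, api_key, api_secret=None, tenant_id=None):
        self.headers = {"X-Plex-Connect-Api-Key": api_key}
        if api_secret:
            self.headers["X-Plex-Connect-Api-Secret"] = api_secret
        if tenant_id:
            self.headers["X-Plex-Connect-Tenant-Id"] = tenant_id
```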

* feat: rewrite UI as minimal endpoint tester

Replaces the gradient/glass dashboard with a flat, neutral
endpoint tester in the spirit of Postman/Insomnia. The old UI
was decorative; this one is functional.

Backend (app.py)
- New /api/plex/raw proxy route: forwards an arbitrary path and
  HTTP method to Plex via the authenticated PlexClient so the
  browser can test any endpoint without ever seeing credentials.
  Returns http_status, elapsed_ms, size_bytes, headers, and body
  in a single envelope.
- New /api/config route exposing non-secret config (base URL,
  environment, tenant, credential presence) to the UI.
- app.py picks up API_SECRET and passes it to PlexClient.
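
The single response envelope the raw proxy returns can be sketched like this; the field names are from the commit text, the construction is illustrative:

```python
def raw_proxy_envelope_sketch(http_status, body_bytes, headers, elapsed_ms):
    """Illustrative shape of the /api/plex/raw envelope: one dict
    carrying status, timing, size, headers, and body together."""
    return {
        "http_status": http_status,
        "elapsed_ms": elapsed_ms,
        "size_bytes": len(body_bytes),
        "headers": dict(headers),
        # decode defensively so a non-UTF-8 body can't break the envelope
        "body": body_bytes.decode("utf-8", errors="replace"),
    }
```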

Frontend (templates/, static/)
- New layout: left rail with preset endpoints + history,
  main area with method selector + URL bar + query row +
  tabbed response pane (Body / Headers / Raw).
- Status strip shows HTTP status pill, elapsed ms, response
  size, method, and path.
- Ctrl/Cmd+Enter sends. Copy and Clear buttons.
- In-memory history (last 20), click to restore any response.
- Preset endpoints for mdm/v1/parts, suppliers, tenants,
  purchase-orders, workcenters, and the blocked tooling/*
  endpoints (tagged 403).
- JSON body is syntax-highlighted via a small regex pass.
- Fusion 360 local loader and file/folder upload preserved.

Design principles
- Zero gradients, zero backdrop-filter, zero box-shadow glows,
  zero hover transforms, zero pulse animations, zero emoji.
- Single solid accent (#3b82f6) used only for the Send button
  and focus rings. Semantic color only for status (green /
  amber / red pills).
- 4px radii, 1px borders, flat panels.
- System font stack for UI, ui-monospace for code.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: link TODO.md items to their GitHub Issues

Each unchecked item now points to its tracking issue
(#1-#12). Adds a note at the top pointing at the live
issues list. Also adds #12 (rotate exposed API key
before production deployment).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: clarify tenant routing is the only IT blocker

Earlier docs incorrectly attributed the 403s on tooling/v1/*
endpoints to a missing API collection subscription. Per the
actual Grace Engineering situation, the only open IT blocker
is tenant routing — credentials currently land on G5 (another
company's read-only data) regardless of the
X-Plex-Connect-Tenant-Id header.

The 403s are now documented as a WORKING HYPOTHESIS:
suspected to be tenant-scoping, will resolve when tenant
routing lands. Cannot be verified from G5 because we have no
authority to write there.

Changes:
- Plex_API_Reference.md: rewrite the "Blocked Endpoints"
  callout to point at tenant routing, mark the 403 resolution
  as a hypothesis
- TODO.md: update the Phase 3 BLOCKED line to point at tenant
  routing instead of collection subscription
- templates/index.html: drop the "403" tags on tooling/v1/*
  presets in the UI rail — the cause is transient tenant
  scoping, not inherent to the endpoints

GitHub issues #1-#6 have been updated to match:
- #1 retitled: tenant routing (was: enable API collections)
- #2: removed blocker framing (read path works on G5 today)
- #3, #6: added "blocked" label (write path blocked on tenant)
- #4, #5: blocker text rewritten to tenant-scoping hypothesis

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: track BRIEFING.md in git

The agent context briefing now lives in source control rather
than as an untracked local file. Keeps it visible to the team
and ensures it stays in sync with code changes via PR review.

Note: a few sections in BRIEFING.md are now stale relative to
work completed in this branch (e.g. "PlexClient missing
api_secret" under What's built, the subscription-vs-tenant
attribution under Gotchas). These will be cleaned up in a
follow-up.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: tenant diagnostic suite (whoami, list, get)

Adds a small read-only test suite that verifies which Plex
tenant our credentials are actually scoped to. This is the
baseline check we should run first whenever the connection
state is in question — and the visible "is the right tenant
connected?" indicator until IT completes the routing change
for Grace Engineering.

New: plex_diagnostics.py
- list_tenants(client)        — GET /mdm/v1/tenants
- get_tenant(client, id)      — GET /mdm/v1/tenants/{id}
- tenant_whoami(client, id)   — composite check that calls
  the two endpoints above and compares the visible tenants
  against the known Grace and G5 UUIDs, returning a structured
  report with a clear `match` enum and a one-line `summary`.
- KNOWN_TENANTS dict + GRACE_TENANT_ID / G5_TENANT_ID constants
  (tenant IDs are not secrets — safe to commit).
- Standalone __main__ runs the suite and pretty-prints the
  report. Reconfigures stdout to UTF-8 first so em-dashes
  don't blow up on a Windows cp1252 console.

New routes in app.py
- GET /api/diagnostics/tenant            → tenant_whoami
- GET /api/diagnostics/tenants/list      → raw list_tenants
- GET /api/diagnostics/tenants/<uuid>    → raw get_tenant by ID
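
The composite check's branching can be sketched as below. The match values follow the commit text; the UUID placeholders and function body are illustrative, not the real plex_diagnostics.py:

```python
# Placeholders — the real module holds the actual UUID constants.
GRACE_TENANT_ID = "grace-uuid-placeholder"
G5_TENANT_ID = "g5-uuid-placeholder"

def tenant_whoami_sketch(visible_tenant_ids):
    """Compare visible tenants against known IDs and return a
    structured report with a match enum and one-line summary."""
    if not visible_tenant_ids:
        return {"match": "no_data", "summary": "no tenants visible"}
    if GRACE_TENANT_ID in visible_tenant_ids:
        return {"match": "grace", "summary": "connected to the Grace tenant"}
    if G5_TENANT_ID in visible_tenant_ids:
        return {"match": "g5", "summary": "still routed to the G5 tenant"}
    return {"match": "unknown", "summary": "unrecognized tenant(s) visible"}
```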

UI (templates/index.html)
- New "Diagnostics" section in the rail, placed first so it's
  the most prominent section. Two preset buttons: tenant_whoami
  and list_tenants. The whoami response renders in the existing
  body pane via the rewrite from d449db1 — its `summary` field
  is the human-readable status line.

Logic branches verified against fake clients: g5, grace,
no_data (auth fail), wrapped dict.data response, unknown tenant,
empty list. All six pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs(briefing): clean stale sections, add diagnostics and UI

Updates BRIEFING.md to reflect the state of the branch:

What's built section
- Removed "NEEDS UPDATE: PlexClient missing api_secret" — done
- Added env-var credential rule
- Added a new plex_diagnostics.py section
- Added a new app.py section describing the endpoint tester UI

Immediate TODO section
- Marked item 1 (PlexClient constructor fix) as DONE
- Renumbered remaining items and cross-referenced them to GH Issues
  (#2, #3, #4, #7, #8)
- Added a note pointing readers at the live issue tracker

Gotchas section
- Removed "PlexClient missing api_secret" gotcha — fixed
- Added env-var requirement gotcha (hard-fail behaviour)
- Added pointer to issue #12 about rotating the historical key
- Replaced the wrong "tooling 403 = subscription issue" attribution
  with the correct tenant-scoping hypothesis (issue #1)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* test: pytest suite covering PlexClient, diagnostics, loader, routes

65 tests, 0 network calls, runs in <1s. All green locally.

New files
- requirements.txt          flask + requests (runtime deps)
- requirements-dev.txt      pulls requirements.txt + pytest
- pytest.ini                test discovery + verbose output
- tests/__init__.py
- tests/conftest.py         injects dummy PLEX_API_KEY/SECRET BEFORE
                            any module-level reads in app.py / plex_api.py
                            (otherwise the import-time guard would fail
                            test collection). Provides a FakePlexClient
                            fixture that records calls and returns canned
                            responses without ever touching the network.
- tests/test_plex_api.py            16 tests
- tests/test_plex_diagnostics.py    21 tests
- tests/test_tool_library_loader.py 16 tests
- tests/test_app_routes.py          12 tests
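
The conftest.py ordering trick described above amounts to a few lines like these (illustrative — the dummy values are assumptions):

```python
# conftest-style guard: dummy credentials must be in the environment
# BEFORE app.py / plex_api.py are imported, or their import-time
# hard-fail check would abort test collection.
import os

os.environ.setdefault("PLEX_API_KEY", "dummy-key")
os.environ.setdefault("PLEX_API_SECRET", "dummy-secret")
```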

Coverage highlights
- PlexClient header construction — locks in the BRIEFING item 1 fix:
  api_secret is included as X-Plex-Connect-Api-Secret only when provided,
  tenant header only when provided, all three present when full credentials
  passed. Test/prod URL switch verified.
- tenant_whoami composite check — all 6 logic branches (grace, g5,
  configured/unknown, other, no_data, empty list) plus all four response
  shape variants Plex might return (bare list, {data:[...]}, {items:[...]},
  {rows:[...]}, single object).
- tool_library_loader — happy path, malformed JSON, missing data key,
  data is not a list, stale file (mtime backdated past 25h limit),
  custom max_age window, abort_on_stale=True aborts the whole run vs
  abort_on_stale=False skips stale and continues, empty directory,
  missing directory, no .json files.
- Flask routes — index, /api/config envelope, all three diagnostics
  routes mocked through patched module-level functions, /api/plex/raw
  proxy (missing path returns 400, GET forwards with auth headers,
  query params except 'path' are forwarded, 4xx propagates as
  envelope status='error'), /api/plex/discover wired to discover_all.

No tests hit the real Plex API. Everything is mocked at the module
boundary or routed through FakePlexClient.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* ci: pytest on pull requests and pushes to master

GitHub Actions workflow that:
- Triggers on pull_request to master and on direct push to master
- Sets up Python 3.11 (minimum version for the dict[str,...] | None
  syntax used in tool_library_loader.py)
- pip-caches requirements*.txt
- Installs requirements-dev.txt (which pulls requirements.txt)
- Runs pytest

Job is named 'pytest'. The status check that branch protection
should require is 'tests / pytest'.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Two small things bundled together so the dev loop is friction-free
on a fresh machine.

bootstrap.py
- New optional dotenv-style loader. Reads KEY=VALUE pairs from a
  gitignored .env.local in the project root and injects them into
  os.environ via setdefault — meaning real shell env vars always
  win, never overridden.
- Imported at the very top of plex_api.py BEFORE its module-level
  os.environ.get() reads, so PLEX_API_KEY / PLEX_API_SECRET pulled
  from .env.local are picked up correctly.
- Missing file is a no-op. Comments (# ...) and blank lines are
  skipped. Matched surrounding quotes are stripped. CRLF tolerated.
- Returns the count of variables actually injected, for diagnostics.
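
The loader's behaviour can be sketched as follows (function name and details illustrative; the setdefault semantics, comment/blank skipping, quote stripping, and missing-file no-op are from the text):

```python
import os

def load_env_local_sketch(path):
    """Parse KEY=VALUE lines and inject via setdefault, so real shell
    env vars always win. Returns count of variables injected."""
    try:
        with open(path, encoding="utf-8") as fh:
            lines = fh.read().splitlines()  # splitlines tolerates CRLF
    except FileNotFoundError:
        return 0  # missing file is a no-op
    injected = 0
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and lines without =
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if len(value) >= 2 and value[0] == value[-1] and value[0] in "\"'":
            value = value[1:-1]  # strip matched surrounding quotes only
        if key not in os.environ:
            os.environ[key] = value
            injected += 1
    return injected
```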

Why
- Previously, every shell that wanted to run app.py had to export
  PLEX_API_KEY and PLEX_API_SECRET first. Spawned subprocesses
  (like Claude Preview) couldn't always inherit them. .env.local
  gives a single per-machine source of truth that survives shell
  restarts and is invisible to git.

Tests
- tests/test_bootstrap.py — 16 new tests covering missing file,
  basic parsing, multi-pair, value-with-=, comment skipping, blank
  line skipping, lines-without-= skipping, double-quote strip,
  single-quote strip, mismatched quotes preserved, internal quotes
  preserved, setdefault preserves existing env, partial override,
  whitespace stripping, CRLF line endings.
- All 81 tests pass locally (65 existing + 16 new).

.gitignore
- Added .env, .env.local, .env.*.local
- Added editor/IDE noise (.vscode/, .idea/, *.swp)
- Added Python tooling noise (.pytest_cache/, .coverage, htmlcov/,
  .tox/, *.egg-info/, build/, dist/)

.env.example
- New committed template showing the expected variable names with
  pointer to developers.plex.com. Copy → .env.local → fill in.

.claude/launch.json
- Claude Preview launch config so `preview_start plex-api` works
  out of the box. This was the loose end from the previous PR.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Background
----------
PlexClient.get() previously caught all HTTPErrors and returned None.
This made tenant_whoami report match="no_data" with summary="credentials
likely invalid" whenever the underlying call returned 401, 403, 404, 5xx
or hit a network failure — even though the actual error was a clean 401
from Plex's gateway. The diagnostic suite was hiding the truth and
forced us to debug via curl + the proxy route.

Changes
-------
plex_api.py
- New PlexClient.get_envelope() method. Returns a structured envelope
  {ok, status, reason, body, elapsed_ms, url, error} so callers can
  distinguish:
    * 2xx success (with parsed JSON, text, or empty body)
    * HTTP errors (401, 403, 404, 5xx) — body is preserved
    * Network failures (DNS, timeout, connection refused) — status=0
    * JSON parse failures (text/html responses) — falls through to text
  Never raises, never swallows.
- PlexClient.get() refactored to delegate to get_envelope() for
  uniformity. Behaviour unchanged: returns parsed JSON on success or
  None on any failure. Legacy stdout logging on errors is preserved
  so existing tests and call sites are unaffected.
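
The envelope's shape and never-raises parsing can be sketched as a pure builder (illustrative; the field names are from this commit, the construction is not the real get_envelope()):

```python
import json

def build_envelope_sketch(status, text, url, elapsed_ms, error=None):
    """Build the {ok, status, reason, body, elapsed_ms, url, error}
    envelope. JSON parse failures fall through to raw text; status=0
    stands in for network failures. Never raises."""
    body = None
    if text:
        try:
            body = json.loads(text)
        except ValueError:
            body = text  # text/html fallback, body preserved
    return {
        "ok": 200 <= status < 300,
        "status": status,
        "reason": error or "",
        "body": body,
        "elapsed_ms": elapsed_ms,
        "url": url,
        "error": error,
    }
```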

plex_diagnostics.py
- tenant_whoami() now calls client.get_envelope() directly so HTTP
  errors surface as new match values:
    * "auth_failed"     — for 401 / 403
    * "request_failed"  — for network errors and other 4xx/5xx
  Both branches return helpful summary strings pointing the operator
  at the actual problem (PLEX_API_KEY/SECRET for auth, network/host
  reachability for request failures).
- Report now includes a list_tenants_envelope key with ok/status/
  reason/elapsed_ms/error so the UI can show the underlying HTTP
  metadata even on success.

tests/
- conftest.py FakePlexClient grows a get_envelope() method that
  synthesizes a 200 OK envelope around set_response bodies, plus a
  new set_envelope() for injecting specific error envelopes.
- test_plex_api.py adds 16 tests for get_envelope (200, 401, 403,
  404, 500, ConnectionError, Timeout, text body fallback, empty
  body, url propagation, elapsed_ms) and 3 for the refactored get()
  legacy interface.
- test_plex_diagnostics.py adds 9 tests for the new branches:
    * 401 → auth_failed (+ summary mentions PLEX_API_KEY/SECRET)
    * 403 → auth_failed
    * auth_failed preserves envelope metadata
    * auth_failed does not waste a get_tenant call
    * 404 → request_failed
    * 500 → request_failed
    * network error → request_failed (+ summary contains "could not reach")
    * Timeout → request_failed
    * request_failed preserves envelope metadata
  Plus 1 test that the success path includes the envelope metadata.
- Existing no_data tests updated for the new summary text.

Total: 105 tests pass locally (24 net new).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
)

The earlier "tenant routing" hypothesis was wrong. Empirical
testing with Courtney's new Fusion2Plex Consumer Key shows that
the 403/401 errors we were seeing on tooling/v1/* and other
endpoints are PER-PRODUCT SUBSCRIPTION at the dev portal level,
exactly as Plex_API_Reference.md originally said. The tenant
routing detour was a misread on my part — apology embedded in
BRIEFING.md.

Plex 401 vs 404 is the only signal that distinguishes
"unsubscribed product" from "bad credentials": both bad creds
and unsubscribed-product return 401 REQUEST_NOT_AUTHENTICATED
at the gateway, while subscribed-but-resource-missing returns
404 RESOURCE_NOT_FOUND.
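
That disambiguation rule reduces to a small classifier (a sketch of the rule as stated, not code from the repo):

```python
def classify_plex_status(status):
    """401 at the gateway = unsubscribed product OR bad credentials
    (indistinguishable); 404 = subscribed, resource path not found."""
    if status == 401:
        return "unsubscribed-or-bad-credentials"
    if status == 404:
        return "subscribed-resource-missing"
    return "other"
```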

Verified access matrix for the Fusion2Plex app:

  Path                                  Status   Subscribed?
  ------------------------------------- ------   -----------
  mdm/v1/tenants                        401      No
  mdm/v1/parts                          401      No
  mdm/v1/suppliers                      401      No
  purchasing/v1/purchase-orders         401      No
  production/v1/control/workcenters     401      No
  manufacturing/v1/operations           404      Yes (MES)
  tooling/v1/tools                      404      Yes (Tooling)
  tooling/v1/tool-assemblies            404      Yes (Tooling)
  tooling/v1/tool-inventory             404      Yes (Tooling)

So Tooling and Standalone MES are now reachable. We still need
Courtney to approve the Fusion2Plex app for Common APIs,
Purchasing, and Production Control before any of the consumable
upsert side of the sync can happen.

Changes
-------
Plex_API_Reference.md
- Replaced the "Tenant Routing Suspected" callout with an
  accurate "API Product Subscription Model" section
- Added the access matrix as a permanent reference table
- Spelled out the 401-vs-404 disambiguation rule

BRIEFING.md
- Current Situation rewritten to reflect Fusion2Plex app,
  31-day key expiration, partial subscription state
- Tenants table reframed as historical reference
- Replaced 403-suspected-tenant-routing block with the verified
  access matrix
- Gotchas updated:
  * Removed the wrong tenant-scoping gotcha
  * Added 401-vs-401-vs-404 explanation
  * Marked the previously-hardcoded k3SmLW3y key as dead
  * Added env var override gotcha
- Immediate TODO updated to reflect that tooling/v1/* is no
  longer blocked (issue #4 work can begin)

TODO.md
- Phase 3 BLOCKED line rewritten to reflect partial
  subscription state and the corrected hypothesis

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds a hard safety guard at the /api/plex/raw layer that refuses
mutating HTTP methods (POST/PUT/PATCH/DELETE) when the server is
running against a production Plex environment, unless the operator
explicitly opts in by setting PLEX_ALLOW_WRITES=1.

Why
---
Empirical testing this hour established that the Fusion2Plex
Consumer Key authenticates against connect.plex.com (PRODUCTION)
on the real Grace Engineering tenant (58f781ba-…). A casual
write — even one triggered by a stray click in the UI — could
affect actual manufacturing operations. We currently have no
test environment for this app.

Changes
-------
app.py
- New module-level constants:
    WRITES_ALLOWED  — true iff PLEX_ALLOW_WRITES env var is set
    IS_PRODUCTION   — true iff client.base does not contain "test."
    WRITE_METHODS   — frozenset of POST, PUT, PATCH, DELETE
- New _is_write_blocked(method) helper returning (blocked, reason).
  GET is never blocked. Mutating methods are blocked iff
  IS_PRODUCTION and not WRITES_ALLOWED.
- /api/plex/raw enforces the guard before any forwarding. Refused
  requests return HTTP 403 with a structured error envelope:
    { status, http_status: 0, method, url, message,
      guard: "PLEX_ALLOW_WRITES", is_production, writes_allowed }
- /api/config exposes is_production and writes_allowed so the UI
  can render an appropriate banner.
- __main__ prints a loud warning banner at startup when running
  against a production environment, indicating whether writes
  are blocked or enabled.
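
The guard helper's logic can be sketched as below (the constant and the (blocked, reason) contract follow the commit text; the function signature is simplified — the real helper reads module-level state rather than taking arguments):

```python
WRITE_METHODS = frozenset({"POST", "PUT", "PATCH", "DELETE"})

def is_write_blocked_sketch(method, is_production, writes_allowed):
    """Return (blocked, reason). GET is never blocked; mutating
    methods are blocked iff production and writes not opted in."""
    if method.upper() not in WRITE_METHODS:  # case-insensitive check
        return (False, "")
    if is_production and not writes_allowed:
        return (True, "writes disabled in production; set PLEX_ALLOW_WRITES=1")
    return (False, "")
```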

Tests — 14 net new (119 total, all passing)
- 8 tests under TestProductionWriteGuard:
    * GET always allowed in production
    * POST/PUT/PATCH/DELETE blocked in prod default
    * POST allowed in prod when writes enabled
    * POST allowed in test environment regardless
    * /api/config exposes guard state
- 6 tests under TestIsWriteBlocked covering the helper directly,
  including method case-insensitivity

How to enable writes when you actually need them
------------------------------------------------
  $env:PLEX_ALLOW_WRITES = "1"     # PowerShell
  export PLEX_ALLOW_WRITES=1        # bash
  py app.py

Rotate the env var off as soon as you're done.

Next PR: full migration to USE_TEST=False, new Grace tenant UUID,
KNOWN_TENANTS update, and a doc rewrite to retract the I/l misread
hypothesis chain. This PR is intentionally tiny and lands first
so the guard exists before any other code touches production.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Switches the codebase to its actual operating reality after the
debugging marathon: the Fusion2Plex Consumer Key authenticates
against connect.plex.com (PRODUCTION) on the Grace tenant
58f781ba-1691-4f32-b1db-381cdb21300c. There is no test environment
for this app. Reads work; writes are blocked by PR #17's proxy
guard unless PLEX_ALLOW_WRITES=1.

Code
----
plex_api.py
- New module-level constant GRACE_TENANT_ID = the verified Grace
  UUID returned by GET mdm/v1/tenants on 2026-04-07
- TENANT_ID reads from PLEX_TENANT_ID env var, defaults to
  GRACE_TENANT_ID
- USE_TEST reads from PLEX_USE_TEST env var, defaults to False
- API_SECRET docstring clarified — Plex authenticates on the key
  alone for the Fusion2Plex app; the secret header is harmless to
  send but optional
- __main__ banner now prints WARNING when running against PROD,
  including current writes state
- explore_parts() call commented out in __main__ — that helper
  unconditionally pulls 19 MB of unfiltered parts data

plex_diagnostics.py
- KNOWN_TENANTS gains the verified GRACE_TENANT_ID and a
  GRACE_OLD_TENANT_ID labeled "Grace (stale UUID — replace with
  verified one)" so old configs surface a clear diagnostic instead
  of "unknown"

UI
- New env-chips container in templates/index.html holding two pills:
  the existing environment chip (TEST/PROD) and a new writes-chip
  (READ ONLY / WRITES ON) that's only visible in production
- CSS: env-chip.prod gets a stronger red background, font-weight
  bumped. New .writes-chip styles for blocked (green) and allowed
  (red) states.
- script.js loadConfig() reads is_production and writes_allowed
  from /api/config and renders the chips with helpful tooltips
  pointing at PLEX_ALLOW_WRITES

.env.example
- Documents PLEX_TENANT_ID, PLEX_USE_TEST, PLEX_ALLOW_WRITES
- Points at developers.plex.com → My Apps → Fusion2Plex
- Explains the production-by-default model

Docs
----
BRIEFING.md — major rewrite
- Current Situation reflects production reality, real Grace tenant
  ID, write guard, no-test-environment fact, 31-day key cycle
- Tenants table reorganized — verified Grace UUID front and center,
  old wrong UUID kept with clear "stale" label, G5 marked as
  another company's old test data
- Auth section updated — secret is OPTIONAL not "second factor"
- Access matrix VERIFIED (200s on mdm/v1/tenants, parts, suppliers,
  purchase-orders; 404s on tooling/v1/*, manufacturing/v1/*,
  production/v1/control/workcenters)
- New 401 vs 404 explainer
- New section "History of incorrect hypotheses" — postmortem of the
  four wrong turns this debugging session took, all rooted in one
  cause: I misread `l` as `I` when reading the API key from a
  screenshot. Lessons documented so future-me doesn't repeat them.
- Gotchas updated — every-read-hits-prod warning, no-pagination on
  mdm/v1/parts (19.6 MB) and purchasing/v1/purchase-orders (44 MB)
  empirically verified, write guard documented, l-vs-I image
  reading lesson

Plex_API_Reference.md
- Section 3 retitled "Verified Endpoints & Access Matrix"
- Real PROD numbers replace the previous (wrong) tooling-subscribed
  table
- Adds explicit 401-vs-404 reading guide
- Adds the no-pagination gotcha as a permanent reference

TODO.md
- Phase 3 BLOCKED line corrected — IT blocker resolved, what
  remains is finding the right URL patterns for tooling/manufacturing/
  production-control endpoints (those still 404)
- Each Phase 3 item now reflects what's reachable vs. what isn't

Tests — 128 pass locally, 7 net new
- TestKnownTenants: GRACE_TENANT_ID is the verified UUID,
  GRACE_OLD_TENANT_ID is preserved with "stale" label, all known
  IDs are distinct
- TestModuleDefaults: PLEX_TENANT_ID env-var pickup with default,
  PLEX_USE_TEST handling for "1", "true", garbage, and unset

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…#19)

Why
---
bootstrap.py uses os.environ.setdefault() so a real shell env var
ALWAYS wins over .env.local. That's correct for production deploy
where credentials should come from the host's secure environment,
not a file.

But it's wrong for local dev. If a stale shell env var (e.g. an
old PLEX_API_KEY set via setx in the Windows registry years ago)
silently shadows .env.local, debugging "why does this 401" can
burn an hour. We learned this the hard way over the past few hours.

run_dev.py is the OPPOSITE of bootstrap.py: it uses direct
assignment to override the shell, then runs app.py via runpy so
the existing __main__ block (production warning banner, app.run)
fires correctly.

Production deployment is unchanged — it still uses `py app.py`
directly, which respects bootstrap.setdefault() and lets the host
shell env take precedence.

Files
-----
run_dev.py
- New module-level constants PROJECT_ROOT and DEFAULT_ENV_FILE
- force_override_from_env_local(path=None) — parameterizable
  loader (mirrors bootstrap.load_env_local's API). Returns the
  count of os.environ keys actually added or changed (already-
  correct values count as zero).
- main() — calls the loader, prints a one-liner summary, then
  runpy.run_path("app.py", run_name="__main__") so app.py runs
  exactly as if you had typed `py app.py` — but with .env.local
  having already won.
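
The direct-assignment semantics (the opposite of bootstrap's setdefault) can be sketched on a pre-parsed dict; the counting rule is from the text, the function name and signature are illustrative:

```python
import os

def force_override_sketch(pairs):
    """Direct assignment: these values beat stale shell env vars.
    Returns the count of keys actually added or changed —
    already-correct values count as zero."""
    changed = 0
    for key, value in pairs.items():
        if os.environ.get(key) != value:
            os.environ[key] = value
            changed += 1
    return changed
```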

.claude/launch.json
- Claude Preview now points at run_dev.py instead of app.py.
  preview_start spawns a subprocess with the parent shell's env,
  which means it inherits stale registry values, and run_dev.py
  is what fixes that for the dev loop.

tests/test_run_dev.py — 13 new tests
- TestMissingFile: missing file is a no-op (no exception)
- TestOverrideSemantics: overrides existing var, sets when unset,
  returns 0 changes when already correct, partial override of
  multiple vars
- TestParsing: comments, blank lines, lines without =, double
  quotes, single quotes, value with =
- TestRunDevVsBootstrap: pin down the differing semantics by
  exercising both load_env_local (setdefault) and
  force_override_from_env_local (direct assignment) on the same
  fixture file with the same pre-existing shell value, asserting
  they give opposite results

141 tests pass locally (13 net new).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…-items (#20)

Empirical discovery via aggressive endpoint probing on the live
Fusion2Plex app on production. The original Plex_API_Reference.md
referenced tooling/v1/* and the recent BRIEFING claimed mdm/v1/parts
with typeName=Tool — both wrong.

Verified facts (against connect.plex.com, Grace tenant 58f781ba-...,
Fusion2Plex Consumer Key, on 2026-04-07):

Tooling location
----------------
inventory/v1/inventory-definitions/supply-items returns 2,516 records.
Of these:
- category="Tools & Inserts" → 1,109 (cutting tools, inserts)
- group="Machining"          → 1,039
- group="Tool Room"          →   104

Schema (identity-only, no geometry):
- category, description, group, id, inventoryUnit,
  supplyItemNumber, type

The Fusion → Plex sync will create supply-items keyed by
supplyItemNumber (= vendor part number), not parts. Geometry stays
in Fusion. The previous "build_part_payload(tool)" plan is replaced
with "build_supply_item_payload(fusion_tool)".

URL pattern convention
----------------------
Two shapes used in production:
1. Master data, flat:    <namespace>/v1/<resource>
   e.g. mdm/v1/parts, mdm/v1/operations, mdm/v1/suppliers
2. Definitions, nested:  <namespace>/v1/<namespace>-definitions/<resource>
   e.g. production/v1/production-definitions/workcenters
        inventory/v1/inventory-definitions/supply-items
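
The two shapes can be captured in a tiny builder (illustrative helper, not repo code):

```python
def plex_path_sketch(namespace, resource, nested=False):
    """Build a Plex API path per the two observed production shapes:
    flat master data, or nested <namespace>-definitions."""
    if nested:
        return f"{namespace}/v1/{namespace}-definitions/{resource}"
    return f"{namespace}/v1/{resource}"
```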

Workcenter ↔ machine mapping
----------------------------
The 21 MILL workcenters map directly to physical Brother Speedio
machines via workcenterCode (= the machine number / DNC IP last
octet):
- 879 → Brother Speedio 879 → FTP 192.168.25.79
- 880 → Brother Speedio 880 → FTP 192.168.25.80

This is a clean linkage between Fusion's network shares (which the
loader reads) and Plex workcenter records.

Operations / Routings
---------------------
mdm/v1/operations exists (122 records) but the schema is minimal:
just code, id, inventoryType, type. No FK to tools, parts, or
routings. The routings concept may not be exposed in this app's
API surface at all. Issue #5 may need to descope or use CSV upload.

Files
-----
Plex_API_Reference.md
- Section 3 retitled with the URL pattern convention as a primary
  reference
- Verified-working endpoint table replaces the wrong tooling table
- New "Where tooling data actually lives" section
- New "Workcenter ↔ machine mapping" section with the Brother
  Speedio identification
- Filter behavior caveat (typeName/limit/category all ignored)

BRIEFING.md
- Plex API access matrix section rewritten with the supply-items
  finding and URL pattern convention
- New section explaining that tools are NOT in mdm/v1/parts (which
  is finished products only) but in inventory-definitions
- Workcenter mapping table added
- Immediate TODO renumbered: build_supply_item_payload replaces
  build_part_payload, tool assembly handling deprioritized

TODO.md
- Phase 3 entries updated to point at the verified endpoints
- #4 (tool assemblies) and #5 (routings) noted as possibly not
  achievable in this app's API surface — investigate or descope

No code changes, no test changes. Coding starts after this lands.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Closes #2 (read baseline tooling inventory).

plex_api.py
- New TOOLING_CATEGORY = "Tools & Inserts" constant
- New extract_supply_items(client, category=None) — pulls
  inventory/v1/inventory-definitions/supply-items, normalizes the
  response shape (handles both bare list and dict-wrapped), and
  filters client-side to category="Tools & Inserts" by default
- Saves a CSV snapshot to outputs/plex_supply_items.csv
- Server-side filters are silently ignored on this endpoint, so
  we always pull the full ~614 KB and filter in Python
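
The shape normalization plus client-side filter can be sketched like this (the constant and default are from the commit; the helper name and dict-unwrapping detail are illustrative):

```python
TOOLING_CATEGORY = "Tools & Inserts"

def filter_supply_items_sketch(response, category=TOOLING_CATEGORY):
    """Normalize bare-list vs dict-wrapped response shapes, then
    filter client-side (server-side filters are ignored here).
    Pass category='' to disable the filter."""
    items = response.get("data", response) if isinstance(response, dict) else response
    if not category:
        return list(items)
    return [item for item in items if item.get("category") == category]
```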

app.py
- /api/plex/supply_items added to the api_extract switch — exposes
  extract_supply_items via the existing extractor route pattern
- New /api/fusion/tools/stats — type and vendor distribution
  across all loaded Fusion libraries, plus consumable vs
  non-consumable counts. Useful for verifying load before any
  sync work.
- New /api/fusion/tools/consumables — returns the filtered list
  of Fusion tools that should actually go to Plex (excluding
  holders and probes per BRIEFING). Field names normalized to
  snake_case (product-id → product_id) for Python consumers.
- New NON_CONSUMABLE_TYPES set captures the holder/probe filter
  rule in one place so it can be reused later by the sync code

UI (templates/index.html)
- Plex presets section refreshed with the verified URL patterns:
  - production/v1/production-definitions/workcenters (replaces
    the old production/v1/control/workcenters which 404'd)
  - inventory/v1/inventory-definitions/supply-items (new — where
    cutting tools actually live)
  - inventory/v1/inventory-definitions/locations (verified bonus)
  - mdm/v1/customers, mdm/v1/buildings, mdm/v1/operations (verified)
- Removed the dead tooling/v1/* presets (those paths don't work
  on this app's product surface)
- New "extract_supply_items" button in Extractors section
- New "tools_stats" and "consumables_only" buttons in Fusion 360
  local section for quick exploration of the local libraries

Tests — 154 pass, 13 net new

tests/test_plex_api.py
- TestExtractSupplyItems (7 tests):
  * default filters to "Tools & Inserts"
  * filter can be disabled with empty string
  * filter can be overridden to other categories
  * returns None on network error
  * calls the correct endpoint path
  * normalizes dict-wrapped response shapes
  * writes the CSV snapshot to OUTPUT_DIR (monkeypatched to tmp)

tests/test_app_routes.py
- TestSupplyItemsExtractor (2 tests): route delegates to
  extract_supply_items and handles None safely
- TestFusionToolsStats (2 tests): aggregates correctly across
  multiple libraries, handles empty input
- TestFusionToolsConsumables (2 tests): excludes holders/probes,
  normalizes product-id → product_id

No real Plex calls, no real network. All mocked through
FakePlexClient and patch.object on load_all_libraries.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ask (#22)

Caught at runtime during the live verification of PR #21. Clicking
extract_supply_items in the UI returned HTTP 500 with:

  'charmap' codec can't encode character '\u2192' in position N

Root cause: print() statements in plex_api.py use the Unicode arrow
character → (U+2192). On a Windows cp1252 console, that character
isn't representable. When Flask captures stdout from a request
handler, the encode failure raises UnicodeEncodeError mid-request,
which the route's exception handler turns into a 500.

pytest never caught this because pytest's capsys uses UTF-8 by
default. Only the live Flask process under cp1252 stdout hits it.

Fix
---
Two layers:

1. app.py and run_dev.py: sys.stdout.reconfigure(encoding="utf-8")
   at the very top, wrapped in try/except for Pythons that don't
   expose .reconfigure(). Same approach we already use in
   plex_diagnostics.py's __main__.

2. plex_api.py: replace all 7 occurrences of → with the ASCII ->
   in print statements (extract_purchase_orders, extract_parts,
   extract_workcenters, extract_operations, extract_supply_items,
   discover_all, explore_parts). Belt-and-suspenders defence in
   case the reconfigure ever fails.
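The reconfigure layer looks roughly like this (the function name is illustrative; the real code runs the call at module top level):

```python
import sys

def force_utf8_stdout():
    """Best-effort switch of stdout to UTF-8 so characters like U+2192
    can't raise UnicodeEncodeError on a cp1252 Windows console.
    Returns True on success, False if the stream doesn't support
    reconfigure (older Pythons, replaced streams)."""
    try:
        sys.stdout.reconfigure(encoding="utf-8")
        return True
    except (AttributeError, ValueError):
        return False

force_utf8_stdout()
print("extract_supply_items -> ok")  # ASCII arrow: safe even if reconfigure failed
```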

Tests
-----
TestStdoutEncoding (2 new tests):
- test_app_module_attempts_stdout_reconfigure asserts the
  sys.stdout.reconfigure call is present in app.py source
- test_no_unicode_arrows_in_plex_api_print_statements scans
  plex_api.py for U+2192 and fails the build if any reappear

156 tests pass locally (2 net new).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Final cleanup so the repo state matches reality and the next agent
session has a clean entry point.

README.md
- Full rewrite. Replaces the original generic project description
  with the current verified state:
  * Fusion2Plex on production, real Grace tenant 58f781ba-...
  * Tooling endpoint = inventory/v1/inventory-definitions/supply-items
    (1,109 cutting tools verified)
  * Brother Speedio mapping (workcenter codes 879/880 = FTP IPs)
  * Quick-start instructions: clone, .env.local, py run_dev.py
  * Production safety section explaining the write guard
  * Pointers to BRIEFING.md / Plex_API_Reference.md / TODO.md /
    GitHub Issues
  * Contributing workflow (branch / PR / auto-merge)

BRIEFING.md
- Immediate TODO #3 marked DONE (PR #21 closed issue #2). Lists
  the verified extract_supply_items result: 1,109 records, 30 KB,
  1.4s round trip.
- New "Architectural decisions still pending" callout listing
  issues #4 and #5 as needing user input before any code work.
- New "Session log" section at the bottom — first entry covers
  today's full session: 11 PRs merged, 156 tests passing, what
  changed, what's left for tomorrow, and 5 lessons distilled
  from the session that future-me should not repeat.

CLAUDE.md (NEW)
- Thin entry-point file that tells any AI agent (or new human dev)
  what to read first and in what order:
    1. BRIEFING.md (primary context)
    2. Plex_API_Reference.md
    3. Fusion360_Tool_Library_Reference.md
    4. TODO.md
- Lists hard rules: never read creds from images, never bypass
  the write guard, always run pytest, use claude/<name> branches.
- Quick command reference for the dev loop and PR creation.
- "When in doubt" section pointing at tenant_whoami.

No code changes. 156 pytest tests still pass.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…gnore large dirs (#24)

- Move BRIEFING.md, Plex_API_Reference.md, Fusion360_Tool_Library_Reference.md
  into docs/ (git detects all 3 as renames; history preserved)
- Add docs/validate_library_spec.md — design spec for the pre-sync
  validation gate (implementation to be tracked as a new GitHub issue)
- Update link paths in README.md and CLAUDE.md to match the new location
- Gitignore data/ (Fusion API reference PDFs, ~10 MB), outputs/
  (runtime extractor output), and .claude/worktrees/ (Claude Code scratch)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…spec (#26)

- docs/BRIEFING.md architecture diagram: replace discredited endpoints
  (mdm/v1/parts, tooling/v1/tool-assemblies, production/v1/control/workcenters)
  with the verified paths (inventory/v1/inventory-definitions/supply-items,
  production/v1/production-definitions/workcenters). The diagram now matches
  README and the History §3 postmortem. Also add the validate_library gate
  to the data-flow box, referencing #25.
- docs/BRIEFING.md test count: 119+ → 156
- docs/Plex_API_Reference.md line 5: drop stale `plexonline.com` reference
  (that's the classic UI, not the REST gateway) in favor of connect.plex.com
- docs/Plex_API_Reference.md Section 4 (Target State): rewrite to reference
  the verified supply-items + workcenters definition paths, link to issues
  #3 and #6, and add the validation gate step
- docs/validate_library_spec.md line 5: replace #XX placeholder with the
  real implementation issue #25
- TODO.md Phase 3 item #1: flip checkbox to [x] and note PR #21 closed it

No code changes. 156/156 pytest still green.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ound #25 (#27)

- Intro: add file-layout note pointing new readers at docs/ and flag
  validate_library_spec.md as a required read-before-code file
- Immediate TODO: insert issue #25 (validate_library pre-sync gate) as
  item 4, before the build_supply_item_payload + match/upsert work.
  validate_library is a gate for all write-side work (#3, #7), so it
  has to land first. Renumbers items 5–9 accordingly.
- Session log: add 2026-04-08 entry covering the docs reorg (PR #24),
  drift cleanup (PR #26), and issue #25 opening. Captures three new
  lessons: (1) worktree-vs-main-workspace visibility gotcha, (2) git
  rename detection is automatic at commit time — no git mv needed,
  (3) open issues before writing specs that reference them.

No code changes. 156/156 pytest still green.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Three entry points share one validation engine: CLI, programmatic call
from tool_library_loader, and POST/GET /api/fusion/validate. Implements
every rule in docs/validate_library_spec.md — library-level structure
and duplicate checks plus per-tool required-field, vendor, geometry,
and post-process rules. Supplier lookup is module-cached and gracefully
degrades when the API is unreachable. 59 new pytest cases; 215 total.
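As a flavor of the per-tool rules, a required-field check might look like the sketch below. Field names and the issue-record shape are assumptions; the real engine also covers vendor, geometry, and post-process rules per the spec.

```python
REQUIRED_FIELDS = ("description", "vendor", "product-id", "type")

def validate_tool(tool):
    """Return a list of human-readable issues; an empty list means the
    tool passes this rule. Only the required-field rule is sketched."""
    issues = []
    for field in REQUIRED_FIELDS:
        if not tool.get(field):
            issues.append(f"missing required field: {field}")
    return issues

ok = validate_tool({"description": "1/4 EM", "vendor": "Harvey",
                    "product-id": "54321", "type": "flat end mill"})
# ok == []
```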

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(validate): implement validate_library.py pre-sync gate (#25)

Three entry points share one validation engine: CLI, programmatic call
from tool_library_loader, and POST/GET /api/fusion/validate. Implements
every rule in docs/validate_library_spec.md — library-level structure
and duplicate checks plus per-tool required-field, vendor, geometry,
and post-process rules. Supplier lookup is module-cached and gracefully
degrades when the API is unreachable. 59 new pytest cases; 215 total.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: add Notion pointer + session protocol to BRIEFING and CLAUDE

The Fusion2Plex Notion page was moved and renamed — the new URL is
.../Grace-Engineering-Fusion2Plex-33c3160a3abf81f1aac0e58101952be5.
Record the current URL in docs/BRIEFING.md ("Notion pages" section)
along with the session protocol (read Current State at start, update
it + append to Decision Log at end). CLAUDE.md gets a short pointer
at the end of the read-order list so new sessions see it immediately.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…#30)

Bring the README in line with the current architecture: Fusion JSON
is still the source of truth, but it now lands in Supabase (enriched
store) first and only an identity slice continues on to Plex's
supply-items endpoint. Updates the intro, architecture diagram, and
Status table; adds a Supabase row and a Phase B "validate_library
complete" line; bumps the test count from 156 to 215; adds a
validate_library CLI quick-start block; and updates the docs list
to reflect that validate_library.py is now implemented, not just
specced.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
 #31) (#32)

Stand up the `fusion2plex_*` Supabase layer that normalizes Fusion 360
JSON from 8 vendor library variants into a canonical 3-table store, as
the staging step before `build_supply_item_payload` (#3) pushes to Plex.

Schema (applied to bulletforge via MCP, replaces stale earlier attempt):
  - fusion2plex_libraries      one row per ingested .json file
  - fusion2plex_tools          typed geometry cols, all lengths in mm
  - fusion2plex_cutting_presets FK to tools, ON DELETE CASCADE

Normalization rules implemented in sync_supabase.py (all 8 from spec):
  1. inches -> mm on dimensional geometry (dimensionless fields unchanged)
  2. product_id: strip leading/trailing whitespace only (Sandvik preserved)
  3. preset_guid: strip surrounding curly braces (Sandvik)
  4. vendor casing preserved verbatim
  5. JSON null passthrough on all FLOAT preset fields (Guhring)
  6. type IN ('holder','probe') filtered at ingest
  7. shaft.segments JSONB passthrough; absent -> NULL; [] preserved
  8. post_process.comment via .get (Sandvik omits it)
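Rules 1-3 can be sketched as below. The helper names are illustrative and the sample product_id is hypothetical; the expected mm value matches the HARVEY smoke test (0.062 in -> 1.5748 mm).

```python
MM_PER_INCH = 25.4

def normalize_geometry(value, unit):
    """Rule 1: convert dimensional geometry to mm; mm values pass through."""
    return round(value * MM_PER_INCH, 4) if unit == "inches" else value

def normalize_product_id(raw):
    """Rule 2: strip leading/trailing whitespace only; inner text preserved."""
    return raw.strip()

def normalize_preset_guid(raw):
    """Rule 3: strip surrounding curly braces (Sandvik exports them)."""
    return raw.strip("{}")

normalize_geometry(0.062, "inches")   # -> 1.5748
normalize_product_id("  2P342-0625  ")  # hypothetical id -> "2P342-0625"
normalize_preset_guid("{abc-123}")      # -> "abc-123"
```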

Client layer:
  - supabase_client.py: thin requests-based PostgREST wrapper. Skips
    supabase-py because its transitive pyiceberg dep needs MSVC on
    py3.14/Windows. Same HTTP pattern as plex_api.py.
  - delete() refuses unfiltered calls as a safety guard.
  - SUPABASE_URL / SUPABASE_SERVICE_ROLE_KEY from .env.local via
    bootstrap.py.
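The unfiltered-delete guard is the interesting part of the wrapper; a minimal stdlib sketch (method shape and filter format are assumptions, not the repo's API):

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

class SupabaseClient:
    """Thin PostgREST wrapper sketch in the spirit described above."""

    def __init__(self, url, service_role_key):
        self.base = url.rstrip("/") + "/rest/v1"
        self.headers = {
            "apikey": service_role_key,
            "Authorization": f"Bearer {service_role_key}",
        }

    def delete(self, table, filters=None):
        # Safety guard: an unfiltered DELETE would wipe the whole table.
        if not filters:
            raise ValueError(f"refusing unfiltered delete on {table}")
        req = Request(f"{self.base}/{table}?{urlencode(filters)}",
                      headers=self.headers, method="DELETE")
        return urlopen(req)  # network call; the guard path never reaches it
```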

RLS: service role bypasses (ingest writes); anon deny on libraries,
anon SELECT-only on tools + presets for the future React UI.
Trigger function pinned to stable search_path to clear lint 0011.

Smoke test:
  - scripts/load_sample.py against BROTHER SPEEDIO ALUMINUM.json
  - dry-run confirms 21 tools + 25 presets after filter
  - HARVEY sample: 0.062 in -> 1.5748 mm, v_f_leadIn -> v_f_lead_in

Tests: +47 (203 total, all green). FakeSupabaseClient stubs every
ingest op so no network traffic in CI. Dummy SUPABASE_URL +
SUPABASE_SERVICE_ROLE_KEY added to tests/conftest.py.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Renames the three Supabase tables to drop the fusion2plex_ prefix
in preparation for the dedicated `datum` Supabase project (where
collision avoidance with bulletforge no longer applies):

  fusion2plex_libraries       -> libraries
  fusion2plex_tools           -> tools
  fusion2plex_cutting_presets -> cutting_presets

Coordinated cutover with the Supabase project migration today —
this PR + the new datum project + an .env.local flip land together.

No behavior change. 262 pytest tests still green.
Bundles bulletforge migrations 20260408171007 + 20260408171051 with
the fusion2plex_ prefix dropped, ready to apply via the Supabase SQL
editor against the dedicated `datum` project.

Schema is identical to bulletforge: libraries / tools / cutting_presets,
same columns, same constraints, same RLS shape (libraries deny anon,
tools + cutting_presets anon SELECT, service role bypasses everywhere).
Build out the two Postman collections to cover the full known scope of
the Plex Connect API (verified + probe) and the local Flask harness, and
add docs/Postman_Collections.md as the authoritative day-to-day reference.

- Plex API — Datum: 12 -> 33 requests, organized via [AUTH]/[MDM]/[INV]/
  [PROD]/[PURCH]/[WRITE]/[PROBE] name prefixes (the minimal Postman MCP
  tier doesn't expose folder creation, so prefixes are the workaround).
  Adds per-id GETs, the customers/contacts/buildings/employees readers,
  the supply-items/all and locations reads, the workcenter generic GET,
  the PO filtered template, a DELETE supply-item template, the issue #6
  workcenter PUT placeholder, and 5 [PROBE] entries that re-run the
  unverified namespaces (tooling/manufacturing/quality/sales).
- Fusion 360 Tool Libraries — Datum: 10 -> 14 requests. Adds three new
  validation variants (single file, with live Plex supplier lookup,
  upload-only POST) and the Raw Plex Proxy DELETE template.
- All 22 existing requests renamed to the [NS] convention (preserving
  IDs, bodies, and Postman test scripts via PATCH-style
  updateCollectionRequest — none of the in-collection state was destroyed).
- New docs/Postman_Collections.md (~280 lines): full endpoint catalog
  with verified-vs-probe status, the safe write workflow, the
  collection variable list, the auth pre-request script, the add-new-
  request playbook, and the update protocol.
- Linked from CLAUDE.md (new step 4 in the read order), BRIEFING.md
  (file layout note), and Plex_API_Reference.md (companion banner).

262 tests still pass.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Full non-destructive connectivity verification of every non-write
endpoint in the Plex collection, plus a Get-by-ID chain test that
dumped schemas for all 6 verified resources. Key findings folded
into the repo docs; Postman descriptions and the Notion Plex Data
Model child page are updated in the same session.

Connectivity sweep results (2026-04-09, read-only):
- 23 GETs: 18 returned 200, 5 returned 404, zero 401s, zero errors
- 6 Get-by-ID chain tests: all returned 200, every per-id view returns
  exactly the same fields as the list view (no hidden detail)
- 1 new endpoint discovered: scheduling/v1/jobs (200, ~15.8s response,
  schema TBD -- potential unblock for #5 if it carries tool references)
- 7 additional paths probed, all 404 (purchase-orders-lines, on-hand,
  containers, parts-buckets, assets, assemblies, container-types)
- Filter no-op confirmed: unfiltered and ?updatedAfter filtered PO
  responses were byte-identical at 44.2 MB (downgraded from
  "UNVERIFIED" to "CONFIRMED NO-OP")

Critical architectural finding:
- inventory/v1/inventory-definitions/supply-items has NO cross-references
  to any other resource. Zero supplierId, no locationId, no partId,
  no workcenterId, no operationId. Supply-items are identity-only:
  {category, description, group, id, inventoryUnit, supplyItemNumber,
  type}. You cannot derive a tool's vendor from Plex alone -- Datum
  must keep vendor data in Supabase as the source of truth.
- This is a confirmation, not a course-correction -- the Datum
  architecture already treats Supabase as the vendor authority.
- Also killed the "PO lines as a vendor back-channel" hypothesis:
  purchasing/v1/purchase-orders-lines returns 404.

Fresh record counts captured:
- parts 16,921 (was 16,913 on 2026-04-07, +8)
- suppliers 1,575, customers 109, contacts 299, buildings 4,
  employees 641, inventory-locations 1,270
- supply-items (2,516 total / 1,109 cutting tools) and workcenters (143) unchanged

Files updated:
- docs/Plex_API_Reference.md -- access matrix rewritten with 2026-04-09
  verification layer, full schemas, new "Probed -- returned 404"
  table, new section 3.5 on supply-item cross-refs
- docs/BRIEFING.md -- access matrix refreshed, "Where tooling data
  actually lives" rewritten around the no-FK finding, new session log
  entry capturing the sweep and the stale-shell-key foot-gun
- docs/Postman_Collections.md -- all tables updated with fresh counts,
  new [SCHED] row, expanded [PROBE] table, new section 4.5 cross-ref map

Postman collection (updated in-place via updateCollectionRequest, not
in this diff): 23 request descriptions rewritten with 2026-04-09 dates
and full field schemas; 3 new requests created ([SCHED] List Jobs,
[PROBE] inventory/v1/on-hand, [PROBE] purchasing/v1/purchase-orders-lines).
Plex collection now has 36 requests total.

Notion (updated in-place): Decision Log entry appended to the Datum
project page; new child page "Plex Data Model -- Cross-References"
created with the FK map, probed-404 table, and ASCII diagram;
"Postman Collections -- Datum" child page refreshed with 2026-04-09
state and critical-finding banner. Current State block updated to
list scheduling/v1/jobs deep-dive and #3 as the two next-action
candidates.

262 tests still green (docs-only changes).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Claude Code writes a per-machine permissions cache at
.claude/settings.local.json that previously wasn't gitignored. The file
is different on every developer's machine and regenerated on demand,
so committing it would pollute shared repo state. The existing entry
only covered .claude/worktrees/, so add a specific line for the
settings file alongside it rather than broadening to all of .claude/
(that would also hide anything else we might later track there, e.g.
shared .claude/commands/ slash-command definitions).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
)

Vite + React + TypeScript + Tailwind v4 + shadcn/ui scaffold in web/.
Connects to the Supabase `datum` project via anon key (RLS: read-only
on tools + cutting_presets, deny on libraries).

Pages:
- Tool browser (/) — searchable table with type filter pills
- Tool detail (/tools/:id) — geometry, identity, post-processor,
  cutting presets with Vc/fz/RPM/Vf/coolant
- Libraries (/libraries) — card grid, gracefully handles anon-deny

Also: .gitignore gains node_modules/, launch.json gains web dev server.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
)

* feat(web): React UI scaffold — tool browser, detail view, libraries

Vite + React + TypeScript + Tailwind v4 + shadcn/ui scaffold in web/.
Connects to the Supabase `datum` project via anon key (RLS: read-only
on tools + cutting_presets, deny on libraries).

Pages:
- Tool browser (/) — searchable table with type filter pills
- Tool detail (/tools/:id) — geometry, identity, post-processor,
  cutting presets with Vc/fz/RPM/Vf/coolant
- Libraries (/libraries) — card grid, gracefully handles anon-deny

Also: .gitignore gains node_modules/, launch.json gains web dev server.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: Classic Web Services discovery + scheduling/v1/jobs results

- New doc: docs/Plex_Classic_API_Request.md — pass-along request for
  Classic Web Service credentials (Web Service User + Company Code).
  Classic SOAP API at plexonline.com can access Part Operations, tool
  assignments, DCS attachments, and routing data that the REST API
  does not expose.

- BRIEFING.md updates:
  - scheduling/v1/jobs deep-dive: 114,684 records, 18 fields, zero
    tool/operation/workcenter FKs. Does not unblock #5.
  - New "Plex Classic Web Services" section with endpoint, auth model,
    capability comparison table, and access request status.
  - Issues #4/#5/#6 status updated: REST-blocked, Classic path pending.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: update Classic API request with IAM auth discovery

WSDL endpoint is live (not 404) but redirects to Rockwell IAM login
page — Classic Plex now uses IAM SSO instead of legacy username/password.
Updated the request doc to ask for IAM service account credentials and
guidance on programmatic (non-browser) auth flow.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: update Classic API request with ASMX system error finding

Authenticated browser test of the WSDL endpoint returns a Plex system
error page (not 404, not login redirect). Session is valid but the
ASMX endpoint throws server-side. Updated the request doc to ask
whether Classic Web Services is enabled for Grace's subscription and
what the correct endpoint/auth method is post-IAM migration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* docs: finalize Classic API request with full ASMX test results

Four test angles documented: unauthenticated (login redirect),
authenticated in-tab (system error), session-GUID-prefixed URL
(forced re-login), Classic UI kiosk (address bar locked). ASMX
endpoint exists but is non-functional for Grace. Escalation to
Plex support required.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… (#43)

Add APS (Autodesk Platform Services) OAuth + Data Management client
that reads Fusion 360 tool libraries directly from the XWERKS hub
cloud, eliminating the Fusion/Desktop Connector local install dependency.

- aps_client.py: 3-legged OAuth 2.0, signed S3 downloads, hub traversal,
  file-backed token persistence (.aps_tokens.json, gitignored)
- app.py: 10 new /api/aps/* routes (status, login, callback, hubs,
  projects, folders, cam-tools, libraries, download, sync)
- /api/aps/sync POSTs all 8 cloud libraries into Supabase in one call
  (155 tools, 631 presets verified live)
- 29 new tests covering tokens, OAuth, signed S3, persistence, parsing

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Adds `sync.py` — APS cloud-first, local ADC fallback CLI for scheduled
Fusion tool library sync into Supabase. Validates each library before
write, supports --dry-run and --local flags, clean exit codes for
Task Scheduler / cron. `pyproject.toml` makes the repo pip-installable
with a `datum-sync` console script entry point.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: nightly sync CLI entrypoint + pyproject.toml packaging (#9)

Adds `sync.py` — APS cloud-first, local ADC fallback CLI for scheduled
Fusion tool library sync into Supabase. Validates each library before
write, supports --dry-run and --local flags, clean exit codes for
Task Scheduler / cron. `pyproject.toml` makes the repo pip-installable
with a `datum-sync` console script entry point.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: bootstrap.py walks parent chain to find .env.local (#36)

Worktrees no longer need their own copy of .env.local. The loader
walks up the directory tree from the script's location until it
finds the nearest .env.local, so a single file at the repo root
serves all worktrees underneath it. Explicit path= arg still wins.

6 new tests covering walk-up, closest-ancestor preference, and
explicit-path override.
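The walk-up behavior can be sketched like this (function name and signature are illustrative, not the loader's actual interface):

```python
from pathlib import Path

def find_env_local(start, explicit=None):
    """Walk up from `start` to the filesystem root and return the
    nearest .env.local, so worktrees share the repo-root copy.
    An explicit path argument always wins."""
    if explicit is not None:
        return Path(explicit)
    start = Path(start).resolve()
    for directory in [start, *start.parents]:
        candidate = directory / ".env.local"
        if candidate.is_file():
            return candidate  # closest ancestor wins
    return None
```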

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
grace-shane and others added 26 commits April 10, 2026 16:06
) (#47)

* feat: add --log-file flag to sync.py for persistent nightly logs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: handle empty/corrupt JSON files in tool_library_loader

865 (HAAS VF4SS).json is 0 bytes on disk — json.load() raises
OSError [Errno 22] on Python 3.14/Windows. Now checks st_size
before opening and catches OSError alongside JSONDecodeError.
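The guard described above amounts to something like this sketch (the loader's real name and return convention may differ):

```python
import json
from pathlib import Path

def load_library(path):
    """Return parsed JSON, or None for missing, empty, or corrupt files.
    The st_size check skips 0-byte files up front; OSError is caught
    alongside JSONDecodeError because json.load() on an empty file
    raised OSError [Errno 22] on Python 3.14/Windows."""
    p = Path(path)
    if not p.exists() or p.stat().st_size == 0:
        return None
    try:
        with p.open("r", encoding="utf-8") as f:
            return json.load(f)
    except (json.JSONDecodeError, OSError):
        return None
```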

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat: add --log-file flag to sync.py for persistent nightly logs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: handle empty/corrupt JSON files in tool_library_loader

865 (HAAS VF4SS).json is 0 bytes on disk — json.load() raises
OSError [Errno 22] on Python 3.14/Windows. Now checks st_size
before opening and catches OSError alongside JSONDecodeError.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: reference catalog + geometry-based tool enrichment

New Supabase table `reference_catalog` stores 82k+ vendor catalog
tools (Harvey, Helical, Garr, Guhring, Sandvik, etc.) from hsmtools
downloads. `ingest_reference.py` bulk-loads Fusion JSON catalogs.
`enrich.py` cross-references shop tools missing product_id against
the catalog by (type, DC, NOF) geometry match. Adds `update()` to
SupabaseClient. 13 new tests (328 total).
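The (type, DC, NOF) geometry match can be sketched as below; column names follow the commit message, but the row shape and tolerance are assumptions.

```python
def match_catalog(tool, catalog, tol=1e-6):
    """Find reference-catalog rows sharing (type, DC, NOF) with a shop
    tool that lacks a product_id. DC is compared with a float tolerance;
    type and flute count must match exactly."""
    return [
        row for row in catalog
        if row["type"] == tool["type"]
        and abs(row["DC"] - tool["DC"]) < tol
        and row["NOF"] == tool["NOF"]
    ]

catalog = [
    {"type": "flat end mill", "DC": 6.35, "NOF": 3, "product_id": "H-123"},
    {"type": "flat end mill", "DC": 9.525, "NOF": 3, "product_id": "H-456"},
]
hits = match_catalog({"type": "flat end mill", "DC": 6.35, "NOF": 3}, catalog)
# hits -> the single H-123 row
```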

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Enrichment now runs before validation — tools missing product-id
get matched against the 82k reference catalog by geometry before
the validator sees them. 40/57 missing tools now auto-enriched.
Remaining failures are taps (no catalog), custom form mills, and
duplicate product-id collisions (needs smarter dedup next).

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… rename (#50, #51, #52, #53) (#59)

- #53: Fix empty Libraries tab by adding anon-read RLS policy; library
  cards now link to Tools filtered by library name
- #52: Add TYPE_RENAMES map in sync_supabase.py to rename Fusion's
  "slot mill" to "slitting saw" at ingest time
- #51: Replace tool-type filter pills with a compact dropdown
- #50: Add mm/in toggle on Tools and Tool Detail pages; converts all
  dimensional values (geometry, fz, Vf) in the browser

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…nk, banner (#55, #56, #57, #58, #60) (#61)

- #55: "Grace Engineering" text is now a link to graceeng.com
- #56: All table headers are clickable to sort asc/desc with arrow indicators
- #57: Type filter is now a multi-select; selected types shown as
  dismissible badges with a "clear all" option
- #58: Unit toggle defaults to inches and persists preference in
  localStorage across sessions
- #60: Red banner above the tools table lists tools modified in the
  last 24 hours (up to 5 named, rest counted)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… link fix (#62)

- Type filter: replace ugly native multi-select with custom dropdown
  that has checkboxes; selected types shown as dismissible badges
- Recent banner: now links to /recent page with card for each tool
  modified in the last 24h based on Fusion Hub lastModifiedTime (not
  Supabase updated_at)
- Add source_modified_at column to libraries table (migration 0003),
  populated from APS tip.attributes.lastModifiedTime during sync
- Grace Engineering link corrected to https://www.graceeng.com

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ns (#64)

* feat: Scripts page — generate Fusion 360 script to fix missing descriptions

Adds /scripts page that:
- Identifies tools with empty descriptions (13 of 155)
- Generates descriptions from geometry (e.g. 3/8" BULL NOSE END MILL 3FL 3" OAL)
  using fractional inches for common sizes
- Outputs a ready-to-run Fusion 360 Python script that updates descriptions
  by GUID via adsk.cam.ToolLibraries API
- Copy-to-clipboard button + instructions for Script Manager
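The fractional-inch rendering might look like this sketch (the real generator's rounding and fallback rules are assumptions):

```python
from fractions import Fraction

def frac_inches(dia_in, limit=64):
    """Render a decimal inch diameter as a shop-style fraction (3/8")
    when it reduces cleanly to a denominator <= limit; fall back to
    4-decimal inches otherwise."""
    f = Fraction(dia_in).limit_denominator(limit)
    if abs(float(f) - dia_in) < 1e-4:
        if f.denominator == 1:
            return f'{f.numerator}"'
        return f'{f.numerator}/{f.denominator}"'
    return f'{dia_in:.4f}"'

desc = f"{frac_inches(0.375)} BULL NOSE END MILL 3FL"
# -> '3/8" BULL NOSE END MILL 3FL'
```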

Fixes descriptions at the source — Fusion saves → cloud sync → next
nightly sync picks them up with real descriptions.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: Scripts page — reference catalog lookup for vendor + part# suggestions

- Queries 82k reference catalog by (type, DC, NOF) for each tool
  missing data; shows up to 5 matches sorted by OAL proximity
- Exact OAL matches marked with green asterisk
- Each tool card has editable Vendor + Part # fields, pre-filled from
  best match or defaulted to "MSC"
- Clickable match buttons to switch between suggestions
- Accept/skip checkbox per tool — only accepted tools go into script
- Fusion script now updates description, vendor, AND product-id
- Added anon-read RLS policy on reference_catalog (migration 0004)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- web/public/_redirects: /* → /index.html for react-router deep links
- tsconfig.app.json: ignoreDeprecations 6.0 to silence TS 7 baseUrl warning

Prep for datum.graceops.dev deploy via Cloudflare Pages GitHub integration.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
First deploy at datum.shanewaid.workers.dev served the legacy Flask
templates/index.html because wrangler had no config and fell back to
repo-root auto-detection. This adds the explicit config.

- wrangler.jsonc at repo root — assets directory web/dist,
  not_found_handling: single-page-application for react-router
- Remove web/public/_redirects (redundant with wrangler's SPA handling)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ll/deselect all (#65, #66) (#72)

- #65: <title>web</title> → <title>Datum</title>; replaced Vite default
  favicon.svg with a 🎯 emoji SVG (bullseye = datum reference point).
- #66: Select all / Deselect all buttons above the tool card list on
  the Scripts page.

Closes #65
Closes #66
- Added `relativeTime()` helper in `lib/utils.ts` — "5 min ago",
  "2 hours ago", "3 days ago", etc.
- LibrariesPage: prominent "Last sync: X ago" indicator next to the
  page title, computed as max(ingested_at) across all libraries.
- Per-card "Ingested" label renamed to "Last synced" with relative
  time; full timestamp on hover via title attribute.

Scoped to successful-sync visibility per the issue. Failure tracking
is deferred (needs a new sync_runs table; can be a follow-up).

Closes #71
…) (#77)

Migration 0005 adds five columns to public.tools backing the inventory
display work for #49 / #75:

- plex_linked_by   (manual | writeback | sync)
- plex_linked_at
- qty_on_hand      (running balance from item-adjustments)
- qty_tracked      (distinguishes 'linked but no history' from 'unknown')
- qty_synced_at

Plus a partial index on plex_supply_item_id for reverse lookups, and
column COMMENTs documenting intent in the DB.

Supersedes the separate tool_plex_links table originally planned in
 #74 — tools already carries plex_supply_item_id from 0001, so linkage
is a property of the tool rather than a separate entity. Flattening
avoids a join on every ToolsPage load. #74 closed as superseded.

Also gitignore scratch/ to match the ad-hoc probe workflow (probe
scripts for #49 currently live there).

Applied to datum Supabase project via MCP apply_migration; verified
all columns + check constraint + comments landed correctly.
328 tests green.

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(sync): tool inventory qty sync (Plex -> Supabase) — #75

Probed item-adjustments across all 1,109 Grace tools (2026-04-15):
quantity is delivered pre-signed by Plex, so sync sums it directly
with no transactionType sign-table lookup. Known transactionTypes:
PO Receipt, Checkout, Correction, Check In (+ 1 null). 31/1,109
tools have adjustment history.

- sync_tool_inventory.py: reads tools.plex_supply_item_id, calls
  inventory/v1-beta1/inventory-history/item-adjustments per tool,
  writes qty_on_hand/qty_tracked/qty_synced_at. --dry-run, logging,
  exit codes 0/1/2. Unknown transactionType values are flagged as
  warnings (still summed — pre-signed contract).
- docs/Plex_API_Reference.md: new §3.6 documenting the endpoint,
  the sign contract, and the enumerated transactionType table.
  Added three new rows (item-adjustments, inventory-tracking/
  containers, container-adjustments) to the verified-endpoints table.
- pyproject.toml: datum-sync-inventory console script.
- 27 new pytest cases (355 total green).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
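
The pre-signed summing contract described above can be sketched as follows. This is a minimal illustration only; the function name and everything beyond the `quantity` / `transactionType` fields are assumptions, not the repo's actual sync_tool_inventory.py API:

```python
# Sketch of the pre-signed summing rule: Plex delivers `quantity`
# already signed, so the sync sums it directly with no sign-table
# lookup. Unknown transactionType values are warned about but still
# summed, per the contract above.
import logging

KNOWN_TYPES = {"PO Receipt", "Checkout", "Correction", "Check In", None}

def sum_adjustments(adjustments: list[dict]) -> float:
    """Return the running on-hand balance for one tool's adjustment history."""
    total = 0.0
    for adj in adjustments:
        t = adj.get("transactionType")
        if t not in KNOWN_TYPES:
            logging.warning("unknown transactionType %r — summing anyway", t)
        total += adj.get("quantity", 0.0)  # pre-signed: no sign flip here
    return total
```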

* feat(db): add plex_supply_items staging table (0006)

Staging layer mirroring the 6-field Plex supply-item POST payload
shape, joined 1:1 to tools via fusion_guid. plex_id is NULL until
#3 writeback captures the Plex-assigned UUID on POST.

Applied via Supabase MCP 2026-04-15. Follow-ups tracked in #79
(populate from tools), #80 (keep in sync), #81 (UI review).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…81/#67) (#84)

* feat(sync): populate_plex_supply_items staging module (#79)

Python module that reads tools from Supabase, computes the 6-field Plex
supply-item payload, and upserts into plex_supply_items staging table.
No Plex HTTP calls — pure Fusion → Supabase staging.

- build_supply_item_row(): maps tools.description, product_id, type → 3 derived columns
- tool_type_to_group(): type → Plex group mapping (default "Machining")
- populate_supply_items(): batch read/compute/upsert with skip/fail tracking
- CLI with --dry-run, -v, --log-file; exit codes 0/1/2
- 23 new tests (pure helpers, integration, CLI)
- Console script: datum-populate-supply-items

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(ui): On-hand qty column + inventory status filter (#76)

Add qty_on_hand display to ToolsPage and ToolDetailPage with three
states: count (X pcs), Not tracked (linked but no history), and dash
(not linked). Includes:

- Sortable On hand column with NULL-last ordering
- Inventory status multi-select filter (In stock / Out of stock /
  Not tracked / Not linked) persisted to localStorage
- Qty card on ToolDetailPage with relative-time sync stamp
- Tool type updated with qty_on_hand, qty_tracked, qty_synced_at fields

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
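
The NULL-last ordering rule above is implemented in TypeScript on ToolsPage; a framework-free Python sketch of the same rule (helper name hypothetical):

```python
# NULL-last numeric ordering: rows with a value sort normally in either
# direction; rows with None always trail, regardless of direction.
def sort_null_last(rows: list[dict], key: str, descending: bool = False) -> list[dict]:
    present = sorted((r for r in rows if r[key] is not None),
                     key=lambda r: r[key], reverse=descending)
    missing = [r for r in rows if r[key] is None]
    return present + missing
```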

* feat(sync): post-sync hook populates plex_supply_items staging (#80)

After nightly Fusion ingest completes, sync.py now calls
populate_supply_items() to refresh the plex_supply_items staging table.
Hook is non-fatal — failures are logged but don't change the sync exit
code. Skipped on --dry-run and when no libraries succeeded.

- 4 new tests covering: hook fires, dry-run skips, no-success skips,
  failure is non-fatal
- Updated existing CLI tests to mock the new hook imports

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
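
The non-fatal hook pattern above can be sketched as follows. The function shape and return values are illustrative assumptions (the real hook lives in sync.py and calls populate_supply_items() directly); the populate callable is injected here so the pattern is testable:

```python
# Sketch of a non-fatal post-sync hook: skipped on dry-run and when no
# libraries succeeded; a failure is logged but never propagated, so the
# sync exit code is unchanged.
import logging

def run_post_sync_hook(populate, dry_run: bool, any_library_succeeded: bool) -> str:
    if dry_run or not any_library_succeeded:
        return "skipped"
    try:
        populate()
        return "ok"
    except Exception:
        logging.exception("staging refresh failed (non-fatal)")
        return "failed-nonfatal"
```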

* feat(ui): Plex staging payload card on ToolDetailPage (#81)

Show the computed plex_supply_items row on the tool detail page so a
human can eyeball the payload before #3 writeback goes live. Card shows
all 6 payload fields (category, group, description, supply item #,
inventory unit, type) plus a Posted/Not posted badge and Plex UUID
when available.

- Added PlexSupplyItem type
- Fetches plex_supply_items by fusion_guid on detail page load
- Card only renders when staging data exists

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(ui): Build a Library page — construct Fusion libs from reference catalog (#67)

New /build-library page that lets users search the 85k reference catalog,
select tools into a cart, and export as a valid Fusion 360 .json library.

- Server-side Supabase queries with pagination (50 per page)
- Vendor and type multi-select filters, text search with debounce
- Cart system: click rows to toggle, "Add all visible", "Clear all"
- Export generates valid Fusion JSON with geometry, post-process,
  and start-values fields; converts mm back to inches when original
  unit was inches
- Added ReferenceRow type, route, and nav link

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
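
The export-time unit round-trip above reduces to a one-line rule; a toy sketch (helper name and unit-string value are hypothetical):

```python
# Values are stored in mm; convert back to inches at export time only
# when the source library's original unit was inches.
MM_PER_INCH = 25.4

def export_value(value_mm: float, original_unit: str) -> float:
    return value_mm / MM_PER_INCH if original_unit == "inches" else value_mm
```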

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
#87)

Fixes content drift in the long-form docs after a stretch of fast-moving
sprints (APS integration, React UI, nightly deploy, Plex staging pipeline,
validate_library gate) landed but the docs still read as if those things
were future work.

- docs/validate_library_spec.md — header now reflects landed status (PR
  #28, 2026-04-08) and points at the correct repo (grace-shane/Datum,
  not the upstream plex-api fork).
- docs/BRIEFING.md — architecture diagram rewritten: APS as primary
  source, ADC as fallback; Supabase staging + enrichment + React UI
  now explicit. "What's built" gains aps_client, validate_library,
  supabase_client/sync_supabase, enrich, Plex staging, nightly deploy,
  React UI. Test count 156 → 262. "Immediate TODO" reorganised into
  Done / Active / Blocked-on-Classic-API; #25 moved to Done; GCP
  migration (#85) added as an active stream.
- TODO.md — Phase 3 #3/#4/#5/#6 descriptions updated to match current
  reality (blocked on Classic Web Services). Phase 5 #9/#10/#11/#12 all
  checked off. New "Built beyond the original roadmap" section captures
  Supabase staging, APS, React UI, enrichment, Plex staging pipeline,
  qty sync, Classic Web Services discovery. New Phase 6 section mirrors
  the GCP migration umbrella (#85).
- CLAUDE.md — removed "A scheduled deploy yet (Phase 5 work, issues
  #9-#11)" from "Things this repo does NOT have" since that work
  shipped in PRs #44 and #47.

No code changes, no behavior changes, docs-only.

Co-authored-by: Shane Waid <shane.waid32@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* feat(infra): GCP provisioning scripts + migration and reorg plans

scripts/gcp/ — idempotent bash provisioning for the Datum GCP footprint:
custom-mode VPC (scalable CIDRs, reserved secondary ranges for future GKE,
Cloud NAT for private egress), Private Service Connection for Cloud SQL,
per-VM service accounts with least-privilege IAM, Secret Manager slots
(aps-refresh-token runtime-only), Cloud SQL db-f1-micro Postgres 15 on
private IP, and two VMs (e2-micro datum-runtime always-on, e2-standard-2
datum-dev on-schedule) on Ubuntu 24.04 LTS with IAP-only SSH. env.sh
parameterises the one file that changes between accounts. 99-teardown.sh
deletes everything in reverse dependency order with project-ID confirmation.

docs/GCP_MIGRATION.md — umbrella plan (#85): architecture, VM topology,
service mapping, Secret Manager layout + per-SA IAM split, affected-code
map, migration sequence, open questions.

docs/REORG_AND_STACK.md — pre-migration cleanup plan: scope boundaries
(React UI is debug + show-and-tell, no mobile), datum/ package layout,
SQLAlchemy 2.0 + psycopg3 swap as the highest-leverage stack change,
sequencing that leaves Plex writes for last.

Related: #85 (GCP migration umbrella), #25 (validate_library — already
landed, plan notes it is migration-compatible as-is).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(infra): wait for SA propagation before IAM bindings in 04

Creation of a service account returns immediately, but the SA is not
always visible to add-iam-policy-binding for ~30s. Binding too soon
surfaces as "Policy modification failed" with a misleading
lint-condition hint — confusing on a fresh project.

Moved the propagation sleep from end-of-04 to between SA creation and
the binding loop where the race actually is. Bumped 10s → 30s to cover
the observed window with margin.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Shane Waid <shane.waid32@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Adds scripts/gcp/10-populate-secrets.sh for interactive population of
Secret Manager slots from an auth'd machine (Shane's Legion). Skips
slots that already have a version, skips aps-refresh-token always (the
runtime SA rotates it), and tolerates per-secret failures so one bad
gcloud call does not abort the rest of the run.

Adds docs/NEXT_SESSION.md with canned prompts for the datum-dev Claude
session: Cloud Scheduler start/stop and the Supabase → Cloud SQL
migration. Linked from GCP_MIGRATION.md so the boot chain picks it up.

Co-authored-by: Shane Waid <shane.waid32@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
scripts/gcp/08-scheduler.sh creates two idempotent Cloud Scheduler jobs
that start datum-dev weekdays at 07:00 CT and stop it at 19:00 CT.
HTTP target against compute.googleapis.com with OAuth using the runtime
SA. IAM binding is scoped to the datum-dev instance only (not project-
level), so the scheduler identity can still only touch that one VM.

docs/GCP_MIGRATION.md: new "Cost" section at the bottom with the
compute-hour math (730h → ~260h, ~$50/mo → ~$15/mo). Updated the two
in-line references that previously said "5pm CT" to 19:00 CT so the
architecture diagram and VM topology table match the schedule the
script actually creates.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
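
Back-of-envelope check of the compute-hour figures above (assumed inputs: 260 working days per year, a 12-hour 07:00-19:00 weekday window, and an average month as the always-on baseline):

```python
# Reproduce the ~730h -> ~260h compute-hour math from the Cost section.
always_on_hours = 24 * 365 / 12         # hours in an average month, ~730
weekdays_per_month = 260 / 12           # ~21.7 working days on average
scheduled_hours = 12 * weekdays_per_month  # 07:00-19:00 CT, weekdays only
print(round(always_on_hours), round(scheduled_hours))  # 730 260
```
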
Migration 0006 enabled RLS on plex_supply_items but shipped no policy,
so the anon browser client at ToolDetailPage.tsx:109-113 silently
returns nothing when loading the Plex Staging Payload card.

Adds the matching anon SELECT policy. Data exposed is already
derivable from the anon-readable tools rows, so no new surface.

Closes the remaining acceptance criterion on #81.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(ui): bump main tool table corner radius (#83)

web/src/pages/ToolsPage.tsx:456 — switch the tools-table wrapper from
`rounded-md overflow-x-auto` to `rounded-lg overflow-hidden`:

- `rounded-lg` (8px) matches the button and input radius elsewhere in
  the design system, so the table reads as a single contained panel
  rather than a borderline square frame.
- `overflow-hidden` lets the rounded corners actually clip child
  content (header-row border, first/last-row hover highlights, etc.).
  The shadcn `Table` primitive already provides its own inner
  `overflow-x-auto` scroll container, so horizontal scrolling is
  unaffected.

Closes #83.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(todo): Plex writes ship last, behind Plex-mimic mock (#92)

Phase 3 now carries an explicit ordering rule: every real write to
connect.plex.com ships last and is blocked on #92 (the Plex-mimic
mock HTTP server). #3 and #6 bullets are updated with the new
blocker and the current prereq status:

- #3: all Supabase-side prereqs landed (#79 / #80 / #81 closed via
  PRs #82 / #84 / #90); only the HTTP POST remains, deferred behind
  the mimic.
- #6: GET workcenter is verified. PUT/PATCH investigation happens
  against the mimic first, not the live tenant.

Rationale lives in the new Claude memory entry plex_writes_last.md
and in blocker comments on #3 / #6.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(ui): add Corner R column to ToolsPage before On hand

web/src/pages/ToolsPage.tsx — surface tools.geo_re on the main tool
table, positioned between Flutes and On hand.

- Header "Corner R ({dimUnit})" — terse like Dia / OAL; full name
  "Corner radius (RE)" already lives on ToolDetailPage.tsx:261
- Cell uses the same `fmt()` path as Dia / OAL so the mm ↔ in unit
  toggle applies (4 decimals imperial, 2 mm)
- Sort wired in via the existing numeric case (NULL sorts last
  regardless of direction, matching the rest of the numeric columns)
- Empty-state colSpan bumped 9 → 10

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Ten-task TDD plan landing the local Plex-mock that blocks #3 / #6:

1. PLEX_BASE_URL override in plex_api.py (additive, lands first)
2. tools/plex_mock/ package scaffold + .gitignore
3. SQLite capture store
4. Snapshot capture CLI + initial canned GETs
5. Flask server — GET handlers (snapshot-served)
6. POST/PUT/PATCH capture handlers
7. End-to-end rehearsal (datum-sync against the mock)
8. Diff CLI with payload-shape fixture
9. Console scripts, README, Plex_API_Reference update
10. systemd unit + datum-runtime deploy doc

Each task is bite-sized (2-5 min steps), TDD where applicable,
concrete before/after code in every step. Execution mode (subagent-
driven vs inline) picked separately.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…#92) (#96)

* feat(plex-api): PLEX_BASE_URL override + base_url client kwarg (#92)

* fix(plex-api): strip base_url kwarg; guard env tests against ambient PLEX_BASE_URL (#92)

* feat(plex-mock): package scaffold for Plex-mimic mock (#92)

* feat(plex-mock): SQLite capture store (#92)

* fix(plex-mock): narrow CaptureStore.append return type with assert (#92)

* feat(plex-mock): snapshot capture CLI — script only; snapshots to follow on creds-having VM (#92)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(plex-mock): Flask server + GET snapshot handlers (#92)

* fix(plex-mock): JSON error context, threaded server, stateless contract comment (#92)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(plex-mock): POST/PUT/PATCH capture handlers (#92)

* fix(plex-mock): guard 409 before capture so failed POSTs aren't persisted (#92)

* feat(plex-mock): capture-diff CLI with payload-shape fixture (#92)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix(plex-mock): diff CLI reports checked-row count; exit 3 on zero rows (#92)

* docs(plex-mock): console scripts, README, PLEX_BASE_URL reference (#92)

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat(plex-mock): systemd unit + datum-runtime deploy doc (#92)

* fix(plex-mock): restore Task 1 + Task 2 changes clobbered by b98b4d7

The Task 6 fix commit (b98b4d7) unintentionally reverted:
- plex_api.py — the PLEX_BASE_URL override + base_url kwarg from Task 1
- tests/test_plex_api.py — the 8 new tests covering those (including the
  autouse fixture and empty-base_url edge case)
- .gitignore — the 5 lines covering tools/plex_mock/captures/ and *.db

Restored via `git checkout 38cf0ce -- plex_api.py tests/test_plex_api.py`
and `git checkout 8b36517 -- .gitignore`. Nothing after b98b4d7 on this
branch had touched these files, so the restoration is clean.

Test count goes 416 → 424 (the 8 clobbered test_plex_api.py tests).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
…, docs (#97)

* fix(plex-mock): surface malformed bodies + missing snapshots as errors

Two silent-failure classes the #96 review surfaced:

- Write handlers used `request.get_json(silent=True) or {}` — any
  JSON parse error, wrong Content-Type, or non-object body (array,
  scalar) was captured as `{}` and returned 201/200. The diff CLI
  then reported generic "missing fields" drift instead of flagging
  that the sync had sent something unparseable; the old behavior
  defeated the whole point of the mock as a safety layer. Strict
  parsing now rejects with 400 + actionable detail, and the capture
  store stays empty.
- `_load_snapshot` returned `[]` when the snapshot file was missing.
  The mock would boot and silently serve empty supply-items and
  workcenters lists — every PUT 404s, every diff reports 0 rows
  CLEAN without explaining why. Now raises FileNotFoundError at
  `create_app` time with a pointer to `capture_snapshots`.

+9 tests: malformed JSON / array / scalar rejected on POST, PUT,
workcenter PUT/PATCH; 404 ordering preserved on PUT + malformed body;
missing-snapshot error message contains the filename and remediation.

Test count: 424 → 433.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
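
The strict-body rule reduces to a framework-free sketch (the function name is hypothetical; the real handlers run inside Flask and return 400 where this raises ValueError):

```python
# Strict write-body parsing: anything that is not a JSON object is
# rejected with actionable detail instead of being swallowed as {}.
import json

def parse_write_body(raw: bytes) -> dict:
    try:
        body = json.loads(raw)
    except json.JSONDecodeError as e:
        raise ValueError(f"malformed JSON body: {e}") from e
    if not isinstance(body, dict):
        raise ValueError(f"JSON body must be an object, got {type(body).__name__}")
    return body
```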

* fix(app): harden production write guard against PLEX_BASE_URL override

PR #96 added PLEX_BASE_URL so the sync can point at the Plex-mimic
mock. Side effect: the substring heuristic

    IS_PRODUCTION = "test." not in client.base

became operator-controllable. Any PLEX_BASE_URL containing "test."
silently flips IS_PRODUCTION to False and disarms the /api/plex/raw
write guard even when pointing at real Plex through a proxy or CDN.

Fix: exact match against plex_api.BASE_URL (case- and trailing-slash-
insensitive). Mock URLs, test.connect.plex.com, and any unrecognised
endpoint now fail closed — IS_PRODUCTION is True only for the actual
production URL. CLAUDE.md "Never bypass the production write guard"
hard rule now has teeth that an env var can't blunt.

+7 tests covering prod match, test env, mock URL, "test." in hostname,
suffix-trick adversarial URLs, empty string, and uppercase prod.

Test count: 433 → 440.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
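
The hardened guard amounts to one exact comparison; a sketch under the assumption that `PROD_BASE_URL` stands in for plex_api.BASE_URL:

```python
# Fail-closed production detection: exact match against the known
# production base URL, case- and trailing-slash-insensitive. Substring
# tricks ("test." in the hostname, suffix domains) no longer disarm it.
PROD_BASE_URL = "https://connect.plex.com"  # assumed stand-in value

def is_production(base_url: str) -> bool:
    return base_url.rstrip("/").lower() == PROD_BASE_URL.rstrip("/").lower()
```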

* docs(plex-mock): drop reference to unshipped REHEARSAL_NOTES.md

Validation-window protocol step 3 required rehearsal notes in
`tools/plex_mock/REHEARSAL_NOTES.md` — a file nobody shipped with
PR #96. Either the gate was vacuously satisfiable (everyone skips
a step that points at a missing file) or permanently blocking (you
can never enable real writes until someone writes the magic file).
Neither is useful.

Reword: the rehearsal log lives in the PR description of whatever
PR flips writes on. Also tightens step 1 from "identical capture
sets" (not operationalised — the diff CLI only compares against a
fixture, not cross-run) to "matching row counts" (the diff output
prints that), which is something a reviewer can actually verify.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(plex-mock): systemd deploy prereqs; drop unused EnvironmentFile

Two issues the #96 review caught in the systemd deploy walkthrough:

- README assumed a `datum` user, `/opt/datum`, and a venv with the
  console script installed — none of which docs/GCP_MIGRATION.md
  establishes. A reader following the walkthrough would hit
  `chown datum:datum` (no user), then `systemctl start` (no venv).
  Add a one-time prereqs block: useradd, directories, repo clone,
  venv bootstrap, snapshot capture.
- The unit loaded `EnvironmentFile=/opt/datum/.env.local`, but the
  mock binary has zero Plex env var dependencies (serves local
  snapshots, writes to own SQLite). Loading .env.local needlessly
  exposed real Plex credentials to a process that doesn't need
  them. Dropped; snapshot capture on a creds-having host stays the
  only place that needs the credentials.

No code changes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Shane Waid <shane.waid32@gmail.com>
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>