From 6cc8cc8b0f31e55e1f11c30c1188bf25b02945b0 Mon Sep 17 00:00:00 2001 From: grace-shane Date: Tue, 7 Apr 2026 12:18:37 -0400 Subject: [PATCH 01/56] Endpoint tester UI, tenant diagnostics, env-var credentials, GH issue tracking (#13) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Implement web dashboard for tool library display * fix: PlexClient takes api_secret, env-var credentials, G5 test env Addresses item 1 from BRIEFING.md: - PlexClient.__init__ now accepts api_secret and sets the X-Plex-Connect-Api-Secret header when provided - API_KEY and API_SECRET are read from PLEX_API_KEY and PLEX_API_SECRET environment variables (no more hardcoded key) - TENANT_ID switched to G5 (b406c8c4-...) — the tenant we actually have access to; Grace UUID kept inline as a comment - USE_TEST flipped to True — all dev work goes against test.connect.plex.com per BRIEFING - __main__ hard-fails with a clear message if either credential env var is missing NOTE: the previously committed key (k3SmLW3y...) is still in git history and should be rotated in the Plex Developer Portal before production deployment. Co-Authored-By: Claude Opus 4.6 (1M context) * feat: rewrite UI as minimal endpoint tester Replaces the gradient/glass dashboard with a flat, neutral endpoint tester in the spirit of Postman/Insomnia. The old UI was decorative; this one is functional. Backend (app.py) - New /api/plex/raw proxy route: forwards an arbitrary path and HTTP method to Plex via the authenticated PlexClient so the browser can test any endpoint without ever seeing credentials. Returns http_status, elapsed_ms, size_bytes, headers, and body in a single envelope. - New /api/config route exposing non-secret config (base URL, environment, tenant, credential presence) to the UI. - app.py picks up API_SECRET and passes it to PlexClient. 
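For reference, the conditional-header behaviour can be sketched as a standalone function (illustrative; the real logic lives in PlexClient.__init__):

```python
def build_plex_headers(api_key: str, api_secret: str = "", tenant_id: str = "") -> dict:
    """Mirror of the PlexClient header logic: the key is always sent,
    the secret and tenant headers only when provided."""
    headers = {
        "X-Plex-Connect-Api-Key": api_key,
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    if api_secret:
        headers["X-Plex-Connect-Api-Secret"] = api_secret
    if tenant_id:
        headers["X-Plex-Connect-Tenant-Id"] = tenant_id
    return headers
```

Omitting tenant_id reproduces the "defaults to G5" routing described in BRIEFING.md.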
Frontend (templates/, static/) - New layout: left rail with preset endpoints + history, main area with method selector + URL bar + query row + tabbed response pane (Body / Headers / Raw). - Status strip shows HTTP status pill, elapsed ms, response size, method, and path. - Ctrl/Cmd+Enter sends. Copy and Clear buttons. - In-memory history (last 20), click to restore any response. - Preset endpoints for mdm/v1/parts, suppliers, tenants, purchase-orders, workcenters, and the blocked tooling/* endpoints (tagged 403). - JSON body is syntax-highlighted via a small regex pass. - Fusion 360 local loader and file/folder upload preserved. Design principles - Zero gradients, zero backdrop-filter, zero box-shadow glows, zero hover transforms, zero pulse animations, zero emoji. - Single solid accent (#3b82f6) used only for the Send button and focus rings. Semantic color only for status (green / amber / red pills). - 4px radii, 1px borders, flat panels. - System font stack for UI, ui-monospace for code. Co-Authored-By: Claude Opus 4.6 (1M context) * docs: link TODO.md items to their GitHub Issues Each unchecked item now points to its tracking issue (#1-#12). Adds a note at the top pointing at the live issues list. Also adds #12 (rotate exposed API key before production deployment). Co-Authored-By: Claude Opus 4.6 (1M context) * fix: clarify tenant routing is the only IT blocker Earlier docs incorrectly attributed the 403s on tooling/v1/* endpoints to a missing API collection subscription. Per the actual Grace Engineering situation, the only open IT blocker is tenant routing — credentials currently land on G5 (another company's read-only data) regardless of the X-Plex-Connect-Tenant-Id header. The 403s are now documented as a WORKING HYPOTHESIS: suspected to be tenant-scoping, will resolve when tenant routing lands. Cannot be verified from G5 because we have no authority to write there. 
Changes: - Plex_API_Reference.md: rewrite the "Blocked Endpoints" callout to point at tenant routing, mark the 403 resolution as a hypothesis - TODO.md: update the Phase 3 BLOCKED line to point at tenant routing instead of collection subscription - templates/index.html: drop the "403" tags on tooling/v1/* presets in the UI rail — the cause is transient tenant scoping, not inherent to the endpoints GitHub issues #1-#6 have been updated to match: - #1 retitled: tenant routing (was: enable API collections) - #2: removed blocker framing (read path works on G5 today) - #3, #6: added "blocked" label (write path blocked on tenant) - #4, #5: blocker text rewritten to tenant-scoping hypothesis Co-Authored-By: Claude Opus 4.6 (1M context) * docs: track BRIEFING.md in git The agent context briefing now lives in source control rather than as an untracked local file. Keeps it visible to the team and ensures it stays in sync with code changes via PR review. Note: a few sections in BRIEFING.md are now stale relative to work completed in this branch (e.g. "PlexClient missing api_secret" under What's built, the subscription-vs-tenant attribution under Gotchas). These will be cleaned up in a follow-up. Co-Authored-By: Claude Opus 4.6 (1M context) * feat: tenant diagnostic suite (whoami, list, get) Adds a small read-only test suite that verifies which Plex tenant our credentials are actually scoped to. This is the baseline check we should run first whenever the connection state is in question — and the visible "is the right tenant connected?" indicator until IT completes the routing change for Grace Engineering. New: plex_diagnostics.py - list_tenants(client) — GET /mdm/v1/tenants - get_tenant(client, id) — GET /mdm/v1/tenants/{id} - tenant_whoami(client, id) — composite check that calls the two endpoints above and compares the visible tenants against the known Grace and G5 UUIDs, returning a structured report with a clear `match` enum and a one-line `summary`. 
- KNOWN_TENANTS dict + GRACE_TENANT_ID / G5_TENANT_ID constants
  (tenant IDs are not secrets — safe to commit).
- Standalone __main__ runs the suite and pretty-prints the report.
  Reconfigures stdout to UTF-8 first so em-dashes don't blow up on a
  Windows cp1252 console.

New routes in app.py
- GET /api/diagnostics/tenant → tenant_whoami
- GET /api/diagnostics/tenants/list → raw list_tenants
- GET /api/diagnostics/tenants/<id> → raw get_tenant by ID

UI (templates/index.html)
- New "Diagnostics" section in the rail, placed first so it's the most
  prominent section. Two preset buttons: tenant_whoami and list_tenants.
  The whoami response renders in the existing body pane via the rewrite
  from d449db1 — its `summary` field is the human-readable status line.

Logic branches verified against fake clients: g5, grace, no_data (auth
fail), wrapped dict.data response, unknown tenant, empty list. All six
pass.

Co-Authored-By: Claude Opus 4.6 (1M context)

* docs(briefing): clean stale sections, add diagnostics and UI

Updates BRIEFING.md to reflect the state of the branch:

What's built section
- Removed "NEEDS UPDATE: PlexClient missing api_secret" — done
- Added env-var credential rule
- Added a new plex_diagnostics.py section
- Added a new app.py section describing the endpoint tester UI

Immediate TODO section
- Marked item 1 (PlexClient constructor fix) as DONE
- Renumbered remaining items and cross-referenced them to GH Issues
  (#2, #3, #4, #7, #8)
- Added a note pointing readers at the live issue tracker

Gotchas section
- Removed "PlexClient missing api_secret" gotcha — fixed
- Added env-var requirement gotcha (hard-fail behaviour)
- Added pointer to issue #12 about rotating the historical key
- Replaced the wrong "tooling 403 = subscription issue" attribution with
  the correct tenant-scoping hypothesis (issue #1)

Co-Authored-By: Claude Opus 4.6 (1M context)

* test: pytest suite covering PlexClient, diagnostics, loader, routes

65 tests, 0 network calls, runs in <1s. All green locally.
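For illustration, the core comparison behind tenant_whoami reduces to a membership check against the known UUIDs. A simplified sketch (the helper name and summary strings are illustrative; the match values follow the branches listed above):

```python
GRACE_TENANT_ID = "a6af9c99-bce5-4938-a007-364dc5603d08"
G5_TENANT_ID = "b406c8c4-cef0-4d62-862c-1758b702cd02"

def classify_tenants(visible_ids: list[str]) -> dict:
    """Illustrative core of the tenant_whoami comparison: map the
    visible tenant IDs to a match enum plus a one-line summary."""
    if not visible_ids:
        return {"match": "no_data", "summary": "No tenants visible; check credentials."}
    if GRACE_TENANT_ID in visible_ids:
        return {"match": "grace", "summary": "Grace Engineering tenant visible; routing looks correct."}
    if G5_TENANT_ID in visible_ids:
        return {"match": "g5", "summary": "Still scoped to G5 (read-only); tenant routing not yet applied."}
    return {"match": "unknown", "summary": "Visible tenants match neither Grace nor G5."}
```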
New files
- requirements.txt        flask + requests (runtime deps)
- requirements-dev.txt    pulls requirements.txt + pytest
- pytest.ini              test discovery + verbose output
- tests/__init__.py
- tests/conftest.py       injects dummy PLEX_API_KEY/SECRET BEFORE any
  module-level reads in app.py / plex_api.py (otherwise the import-time
  guard would fail test collection). Provides a FakePlexClient fixture
  that records calls and returns canned responses without ever touching
  the network.
- tests/test_plex_api.py              16 tests
- tests/test_plex_diagnostics.py      21 tests
- tests/test_tool_library_loader.py   16 tests
- tests/test_app_routes.py            12 tests

Coverage highlights
- PlexClient header construction — locks in the BRIEFING item 1 fix:
  api_secret is included as X-Plex-Connect-Api-Secret only when provided,
  tenant header only when provided, all three present when full
  credentials passed. Test/prod URL switch verified.
- tenant_whoami composite check — all 6 logic branches (grace, g5,
  configured/unknown, other, no_data, empty list) plus the response
  shape variants Plex might return (bare list, {data:[...]},
  {items:[...]}, {rows:[...]}, single object).
- tool_library_loader — happy path, malformed JSON, missing data key,
  data is not a list, stale file (mtime backdated past 25h limit),
  custom max_age window, abort_on_stale=True aborts the whole run vs
  abort_on_stale=False skips stale and continues, empty directory,
  missing directory, no .json files.
- Flask routes — index, /api/config envelope, all three diagnostics
  routes mocked through patched module-level functions, /api/plex/raw
  proxy (missing path returns 400, GET forwards with auth headers, query
  params except 'path' are forwarded, 4xx propagates as envelope
  status='error'), /api/plex/discover wired to discover_all.

No tests hit the real Plex API. Everything is mocked at the module
boundary or routed through FakePlexClient.
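The FakePlexClient pattern is a plain recording stub. A minimal sketch of the idea (the fixture's actual interface may differ):

```python
class FakePlexClient:
    """Stands in for PlexClient in tests: records every call and
    returns canned responses. No network, no credentials."""

    def __init__(self, canned=None):
        self.calls = []            # (path, params) tuples, in call order
        self.canned = canned or {}

    def get(self, path, params=None):
        self.calls.append((path, params))
        return self.canned.get(path, [])
```

A test can then assert on both the data returned and the exact paths hit, e.g. `fake.calls == [("mdm/v1/tenants", None)]`.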
Co-Authored-By: Claude Opus 4.6 (1M context)

* ci: pytest on pull requests and pushes to master

GitHub Actions workflow that:
- Triggers on pull_request to master and on direct push to master
- Sets up Python 3.11 (the dict[str, ...] | None union syntax used in
  tool_library_loader.py requires Python 3.10+)
- pip-caches requirements*.txt
- Installs requirements-dev.txt (which pulls requirements.txt)
- Runs pytest

Job is named 'pytest'. The status check that branch protection should
require is 'tests / pytest'.

Co-Authored-By: Claude Opus 4.6 (1M context)

---------

Co-authored-by: Claude Opus 4.6 (1M context)
---
 .github/workflows/test.yml        |  33 ++
 .gitignore                        |   2 +
 BRIEFING.md                       | 208 +++++++++++
 Plex_API_Reference.md             |   8 +-
 TODO.md                           |  26 +-
 app.py                            | 241 +++++++++++++
 plex_api.py                       |  33 +-
 plex_diagnostics.py               | 212 +++++++++++
 pytest.ini                        |   8 +
 requirements-dev.txt              |   2 +
 requirements.txt                  |   2 +
 static/css/style.css              | 567 ++++++++++++++++++++++++++++++
 static/js/script.js               | 455 ++++++++++++++++++++++++
 templates/index.html              | 134 +++++++
 tests/__init__.py                 |   0
 tests/conftest.py                 |  78 ++++
 tests/test_app_routes.py          | 203 +++++++++++
 tests/test_plex_api.py            |  85 +++++
 tests/test_plex_diagnostics.py    | 202 +++++++++++
 tests/test_tool_library_loader.py | 176 ++++++++++
 20 files changed, 2651 insertions(+), 24 deletions(-)
 create mode 100644 .github/workflows/test.yml
 create mode 100644 .gitignore
 create mode 100644 BRIEFING.md
 create mode 100644 app.py
 create mode 100644 plex_diagnostics.py
 create mode 100644 pytest.ini
 create mode 100644 requirements-dev.txt
 create mode 100644 requirements.txt
 create mode 100644 static/css/style.css
 create mode 100644 static/js/script.js
 create mode 100644 templates/index.html
 create mode 100644 tests/__init__.py
 create mode 100644 tests/conftest.py
 create mode 100644 tests/test_app_routes.py
 create mode 100644 tests/test_plex_api.py
 create mode 100644 tests/test_plex_diagnostics.py
 create mode 100644 tests/test_tool_library_loader.py
diff --git
a/.github/workflows/test.yml b/.github/workflows/test.yml
new file mode 100644
index 0000000..d52594e
--- /dev/null
+++ b/.github/workflows/test.yml
@@ -0,0 +1,33 @@
+name: tests
+
+on:
+  pull_request:
+    branches: [master]
+  push:
+    branches: [master]
+
+jobs:
+  test:
+    name: pytest
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+
+      - name: Set up Python
+        uses: actions/setup-python@v5
+        with:
+          python-version: "3.11"
+          cache: pip
+          cache-dependency-path: |
+            requirements.txt
+            requirements-dev.txt
+
+      - name: Install dependencies
+        run: |
+          python -m pip install --upgrade pip
+          pip install -r requirements-dev.txt
+
+      - name: Run pytest
+        run: pytest
diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..7a60b85
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,2 @@
+__pycache__/
+*.pyc
diff --git a/BRIEFING.md b/BRIEFING.md
new file mode 100644
index 0000000..db78f48
--- /dev/null
+++ b/BRIEFING.md
@@ -0,0 +1,208 @@
+# Grace Engineering — Plex API: Claude Code Briefing
+
+This is the primary context document for AI-assisted development sessions.
+Read this first, then read plex_api.py and tool_library_loader.py.
+
+---
+
+## What this project is
+
+Nightly automation that syncs Autodesk Fusion 360 tool library data into
+Rockwell Automation Plex Smart Manufacturing (ERP). Fusion 360 JSON files
+on a local network share are the absolute source of truth. The script reads
+them and pushes tooling data to Plex via REST API every night at midnight.
+
+---
+
+## Repo: https://github.com/grace-shane/plex-api
+
+Forked from just-shane/plex-api. Grace Engineering's working copy.
+ +--- + +## Current situation + +- Connected and authenticating successfully — but to the WRONG tenant (G5) +- G5 is real production data belonging to another company — READ ONLY, no writes +- IT (Courtney) is resolving tenant access for Grace Engineering +- No new credentials needed — switching tenants = enabling one header +- Use https://test.connect.plex.com (test. prefix) for all development + +--- + +## Auth — three headers required +X-Plex-Connect-Api-Key: # identifies the app +X-Plex-Connect-Api-Secret: # second factor, same credential +X-Plex-Connect-Tenant-Id: # tenant routing — omit = defaults to G5 + +Keys and secrets are managed here in Claude Code via environment variables. +Never hardcode credentials. Never commit credentials. + +### Tenants + +| Name | Tenant ID | Status | +|-----------------|----------------------------------------|-------------------------------| +| Grace Eng. | a6af9c99-bce5-4938-a007-364dc5603d08 | Target — waiting on IT | +| G5 | b406c8c4-cef0-4d62-862c-1758b702cd02 | Currently connected — READ ONLY | + +--- + +## Architecture + +Fusion 360 .json (network share, via ADC) +└── tool_library_loader.py reads + validates JSON, stale-file guard +└── transform layer (build_part_payload, build_assembly_payload) +└── plex_api.py / PlexClient pushes to Plex REST API +├── mdm/v1/parts (consumable tools) +├── mdm/v1/suppliers (resolve vendor UUIDs) +├── tooling/v1/tool-assemblies (BLOCKED — see below) +└── production/v1/control/workcenters + +### Industry hierarchy (Plex data model) + +1. Purchased consumables — cutting tools as bought parts (end mills, drills, etc.) +2. Tool assemblies — consumable + holder paired together +3. Routings / operations — assemblies mapped to machining ops +4. Jobs — ops executed on the shop floor +5. 
Manufactured parts — end product, with full tool traceability + +--- + +## Plex API endpoints + +### Working (test environment) + +| Endpoint | Notes | +|----------------------------------------|------------------------------------------------| +| GET mdm/v1/tenants | Returns tenants for credential. Currently G5. | +| GET mdm/v1/parts | NO pagination — always filter status=Active | +| GET mdm/v1/suppliers | Returns UUIDs, not supplier codes | +| GET purchasing/v1/purchase-orders | URL-encode spaces in filter values | +| GET production/v1/control/workcenters | Target for pocket/turret assignment pushes | + +### 403 responses — suspected tenant routing, not subscription + +- tooling/v1/tools +- tooling/v1/tool-assemblies +- tooling/v1/tool-inventory + +Working hypothesis: these 403s will resolve once IT completes the tenant +routing change for Grace Engineering. Cannot verify until tenant access lands, +since G5 is another company's data and we have no authority to test writes +there. The tenant change is the **only** open IT blocker. 
+ +--- + +## Fusion 360 JSON schema (key fields) + +Source file: BROTHER SPEEDIO ALUMINUM.json (28 entries, root "data" array) + +| Field | Maps to Plex | Notes | +|------------------------|-------------------------------------|------------------------------------| +| guid | External reference key | Use for dedup on re-sync | +| type | Item sub-category | Filter out "holder" and "probe" | +| description | Part description | | +| product-id | Part number | Vendor part number, key for PO link| +| vendor | Supplier (resolve to UUID first) | | +| post-process.number | Pocket / turret number | Critical for workcenter doc update | +| geometry.DC | Cutting diameter | Blocked endpoint | +| geometry.OAL | Overall length | Blocked endpoint | +| geometry.NOF | Number of flutes | Blocked endpoint | +| holder (object) | Assembly component / BOM link | Blocked endpoint | + +Tool type distribution in active library: +- flat end mill: 12 | holder: 6 | bull nose end mill: 4 | drill: 2 +- face mill: 1 | form mill: 1 | slot mill: 1 | probe: 1 + +Sync filter: include only type != "holder" AND type != "probe" + +--- + +## What's built + +### plex_api.py +- PlexClient base class with throttling (200 calls/min rate limit) +- Constructor takes api_key, api_secret, tenant_id, use_test +- Sets X-Plex-Connect-Api-Key, X-Plex-Connect-Api-Secret, and + X-Plex-Connect-Tenant-Id headers +- Credentials read from PLEX_API_KEY / PLEX_API_SECRET env vars +- get() and get_paginated() methods +- Extraction functions: extract_purchase_orders, extract_parts, extract_workcenters +- discover_all() endpoint probe utility + +### plex_diagnostics.py +- list_tenants(client) — GET /mdm/v1/tenants +- get_tenant(client, id) — GET /mdm/v1/tenants/{id} +- tenant_whoami(client, configured_id) — composite check that compares + visible tenants against the known Grace and G5 UUIDs and returns a + structured report. Run this first to verify tenant routing. 
+ +### tool_library_loader.py +- load_library(path) — loads single .json, returns data array +- load_all_libraries(directory) — globs all .json files in CAMTools dir +- Stale file guard — aborts if files older than 25h (ADC sync stall detection) +- PermissionError and JSONDecodeError handling (ADC mid-sync file locks) +- report_library_contents() — diagnostic summary + +### app.py + templates/static +- Flask endpoint tester UI at http://localhost:5000 +- Left rail: Diagnostics (run first), Plex presets, Extractors, Fusion local +- Top: method selector + URL bar + query params + Send (Ctrl/Cmd+Enter) +- Tabbed response pane (Body / Headers / Raw), copy and clear, history +- /api/plex/raw proxy lets the UI hit any Plex endpoint via PlexClient + without exposing credentials to the browser +- /api/diagnostics/tenant runs tenant_whoami from plex_diagnostics + +--- + +## Immediate TODO (in priority order) + +All items below are mirrored as GitHub Issues — see +https://github.com/grace-shane/plex-api/issues for live status. + +1. ~~Fix PlexClient constructor — add api_secret, include header~~ DONE +2. Read baseline tooling inventory from mdm/v1/parts — issue #2 (unblocked, + read-only — can start today on G5) +3. build_part_payload(tool: dict) -> dict — issue #3 + Maps Fusion tool object to mdm/v1/parts POST body +4. resolve_supplier_uuid(vendor_name: str) -> str — issue #3 + Looks up supplier UUID from mdm/v1/suppliers (safe to test on G5 read) +5. build_assembly_payload(tool: dict, holder: dict) -> dict — issue #4 + Draft only — endpoints currently 403 (suspected tenant scoping) +6. Core sync logic — upsert with guid-based dedup — issue #7 +7. Error handling + logging to network share text file — issue #8 + +--- + +## Gotchas — read before touching anything + +- **G5 is production data. Read only. 
No writes, no mutations.** +- PLEX_API_KEY and PLEX_API_SECRET must be set in the environment before + running plex_api.py or app.py — both will hard-fail with a clear message + if they are missing +- The previously hardcoded API key (k3SmLW3y…) is still in git history on + master and must be rotated before production deployment — see issue #12 +- mdm/v1/parts has NO server-side pagination — unfiltered = entire DB pulled +- supplierId in responses is a UUID, not a supplier code (MSC != "MSC001") +- URL-encode spaces in filter strings (MRO SUPPLIES -> MRO%20SUPPLIES) +- API key must be in header — URL parameter returns 401 +- PowerShell: use Invoke-RestMethod, not curl (alias doesn't pass headers) +- Tooling 403s on tooling/v1/* are SUSPECTED to be tenant scoping, not API + collection subscription. Working hypothesis only — cannot verify until + tenant routing lands. See issue #1. +- Fusion Tool objects from CAM API are copies, not references +- ADC stale file guard will abort sync if network share files are > 25h old +- BROTHER SPEEDIO ALUMINUM.json is committed to repo for reference only — + sync script must always read from network share, not this file + +--- + +## DNC / machine connections (for future NC program push work) + +| Machine | Protocol | Address | +|----------------------|----------------|-----------------------------| +| Brother Speedio 879 | FTP | 192.168.25.79 | +| Brother Speedio 880 | FTP | 192.168.25.80 | +| Citizen / Tsugami | RS-232 → TCP | Moxa NPort 5150/5250 | +| Haas VMCs | Ethernet | Sigma 5 native | + diff --git a/Plex_API_Reference.md b/Plex_API_Reference.md index 8b27cad..3acb82b 100644 --- a/Plex_API_Reference.md +++ b/Plex_API_Reference.md @@ -40,10 +40,12 @@ The target architecture requires pushing Fusion 360 data to the Tooling/Workcent | Purchasing | `purchasing/v1/purchase-orders` | Returns full PO headers (e.g., tooling orders from MSC). | | Production | `production/v1/control/workcenters` | Discovered on Dev Portal. 
Replaces old 404 manufacturing endpoint. |

-### ⚠️ Blocked Endpoints (Action Required)
->
+### ⚠️ 403 Responses — Tenant Routing Suspected
+
 > [!IMPORTANT]
-> **ACTION REQUIRED**: IT (Courtney) must enable the **Tooling** and **Manufacturing** API collections for the currently active App in the Plex Developer Portal. Initial testing returned 403 authorization failures. The Tooling endpoint documentation remains completely hidden from the public developer portal until you authenticate with a subscribed developer account.
+> **ACTION REQUIRED**: IT (Courtney) must complete the tenant routing change so Grace Engineering credentials land on the Grace tenant (`a6af9c99-bce5-4938-a007-364dc5603d08`) instead of G5 (`b406c8c4-cef0-4d62-862c-1758b702cd02`). This is the **only** open IT blocker.
+>
+> The 403s observed on the endpoints below are suspected to be tenant-scoping rather than API collection subscription. **This is a working hypothesis** — we cannot verify it until tenant access is resolved, because G5 is another company's production data and we have no authority to test writes there. Re-run `discover_all()` once tenant routing lands to confirm.

 - `tooling/v1/tools`
 - `tooling/v1/tool-assemblies`
diff --git a/TODO.md b/TODO.md
index a3ca46c..cd7659e 100644
--- a/TODO.md
+++ b/TODO.md
@@ -2,6 +2,9 @@

 This document outlines the step-by-step implementation plan for the Autodesk Fusion 360 tool library to Plex Manufacturing Cloud synchronization project.

+> **Live tracking:** All unchecked items below are mirrored as GitHub Issues.
+> See <https://github.com/grace-shane/plex-api/issues> for current status, comments, and blockers.
+
 ## Phase 1: API Discovery & Authentication

 - [x] Set up Postman and discover relevant Plex API endpoints.

@@ -17,26 +20,27 @@

 ## Phase 3: Plex API Source-of-Truth Implementation

-- [ ] Implement API call to retrieve current tooling inventory from Plex (master list) to prep for overwrite.
-- [ ] Implement API call to update/create purchased parts (focused first on **consumables** like cutting tools) in Plex. -- [ ] Implement API call to create/update Tool Assemblies, assigning the purchased consumable parts to them. -- [ ] Implement API call to link Tool Assemblies to Routings/Operations. -- [ ] Implement API call to update tooling within the specific Workcenter Document (`production/v1/control/workcenters`). -- [ ] **BLOCKED**: Waiting on IT (Courtney) to enable Tooling & Manufacturing APIs in the Developer Portal. +- [ ] Implement API call to retrieve current tooling inventory from Plex (master list) to prep for overwrite. → [#2](https://github.com/grace-shane/plex-api/issues/2) +- [ ] Implement API call to update/create purchased parts (focused first on **consumables** like cutting tools) in Plex. → [#3](https://github.com/grace-shane/plex-api/issues/3) +- [ ] Implement API call to create/update Tool Assemblies, assigning the purchased consumable parts to them. → [#4](https://github.com/grace-shane/plex-api/issues/4) +- [ ] Implement API call to link Tool Assemblies to Routings/Operations. → [#5](https://github.com/grace-shane/plex-api/issues/5) +- [ ] Implement API call to update tooling within the specific Workcenter Document (`production/v1/control/workcenters`). → [#6](https://github.com/grace-shane/plex-api/issues/6) +- [ ] **BLOCKED**: Waiting on IT (Courtney) to complete tenant routing so credentials land on Grace Engineering instead of G5. Hypothesis: the 403s on `tooling/v1/*` endpoints will resolve once tenant access is fixed. → [#1](https://github.com/grace-shane/plex-api/issues/1) ## Phase 4: Data Mapping & Sync Logic - [x] Create a mapping definition between Fusion 360 data structures and Plex API payload requirements (Completed in `Fusion360_Tool_Library_Reference.md`). 
-- [ ] Implement the core synchronization logic: +- [ ] Implement the core synchronization logic: → [#7](https://github.com/grace-shane/plex-api/issues/7) - Utilize the Fusion JSON file output as the explicit Source of Truth relative to Plex. - Push updates for purchased consumables to the master inventory list. - Link those consumables into Tool Assemblies. - Ensure those assemblies dynamically flow down to the Routing and then the Job when run in the shop, linking tools directly to manufactured parts. - Push final setups to the workcenter documents. -- [ ] Add basic error handling and logging (e.g., logging successful syncs or failed API calls to a text file on the network share). +- [ ] Add basic error handling and logging (e.g., logging successful syncs or failed API calls to a text file on the network share). → [#8](https://github.com/grace-shane/plex-api/issues/8) ## Phase 5: Automation & Deployment -- [ ] Finalize the synchronization script. -- [ ] Deploy the script to a server or always-on PC with access to the network share. -- [ ] Schedule the script to run daily at midnight (e.g., using Windows Task Scheduler). +- [ ] Finalize the synchronization script. → [#9](https://github.com/grace-shane/plex-api/issues/9) +- [ ] Deploy the script to a server or always-on PC with access to the network share. → [#10](https://github.com/grace-shane/plex-api/issues/10) +- [ ] Schedule the script to run daily at midnight (e.g., using Windows Task Scheduler). → [#11](https://github.com/grace-shane/plex-api/issues/11) +- [ ] Rotate the Plex API key before production (previous key is still in git history). 
→ [#12](https://github.com/grace-shane/plex-api/issues/12) diff --git a/app.py b/app.py new file mode 100644 index 0000000..d6256ee --- /dev/null +++ b/app.py @@ -0,0 +1,241 @@ +from flask import Flask, render_template, jsonify, request +import os +import json +import time +import traceback +import requests + +# Import our existing scripts +from plex_api import ( + PlexClient, + API_KEY, + API_SECRET, + TENANT_ID, + USE_TEST, + discover_all, + extract_parts, + extract_purchase_orders, + extract_workcenters, + extract_operations, +) +from tool_library_loader import load_all_libraries +from plex_diagnostics import tenant_whoami, list_tenants, get_tenant + +app = Flask(__name__) + +# Initialize Plex Client +client = PlexClient( + api_key=API_KEY, + api_secret=API_SECRET, + tenant_id=TENANT_ID, + use_test=USE_TEST, +) + + +@app.route('/') +def index(): + """Serve the main dashboard HTML.""" + return render_template('index.html') + + +# ───────────────────────────────────────────── +# Raw proxy — lets the UI hit ANY Plex endpoint +# through the authenticated PlexClient without +# ever exposing credentials to the browser. +# ───────────────────────────────────────────── +@app.route('/api/plex/raw', methods=['GET', 'POST', 'PUT', 'DELETE', 'PATCH']) +def api_plex_raw(): + """ + Proxy an arbitrary Plex REST call. + + Query params (for the tester): + path — full path after the base URL, e.g. "mdm/v1/parts" + ... — all other query params are forwarded as-is to Plex + + For non-GET, JSON body from the client is forwarded as-is. + Always returns {status, http_status, elapsed_ms, size_bytes, headers, body}. + """ + path = (request.args.get('path') or '').strip().lstrip('/') + if not path: + return jsonify({ + "status": "error", + "message": "Missing required 'path' query param (e.g. mdm/v1/parts)", + }), 400 + + # Forward all query params EXCEPT our own 'path' marker. 
+ forwarded_params = {k: v for k, v in request.args.items() if k != 'path'} + + url = f"{client.base}/{path}" + method = request.method.upper() + + body = None + if method in ('POST', 'PUT', 'PATCH'): + body = request.get_json(silent=True) + + started = time.perf_counter() + try: + r = requests.request( + method=method, + url=url, + headers=client.headers, + params=forwarded_params, + json=body, + timeout=30, + ) + elapsed_ms = int((time.perf_counter() - started) * 1000) + + # Try to parse JSON, fall back to text + try: + parsed = r.json() + except ValueError: + parsed = r.text + + return jsonify({ + "status": "success" if r.ok else "error", + "http_status": r.status_code, + "http_reason": r.reason, + "elapsed_ms": elapsed_ms, + "size_bytes": len(r.content), + "url": r.url, + "method": method, + "headers": dict(r.headers), + "body": parsed, + }) + except requests.exceptions.RequestException as e: + elapsed_ms = int((time.perf_counter() - started) * 1000) + return jsonify({ + "status": "error", + "http_status": 0, + "elapsed_ms": elapsed_ms, + "url": url, + "method": method, + "message": str(e), + }), 502 + + +@app.route('/api/plex/discover') +def api_discover(): + """Run discover_all on Plex.""" + try: + report = discover_all(client) + return jsonify({"status": "success", "data": report}) + except Exception as e: + return jsonify({"status": "error", "message": str(e), "trace": traceback.format_exc()}), 500 + + +# ───────────────────────────────────────────── +# Diagnostics — read-only sanity checks +# ───────────────────────────────────────────── +@app.route('/api/diagnostics/tenant') +def api_diagnostics_tenant(): + """ + Composite tenant diagnostic. + + Calls /mdm/v1/tenants and (if a TENANT_ID is configured) /mdm/v1/tenants/{id}, + then compares the result against the known Grace and G5 UUIDs so the UI can + show a clear "is this the right tenant?" status. Read-only and safe. 
+ """ + try: + report = tenant_whoami(client, TENANT_ID) + return jsonify({"status": "success", "data": report}) + except Exception as e: + return jsonify({"status": "error", "message": str(e), "trace": traceback.format_exc()}), 500 + + +@app.route('/api/diagnostics/tenants/list') +def api_diagnostics_tenants_list(): + """Raw GET /mdm/v1/tenants — list all tenants visible to the credential.""" + try: + data = list_tenants(client) + return jsonify({"status": "success", "data": data}) + except Exception as e: + return jsonify({"status": "error", "message": str(e), "trace": traceback.format_exc()}), 500 + + +@app.route('/api/diagnostics/tenants/') +def api_diagnostics_tenant_get(tenant_id): + """Raw GET /mdm/v1/tenants/{id} — fetch a single tenant by UUID.""" + try: + data = get_tenant(client, tenant_id) + return jsonify({"status": "success", "data": data}) + except Exception as e: + return jsonify({"status": "error", "message": str(e), "trace": traceback.format_exc()}), 500 + + +@app.route('/api/plex/') +def api_extract(endpoint_type): + """Run one of the extraction tools.""" + try: + if endpoint_type == 'parts': + data = extract_parts(client) + elif endpoint_type == 'purchase_orders': + data = extract_purchase_orders(client, date_from="2025-01-01") + elif endpoint_type == 'workcenters': + data = extract_workcenters(client) + elif endpoint_type == 'operations': + data = extract_operations(client) + else: + return jsonify({"status": "error", "message": "Unknown endpoint"}), 400 + + return jsonify({ + "status": "success", + "count": len(data) if data else 0, + "data": data[:100] if data else [] # Return first 100 for UI performance + }) + except Exception as e: + return jsonify({"status": "error", "message": str(e), "trace": traceback.format_exc()}), 500 + + +@app.route('/api/fusion/tools', methods=['GET', 'POST']) +def api_fusion_tools(): + """Load Fusion 360 libraries.""" + try: + libs = {} + if request.method == 'POST': + for key, uploaded_file in 
request.files.items(): + if uploaded_file.filename.endswith('.json'): + content = uploaded_file.read().decode('utf-8') + try: + raw = json.loads(content) + if 'data' in raw and isinstance(raw['data'], list): + libs[uploaded_file.filename.replace('.json', '')] = raw['data'] + except Exception as e: + print(f"Error parsing {uploaded_file.filename}: {e}") + else: + abort_on_stale = request.args.get('abort_on_stale', 'true').lower() == 'true' + libs = load_all_libraries(abort_on_stale=abort_on_stale) + + # Transform the dict of libraries into a UI-friendly list + summary = [] + for name, tools in libs.items(): + summary.append({ + "library_name": name, + "tool_count": len(tools), + "tools_sample": tools[:5] # Send a sample for the UI + }) + + return jsonify({ + "status": "success", + "library_count": len(libs), + "data": summary + }) + except Exception as e: + return jsonify({"status": "error", "message": str(e), "trace": traceback.format_exc()}), 500 + + +@app.route('/api/config') +def api_config(): + """Expose non-secret client config to the UI (base URL, tenant, env).""" + return jsonify({ + "base_url": client.base, + "environment": "test" if USE_TEST else "production", + "tenant_id": TENANT_ID, + "has_key": bool(API_KEY), + "has_secret": bool(API_SECRET), + }) + + +if __name__ == '__main__': + # Run the server on port 5000 + print("Starting UX Test Server...") + app.run(debug=True, host='0.0.0.0', port=5000) diff --git a/plex_api.py b/plex_api.py index 38276f2..c92af2a 100644 --- a/plex_api.py +++ b/plex_api.py @@ -17,11 +17,16 @@ # ───────────────────────────────────────────── # CONFIGURATION — fill these in # ───────────────────────────────────────────── -API_KEY = "k3SmLW3y3mhqJiG6osixbYUmiPsHfB51" # from developers.plex.com → My Apps -TENANT_ID = "a6af9c99-bce5-4938-a007-364dc5603d08" # leave blank for default tenant (your PCN) +# Credentials come from environment variables — never hardcode/commit. 
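The hard-fail on missing credentials described in the commit message can be factored into a reusable check. A sketch — `require_env` is a hypothetical helper name, not in plex_api.py; the dummy values exist only to make the example self-contained:

```python
import os

def require_env(*names):
    """Exit with a clear message if any credential env var is missing or empty."""
    missing = [n for n in names if not os.environ.get(n)]
    if missing:
        raise SystemExit(
            "Missing credentials. Set " + " and ".join(missing) + " environment variable(s)."
        )
    return [os.environ[n] for n in names]

# Illustrative values only — real keys come from the shell, never the repo.
os.environ["PLEX_API_KEY"] = "dummy-key"
os.environ["PLEX_API_SECRET"] = "dummy-secret"
api_key, api_secret = require_env("PLEX_API_KEY", "PLEX_API_SECRET")
```

Failing at startup with the variable name in the message is what keeps a missing secret from surfacing later as an opaque 401 from Plex.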
+# PLEX_API_KEY — Consumer Key from developers.plex.com → My Apps +# PLEX_API_SECRET — Consumer Secret, paired with the key +API_KEY = os.environ.get("PLEX_API_KEY", "") +API_SECRET = os.environ.get("PLEX_API_SECRET", "") +# Tenant IDs are not secrets — safe to commit. G5 is what we currently have access to. +TENANT_ID = "b406c8c4-cef0-4d62-862c-1758b702cd02" # G5 (read-only) — Grace UUID = a6af9c99-bce5-4938-a007-364dc5603d08 BASE_URL = "https://connect.plex.com" TEST_URL = "https://test.connect.plex.com" -USE_TEST = False # flip to True to hit test environment first +USE_TEST = True # all dev work goes against test.connect.plex.com OUTPUT_DIR = "C:/projects/plex-api/outputs" TOOL_LIB_DIR = "Z:\\Engineering\\Tooling\\Fusion_Libraries" # Mapped drive path containing JSON files @@ -30,13 +35,15 @@ # BASE CLIENT # ───────────────────────────────────────────── class PlexClient: - def __init__(self, api_key, tenant_id="", use_test=False): + def __init__(self, api_key, api_secret="", tenant_id="", use_test=False): self.base = TEST_URL if use_test else BASE_URL self.headers = { "X-Plex-Connect-Api-Key": api_key, "Content-Type": "application/json", "Accept": "application/json", } + if api_secret: + self.headers["X-Plex-Connect-Api-Secret"] = api_secret if tenant_id: self.headers["X-Plex-Connect-Tenant-Id"] = tenant_id @@ -271,18 +278,18 @@ def discover_all(client): status = r.status_code note = "" if status == 200: - note = "✅ Available" + note = "[OK] Available" elif status == 401: - note = "❌ Auth error" + note = "[ERR] Auth error" elif status == 403: - note = "🔒 Not subscribed" + note = "[LOCK] Not subscribed" elif status == 404: - note = "❓ Not found" + note = "[?] Not found" else: - note = f"⚠️ HTTP {status}" + note = f"[!] 
HTTP {status}" except Exception as e: status = 0 - note = f"❌ Exception: {e}" + note = f"[ERR] Exception: {e}" print(f" {note:25s} {collection}/{version}/{resource}") report.append({ @@ -341,8 +348,14 @@ def explore_parts(client): if __name__ == "__main__": + if not API_KEY or not API_SECRET: + raise SystemExit( + "Missing credentials. Set PLEX_API_KEY and PLEX_API_SECRET environment variables." + ) + client = PlexClient( api_key=API_KEY, + api_secret=API_SECRET, tenant_id=TENANT_ID, use_test=USE_TEST, ) diff --git a/plex_diagnostics.py b/plex_diagnostics.py new file mode 100644 index 0000000..05c5c57 --- /dev/null +++ b/plex_diagnostics.py @@ -0,0 +1,212 @@ +""" +plex_diagnostics.py +Plex Connect — diagnostic checks +================================ +Small suite of read-only checks against the Plex API to verify connectivity, +authentication, and tenant routing. Used as a sanity layer before any sync +work and as the visible "is the right tenant connected?" indicator in the UI. + +All functions are read-only and safe to run against any tenant — including +G5, where we have read access only. +""" + +from typing import Any + +# ───────────────────────────────────────────── +# Known tenants +# Tenant IDs are not secrets — committing them is fine. These labels are +# used to make the whoami report human-readable. +# ───────────────────────────────────────────── +GRACE_TENANT_ID = "a6af9c99-bce5-4938-a007-364dc5603d08" +G5_TENANT_ID = "b406c8c4-cef0-4d62-862c-1758b702cd02" + +KNOWN_TENANTS = { + GRACE_TENANT_ID: "Grace Engineering", + G5_TENANT_ID: "G5", +} + + +# ───────────────────────────────────────────── +# Raw endpoint wrappers +# ───────────────────────────────────────────── +def list_tenants(client) -> Any: + """ + GET /mdm/v1/tenants + + Returns the list of tenants visible to the active credential. + For a correctly-scoped credential this is typically a single tenant + (the one your API key is bound to). 
Useful for confirming which + tenant the credential actually lands on. + """ + return client.get("mdm", "v1", "tenants") + + +def get_tenant(client, tenant_id: str) -> Any: + """ + GET /mdm/v1/tenants/{id} + + Returns the full record for a specific tenant. 404 if the tenant + does not exist or is not visible to the credential. + """ + return client.get("mdm", "v1", f"tenants/{tenant_id}") + + +# ───────────────────────────────────────────── +# Composite check — the main diagnostic +# ───────────────────────────────────────────── +def tenant_whoami(client, configured_tenant_id: str = "") -> dict: + """ + Composite tenant diagnostic. + + Calls list_tenants() and (if a configured ID is provided) get_tenant(), + then compares the visible tenant(s) against the known Grace and G5 UUIDs + so the UI can show a clear "is this the right tenant?" status. + + Returns a structured report: + { + "configured_tenant_id": "", + "configured_tenant_label": "Grace Engineering" | "G5" | "unknown", + "visible_tenants": [{id, code, name, label}, ...], + "list_tenants_raw": , + "get_tenant_raw": , + "match": "grace" | "g5" | "configured" | + "other" | "no_data", + "summary": "", + } + """ + report: dict = { + "configured_tenant_id": configured_tenant_id or "", + "configured_tenant_label": KNOWN_TENANTS.get(configured_tenant_id, "unknown"), + "visible_tenants": [], + "list_tenants_raw": None, + "get_tenant_raw": None, + "match": "no_data", + "summary": "", + } + + # ── Step 1: list_tenants ──────────────────── + listed = list_tenants(client) + report["list_tenants_raw"] = listed + + if listed is None: + report["summary"] = ( + "list_tenants returned no data — credentials likely invalid, " + "or test.connect.plex.com is unreachable." + ) + return report + + # Normalize the response. Plex sometimes wraps lists in {data|items|rows}. 
+ if isinstance(listed, list): + items = listed + elif isinstance(listed, dict): + items = ( + listed.get("items") + or listed.get("data") + or listed.get("rows") + or [listed] # single tenant returned as a bare object + ) + else: + items = [] + + visible: list[dict] = [] + for t in items: + if not isinstance(t, dict): + continue + tid = t.get("id") or t.get("tenantId") or t.get("Id") + visible.append({ + "id": tid, + "code": t.get("code") or t.get("Code"), + "name": t.get("name") or t.get("Name"), + "label": KNOWN_TENANTS.get(tid, "unknown"), + }) + report["visible_tenants"] = visible + + # ── Step 2: get_tenant for the configured ID ──────────────── + if configured_tenant_id: + report["get_tenant_raw"] = get_tenant(client, configured_tenant_id) + + # ── Step 3: match logic ───────────────────── + visible_ids = {t["id"] for t in visible if t.get("id")} + + if not visible_ids: + report["match"] = "no_data" + report["summary"] = ( + "list_tenants returned a response but no tenant IDs could be parsed. " + "Check the raw response in this report." + ) + return report + + if GRACE_TENANT_ID in visible_ids: + report["match"] = "grace" + report["summary"] = ( + "[OK] Connected to Grace Engineering. Tenant routing is resolved — " + "you may flip TENANT_ID in plex_api.py to the Grace UUID and " + "begin write-path testing." + ) + return report + + if G5_TENANT_ID in visible_ids: + report["match"] = "g5" + report["summary"] = ( + "[WARN] Connected to G5 (read-only, another company's data). " + "Awaiting IT (Courtney) to complete tenant routing for Grace. " + "All writes are prohibited until this resolves — see issue #1." + ) + return report + + if configured_tenant_id and configured_tenant_id in visible_ids: + report["match"] = "configured" + report["summary"] = ( + f"Connected to the configured tenant " + f"({report['configured_tenant_label']}), which is neither " + f"Grace nor G5. Verify this is intentional." 
+ ) + return report + + report["match"] = "other" + report["summary"] = ( + "Connected to an unrecognized tenant. Inspect visible_tenants in " + "this report and confirm the credential routing is what you expect." + ) + return report + + +# ───────────────────────────────────────────── +# Standalone test +# ───────────────────────────────────────────── +if __name__ == "__main__": + import json + import sys + + # Force UTF-8 stdout so em-dashes / brackets in summary strings don't + # blow up on a Windows cp1252 console. + try: + sys.stdout.reconfigure(encoding="utf-8") + except Exception: + pass + + from plex_api import PlexClient, API_KEY, API_SECRET, TENANT_ID, USE_TEST + + if not API_KEY or not API_SECRET: + raise SystemExit( + "Missing credentials. Set PLEX_API_KEY and PLEX_API_SECRET " + "environment variables before running this diagnostic." + ) + + client = PlexClient( + api_key=API_KEY, + api_secret=API_SECRET, + tenant_id=TENANT_ID, + use_test=USE_TEST, + ) + + print(f"Plex Diagnostics — {'TEST' if USE_TEST else 'PRODUCTION'}") + print(f"Base URL: {client.base}") + print(f"Configured TENANT_ID: {TENANT_ID}\n") + + report = tenant_whoami(client, TENANT_ID) + + print("─" * 60) + print(report["summary"]) + print("─" * 60) + print(json.dumps(report, indent=2, default=str)) diff --git a/pytest.ini b/pytest.ini new file mode 100644 index 0000000..ee23f6c --- /dev/null +++ b/pytest.ini @@ -0,0 +1,8 @@ +[pytest] +testpaths = tests +python_files = test_*.py +python_classes = Test* +python_functions = test_* +addopts = -v --tb=short +filterwarnings = + ignore::DeprecationWarning diff --git a/requirements-dev.txt b/requirements-dev.txt new file mode 100644 index 0000000..a266747 --- /dev/null +++ b/requirements-dev.txt @@ -0,0 +1,2 @@ +-r requirements.txt +pytest>=8.0 diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..a0d407c --- /dev/null +++ b/requirements.txt @@ -0,0 +1,2 @@ +flask>=3.0 +requests>=2.31 diff --git 
a/static/css/style.css b/static/css/style.css new file mode 100644 index 0000000..faabb30 --- /dev/null +++ b/static/css/style.css @@ -0,0 +1,567 @@ +/* + * plex-api · endpoint tester + * Flat, neutral, no gradients, no glass, no glow. + * Single blue accent. Semantic color only for status. + */ + +:root { + /* surface */ + --bg-0: #0b0b0d; /* base */ + --bg-1: #111115; /* panel */ + --bg-2: #17171c; /* panel hover / input */ + --bg-3: #1d1d23; /* chip */ + --border: #24242b; + --border-strong: #2e2e36; + + /* text */ + --fg-0: #f2f2f3; + --fg-1: #c9c9cf; + --fg-2: #8a8a94; + --fg-3: #55555e; + + /* accents (solid, single hue) */ + --accent: #3b82f6; + --accent-hover: #2563eb; + --accent-fg: #ffffff; + + /* semantic */ + --ok: #22c55e; + --warn: #eab308; + --err: #ef4444; + --info: #38bdf8; + + /* http methods */ + --get: #22c55e; + --post: #eab308; + --put: #38bdf8; + --patch: #a855f7; + --delete: #ef4444; + --internal: #8a8a94; + + /* metrics */ + --rail-w: 280px; + --radius: 4px; + --radius-lg: 6px; + + /* fonts */ + --font-ui: ui-sans-serif, system-ui, -apple-system, "Segoe UI", Roboto, sans-serif; + --font-mono: ui-monospace, SFMono-Regular, "SF Mono", Menlo, Consolas, "Liberation Mono", monospace; +} + +* { box-sizing: border-box; margin: 0; padding: 0; } + +html, body { + height: 100%; +} + +body { + background: var(--bg-0); + color: var(--fg-0); + font-family: var(--font-ui); + font-size: 13px; + line-height: 1.5; + -webkit-font-smoothing: antialiased; + overflow: hidden; +} + +button, input, select, textarea { + font: inherit; + color: inherit; +} + +button { cursor: pointer; } + +/* ───────────────────────────────── + Layout + ───────────────────────────────── */ +.app { + display: grid; + grid-template-columns: var(--rail-w) 1fr; + height: 100vh; +} + +/* ───────────────────────────────── + Left rail + ───────────────────────────────── */ +.rail { + background: var(--bg-1); + border-right: 1px solid var(--border); + display: flex; + flex-direction: 
column; + overflow: hidden; +} + +.rail-header { + padding: 14px 16px; + display: flex; + align-items: center; + justify-content: space-between; + border-bottom: 1px solid var(--border); + flex-shrink: 0; +} + +.brand { + font-size: 13px; + font-weight: 600; + letter-spacing: -0.01em; + color: var(--fg-0); +} + +.env-chip { + font-family: var(--font-mono); + font-size: 10px; + padding: 3px 7px; + border-radius: var(--radius); + background: var(--bg-3); + color: var(--fg-2); + border: 1px solid var(--border); + text-transform: uppercase; + letter-spacing: 0.05em; +} + +.env-chip.test { color: var(--warn); border-color: rgba(234, 179, 8, 0.3); } +.env-chip.prod { color: var(--err); border-color: rgba(239, 68, 68, 0.3); } + +.rail-section { + padding: 12px 12px 16px; + border-bottom: 1px solid var(--border); +} + +.rail-section.rail-history { + border-bottom: none; + flex: 1; + min-height: 0; + display: flex; + flex-direction: column; +} + +.rail-label { + font-size: 10px; + font-weight: 600; + text-transform: uppercase; + letter-spacing: 0.08em; + color: var(--fg-2); + padding: 4px 4px 8px; + display: flex; + align-items: center; + justify-content: space-between; +} + +.rail-sub { + display: flex; + gap: 6px; + margin-top: 6px; + padding: 0 2px; +} + +.preset-list { + list-style: none; + display: flex; + flex-direction: column; + gap: 1px; +} + +.preset { + width: 100%; + display: flex; + align-items: center; + gap: 8px; + padding: 6px 8px; + background: transparent; + border: 1px solid transparent; + border-radius: var(--radius); + text-align: left; + color: var(--fg-1); + font-family: var(--font-mono); + font-size: 11.5px; + transition: background 0.08s ease, border-color 0.08s ease; +} + +.preset:hover { + background: var(--bg-2); +} + +.preset:focus-visible { + outline: none; + border-color: var(--accent); +} + +.preset .m { + flex-shrink: 0; + font-size: 9.5px; + font-weight: 700; + letter-spacing: 0.02em; + padding: 2px 5px; + border-radius: 3px; + background: 
var(--bg-3); + color: var(--fg-2); + min-width: 34px; + text-align: center; +} + +.preset .m-get { color: var(--get); } +.preset .m-post { color: var(--post); } +.preset .m-put { color: var(--put); } +.preset .m-patch { color: var(--patch); } +.preset .m-delete { color: var(--delete); } +.preset .m-int { color: var(--internal); } + +.preset .p { + flex: 1; + white-space: nowrap; + overflow: hidden; + text-overflow: ellipsis; + color: var(--fg-1); +} + +.preset .tag { + font-size: 9px; + font-weight: 600; + padding: 1px 5px; + border-radius: 3px; + background: rgba(239, 68, 68, 0.1); + color: var(--err); + border: 1px solid rgba(239, 68, 68, 0.2); +} + +/* history list */ +.history-list { + list-style: none; + display: flex; + flex-direction: column; + gap: 1px; + overflow-y: auto; + flex: 1; + min-height: 0; +} + +.history-empty { + padding: 10px; + color: var(--fg-3); + font-size: 11px; + text-align: center; +} + +.history-item { + width: 100%; + display: flex; + align-items: center; + gap: 6px; + padding: 5px 8px; + background: transparent; + border: 1px solid transparent; + border-radius: var(--radius); + font-family: var(--font-mono); + font-size: 11px; + color: var(--fg-1); + text-align: left; +} + +.history-item:hover { background: var(--bg-2); } + +.history-item .h-status { + flex-shrink: 0; + font-weight: 600; + font-size: 10px; + min-width: 28px; +} +.history-item.ok .h-status { color: var(--ok); } +.history-item.warn .h-status { color: var(--warn); } +.history-item.err .h-status { color: var(--err); } + +.history-item .h-path { + flex: 1; + white-space: nowrap; + overflow: hidden; + text-overflow: ellipsis; + color: var(--fg-2); +} + +.history-item .h-time { + color: var(--fg-3); + font-size: 10px; + flex-shrink: 0; +} + +/* ───────────────────────────────── + Main + ───────────────────────────────── */ +.main { + display: flex; + flex-direction: column; + overflow: hidden; + background: var(--bg-0); +} + +/* URL bar */ +.url-bar { + display: flex; + 
align-items: stretch; + gap: 8px; + padding: 14px 16px 8px; + flex-shrink: 0; +} + +.method-select { + appearance: none; + -webkit-appearance: none; + background: var(--bg-1); + border: 1px solid var(--border); + color: var(--fg-0); + padding: 0 28px 0 12px; + border-radius: var(--radius); + font-family: var(--font-mono); + font-size: 12px; + font-weight: 600; + height: 34px; + background-image: url("data:image/svg+xml;utf8,"); + background-repeat: no-repeat; + background-position: right 10px center; +} + +.method-select:focus { + outline: none; + border-color: var(--accent); +} + +.url-host { + display: flex; + align-items: center; + padding: 0 10px; + background: var(--bg-1); + border: 1px solid var(--border); + border-right: none; + border-radius: var(--radius) 0 0 var(--radius); + font-family: var(--font-mono); + font-size: 12px; + color: var(--fg-2); + white-space: nowrap; + height: 34px; +} + +.path-input { + flex: 1; + background: var(--bg-1); + border: 1px solid var(--border); + border-left: none; + border-radius: 0 var(--radius) var(--radius) 0; + padding: 0 12px; + font-family: var(--font-mono); + font-size: 12px; + color: var(--fg-0); + height: 34px; + min-width: 0; +} + +.path-input::placeholder { color: var(--fg-3); } + +.path-input:focus, +.url-host:has(+ .path-input:focus) { + outline: none; + border-color: var(--accent); +} + +.btn-primary { + background: var(--accent); + color: var(--accent-fg); + border: 1px solid var(--accent); + padding: 0 18px; + border-radius: var(--radius); + font-size: 12px; + font-weight: 600; + height: 34px; + white-space: nowrap; + transition: background 0.1s ease; +} + +.btn-primary:hover { background: var(--accent-hover); border-color: var(--accent-hover); } +.btn-primary:active { transform: none; } +.btn-primary:disabled { opacity: 0.5; cursor: not-allowed; } + +/* Params row */ +.params-row { + display: flex; + align-items: center; + gap: 8px; + padding: 0 16px 12px; + flex-shrink: 0; +} + +.params-label { + 
font-family: var(--font-mono); + font-size: 10px; + font-weight: 600; + letter-spacing: 0.06em; + color: var(--fg-2); + text-transform: uppercase; + padding-left: 4px; + min-width: 48px; +} + +.params-input { + flex: 1; + background: var(--bg-1); + border: 1px solid var(--border); + border-radius: var(--radius); + padding: 0 12px; + font-family: var(--font-mono); + font-size: 12px; + color: var(--fg-0); + height: 30px; +} + +.params-input::placeholder { color: var(--fg-3); } + +.params-input:focus { + outline: none; + border-color: var(--accent); +} + +/* Status strip */ +.status-strip { + padding: 10px 16px; + border-top: 1px solid var(--border); + border-bottom: 1px solid var(--border); + background: var(--bg-1); + display: flex; + align-items: center; + gap: 14px; + font-family: var(--font-mono); + font-size: 11px; + min-height: 38px; + flex-shrink: 0; +} + +.ss-idle { color: var(--fg-3); } +.ss-loading { color: var(--fg-2); } + +.ss-item { + display: flex; + align-items: center; + gap: 5px; + color: var(--fg-2); +} + +.ss-item .k { + color: var(--fg-3); + font-size: 10px; + text-transform: uppercase; + letter-spacing: 0.05em; +} + +.ss-item .v { color: var(--fg-1); font-weight: 600; } + +.ss-status { + font-weight: 700; + padding: 2px 8px; + border-radius: var(--radius); + font-size: 11px; +} + +.ss-status.ok { color: var(--ok); background: rgba(34, 197, 94, 0.1); } +.ss-status.warn { color: var(--warn); background: rgba(234, 179, 8, 0.1); } +.ss-status.err { color: var(--err); background: rgba(239, 68, 68, 0.1); } +.ss-status.info { color: var(--info); background: rgba(56, 189, 248, 0.1); } + +/* Response tabs */ +.resp-tabs { + display: flex; + align-items: center; + gap: 2px; + padding: 0 12px; + background: var(--bg-1); + border-bottom: 1px solid var(--border); + flex-shrink: 0; +} + +.tab { + background: transparent; + border: none; + border-bottom: 2px solid transparent; + padding: 9px 12px; + font-size: 11.5px; + font-weight: 500; + color: var(--fg-2); + 
margin-bottom: -1px; +} + +.tab:hover { color: var(--fg-0); } +.tab.active { color: var(--fg-0); border-bottom-color: var(--accent); } + +.tab-spacer { flex: 1; } + +/* Response body */ +.resp-body { + flex: 1; + overflow: auto; + background: var(--bg-0); + min-height: 0; +} + +.resp-pre { + padding: 14px 16px; + font-family: var(--font-mono); + font-size: 12px; + line-height: 1.55; + color: var(--fg-1); + white-space: pre; + tab-size: 2; +} + +.resp-pre.empty { color: var(--fg-3); } + +/* JSON syntax coloring (set by JS via classed spans) */ +.json-key { color: var(--accent); } +.json-str { color: var(--ok); } +.json-num { color: var(--warn); } +.json-bool { color: var(--patch); } +.json-null { color: var(--err); } + +/* ───────────────────────────────── + Generic buttons + ───────────────────────────────── */ +.btn-ghost { + background: transparent; + border: 1px solid var(--border); + color: var(--fg-2); + padding: 4px 10px; + border-radius: var(--radius); + font-size: 11px; + transition: background 0.08s, color 0.08s, border-color 0.08s; +} + +.btn-ghost:hover { + color: var(--fg-0); + background: var(--bg-2); + border-color: var(--border-strong); +} + +.btn-xs { + padding: 2px 8px; + font-size: 10px; +} + +/* ───────────────────────────────── + Scrollbars (minimal) + ───────────────────────────────── */ +::-webkit-scrollbar { + width: 10px; + height: 10px; +} +::-webkit-scrollbar-track { background: transparent; } +::-webkit-scrollbar-thumb { + background: var(--border); + border: 2px solid var(--bg-0); + border-radius: 10px; +} +::-webkit-scrollbar-thumb:hover { background: var(--border-strong); } + +/* ───────────────────────────────── + Focus + ───────────────────────────────── */ +:focus-visible { + outline: 2px solid var(--accent); + outline-offset: 1px; +} + +button:focus:not(:focus-visible) { outline: none; } diff --git a/static/js/script.js b/static/js/script.js new file mode 100644 index 0000000..10cf6e3 --- /dev/null +++ b/static/js/script.js @@ -0,0 
+1,455 @@ +/* + * plex-api · endpoint tester + * Minimal, no framework. Vanilla DOM. + */ +(() => { + "use strict"; + + // ── DOM ───────────────────────────────────────── + const $ = (sel) => document.querySelector(sel); + const $$ = (sel) => document.querySelectorAll(sel); + + const methodEl = $("#method"); + const pathEl = $("#path-input"); + const paramsEl = $("#params-input"); + const urlHostEl = $("#url-host"); + const sendBtn = $("#btn-send"); + const envChipEl = $("#env-chip"); + + const statusStripEl = $("#status-strip"); + const respPre = $("#resp-pre"); + const tabsEl = $$(".tab"); + const copyBtn = $("#btn-copy"); + const clearBtn = $("#btn-clear"); + const clearHistBtn = $("#btn-clear-history"); + const historyListEl = $("#history-list"); + + const btnPickFiles = $("#btn-pick-files"); + const btnPickDir = $("#btn-pick-dir"); + const fileInput = $("#fusion-file-input"); + const dirInput = $("#fusion-dir-input"); + + // ── State ─────────────────────────────────────── + const state = { + activeTab: "body", + lastResponse: null, // { body, headers, raw, http_status, elapsed_ms, size_bytes, method, url } + history: [], + maxHistory: 20, + }; + + // ── Boot ──────────────────────────────────────── + loadConfig(); + wireEvents(); + renderHistory(); + + async function loadConfig() { + try { + const r = await fetch("/api/config"); + const cfg = await r.json(); + urlHostEl.textContent = `${cfg.base_url}/`; + envChipEl.textContent = cfg.environment; + envChipEl.classList.remove("test", "prod"); + envChipEl.classList.add(cfg.environment === "test" ? "test" : "prod"); + envChipEl.title = `Tenant ${cfg.tenant_id || "(default)"} · key:${cfg.has_key ? "✓" : "✗"} secret:${cfg.has_secret ? 
"✓" : "✗"}`; + } catch (e) { + envChipEl.textContent = "offline"; + } + } + + function wireEvents() { + // Send button + sendBtn.addEventListener("click", send); + + // Ctrl/Cmd+Enter to send + document.addEventListener("keydown", (e) => { + if ((e.ctrlKey || e.metaKey) && e.key === "Enter") { + e.preventDefault(); + send(); + } + }); + + // Presets + $$(".preset").forEach((btn) => { + btn.addEventListener("click", () => { + const internal = btn.getAttribute("data-internal"); + if (internal) { + runInternal(internal, btn.querySelector(".p")?.textContent || internal); + return; + } + const m = btn.getAttribute("data-method") || "GET"; + const p = btn.getAttribute("data-path") || ""; + methodEl.value = m; + pathEl.value = p; + pathEl.focus(); + }); + }); + + // Tabs + tabsEl.forEach((tab) => { + tab.addEventListener("click", () => { + tabsEl.forEach((t) => t.classList.remove("active")); + tab.classList.add("active"); + state.activeTab = tab.getAttribute("data-tab"); + renderResponseTab(); + }); + }); + + copyBtn.addEventListener("click", copyResponse); + clearBtn.addEventListener("click", clearResponse); + clearHistBtn.addEventListener("click", () => { + state.history = []; + renderHistory(); + }); + + // Fusion local uploads + if (btnPickFiles && fileInput) { + btnPickFiles.addEventListener("click", () => fileInput.click()); + fileInput.addEventListener("change", handleFileSelect); + } + if (btnPickDir && dirInput) { + btnPickDir.addEventListener("click", () => dirInput.click()); + dirInput.addEventListener("change", handleFileSelect); + } + } + + // ── Core: send via proxy ──────────────────────── + async function send() { + const path = pathEl.value.trim().replace(/^\/+/, ""); + const method = methodEl.value; + if (!path) { + setStatusStrip({ error: "Missing path" }); + pathEl.focus(); + return; + } + + const qs = new URLSearchParams(); + qs.set("path", path); + + if (paramsEl.value.trim()) { + const extra = parseParams(paramsEl.value.trim()); + for (const [k, v] 
of extra) qs.append(k, v); + } + + const url = `/api/plex/raw?${qs.toString()}`; + + setLoading(true, `${method} ${path}`); + const started = performance.now(); + try { + const r = await fetch(url, { method }); + const data = await r.json(); + const elapsed = Math.round(performance.now() - started); + + const resp = { + method, + path, + http_status: data.http_status ?? 0, + http_reason: data.http_reason || "", + elapsed_ms: data.elapsed_ms ?? elapsed, + size_bytes: data.size_bytes ?? 0, + url: data.url || "", + headers: data.headers || {}, + body: data.body ?? data, + raw: data, + }; + state.lastResponse = resp; + setStatusStripFromResponse(resp); + renderResponseTab(); + pushHistory(resp); + } catch (err) { + state.lastResponse = { + error: err.message, + raw: { error: err.message }, + headers: {}, + body: null, + }; + setStatusStrip({ error: err.message }); + respPre.textContent = `// fetch failed\n${err.message}`; + } finally { + setLoading(false); + } + } + + // ── Internal (non-proxy) endpoints ────────────── + async function runInternal(endpoint, label) { + setLoading(true, `RUN ${label}`); + const started = performance.now(); + try { + const r = await fetch(endpoint); + const data = await r.json(); + const elapsed = Math.round(performance.now() - started); + const text = JSON.stringify(data, null, 2); + + const resp = { + method: "RUN", + path: endpoint, + http_status: r.status, + http_reason: r.statusText, + elapsed_ms: elapsed, + size_bytes: new Blob([text]).size, + url: endpoint, + headers: Object.fromEntries(r.headers.entries()), + body: data, + raw: data, + }; + state.lastResponse = resp; + setStatusStripFromResponse(resp); + renderResponseTab(); + pushHistory(resp); + } catch (err) { + setStatusStrip({ error: err.message }); + respPre.textContent = `// fetch failed\n${err.message}`; + } finally { + setLoading(false); + } + } + + // ── Fusion file upload ────────────────────────── + async function handleFileSelect(e) { + const files = e.target.files; + 
if (!files || files.length === 0) return;
+
+    const fd = new FormData();
+    let added = 0;
+    for (let i = 0; i < files.length; i++) {
+      if (files[i].name.toLowerCase().endsWith(".json")) {
+        fd.append(`file_${i}`, files[i]);
+        added++;
+      }
+    }
+    if (added === 0) {
+      setStatusStrip({ error: "No .json files in selection" });
+      return;
+    }
+
+    setLoading(true, `UPLOAD ${added} file${added === 1 ? "" : "s"}`);
+    const started = performance.now();
+    try {
+      const r = await fetch("/api/fusion/tools", { method: "POST", body: fd });
+      const data = await r.json();
+      const elapsed = Math.round(performance.now() - started);
+      const text = JSON.stringify(data, null, 2);
+
+      const resp = {
+        method: "POST",
+        path: "/api/fusion/tools",
+        http_status: r.status,
+        http_reason: r.statusText,
+        elapsed_ms: elapsed,
+        size_bytes: new Blob([text]).size,
+        url: "/api/fusion/tools",
+        headers: Object.fromEntries(r.headers.entries()),
+        body: data,
+        raw: data,
+      };
+      state.lastResponse = resp;
+      setStatusStripFromResponse(resp);
+      renderResponseTab();
+      pushHistory(resp);
+    } catch (err) {
+      setStatusStrip({ error: err.message });
+      respPre.textContent = `// upload failed\n${err.message}`;
+    } finally {
+      setLoading(false);
+      e.target.value = "";
+    }
+  }
+
+  // ── Status strip ────────────────────────────────
+  function setStatusStrip({ error } = {}) {
+    if (error) {
+      statusStripEl.innerHTML = `<span class="ss-status err">ERROR</span><span class="ss-item">${escapeHtml(error)}</span>`;
+      return;
+    }
+    statusStripEl.innerHTML = `<span class="ss-idle">Ready · Ctrl+Enter to send</span>`;
+  }
+
+  function setStatusStripFromResponse(r) {
+    const status = r.http_status;
+    let cls = "info";
+    if (status >= 200 && status < 300) cls = "ok";
+    else if (status >= 300 && status < 400) cls = "warn";
+    else if (status >= 400) cls = "err";
+    else if (status === 0) cls = "err";
+
+    const label = status ?
`${status} ${r.http_reason || ""}`.trim() : "NO RESP"; + + statusStripEl.innerHTML = ` + ${escapeHtml(label)} + time${r.elapsed_ms}ms + size${formatBytes(r.size_bytes)} + ${r.method}${escapeHtml(r.path)} + `; + } + + function setLoading(isLoading, label) { + sendBtn.disabled = isLoading; + if (isLoading) { + statusStripEl.innerHTML = `… ${escapeHtml(label || "sending")}`; + respPre.classList.add("empty"); + respPre.textContent = "// waiting for response"; + } + } + + // ── Response rendering ────────────────────────── + function renderResponseTab() { + const r = state.lastResponse; + if (!r) { + respPre.classList.add("empty"); + respPre.textContent = "// Response will appear here"; + return; + } + respPre.classList.remove("empty"); + + if (state.activeTab === "headers") { + const lines = Object.entries(r.headers || {}) + .map(([k, v]) => `${k}: ${v}`) + .join("\n"); + respPre.textContent = lines || "// no headers"; + return; + } + + if (state.activeTab === "raw") { + respPre.textContent = JSON.stringify(r.raw, null, 2); + return; + } + + // body tab — try to render just the body nicely + const body = r.body; + if (body == null) { + respPre.textContent = "// empty body"; + return; + } + if (typeof body === "string") { + respPre.textContent = body; + return; + } + try { + respPre.innerHTML = syntaxHighlight(JSON.stringify(body, null, 2)); + } catch { + respPre.textContent = String(body); + } + } + + function syntaxHighlight(json) { + const esc = escapeHtml(json); + return esc.replace( + /("(\\.|[^&])*?")(\s*:)?|\b(true|false|null)\b|-?\d+(?:\.\d+)?(?:[eE][+-]?\d+)?/g, + (match, strMatch, _c, colon) => { + if (strMatch !== undefined) { + return colon + ? 
              `<span class="json-key">${strMatch}</span>${colon}`
+            : `<span class="json-string">${strMatch}</span>`;
+        }
+        // NOTE: highlight span classes are assumed to match the static/ stylesheet.
+        if (match === "true" || match === "false") return `<span class="json-bool">${match}</span>`;
+        if (match === "null") return `<span class="json-null">${match}</span>`;
+        return `<span class="json-num">${match}</span>`;
+      }
+    );
+  }
+
+  // ── Copy / clear ────────────────────────────────
+  async function copyResponse() {
+    const txt = respPre.textContent || "";
+    try {
+      await navigator.clipboard.writeText(txt);
+      flashBtn(copyBtn, "Copied");
+    } catch {
+      flashBtn(copyBtn, "Fail");
+    }
+  }
+
+  function clearResponse() {
+    state.lastResponse = null;
+    respPre.classList.add("empty");
+    respPre.textContent = "// Response will appear here";
+    setStatusStrip();
+  }
+
+  function flashBtn(btn, text) {
+    const prev = btn.textContent;
+    btn.textContent = text;
+    setTimeout(() => (btn.textContent = prev), 900);
+  }
+
+  // ── History ─────────────────────────────────────
+  function pushHistory(r) {
+    const item = {
+      method: r.method,
+      path: r.path,
+      http_status: r.http_status,
+      elapsed_ms: r.elapsed_ms,
+      ts: Date.now(),
+      snapshot: r,
+    };
+    state.history.unshift(item);
+    state.history = state.history.slice(0, state.maxHistory);
+    renderHistory();
+  }
+
+  function renderHistory() {
+    historyListEl.innerHTML = "";
+    if (state.history.length === 0) {
+      const li = document.createElement("li");
+      li.className = "history-empty";
+      li.textContent = "No requests yet";
+      historyListEl.appendChild(li);
+      return;
+    }
+    state.history.forEach((item, idx) => {
+      const li = document.createElement("li");
+      const btn = document.createElement("button");
+      let cls = "history-item";
+      if (item.http_status >= 200 && item.http_status < 300) cls += " ok";
+      else if (item.http_status >= 300 && item.http_status < 400) cls += " warn";
+      else cls += " err";
+      btn.className = cls;
+      // NOTE: span classes below are assumed to match the static/ stylesheet.
+      btn.innerHTML = `
+        <span class="h-status">${item.http_status || "—"}</span>
+        <span class="h-path">${escapeHtml(item.path)}</span>
+        <span class="h-ms">${item.elapsed_ms}ms</span>
+      `;
+      btn.addEventListener("click", () => {
+        state.lastResponse = item.snapshot;
+        setStatusStripFromResponse(item.snapshot);
+        renderResponseTab();
+      });
      li.appendChild(btn);
+      historyListEl.appendChild(li);
+    });
+  }
+
+  // ── Helpers ─────────────────────────────────────
+  function parseParams(s) {
+    // Accept "k=v&k2=v2" or one-per-line
+    const out = [];
+    const chunks = s.split(/[&\n]/);
+    for (const chunk of chunks) {
+      const t = chunk.trim();
+      if (!t) continue;
+      const i = t.indexOf("=");
+      if (i === -1) out.push([t, ""]);
+      else out.push([t.slice(0, i).trim(), t.slice(i + 1).trim()]);
+    }
+    return out;
+  }
+
+  function escapeHtml(s) {
+    return String(s)
+      .replace(/&/g, "&amp;")
+      .replace(/</g, "&lt;")
+      .replace(/>/g, "&gt;")
+      .replace(/"/g, "&quot;")
+      .replace(/'/g, "&#39;");
+  }
+
+  function formatBytes(n) {
+    if (!n) return "0 B";
+    const units = ["B", "KB", "MB", "GB"];
+    let i = 0;
+    while (n >= 1024 && i < units.length - 1) {
+      n /= 1024;
+      i++;
+    }
+    return `${n.toFixed(n >= 10 || i === 0 ? 0 : 1)} ${units[i]}`;
+  }
+})();
diff --git a/templates/index.html b/templates/index.html
new file mode 100644
index 0000000..d274acf
--- /dev/null
+++ b/templates/index.html
@@ -0,0 +1,134 @@
+
+
+
+
+
+ plex-api · endpoint tester
+
+
+
+ + + + +
+ +
+ +
https://test.connect.plex.com/
+ + +
+ + +
+ + +
+ + +
+ Ready · Ctrl+Enter to send +
+ + +
+ + + +
+ + +
+ + +
+
// Response will appear here
+
+
+
+ + + + diff --git a/tests/__init__.py b/tests/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/tests/conftest.py b/tests/conftest.py new file mode 100644 index 0000000..f0b0cd8 --- /dev/null +++ b/tests/conftest.py @@ -0,0 +1,78 @@ +""" +Shared pytest fixtures and setup for the plex-api test suite. + +Sets PLEX_API_KEY and PLEX_API_SECRET to dummy values BEFORE any test +imports app.py — otherwise the import-time guard at the bottom of +plex_api.py will reject empty credentials and break test collection. + +Tests must NEVER hit the real Plex API. All requests should be patched +or routed through fake clients. +""" +import os +import sys +from pathlib import Path + +# Make the project root importable so `import plex_api` works regardless +# of where pytest is invoked from. +ROOT = Path(__file__).resolve().parent.parent +if str(ROOT) not in sys.path: + sys.path.insert(0, str(ROOT)) + +# Inject dummy credentials before any module-level reads happen. +os.environ.setdefault("PLEX_API_KEY", "test-key-do-not-use") +os.environ.setdefault("PLEX_API_SECRET", "test-secret-do-not-use") + + +# ───────────────────────────────────────────── +# Shared fixtures +# ───────────────────────────────────────────── +import pytest + + +class FakePlexClient: + """ + Drop-in replacement for plex_api.PlexClient that records calls + and returns canned responses without ever touching the network. 
+
+    Usage:
+        c = FakePlexClient()
+        c.set_response("tenants", [{"id": "...", "code": "G5"}])
+        result = c.get("mdm", "v1", "tenants")  # returns the canned response
+        assert c.calls == [("mdm", "v1", "tenants", None)]
+    """
+
+    def __init__(self, base="https://test.connect.plex.com"):
+        self.base = base
+        self.headers = {
+            "X-Plex-Connect-Api-Key": "test-key",
+            "X-Plex-Connect-Api-Secret": "test-secret",
+            "Content-Type": "application/json",
+            "Accept": "application/json",
+        }
+        self.calls = []
+        self._responses = {}
+        self._default = None
+
+    def set_response(self, resource, payload):
+        """Canned response for a specific resource string (last segment)."""
+        self._responses[resource] = payload
+
+    def set_default(self, payload):
+        """Canned response for any resource not explicitly set."""
+        self._default = payload
+
+    def get(self, collection, version, resource, params=None):
+        self.calls.append((collection, version, resource, params))
+        # Match by full resource string first, then by leading segment
+        if resource in self._responses:
+            return self._responses[resource]
+        head = resource.split("/")[0]
+        if head in self._responses:
+            return self._responses[head]
+        return self._default
+
+
+@pytest.fixture
+def fake_client():
+    """A fresh FakePlexClient for each test."""
+    return FakePlexClient()
diff --git a/tests/test_app_routes.py b/tests/test_app_routes.py
new file mode 100644
index 0000000..68f698f
--- /dev/null
+++ b/tests/test_app_routes.py
@@ -0,0 +1,203 @@
+"""
+Tests for the Flask routes in app.py.
+
+These are smoke tests — they verify that each route registers, responds
+with the right shape, and doesn't blow up. The actual Plex client and
+diagnostics are mocked so no real network calls happen.
+"""
+from unittest.mock import patch, MagicMock
+
+import pytest
+
+# conftest.py has already injected dummy PLEX_API_KEY/SECRET into env
+import app as app_module
+
+
+@pytest.fixture
+def client():
+    """Flask test client."""
+    app_module.app.config["TESTING"] = True
+    return app_module.app.test_client()
+
+
+# ─────────────────────────────────────────────
+# Index
+# ─────────────────────────────────────────────
+class TestIndex:
+    def test_index_returns_html(self, client):
+        rv = client.get("/")
+        assert rv.status_code == 200
+        assert b"<html" in rv.data  # sanity: response is an HTML document
+        assert b"plex-api" in rv.data
+
+
+# ─────────────────────────────────────────────
+# /api/config
+# ─────────────────────────────────────────────
+class TestConfig:
+    def test_config_returns_expected_keys(self, client):
+        rv = client.get("/api/config")
+        assert rv.status_code == 200
+        body = rv.get_json()
+        for key in ("base_url", "environment", "tenant_id", "has_key", "has_secret"):
+            assert key in body
+
+    def test_config_environment_is_test_or_prod(self, client):
+        rv = client.get("/api/config")
+        body = rv.get_json()
+        assert body["environment"] in ("test", "production")
+
+    def test_config_reports_credentials_present(self, client):
+        rv = client.get("/api/config")
+        body = rv.get_json()
+        # conftest.py injects dummy values, so both should be True
+        assert body["has_key"] is True
+        assert body["has_secret"] is True
+
+
+# ─────────────────────────────────────────────
+# /api/diagnostics/tenant
+# ─────────────────────────────────────────────
+class TestDiagnosticsTenant:
+    def test_returns_success_envelope(self, client):
+        with patch.object(app_module, "tenant_whoami") as mock_whoami:
+            mock_whoami.return_value = {
+                "match": "g5",
+                "summary": "test summary",
+                "configured_tenant_label": "G5",
+            }
+            rv = client.get("/api/diagnostics/tenant")
+            assert rv.status_code == 200
+            body = rv.get_json()
+            assert body["status"] == "success"
+            assert body["data"]["match"] == "g5"
+            assert body["data"]["summary"] ==
"test summary" + + def test_passes_configured_tenant_id_to_whoami(self, client): + with patch.object(app_module, "tenant_whoami") as mock_whoami: + mock_whoami.return_value = {"match": "g5", "summary": ""} + client.get("/api/diagnostics/tenant") + mock_whoami.assert_called_once() + # Second positional arg is the configured tenant ID + call_args = mock_whoami.call_args + assert call_args[0][1] == app_module.TENANT_ID + + def test_returns_500_on_exception(self, client): + with patch.object(app_module, "tenant_whoami", side_effect=RuntimeError("boom")): + rv = client.get("/api/diagnostics/tenant") + assert rv.status_code == 500 + body = rv.get_json() + assert body["status"] == "error" + assert "boom" in body["message"] + + +# ───────────────────────────────────────────── +# /api/diagnostics/tenants/list +# ───────────────────────────────────────────── +class TestDiagnosticsTenantsList: + def test_returns_list_payload(self, client): + with patch.object(app_module, "list_tenants") as mock_list: + mock_list.return_value = [{"id": "abc", "code": "TEST"}] + rv = client.get("/api/diagnostics/tenants/list") + assert rv.status_code == 200 + body = rv.get_json() + assert body["status"] == "success" + assert body["data"] == [{"id": "abc", "code": "TEST"}] + + +# ───────────────────────────────────────────── +# /api/diagnostics/tenants/ +# ───────────────────────────────────────────── +class TestDiagnosticsTenantById: + def test_passes_id_to_get_tenant(self, client): + with patch.object(app_module, "get_tenant") as mock_get: + mock_get.return_value = {"id": "abc-123", "name": "Test"} + rv = client.get("/api/diagnostics/tenants/abc-123") + assert rv.status_code == 200 + mock_get.assert_called_once() + assert mock_get.call_args[0][1] == "abc-123" + + +# ───────────────────────────────────────────── +# /api/plex/raw — proxy +# ───────────────────────────────────────────── +class TestPlexRawProxy: + def test_missing_path_returns_400(self, client): + rv = client.get("/api/plex/raw") 
+ assert rv.status_code == 400 + body = rv.get_json() + assert "Missing required" in body["message"] + + def test_forwards_get_to_plex(self, client): + mock_response = MagicMock() + mock_response.status_code = 200 + mock_response.reason = "OK" + mock_response.ok = True + mock_response.content = b'{"items":[]}' + mock_response.json.return_value = {"items": []} + mock_response.headers = {"Content-Type": "application/json"} + mock_response.url = "https://test.connect.plex.com/mdm/v1/parts" + + with patch.object(app_module.requests, "request", return_value=mock_response) as mock_req: + rv = client.get("/api/plex/raw?path=mdm/v1/parts") + assert rv.status_code == 200 + body = rv.get_json() + assert body["status"] == "success" + assert body["http_status"] == 200 + assert body["method"] == "GET" + assert body["body"] == {"items": []} + + # Verify the proxy actually forwarded to the right URL with the + # client's auth headers + mock_req.assert_called_once() + call_kwargs = mock_req.call_args.kwargs + assert "mdm/v1/parts" in call_kwargs["url"] + assert "X-Plex-Connect-Api-Key" in call_kwargs["headers"] + + def test_strips_path_query_param_from_forwarded_params(self, client): + mock_response = MagicMock() + mock_response.status_code = 200 + mock_response.reason = "OK" + mock_response.ok = True + mock_response.content = b"{}" + mock_response.json.return_value = {} + mock_response.headers = {} + mock_response.url = "https://test.connect.plex.com/mdm/v1/parts" + + with patch.object(app_module.requests, "request", return_value=mock_response) as mock_req: + client.get("/api/plex/raw?path=mdm/v1/parts&limit=5&status=Active") + forwarded = mock_req.call_args.kwargs["params"] + assert "path" not in forwarded + assert forwarded["limit"] == "5" + assert forwarded["status"] == "Active" + + def test_error_response_propagates_status(self, client): + mock_response = MagicMock() + mock_response.status_code = 403 + mock_response.reason = "Forbidden" + mock_response.ok = False + 
mock_response.content = b'{"error":"forbidden"}' + mock_response.json.return_value = {"error": "forbidden"} + mock_response.headers = {} + mock_response.url = "https://test.connect.plex.com/tooling/v1/tools" + + with patch.object(app_module.requests, "request", return_value=mock_response): + rv = client.get("/api/plex/raw?path=tooling/v1/tools") + assert rv.status_code == 200 # envelope status, not the inner one + body = rv.get_json() + assert body["status"] == "error" + assert body["http_status"] == 403 + + +# ───────────────────────────────────────────── +# /api/plex/discover +# ───────────────────────────────────────────── +class TestDiscover: + def test_calls_discover_all(self, client): + with patch.object(app_module, "discover_all") as mock_discover: + mock_discover.return_value = [{"endpoint": "x", "status": 200}] + rv = client.get("/api/plex/discover") + assert rv.status_code == 200 + body = rv.get_json() + assert body["status"] == "success" + assert body["data"] == [{"endpoint": "x", "status": 200}] diff --git a/tests/test_plex_api.py b/tests/test_plex_api.py new file mode 100644 index 0000000..c352cb7 --- /dev/null +++ b/tests/test_plex_api.py @@ -0,0 +1,85 @@ +""" +Tests for plex_api.PlexClient — header construction and configuration. + +These tests verify the BRIEFING item 1 fix: that the constructor accepts +api_secret and adds the X-Plex-Connect-Api-Secret header. They also lock +in the test/prod URL switch and tenant header behaviour. 
+""" +from plex_api import PlexClient, BASE_URL, TEST_URL + + +# ───────────────────────────────────────────── +# Header construction +# ───────────────────────────────────────────── +class TestPlexClientHeaders: + def test_sets_api_key_header(self): + c = PlexClient(api_key="my-key") + assert c.headers["X-Plex-Connect-Api-Key"] == "my-key" + + def test_sets_api_secret_header_when_provided(self): + c = PlexClient(api_key="k", api_secret="my-secret") + assert c.headers["X-Plex-Connect-Api-Secret"] == "my-secret" + + def test_omits_api_secret_header_when_empty(self): + c = PlexClient(api_key="k", api_secret="") + assert "X-Plex-Connect-Api-Secret" not in c.headers + + def test_omits_api_secret_header_by_default(self): + c = PlexClient(api_key="k") + assert "X-Plex-Connect-Api-Secret" not in c.headers + + def test_sets_tenant_id_header_when_provided(self): + c = PlexClient(api_key="k", tenant_id="abc-123") + assert c.headers["X-Plex-Connect-Tenant-Id"] == "abc-123" + + def test_omits_tenant_id_header_when_empty(self): + c = PlexClient(api_key="k", tenant_id="") + assert "X-Plex-Connect-Tenant-Id" not in c.headers + + def test_sets_content_type_and_accept_headers(self): + c = PlexClient(api_key="k") + assert c.headers["Content-Type"] == "application/json" + assert c.headers["Accept"] == "application/json" + + def test_all_three_auth_headers_when_full_credentials(self): + c = PlexClient(api_key="k", api_secret="s", tenant_id="t") + assert c.headers["X-Plex-Connect-Api-Key"] == "k" + assert c.headers["X-Plex-Connect-Api-Secret"] == "s" + assert c.headers["X-Plex-Connect-Tenant-Id"] == "t" + + +# ───────────────────────────────────────────── +# Environment routing +# ───────────────────────────────────────────── +class TestPlexClientEnvironment: + def test_use_test_true_uses_test_url(self): + c = PlexClient(api_key="k", use_test=True) + assert c.base == TEST_URL + assert "test." 
in c.base + + def test_use_test_false_uses_prod_url(self): + c = PlexClient(api_key="k", use_test=False) + assert c.base == BASE_URL + assert "test." not in c.base + + def test_use_test_default_is_prod(self): + # Default constructor arg is use_test=False + c = PlexClient(api_key="k") + assert c.base == BASE_URL + + +# ───────────────────────────────────────────── +# Throttle initialization +# ───────────────────────────────────────────── +class TestPlexClientThrottle: + def test_throttle_state_initialized(self): + c = PlexClient(api_key="k") + assert c._call_count == 0 + assert c._window_start > 0 + + def test_throttle_increments_call_count(self): + c = PlexClient(api_key="k") + c._throttle() + assert c._call_count == 1 + c._throttle() + assert c._call_count == 2 diff --git a/tests/test_plex_diagnostics.py b/tests/test_plex_diagnostics.py new file mode 100644 index 0000000..0bf14de --- /dev/null +++ b/tests/test_plex_diagnostics.py @@ -0,0 +1,202 @@ +""" +Tests for plex_diagnostics — tenant_whoami composite check. + +Verifies all 6 logic branches: + 1. Connected to Grace + 2. Connected to G5 + 3. Connected to a configured-but-unknown tenant + 4. Connected to an unrecognized tenant + 5. list_tenants returns None (auth failure) + 6. list_tenants returns empty list / no parseable IDs + +Plus normalization of dict-wrapped responses (Plex sometimes returns +{"items": [...]}, {"data": [...]}, or a bare list). 
+""" +import pytest + +from plex_diagnostics import ( + GRACE_TENANT_ID, + G5_TENANT_ID, + KNOWN_TENANTS, + list_tenants, + get_tenant, + tenant_whoami, +) + + +# ───────────────────────────────────────────── +# Constants sanity +# ───────────────────────────────────────────── +class TestKnownTenants: + def test_grace_tenant_id_in_known(self): + assert GRACE_TENANT_ID in KNOWN_TENANTS + assert KNOWN_TENANTS[GRACE_TENANT_ID] == "Grace Engineering" + + def test_g5_tenant_id_in_known(self): + assert G5_TENANT_ID in KNOWN_TENANTS + assert KNOWN_TENANTS[G5_TENANT_ID] == "G5" + + def test_grace_and_g5_are_distinct(self): + assert GRACE_TENANT_ID != G5_TENANT_ID + + +# ───────────────────────────────────────────── +# Raw wrappers — verify they call client.get with the right path +# ───────────────────────────────────────────── +class TestRawWrappers: + def test_list_tenants_calls_correct_endpoint(self, fake_client): + fake_client.set_response("tenants", []) + list_tenants(fake_client) + assert fake_client.calls[0][:3] == ("mdm", "v1", "tenants") + + def test_get_tenant_calls_correct_endpoint(self, fake_client): + fake_client.set_default({"id": "abc"}) + get_tenant(fake_client, "abc-123") + assert fake_client.calls[0][:3] == ("mdm", "v1", "tenants/abc-123") + + +# ───────────────────────────────────────────── +# tenant_whoami — match logic +# ───────────────────────────────────────────── +class TestTenantWhoami: + def test_grace_match(self, fake_client): + fake_client.set_response("tenants", [ + {"id": GRACE_TENANT_ID, "code": "GRACE", "name": "Grace Engineering"} + ]) + report = tenant_whoami(fake_client, GRACE_TENANT_ID) + assert report["match"] == "grace" + assert "Grace Engineering" in report["summary"] + assert "[OK]" in report["summary"] + + def test_g5_match(self, fake_client): + fake_client.set_response("tenants", [ + {"id": G5_TENANT_ID, "code": "G5", "name": "G5 Manufacturing"} + ]) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == 
"g5" + assert "G5" in report["summary"] + assert "[WARN]" in report["summary"] + + def test_no_data_when_list_returns_none(self, fake_client): + # No set_response → fake_client.get returns None + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == "no_data" + assert "no data" in report["summary"].lower() + + def test_no_data_when_list_returns_empty(self, fake_client): + fake_client.set_response("tenants", []) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == "no_data" + + def test_unknown_tenant_match(self, fake_client): + unknown_id = "11111111-2222-3333-4444-555555555555" + fake_client.set_response("tenants", [ + {"id": unknown_id, "code": "UNK", "name": "Unknown Co"} + ]) + report = tenant_whoami(fake_client, unknown_id) + assert report["match"] == "configured" + assert "Verify this is intentional" in report["summary"] + + def test_other_match_when_visible_unrecognized_and_no_config(self, fake_client): + unknown_id = "11111111-2222-3333-4444-555555555555" + fake_client.set_response("tenants", [ + {"id": unknown_id, "code": "UNK"} + ]) + report = tenant_whoami(fake_client, "") + assert report["match"] == "other" + + def test_grace_takes_priority_over_configured_g5(self, fake_client): + # Edge case: visible tenants include Grace, but TENANT_ID is still G5. + # The match should be "grace" because the routing has actually landed. 
+ fake_client.set_response("tenants", [ + {"id": GRACE_TENANT_ID, "code": "GRACE"} + ]) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == "grace" + + +# ───────────────────────────────────────────── +# Response shape normalization +# ───────────────────────────────────────────── +class TestResponseNormalization: + def test_handles_bare_list_response(self, fake_client): + fake_client.set_response("tenants", [ + {"id": G5_TENANT_ID, "code": "G5"} + ]) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert len(report["visible_tenants"]) == 1 + + def test_handles_dict_data_wrapper(self, fake_client): + fake_client.set_response("tenants", { + "data": [{"id": G5_TENANT_ID, "code": "G5"}] + }) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert len(report["visible_tenants"]) == 1 + assert report["match"] == "g5" + + def test_handles_dict_items_wrapper(self, fake_client): + fake_client.set_response("tenants", { + "items": [{"id": G5_TENANT_ID, "code": "G5"}] + }) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert len(report["visible_tenants"]) == 1 + + def test_handles_dict_rows_wrapper(self, fake_client): + fake_client.set_response("tenants", { + "rows": [{"id": G5_TENANT_ID, "code": "G5"}] + }) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert len(report["visible_tenants"]) == 1 + + def test_handles_single_object_response(self, fake_client): + # Some endpoints return a bare object instead of a list + fake_client.set_response("tenants", { + "id": G5_TENANT_ID, "code": "G5" + }) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert len(report["visible_tenants"]) == 1 + assert report["match"] == "g5" + + +# ───────────────────────────────────────────── +# Report structure +# ───────────────────────────────────────────── +class TestReportStructure: + def test_report_has_required_keys(self, fake_client): + fake_client.set_response("tenants", [{"id": G5_TENANT_ID}]) + report = tenant_whoami(fake_client, 
G5_TENANT_ID) + for key in ( + "configured_tenant_id", + "configured_tenant_label", + "visible_tenants", + "list_tenants_raw", + "get_tenant_raw", + "match", + "summary", + ): + assert key in report + + def test_report_records_configured_label(self, fake_client): + fake_client.set_response("tenants", [{"id": G5_TENANT_ID}]) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["configured_tenant_label"] == "G5" + + def test_report_records_unknown_label_for_unknown_id(self, fake_client): + unknown = "deadbeef-dead-beef-dead-beefdeadbeef" + fake_client.set_response("tenants", [{"id": unknown}]) + report = tenant_whoami(fake_client, unknown) + assert report["configured_tenant_label"] == "unknown" + + def test_get_tenant_called_when_configured_id_provided(self, fake_client): + fake_client.set_response("tenants", [{"id": G5_TENANT_ID}]) + fake_client.set_response(f"tenants/{G5_TENANT_ID}", {"id": G5_TENANT_ID, "name": "G5 Detail"}) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["get_tenant_raw"] is not None + # Two calls should have been made: list + get + assert any(c[2] == "tenants" for c in fake_client.calls) + assert any(c[2] == f"tenants/{G5_TENANT_ID}" for c in fake_client.calls) + + def test_get_tenant_skipped_when_no_configured_id(self, fake_client): + fake_client.set_response("tenants", [{"id": G5_TENANT_ID}]) + report = tenant_whoami(fake_client, "") + assert report["get_tenant_raw"] is None diff --git a/tests/test_tool_library_loader.py b/tests/test_tool_library_loader.py new file mode 100644 index 0000000..bfa2bd2 --- /dev/null +++ b/tests/test_tool_library_loader.py @@ -0,0 +1,176 @@ +""" +Tests for tool_library_loader — JSON parsing, schema validation, +stale-file guard, and directory glob. + +All tests use tmp_path so we don't touch the real CAMTools directory. 
+""" +import json +import os +import time +from datetime import datetime, timedelta +from pathlib import Path + +import pytest + +from tool_library_loader import ( + load_library, + load_all_libraries, + report_library_contents, + _check_file_age, + MAX_FILE_AGE_HOURS, +) + + +# ───────────────────────────────────────────── +# Helpers +# ───────────────────────────────────────────── +SAMPLE_LIBRARY = { + "data": [ + {"guid": "tool-1", "type": "flat end mill", "description": "5/8 SQ"}, + {"guid": "tool-2", "type": "drill", "description": "1/4 drill"}, + {"guid": "tool-3", "type": "holder", "description": "BT30"}, + ] +} + + +def write_json(path: Path, payload): + path.write_text(json.dumps(payload), encoding="utf-8") + + +# ───────────────────────────────────────────── +# load_library — happy path +# ───────────────────────────────────────────── +class TestLoadLibraryHappyPath: + def test_loads_valid_library(self, tmp_path): + f = tmp_path / "lib.json" + write_json(f, SAMPLE_LIBRARY) + tools = load_library(f) + assert tools is not None + assert len(tools) == 3 + assert tools[0]["guid"] == "tool-1" + + def test_empty_data_array_is_valid(self, tmp_path): + f = tmp_path / "lib.json" + write_json(f, {"data": []}) + tools = load_library(f) + assert tools == [] + + +# ───────────────────────────────────────────── +# load_library — error handling +# ───────────────────────────────────────────── +class TestLoadLibraryErrors: + def test_returns_none_for_malformed_json(self, tmp_path): + f = tmp_path / "bad.json" + f.write_text("{not valid json", encoding="utf-8") + assert load_library(f) is None + + def test_returns_none_for_missing_data_key(self, tmp_path): + f = tmp_path / "lib.json" + write_json(f, {"tools": [{"guid": "x"}]}) # wrong root key + assert load_library(f) is None + + def test_returns_none_when_data_is_not_a_list(self, tmp_path): + f = tmp_path / "lib.json" + write_json(f, {"data": "not a list"}) + assert load_library(f) is None + + def 
test_returns_none_for_stale_file(self, tmp_path): + f = tmp_path / "stale.json" + write_json(f, SAMPLE_LIBRARY) + # Backdate the mtime to 100 hours ago — well past the 25h limit. + old = time.time() - (100 * 3600) + os.utime(f, (old, old)) + assert load_library(f) is None + + +# ───────────────────────────────────────────── +# _check_file_age +# ───────────────────────────────────────────── +class TestFileAgeCheck: + def test_recent_file_passes(self, tmp_path): + f = tmp_path / "fresh.json" + f.write_text("{}", encoding="utf-8") + assert _check_file_age(f) is True + + def test_stale_file_fails(self, tmp_path): + f = tmp_path / "stale.json" + f.write_text("{}", encoding="utf-8") + old = time.time() - ((MAX_FILE_AGE_HOURS + 5) * 3600) + os.utime(f, (old, old)) + assert _check_file_age(f) is False + + def test_custom_max_age_window(self, tmp_path): + f = tmp_path / "f.json" + f.write_text("{}", encoding="utf-8") + old = time.time() - (3 * 3600) # 3 hours old + os.utime(f, (old, old)) + assert _check_file_age(f, max_age_hours=5) is True + assert _check_file_age(f, max_age_hours=1) is False + + +# ───────────────────────────────────────────── +# load_all_libraries +# ───────────────────────────────────────────── +class TestLoadAllLibraries: + def test_returns_empty_for_missing_directory(self, tmp_path): + missing = tmp_path / "nope" + result = load_all_libraries(missing) + assert result == {} + + def test_loads_multiple_files(self, tmp_path): + write_json(tmp_path / "a.json", SAMPLE_LIBRARY) + write_json(tmp_path / "b.json", {"data": [{"guid": "z", "type": "drill"}]}) + result = load_all_libraries(tmp_path) + assert set(result.keys()) == {"a", "b"} + assert len(result["a"]) == 3 + assert len(result["b"]) == 1 + + def test_returns_empty_when_no_json_files(self, tmp_path): + (tmp_path / "readme.txt").write_text("hi", encoding="utf-8") + result = load_all_libraries(tmp_path) + assert result == {} + + def test_abort_on_stale_aborts_full_run(self, tmp_path): + # Two files, 
one fresh, one stale → with abort_on_stale=True (default), + # the entire load should return {}. + write_json(tmp_path / "fresh.json", SAMPLE_LIBRARY) + stale = tmp_path / "stale.json" + write_json(stale, SAMPLE_LIBRARY) + old = time.time() - (100 * 3600) + os.utime(stale, (old, old)) + + result = load_all_libraries(tmp_path, abort_on_stale=True) + assert result == {} + + def test_skip_stale_continues_with_fresh(self, tmp_path): + write_json(tmp_path / "fresh.json", SAMPLE_LIBRARY) + stale = tmp_path / "stale.json" + write_json(stale, SAMPLE_LIBRARY) + old = time.time() - (100 * 3600) + os.utime(stale, (old, old)) + + result = load_all_libraries(tmp_path, abort_on_stale=False) + assert "fresh" in result + assert "stale" not in result + + +# ───────────────────────────────────────────── +# report_library_contents — smoke test +# ───────────────────────────────────────────── +class TestReportLibraryContents: + def test_runs_without_error(self, capsys): + libs = {"sample": SAMPLE_LIBRARY["data"]} + report_library_contents(libs) + captured = capsys.readouterr() + # Should print library name + per-type counts + assert "sample" in captured.out + assert "flat end mill" in captured.out + assert "drill" in captured.out + assert "holder" in captured.out + + def test_handles_empty_library(self, capsys): + report_library_contents({}) + captured = capsys.readouterr() + # No exception, no output for an empty dict + assert captured.out == "" From 79eefa2a51787383a2a08818a4204bc590c09798 Mon Sep 17 00:00:00 2001 From: grace-shane Date: Tue, 7 Apr 2026 12:53:01 -0400 Subject: [PATCH 02/56] feat: .env.local loader + Claude Preview launch config (#14) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Two small things bundled together so the dev loop is friction-free on a fresh machine. bootstrap.py - New optional dotenv-style loader. 
Reads KEY=VALUE pairs from a gitignored .env.local in the project root and injects them into os.environ via setdefault — meaning real shell env vars always win, never overridden. - Imported at the very top of plex_api.py BEFORE its module-level os.environ.get() reads, so PLEX_API_KEY / PLEX_API_SECRET pulled from .env.local are picked up correctly. - Missing file is a no-op. Comments (# ...) and blank lines are skipped. Matched surrounding quotes are stripped. CRLF tolerated. - Returns the count of variables actually injected, for diagnostics. Why - Previously, every shell that wanted to run app.py had to export PLEX_API_KEY and PLEX_API_SECRET first. Spawned subprocesses (like Claude Preview) couldn't always inherit them. .env.local gives a single per-machine source of truth that survives shell restarts and is invisible to git. Tests - tests/test_bootstrap.py — 16 new tests covering missing file, basic parsing, multi-pair, value-with-=, comment skipping, blank line skipping, lines-without-= skipping, double-quote strip, single-quote strip, mismatched quotes preserved, internal quotes preserved, setdefault preserves existing env, partial override, whitespace stripping, CRLF line endings. - All 81 tests pass locally (65 existing + 16 new). .gitignore - Added .env, .env.local, .env.*.local - Added editor/IDE noise (.vscode/, .idea/, *.swp) - Added Python tooling noise (.pytest_cache/, .coverage, htmlcov/, .tox/, *.egg-info/, build/, dist/) .env.example - New committed template showing the expected variable names with pointer to developers.plex.com. Copy → .env.local → fill in. .claude/launch.json - Claude Preview launch config so `preview_start plex-api` works out of the box. This was the loose end from the previous PR. 
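The setdefault precedence and parsing rules described above can be sketched as follows. This is a minimal stand-in for bootstrap.load_env_local written only to illustrate the documented behavior (comments and blank lines skipped, matched quotes stripped, real env vars win); it is not the shipped implementation, and the helper name here merely mirrors the one in bootstrap.py:

```python
import os
import tempfile
from pathlib import Path


def load_env_local(path):
    """Illustrative sketch: parse KEY=VALUE pairs, inject via setdefault,
    return the number of variables actually injected."""
    path = Path(path)
    if not path.exists():
        return 0  # missing file is a no-op
    injected = 0
    for raw in path.read_text(encoding="utf-8").splitlines():
        line = raw.strip()  # splitlines + strip tolerates CRLF endings
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip comments, blanks, and lines without '='
        key, _, value = line.partition("=")
        key, value = key.strip(), value.strip()
        # Strip surrounding quotes only when they match; keep mismatched ones.
        if len(value) >= 2 and value[0] == value[-1] and value[0] in "\"'":
            value = value[1:-1]
        if key not in os.environ:  # setdefault semantics: shell always wins
            os.environ[key] = value
            injected += 1
    return injected


with tempfile.TemporaryDirectory() as d:
    env_file = Path(d) / ".env.local"
    env_file.write_text('# comment\nPLEX_API_KEY="from-file"\nALREADY=file\n')
    os.environ["ALREADY"] = "shell"        # pre-existing shell variable
    os.environ.pop("PLEX_API_KEY", None)   # not set anywhere yet
    count = load_env_local(env_file)
    print(count)                           # 1 — only PLEX_API_KEY injected
    print(os.environ["ALREADY"])           # shell — .env.local did not override
```

This is why CI and production deployments can export real credentials without anyone having to delete the local file first.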
Co-authored-by: Claude Opus 4.6 (1M context) --- .claude/launch.json | 11 +++ .env.example | 11 +++ .gitignore | 19 +++++ bootstrap.py | 79 +++++++++++++++++++ plex_api.py | 6 ++ tests/test_bootstrap.py | 169 ++++++++++++++++++++++++++++++++++++++++ 6 files changed, 295 insertions(+) create mode 100644 .claude/launch.json create mode 100644 .env.example create mode 100644 bootstrap.py create mode 100644 tests/test_bootstrap.py diff --git a/.claude/launch.json b/.claude/launch.json new file mode 100644 index 0000000..fd7d9a3 --- /dev/null +++ b/.claude/launch.json @@ -0,0 +1,11 @@ +{ + "version": "0.0.1", + "configurations": [ + { + "name": "plex-api", + "runtimeExecutable": "py", + "runtimeArgs": ["app.py"], + "port": 5000 + } + ] +} diff --git a/.env.example b/.env.example new file mode 100644 index 0000000..59d70fb --- /dev/null +++ b/.env.example @@ -0,0 +1,11 @@ +# .env.example +# +# Copy this file to .env.local (which is gitignored) and fill in real values. +# bootstrap.py loads .env.local at startup so you don't have to set these +# variables in every shell. Real shell environment variables always win. +# +# Get your Consumer Key and Consumer Secret from: +# https://developers.plex.com/ → My Apps → + +PLEX_API_KEY=your-consumer-key-here +PLEX_API_SECRET=your-consumer-secret-here diff --git a/.gitignore b/.gitignore index 7a60b85..26dd9d7 100644 --- a/.gitignore +++ b/.gitignore @@ -1,2 +1,21 @@ __pycache__/ *.pyc + +# dotenv — local secrets, NEVER commit +.env +.env.local +.env.*.local + +# editor / IDE +.vscode/ +.idea/ +*.swp + +# Python tooling +.pytest_cache/ +.coverage +htmlcov/ +.tox/ +*.egg-info/ +build/ +dist/ diff --git a/bootstrap.py b/bootstrap.py new file mode 100644 index 0000000..54a82f3 --- /dev/null +++ b/bootstrap.py @@ -0,0 +1,79 @@ +""" +bootstrap.py +.env.local loader +================== +Optional dotenv-style loader for credentials and other environment +configuration. 
Imported at the very top of plex_api.py so that
+PLEX_API_KEY / PLEX_API_SECRET can come from a gitignored .env.local
+file in the project root, instead of requiring the user to set them
+in every shell.
+
+Behavior
+--------
+- If .env.local exists in the project root, parse KEY=VALUE pairs
+  and inject them into os.environ via setdefault — meaning any
+  variable already set in the real environment WINS, never overridden.
+- Lines starting with # are comments. Blank lines are ignored.
+- Surrounding single or double quotes on values are stripped.
+- Missing file is a no-op (no error).
+
+Why setdefault, not direct assignment
+-------------------------------------
+A real shell environment variable should always override .env.local —
+that lets CI, production deployments, and ad-hoc shell exports take
+precedence over local dev defaults without anyone having to remember
+to delete the file.
+"""
+import os
+from pathlib import Path
+
+# Project root = directory containing this file (bootstrap.py lives at the root)
+_PROJECT_ROOT = Path(__file__).resolve().parent
+
+
+def load_env_local(path: Path | str | None = None) -> int:
+    """
+    Load KEY=VALUE pairs from a .env.local file into os.environ via setdefault.
+
+    Parameters
+    ----------
+    path : Path | str | None
+        Override the file path. Defaults to ``<project root>/.env.local``.
+
+    Returns
+    -------
+    int
+        Number of variables actually injected into os.environ
+        (i.e. that were not already present).
+ """ + if path is None: + path = _PROJECT_ROOT / ".env.local" + else: + path = Path(path) + + if not path.exists(): + return 0 + + injected = 0 + for line in path.read_text(encoding="utf-8").splitlines(): + line = line.strip() + if not line or line.startswith("#") or "=" not in line: + continue + + key, _, value = line.partition("=") + key = key.strip() + value = value.strip() + + # Strip matched surrounding quotes (' or ") + if len(value) >= 2 and value[0] == value[-1] and value[0] in ("'", '"'): + value = value[1:-1] + + if key and key not in os.environ: + os.environ[key] = value + injected += 1 + + return injected + + +# Auto-load on import — no-op if .env.local does not exist. +load_env_local() diff --git a/plex_api.py b/plex_api.py index c92af2a..1e1dd72 100644 --- a/plex_api.py +++ b/plex_api.py @@ -7,6 +7,12 @@ Rate: 200 calls/minute """ +# bootstrap MUST be imported before anything reads PLEX_API_KEY/SECRET from +# os.environ — it injects values from .env.local (if present) so the dev +# loop doesn't require setting env vars in every shell. Real shell env +# always wins via setdefault semantics. +import bootstrap # noqa: F401 + import requests import json import csv diff --git a/tests/test_bootstrap.py b/tests/test_bootstrap.py new file mode 100644 index 0000000..1a2ddcc --- /dev/null +++ b/tests/test_bootstrap.py @@ -0,0 +1,169 @@ +""" +Tests for bootstrap.py — .env.local loader. 
+ +Verifies the contract: + - missing file is a no-op + - KEY=VALUE pairs are parsed and injected via setdefault + - existing env vars are NEVER overridden + - blank lines and # comments are skipped + - matched surrounding quotes (single or double) are stripped + - returns the count of injected variables +""" +import os + +import pytest + +from bootstrap import load_env_local + + +# ───────────────────────────────────────────── +# Missing file behavior +# ───────────────────────────────────────────── +class TestMissingFile: + def test_missing_file_is_noop(self, tmp_path): + missing = tmp_path / "does-not-exist.env" + result = load_env_local(missing) + assert result == 0 + + def test_missing_file_does_not_raise(self, tmp_path): + # Should not raise even if directory itself does not exist + nowhere = tmp_path / "nope" / "alsonope" / ".env.local" + load_env_local(nowhere) # no exception + + +# ───────────────────────────────────────────── +# Basic parsing +# ───────────────────────────────────────────── +class TestBasicParsing: + def test_simple_key_value(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("FOO=bar\n") + monkeypatch.delenv("FOO", raising=False) + injected = load_env_local(f) + assert injected == 1 + assert os.environ["FOO"] == "bar" + + def test_multiple_pairs(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("FOO=bar\nBAZ=qux\nHELLO=world\n") + for k in ("FOO", "BAZ", "HELLO"): + monkeypatch.delenv(k, raising=False) + injected = load_env_local(f) + assert injected == 3 + assert os.environ["FOO"] == "bar" + assert os.environ["BAZ"] == "qux" + assert os.environ["HELLO"] == "world" + + def test_value_can_contain_equals(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("URL=https://example.com/?a=1&b=2\n") + monkeypatch.delenv("URL", raising=False) + load_env_local(f) + assert os.environ["URL"] == "https://example.com/?a=1&b=2" + + +# ───────────────────────────────────────────── +# Comments and 
blank lines +# ───────────────────────────────────────────── +class TestCommentsAndBlanks: + def test_skips_comments(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("# this is a comment\nFOO=bar\n# another comment\n") + monkeypatch.delenv("FOO", raising=False) + injected = load_env_local(f) + assert injected == 1 + assert os.environ["FOO"] == "bar" + + def test_skips_blank_lines(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("\n\nFOO=bar\n\n\nBAZ=qux\n") + for k in ("FOO", "BAZ"): + monkeypatch.delenv(k, raising=False) + injected = load_env_local(f) + assert injected == 2 + + def test_skips_lines_without_equals(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("not-a-pair\nFOO=bar\nalso-not-a-pair\n") + monkeypatch.delenv("FOO", raising=False) + injected = load_env_local(f) + assert injected == 1 + + +# ───────────────────────────────────────────── +# Quote stripping +# ───────────────────────────────────────────── +class TestQuoteStripping: + def test_strips_double_quotes(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text('FOO="bar baz"\n') + monkeypatch.delenv("FOO", raising=False) + load_env_local(f) + assert os.environ["FOO"] == "bar baz" + + def test_strips_single_quotes(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("FOO='bar baz'\n") + monkeypatch.delenv("FOO", raising=False) + load_env_local(f) + assert os.environ["FOO"] == "bar baz" + + def test_does_not_strip_mismatched_quotes(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("FOO=\"bar'\n") + monkeypatch.delenv("FOO", raising=False) + load_env_local(f) + assert os.environ["FOO"] == "\"bar'" + + def test_preserves_internal_quotes(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text('FOO=bar"baz\n') + monkeypatch.delenv("FOO", raising=False) + load_env_local(f) + assert os.environ["FOO"] == 'bar"baz' + + +# ───────────────────────────────────────────── +# setdefault 
semantics — real env always wins +# ───────────────────────────────────────────── +class TestSetdefaultBehavior: + def test_existing_env_var_is_not_overridden(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("FOO=from-file\n") + monkeypatch.setenv("FOO", "from-shell") + injected = load_env_local(f) + # Was already set, so injected count is 0 + assert injected == 0 + assert os.environ["FOO"] == "from-shell" + + def test_partial_override_only_sets_missing(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text("FOO=from-file\nBAZ=from-file\n") + monkeypatch.setenv("FOO", "from-shell") + monkeypatch.delenv("BAZ", raising=False) + injected = load_env_local(f) + assert injected == 1 + assert os.environ["FOO"] == "from-shell" + assert os.environ["BAZ"] == "from-file" + + +# ───────────────────────────────────────────── +# Whitespace handling +# ───────────────────────────────────────────── +class TestWhitespace: + def test_strips_whitespace_around_key_and_value(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_text(" FOO = bar \n") + monkeypatch.delenv("FOO", raising=False) + load_env_local(f) + assert os.environ["FOO"] == "bar" + + def test_handles_crlf_line_endings(self, tmp_path, monkeypatch): + f = tmp_path / ".env" + f.write_bytes(b"FOO=bar\r\nBAZ=qux\r\n") + monkeypatch.delenv("FOO", raising=False) + monkeypatch.delenv("BAZ", raising=False) + injected = load_env_local(f) + assert injected == 2 + assert os.environ["FOO"] == "bar" + assert os.environ["BAZ"] == "qux" From 7892923018fed20e19554f8e745925c9764ce414 Mon Sep 17 00:00:00 2001 From: grace-shane Date: Tue, 7 Apr 2026 13:09:12 -0400 Subject: [PATCH 03/56] fix: surface HTTP errors instead of swallowing them as None (#15) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Background ---------- PlexClient.get() previously caught all HTTPErrors and returned None. 
This made tenant_whoami report match="no_data" with summary="credentials likely invalid" whenever the underlying call returned 401, 403, 404, 5xx or hit a network failure — even though the actual error was a clean 401 from Plex's gateway. The diagnostic suite was hiding the truth and forced us to debug via curl + the proxy route. Changes ------- plex_api.py - New PlexClient.get_envelope() method. Returns a structured envelope {ok, status, reason, body, elapsed_ms, url, error} so callers can distinguish: * 2xx success (with parsed JSON, text, or empty body) * HTTP errors (401, 403, 404, 5xx) — body is preserved * Network failures (DNS, timeout, connection refused) — status=0 * JSON parse failures (text/html responses) — falls through to text Never raises, never swallows. - PlexClient.get() refactored to delegate to get_envelope() for uniformity. Behaviour unchanged: returns parsed JSON on success or None on any failure. Legacy stdout logging on errors is preserved so existing tests and call sites are unaffected. plex_diagnostics.py - tenant_whoami() now calls client.get_envelope() directly so HTTP errors surface as new match values: * "auth_failed" — for 401 / 403 * "request_failed" — for network errors and other 4xx/5xx Both branches return helpful summary strings pointing the operator at the actual problem (PLEX_API_KEY/SECRET for auth, network/host reachability for request failures). - Report now includes a list_tenants_envelope key with ok/status/ reason/elapsed_ms/error so the UI can show the underlying HTTP metadata even on success. tests/ - conftest.py FakePlexClient grows a get_envelope() method that synthesizes a 200 OK envelope around set_response bodies, plus a new set_envelope() for injecting specific error envelopes. - test_plex_api.py adds 16 tests for get_envelope (200, 401, 403, 404, 500, ConnectionError, Timeout, text body fallback, empty body, url propagation, elapsed_ms) and 3 for the refactored get() legacy interface. 
- test_plex_diagnostics.py adds 9 tests for the new branches: * 401 → auth_failed (+ summary mentions PLEX_API_KEY/SECRET) * 403 → auth_failed * auth_failed preserves envelope metadata * auth_failed does not waste a get_tenant call * 404 → request_failed * 500 → request_failed * network error → request_failed (+ summary contains "could not reach") * Timeout → request_failed * request_failed preserves envelope metadata Plus 1 test that the success path includes the envelope metadata. - Existing no_data tests updated for the new summary text. Total: 105 tests pass locally (24 net new). Co-authored-by: Claude Opus 4.6 (1M context) --- plex_api.py | 87 ++++++++++++++++-- plex_diagnostics.py | 61 ++++++++++--- tests/conftest.py | 52 +++++++++-- tests/test_plex_api.py | 161 ++++++++++++++++++++++++++++++++- tests/test_plex_diagnostics.py | 115 ++++++++++++++++++++++- 5 files changed, 436 insertions(+), 40 deletions(-) diff --git a/plex_api.py b/plex_api.py index 1e1dd72..d25c359 100644 --- a/plex_api.py +++ b/plex_api.py @@ -71,20 +71,87 @@ def _throttle(self): self._window_start = time.time() def get(self, collection, version, resource, params=None): - """GET request with auto-throttling and error handling""" + """ + GET request with auto-throttling. + + Returns the parsed JSON body on success, or None on any failure. + Backward-compatible legacy interface — callers that need to know + WHY a request failed (auth error vs network error vs 404 vs JSON + parse failure) should use ``get_envelope()`` instead. + """ + env = self.get_envelope(collection, version, resource, params) + if not env["ok"]: + # Preserve the historical "log to stdout" behaviour for the + # legacy callers, then collapse to None. 
+ print(f" HTTP Error {env['status']}: {env['url']}") + if env["body"] is not None: + snippet = str(env["body"])[:300] + print(f" Response: {snippet}") + return None + return env["body"] + + def get_envelope(self, collection, version, resource, params=None): + """ + GET request returning a structured envelope. + + Unlike ``get()`` (which returns parsed JSON on success and None on + any failure), this method returns a dict so callers can distinguish: + + - successful empty / null responses + - authentication errors (401, 403) + - other HTTP errors (404, 5xx, ...) + - network failures (DNS, timeout, connection refused, ...) + - JSON parse failures (response was text/html instead of JSON) + + Returns + ------- + dict + { + "ok": bool, # True iff response was 2xx + "status": int, # HTTP status; 0 if no response + "reason": str, # HTTP reason phrase or + # exception class name + "body": Any, # parsed JSON if possible, + # else text, else None + "elapsed_ms": int, + "url": str, + "error": str | None, # human-readable error if not ok + } + """ self._throttle() url = f"{self.base}/{collection}/{version}/{resource}" + started = time.perf_counter() + try: r = requests.get(url, headers=self.headers, params=params, timeout=30) - r.raise_for_status() - return r.json() - except requests.exceptions.HTTPError as e: - print(f" HTTP Error {r.status_code}: {url}") - print(f" Response: {r.text[:300]}") - return None - except Exception as e: - print(f" Error: {e}") - return None + except requests.exceptions.RequestException as e: + return { + "ok": False, + "status": 0, + "reason": e.__class__.__name__, + "body": None, + "elapsed_ms": int((time.perf_counter() - started) * 1000), + "url": url, + "error": str(e), + } + + elapsed_ms = int((time.perf_counter() - started) * 1000) + + # Try JSON first; fall back to text; fall back to None. 
+ try: + body = r.json() + except ValueError: + body = r.text or None + + return { + "ok": r.ok, + "status": r.status_code, + "reason": r.reason or "", + "body": body, + "elapsed_ms": elapsed_ms, + "url": r.url, + "error": None if r.ok else f"HTTP {r.status_code} {r.reason}".strip(), + } def get_paginated(self, collection, version, resource, params=None, limit=100): """GET all pages of a paginated endpoint""" diff --git a/plex_diagnostics.py b/plex_diagnostics.py index 05c5c57..5941a82 100644 --- a/plex_diagnostics.py +++ b/plex_diagnostics.py @@ -62,15 +62,21 @@ def tenant_whoami(client, configured_tenant_id: str = "") -> dict: then compares the visible tenant(s) against the known Grace and G5 UUIDs so the UI can show a clear "is this the right tenant?" status. + Uses ``client.get_envelope()`` so HTTP errors (401, 403, 404, 5xx) and + network failures surface as distinct ``match`` values instead of being + swallowed into ``no_data``. + Returns a structured report: { "configured_tenant_id": "", "configured_tenant_label": "Grace Engineering" | "G5" | "unknown", "visible_tenants": [{id, code, name, label}, ...], - "list_tenants_raw": , - "get_tenant_raw": , + "list_tenants_raw": , + "list_tenants_envelope": {ok, status, reason, elapsed_ms, error}, + "get_tenant_raw": , "match": "grace" | "g5" | "configured" | - "other" | "no_data", + "other" | "no_data" | + "auth_failed" | "request_failed", "summary": "", } """ @@ -79,22 +85,48 @@ def tenant_whoami(client, configured_tenant_id: str = "") -> dict: "configured_tenant_label": KNOWN_TENANTS.get(configured_tenant_id, "unknown"), "visible_tenants": [], "list_tenants_raw": None, + "list_tenants_envelope": None, "get_tenant_raw": None, "match": "no_data", "summary": "", } - # ── Step 1: list_tenants ──────────────────── - listed = list_tenants(client) - report["list_tenants_raw"] = listed - - if listed is None: - report["summary"] = ( - "list_tenants returned no data — credentials likely invalid, " - "or 
test.connect.plex.com is unreachable." - ) + # ── Step 1: list_tenants via get_envelope so HTTP errors surface ──── + list_env = client.get_envelope("mdm", "v1", "tenants") + report["list_tenants_envelope"] = { + "ok": list_env["ok"], + "status": list_env["status"], + "reason": list_env["reason"], + "elapsed_ms": list_env["elapsed_ms"], + "error": list_env["error"], + } + report["list_tenants_raw"] = list_env["body"] + + if not list_env["ok"]: + status = list_env["status"] + if status in (401, 403): + report["match"] = "auth_failed" + report["summary"] = ( + f"[ERROR] list_tenants returned HTTP {status} {list_env['reason']}. " + f"Check that PLEX_API_KEY and PLEX_API_SECRET are valid in .env.local " + f"or your shell environment. Underlying error: {list_env['error']}" + ) + elif status == 0: + report["match"] = "request_failed" + report["summary"] = ( + f"[ERROR] list_tenants could not reach Plex: {list_env['error']}. " + f"Check network connectivity and that {client.base} is reachable." + ) + else: + report["match"] = "request_failed" + report["summary"] = ( + f"[ERROR] list_tenants returned HTTP {status} {list_env['reason']}: " + f"{list_env['error']}" + ) return report + listed = list_env["body"] + # Normalize the response. Plex sometimes wraps lists in {data|items|rows}. if isinstance(listed, list): items = listed @@ -131,8 +163,9 @@ def tenant_whoami(client, configured_tenant_id: str = "") -> dict: if not visible_ids: report["match"] = "no_data" report["summary"] = ( - "list_tenants returned a response but no tenant IDs could be parsed. " - "Check the raw response in this report." + "list_tenants returned no data — the response was empty or " + "contained no parseable tenant IDs. Check the raw response " + "in this report." 
) return report diff --git a/tests/conftest.py b/tests/conftest.py index f0b0cd8..e7bb433 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -34,11 +34,16 @@ class FakePlexClient: Drop-in replacement for plex_api.PlexClient that records calls and returns canned responses without ever touching the network. - Usage: - c = FakePlexClient() - c.set_response("tenants", [{"id": "...", "code": "G5"}]) - result = c.get("mdm", "v1", "tenants") # returns the canned response - assert c.calls == [("mdm", "v1", "tenants")] + Two parallel canned-response stores: + - ``set_response(resource, body)`` — body returned by both ``get()`` + and ``get_envelope()`` (the latter wraps the body in a synthetic + 200 OK envelope). + - ``set_envelope(resource, envelope)`` — full envelope dict returned + by ``get_envelope()`` only. Use this to test error branches like + 401/403/network failure. + + If both are set for the same resource, ``set_envelope`` wins for + ``get_envelope()`` calls and ``set_response`` is used for ``get()``. 
""" def __init__(self, base="https://test.connect.plex.com"): @@ -51,19 +56,22 @@ def __init__(self, base="https://test.connect.plex.com"): } self.calls = [] self._responses = {} + self._envelopes = {} self._default = None def set_response(self, resource, payload): - """Canned response for a specific resource string (last segment).""" + """Canned body for a specific resource string (last segment).""" self._responses[resource] = payload + def set_envelope(self, resource, envelope): + """Canned full envelope (overrides set_response for get_envelope).""" + self._envelopes[resource] = envelope + def set_default(self, payload): - """Canned response for any resource not explicitly set.""" + """Canned body for any resource not explicitly set.""" self._default = payload - def get(self, collection, version, resource, params=None): - self.calls.append((collection, version, resource, params)) - # Match by full resource string first, then by leading segment + def _lookup_body(self, resource): if resource in self._responses: return self._responses[resource] head = resource.split("/")[0] @@ -71,6 +79,30 @@ def get(self, collection, version, resource, params=None): return self._responses[head] return self._default + def get(self, collection, version, resource, params=None): + self.calls.append((collection, version, resource, params)) + return self._lookup_body(resource) + + def get_envelope(self, collection, version, resource, params=None): + self.calls.append((collection, version, resource, params)) + # Explicit envelope override wins + if resource in self._envelopes: + return self._envelopes[resource] + head = resource.split("/")[0] + if head in self._envelopes: + return self._envelopes[head] + # Otherwise synthesize a 200 OK envelope wrapping the canned body + body = self._lookup_body(resource) + return { + "ok": True, + "status": 200, + "reason": "OK", + "body": body, + "elapsed_ms": 0, + "url": f"{self.base}/{collection}/{version}/{resource}", + "error": None, + } + 
@pytest.fixture def fake_client(): diff --git a/tests/test_plex_api.py b/tests/test_plex_api.py index c352cb7..be24cb1 100644 --- a/tests/test_plex_api.py +++ b/tests/test_plex_api.py @@ -1,10 +1,12 @@ """ -Tests for plex_api.PlexClient — header construction and configuration. - -These tests verify the BRIEFING item 1 fix: that the constructor accepts -api_secret and adds the X-Plex-Connect-Api-Secret header. They also lock -in the test/prod URL switch and tenant header behaviour. +Tests for plex_api.PlexClient — header construction, configuration, +and the get_envelope() method. """ +from unittest.mock import MagicMock, patch + +import pytest +import requests + from plex_api import PlexClient, BASE_URL, TEST_URL @@ -83,3 +85,152 @@ def test_throttle_increments_call_count(self): assert c._call_count == 1 c._throttle() assert c._call_count == 2 + + +# ───────────────────────────────────────────── +# get_envelope() — structured success/error envelope +# ───────────────────────────────────────────── +def _mock_response(status, json_body=None, text="", reason="", url=""): + """Build a MagicMock that mimics a requests.Response.""" + r = MagicMock(spec=requests.Response) + r.status_code = status + r.reason = reason or {200: "OK", 401: "Unauthorized", 403: "Forbidden", + 404: "Not Found", 500: "Internal Server Error"}.get(status, "") + r.ok = 200 <= status < 300 + r.text = text + r.url = url or "https://test.connect.plex.com/mdm/v1/x" + if json_body is not None: + r.json.return_value = json_body + else: + r.json.side_effect = ValueError("no json") + return r + + +class TestGetEnvelopeSuccess: + def test_returns_ok_envelope_for_200(self): + c = PlexClient(api_key="k", api_secret="s", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response( + 200, json_body=[{"id": "abc", "code": "G5"}] + )): + env = c.get_envelope("mdm", "v1", "tenants") + assert env["ok"] is True + assert env["status"] == 200 + assert env["reason"] == "OK" + assert env["body"] == 
[{"id": "abc", "code": "G5"}] + assert env["error"] is None + assert env["elapsed_ms"] >= 0 + + def test_envelope_contains_url(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response( + 200, json_body={}, url="https://test.connect.plex.com/mdm/v1/parts" + )): + env = c.get_envelope("mdm", "v1", "parts") + assert "mdm/v1/parts" in env["url"] + + def test_text_body_when_json_parse_fails(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response( + 200, json_body=None, text="not json" + )): + env = c.get_envelope("mdm", "v1", "tenants") + assert env["ok"] is True + assert env["body"] == "not json" + + def test_none_body_when_text_empty_and_no_json(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response( + 200, json_body=None, text="" + )): + env = c.get_envelope("mdm", "v1", "tenants") + assert env["body"] is None + + +class TestGetEnvelopeHttpErrors: + def test_401_returns_error_envelope(self): + c = PlexClient(api_key="k", api_secret="s", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response( + 401, json_body={"code": "REQUEST_NOT_AUTHENTICATED"} + )): + env = c.get_envelope("mdm", "v1", "tenants") + assert env["ok"] is False + assert env["status"] == 401 + assert env["reason"] == "Unauthorized" + assert env["body"] == {"code": "REQUEST_NOT_AUTHENTICATED"} + assert "401" in env["error"] + assert "Unauthorized" in env["error"] + + def test_403_returns_error_envelope(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response(403, json_body={})): + env = c.get_envelope("tooling", "v1", "tools") + assert env["ok"] is False + assert env["status"] == 403 + + def test_404_returns_error_envelope(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", 
return_value=_mock_response(404, json_body={})): + env = c.get_envelope("mdm", "v1", "tenants/nonexistent") + assert env["ok"] is False + assert env["status"] == 404 + + def test_500_returns_error_envelope(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response(500, json_body={})): + env = c.get_envelope("mdm", "v1", "tenants") + assert env["ok"] is False + assert env["status"] == 500 + + +class TestGetEnvelopeNetworkErrors: + def test_connection_error_returns_status_zero(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", side_effect=requests.exceptions.ConnectionError("refused")): + env = c.get_envelope("mdm", "v1", "tenants") + assert env["ok"] is False + assert env["status"] == 0 + assert env["reason"] == "ConnectionError" + assert env["body"] is None + assert "refused" in env["error"] + + def test_timeout_returns_status_zero(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", side_effect=requests.exceptions.Timeout("timed out")): + env = c.get_envelope("mdm", "v1", "tenants") + assert env["ok"] is False + assert env["status"] == 0 + assert env["reason"] == "Timeout" + + def test_dns_failure_returns_status_zero(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", side_effect=requests.exceptions.ConnectionError("dns")): + env = c.get_envelope("mdm", "v1", "tenants") + assert env["status"] == 0 + + +# ───────────────────────────────────────────── +# get() (legacy) — verify backward compat after refactor +# ───────────────────────────────────────────── +class TestGetLegacy: + def test_get_returns_body_on_success(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response( + 200, json_body={"items": [1, 2, 3]} + )): + result = c.get("mdm", "v1", "tenants") + assert result == {"items": [1, 2, 3]} + + def 
test_get_returns_none_on_4xx(self, capsys): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", return_value=_mock_response(401, json_body={"code": "X"})): + result = c.get("mdm", "v1", "tenants") + assert result is None + # Legacy stdout logging is preserved + captured = capsys.readouterr() + assert "401" in captured.out + + def test_get_returns_none_on_network_error(self): + c = PlexClient(api_key="k", use_test=True) + with patch("plex_api.requests.get", side_effect=requests.exceptions.ConnectionError("x")): + result = c.get("mdm", "v1", "tenants") + assert result is None diff --git a/tests/test_plex_diagnostics.py b/tests/test_plex_diagnostics.py index 0bf14de..d160934 100644 --- a/tests/test_plex_diagnostics.py +++ b/tests/test_plex_diagnostics.py @@ -78,7 +78,9 @@ def test_g5_match(self, fake_client): assert "[WARN]" in report["summary"] def test_no_data_when_list_returns_none(self, fake_client): - # No set_response → fake_client.get returns None + # No set_response → FakePlexClient.get_envelope synthesizes a 200 OK + # with body=None → tenant_whoami should still report no_data because + # there are no parseable IDs to work with. 
report = tenant_whoami(fake_client, G5_TENANT_ID) assert report["match"] == "no_data" assert "no data" in report["summary"].lower() @@ -87,6 +89,7 @@ def test_no_data_when_list_returns_empty(self, fake_client): fake_client.set_response("tenants", []) report = tenant_whoami(fake_client, G5_TENANT_ID) assert report["match"] == "no_data" + assert "no data" in report["summary"].lower() def test_unknown_tenant_match(self, fake_client): unknown_id = "11111111-2222-3333-4444-555555555555" @@ -200,3 +203,113 @@ def test_get_tenant_skipped_when_no_configured_id(self, fake_client): fake_client.set_response("tenants", [{"id": G5_TENANT_ID}]) report = tenant_whoami(fake_client, "") assert report["get_tenant_raw"] is None + + def test_report_includes_envelope_metadata(self, fake_client): + fake_client.set_response("tenants", [{"id": G5_TENANT_ID}]) + report = tenant_whoami(fake_client, G5_TENANT_ID) + env = report["list_tenants_envelope"] + assert env is not None + assert env["ok"] is True + assert env["status"] == 200 + assert env["error"] is None + + +# ───────────────────────────────────────────── +# HTTP error visibility — the whole reason for this PR +# ───────────────────────────────────────────── +def _err_envelope(status, reason, error_msg, body=None): + """Build a fake error envelope as PlexClient.get_envelope would return.""" + return { + "ok": False, + "status": status, + "reason": reason, + "body": body, + "elapsed_ms": 100, + "url": "https://test.connect.plex.com/mdm/v1/tenants", + "error": error_msg, + } + + +class TestAuthFailureBranch: + def test_401_maps_to_auth_failed(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 401, "Unauthorized", "HTTP 401 Unauthorized", + body={"code": "REQUEST_NOT_AUTHENTICATED"} + )) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == "auth_failed" + assert "401" in report["summary"] + assert "PLEX_API_KEY" in report["summary"] + assert "PLEX_API_SECRET" in report["summary"] + + 
def test_403_maps_to_auth_failed(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 403, "Forbidden", "HTTP 403 Forbidden" + )) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == "auth_failed" + assert "403" in report["summary"] + + def test_auth_failed_preserves_envelope_metadata(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 401, "Unauthorized", "HTTP 401 Unauthorized" + )) + report = tenant_whoami(fake_client, G5_TENANT_ID) + env = report["list_tenants_envelope"] + assert env["ok"] is False + assert env["status"] == 401 + assert env["error"] == "HTTP 401 Unauthorized" + + def test_auth_failed_does_not_call_get_tenant(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 401, "Unauthorized", "x" + )) + tenant_whoami(fake_client, G5_TENANT_ID) + # Only the list call should have been made, not the by-id call + list_calls = [c for c in fake_client.calls if c[2] == "tenants"] + get_calls = [c for c in fake_client.calls if c[2] == f"tenants/{G5_TENANT_ID}"] + assert len(list_calls) == 1 + assert len(get_calls) == 0 + + +class TestRequestFailedBranch: + def test_network_error_maps_to_request_failed(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 0, "ConnectionError", "Connection refused" + )) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == "request_failed" + assert "could not reach" in report["summary"].lower() + assert "Connection refused" in report["summary"] + + def test_timeout_maps_to_request_failed(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 0, "Timeout", "Read timed out" + )) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == "request_failed" + + def test_404_maps_to_request_failed(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 404, "Not Found", "HTTP 404 Not Found" + )) + report = tenant_whoami(fake_client, 
G5_TENANT_ID) + assert report["match"] == "request_failed" + assert "404" in report["summary"] + + def test_500_maps_to_request_failed(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 500, "Internal Server Error", "HTTP 500 Internal Server Error" + )) + report = tenant_whoami(fake_client, G5_TENANT_ID) + assert report["match"] == "request_failed" + assert "500" in report["summary"] + + def test_request_failed_preserves_envelope_metadata(self, fake_client): + fake_client.set_envelope("tenants", _err_envelope( + 500, "Internal Server Error", "HTTP 500" + )) + report = tenant_whoami(fake_client, G5_TENANT_ID) + env = report["list_tenants_envelope"] + assert env["status"] == 500 + assert env["ok"] is False From bd602212224aabaf939b43f1617d33b9364a2274 Mon Sep 17 00:00:00 2001 From: grace-shane Date: Tue, 7 Apr 2026 13:29:24 -0400 Subject: [PATCH 04/56] docs: correct subscription-not-tenant hypothesis, add access matrix (#16) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The earlier "tenant routing" hypothesis was wrong. Empirical testing with Courtney's new Fusion2Plex Consumer Key shows that the 403/401 errors we were seeing on tooling/v1/* and other endpoints are PER-PRODUCT SUBSCRIPTION at the dev portal level, exactly as Plex_API_Reference.md originally said. The tenant routing detour was a misread on my part — apology embedded in BRIEFING.md. Plex 401 vs 404 is the only signal that distinguishes "unsubscribed product" from "bad credentials": both bad creds and unsubscribed-product return 401 REQUEST_NOT_AUTHENTICATED at the gateway, while subscribed-but-resource-missing returns 404 RESOURCE_NOT_FOUND. Verified access matrix for the Fusion2Plex app: Path Status Subscribed? 
------------------------------------- ------ ----------- mdm/v1/tenants 401 No mdm/v1/parts 401 No mdm/v1/suppliers 401 No purchasing/v1/purchase-orders 401 No production/v1/control/workcenters 401 No manufacturing/v1/operations 404 Yes (MES) tooling/v1/tools 404 Yes (Tooling) tooling/v1/tool-assemblies 404 Yes (Tooling) tooling/v1/tool-inventory 404 Yes (Tooling) So Tooling and Standalone MES are now reachable. We still need Courtney to approve the Fusion2Plex app for Common APIs, Purchasing, and Production Control before any of the consumable upsert side of the sync can happen. Changes ------- Plex_API_Reference.md - Replaced the "Tenant Routing Suspected" callout with an accurate "API Product Subscription Model" section - Added the access matrix as a permanent reference table - Spelled out the 401-vs-404 disambiguation rule BRIEFING.md - Current Situation rewritten to reflect Fusion2Plex app, 31-day key expiration, partial subscription state - Tenants table reframed as historical reference - Replaced 403-suspected-tenant-routing block with the verified access matrix - Gotchas updated: * Removed the wrong tenant-scoping gotcha * Added 401-vs-401-vs-404 explanation * Marked the previously-hardcoded k3SmLW3y key as dead * Added env var override gotcha - Immediate TODO updated to reflect that tooling/v1/* is no longer blocked (issue #4 work can begin) TODO.md - Phase 3 BLOCKED line rewritten to reflect partial subscription state and the corrected hypothesis Co-authored-by: Claude Opus 4.6 (1M context) --- BRIEFING.md | 91 +++++++++++++++++++++++++++---------------- Plex_API_Reference.md | 26 +++++++++---- TODO.md | 2 +- 3 files changed, 77 insertions(+), 42 deletions(-) diff --git a/BRIEFING.md b/BRIEFING.md index db78f48..97f09d0 100644 --- a/BRIEFING.md +++ b/BRIEFING.md @@ -22,28 +22,31 @@ Forked from just-shane/plex-api. Grace Engineering's working copy. 
## Current situation -- Connected and authenticating successfully — but to the WRONG tenant (G5) -- G5 is real production data belonging to another company — READ ONLY, no writes -- IT (Courtney) is resolving tenant access for Grace Engineering -- No new credentials needed — switching tenants = enabling one header +- Courtney issued a new dev portal app: **Fusion2Plex** (April 2026) +- Key + Secret live in `.env.local` (gitignored). Loaded by `bootstrap.py`. +- The new key **expires every 31 days** — we need a rotation reminder +- The Fusion2Plex app has been approved for **Tooling** and **Standalone MES** API products only — Common APIs, Purchasing, and Production Control are still pending Courtney's approval +- We do not yet know which tenant the new app is bound to, because `mdm/v1/tenants` requires Common APIs (currently 401) - Use https://test.connect.plex.com (test. prefix) for all development +> **Earlier (now superseded) belief:** we thought the 403 → 401 errors on tooling endpoints were tenant scoping. They were not. The original `Plex_API_Reference.md` was right: it's per-product subscription approval in the dev portal. The `Fusion2Plex` access matrix (see Plex_API_Reference §3) confirms this empirically — tooling endpoints now return 404 (auth ok, no resource), MDM endpoints return 401 (not subscribed). + --- -## Auth — three headers required -X-Plex-Connect-Api-Key: # identifies the app -X-Plex-Connect-Api-Secret: # second factor, same credential -X-Plex-Connect-Tenant-Id: # tenant routing — omit = defaults to G5 +## Auth — header model +X-Plex-Connect-Api-Key: # identifies the app, scoped to subscribed API products +X-Plex-Connect-Api-Secret: # second factor, paired with the key +X-Plex-Connect-Tenant-Id: # optional — omit to use the app's default tenant -Keys and secrets are managed here in Claude Code via environment variables. +Keys and secrets are loaded from `.env.local` via `bootstrap.py` at startup. Never hardcode credentials. 
Never commit credentials. -### Tenants +### Tenants (historical reference — may be re-verified once Common APIs is enabled) | Name | Tenant ID | Status | |-----------------|----------------------------------------|-------------------------------| -| Grace Eng. | a6af9c99-bce5-4938-a007-364dc5603d08 | Target — waiting on IT | -| G5 | b406c8c4-cef0-4d62-862c-1758b702cd02 | Currently connected — READ ONLY | +| Grace Eng. | a6af9c99-bce5-4938-a007-364dc5603d08 | Target tenant for sync writes | +| G5 | b406c8c4-cef0-4d62-862c-1758b702cd02 | Old app's bound tenant — read-only, another company | --- @@ -80,16 +83,30 @@ Fusion 360 .json (network share, via ADC) | GET purchasing/v1/purchase-orders | URL-encode spaces in filter values | | GET production/v1/control/workcenters | Target for pocket/turret assignment pushes | -### 403 responses — suspected tenant routing, not subscription +### Access matrix — Fusion2Plex app (verified empirically) + +Plex returns **HTTP 401 `REQUEST_NOT_AUTHENTICATED`** for any endpoint +whose API product the app is NOT subscribed to. The same 401 also covers +genuinely bad credentials, so the only way to tell the two apart is by +comparing across endpoints. + +A subscribed-but-resource-missing endpoint returns **404 `RESOURCE_NOT_FOUND`**. 
-- tooling/v1/tools -- tooling/v1/tool-assemblies -- tooling/v1/tool-inventory +| Path | Status | Notes | +|---------------------------------------|--------|-------| +| mdm/v1/tenants | 401 | Need Common APIs | +| mdm/v1/parts | 401 | Need Common APIs | +| mdm/v1/suppliers | 401 | Need Common APIs | +| purchasing/v1/purchase-orders | 401 | Need Purchasing | +| production/v1/control/workcenters | 401 | Need Production Control | +| manufacturing/v1/operations | 404 | ✅ Standalone MES enabled | +| tooling/v1/tools | 404 | ✅ Tooling enabled | +| tooling/v1/tool-assemblies | 404 | ✅ Tooling enabled | +| tooling/v1/tool-inventory | 404 | ✅ Tooling enabled | -Working hypothesis: these 403s will resolve once IT completes the tenant -routing change for Grace Engineering. Cannot verify until tenant access lands, -since G5 is another company's data and we have no authority to test writes -there. The tenant change is the **only** open IT blocker. +Pending IT actions: ask Courtney to also approve the `Fusion2Plex` app for +**Common APIs**, **Purchasing**, and **Production Control** in the Plex +developer portal. --- @@ -161,14 +178,15 @@ All items below are mirrored as GitHub Issues — see https://github.com/grace-shane/plex-api/issues for live status. 1. ~~Fix PlexClient constructor — add api_secret, include header~~ DONE -2. Read baseline tooling inventory from mdm/v1/parts — issue #2 (unblocked, - read-only — can start today on G5) +2. Read baseline tooling inventory from mdm/v1/parts — issue #2 + BLOCKED on Common APIs subscription (currently 401) 3. build_part_payload(tool: dict) -> dict — issue #3 - Maps Fusion tool object to mdm/v1/parts POST body + Maps Fusion tool object to mdm/v1/parts POST body. Blocked on Common APIs. 4. resolve_supplier_uuid(vendor_name: str) -> str — issue #3 - Looks up supplier UUID from mdm/v1/suppliers (safe to test on G5 read) + Looks up supplier UUID from mdm/v1/suppliers. Blocked on Common APIs. 5. 
build_assembly_payload(tool: dict, holder: dict) -> dict — issue #4 - Draft only — endpoints currently 403 (suspected tenant scoping) + tooling/v1/tool-assemblies is now reachable (Tooling API approved). + Need to figure out the correct paths/payloads. NO LONGER BLOCKED. 6. Core sync logic — upsert with guid-based dedup — issue #7 7. Error handling + logging to network share text file — issue #8 @@ -176,20 +194,25 @@ https://github.com/grace-shane/plex-api/issues for live status. ## Gotchas — read before touching anything -- **G5 is production data. Read only. No writes, no mutations.** -- PLEX_API_KEY and PLEX_API_SECRET must be set in the environment before - running plex_api.py or app.py — both will hard-fail with a clear message - if they are missing -- The previously hardcoded API key (k3SmLW3y…) is still in git history on - master and must be rotated before production deployment — see issue #12 +- **G5 is another company's data. Reads we got there were tied to the OLD + app key — not the current Fusion2Plex app. The old key is dead.** +- PLEX_API_KEY and PLEX_API_SECRET come from `.env.local` via `bootstrap.py`. + A real shell env var with the same name will OVERRIDE `.env.local` (by + design) — clear stale shell vars if you have them. +- **The previously hardcoded API key (k3SmLW3y…) is dead.** It's still in + git history but no longer authenticates. The current key is the + Fusion2Plex Consumer Key in `.env.local`, which expires every 31 days. + See issue #12 for the rotation cadence. +- **Plex returns 401 `REQUEST_NOT_AUTHENTICATED` for both bad credentials + AND endpoints under unsubscribed API products.** The only way to tell + them apart is to compare across multiple endpoints — if SOME calls + return 200/404 and OTHERS return 401, the 401s are subscription, not + auth. See the access matrix above. 
- mdm/v1/parts has NO server-side pagination — unfiltered = entire DB pulled - supplierId in responses is a UUID, not a supplier code (MSC != "MSC001") - URL-encode spaces in filter strings (MRO SUPPLIES -> MRO%20SUPPLIES) - API key must be in header — URL parameter returns 401 - PowerShell: use Invoke-RestMethod, not curl (alias doesn't pass headers) -- Tooling 403s on tooling/v1/* are SUSPECTED to be tenant scoping, not API - collection subscription. Working hypothesis only — cannot verify until - tenant routing lands. See issue #1. - Fusion Tool objects from CAM API are copies, not references - ADC stale file guard will abort sync if network share files are > 25h old - BROTHER SPEEDIO ALUMINUM.json is committed to repo for reference only — diff --git a/Plex_API_Reference.md b/Plex_API_Reference.md index 3acb82b..18082d6 100644 --- a/Plex_API_Reference.md +++ b/Plex_API_Reference.md @@ -40,16 +40,28 @@ The target architecture requires pushing Fusion 360 data to the Tooling/Workcent | Purchasing | `purchasing/v1/purchase-orders` | Returns full PO headers (e.g., tooling orders from MSC). | | Production | `production/v1/control/workcenters` | Discovered on Dev Portal. Replaces old 404 manufacturing endpoint. | -### ⚠️ 403 Responses — Tenant Routing Suspected +### API Product Subscription Model > [!IMPORTANT] -> **ACTION REQUIRED**: IT (Courtney) must complete the tenant routing change so Grace Engineering credentials land on the Grace tenant (`a6af9c99-bce5-4938-a007-364dc5603d08`) instead of G5 (`b406c8c4-cef0-4d62-862c-1758b702cd02`). This is the **only** open IT blocker. +> Plex requires each Consumer Key to be **explicitly subscribed** to API products in the developer portal before any URI under that product is reachable. An unsubscribed product returns **HTTP 401 `REQUEST_NOT_AUTHENTICATED`** at the gateway, *not* 403 — same wire response as bad credentials, which makes diagnosing this without an access matrix surprisingly hard. 
> -> The 403s observed on the endpoints below are suspected to be tenant-scoping rather than API collection subscription. **This is a working hypothesis** — we cannot verify it until tenant access is resolved, because G5 is another company's production data and we have no authority to test writes there. Re-run `discover_all()` once tenant routing lands to confirm. - -- `tooling/v1/tools` -- `tooling/v1/tool-assemblies` -- `tooling/v1/tool-inventory` +> Verified empirically against the Grace `Fusion2Plex` app (April 2026): `tooling/v1/*` returns `404 RESOURCE_NOT_FOUND` (auth ok, just no resource at that path), while unsubscribed products like `mdm/v1/*` return `401 REQUEST_NOT_AUTHENTICATED`. The 401-vs-404 distinction is the only way to tell from outside the portal whether a product is enabled. + +#### Current access matrix for the `Fusion2Plex` app + +| Path | Status | Subscribed? | +|---------------------------------------|--------|-------------| +| `mdm/v1/tenants` | 401 | ❌ Common APIs not approved | +| `mdm/v1/parts` | 401 | ❌ Common APIs not approved | +| `mdm/v1/suppliers` | 401 | ❌ Common APIs not approved | +| `purchasing/v1/purchase-orders` | 401 | ❌ Purchasing not approved | +| `production/v1/control/workcenters` | 401 | ❌ Production Control not approved | +| `manufacturing/v1/operations` | 404 | ✅ Standalone MES approved | +| `tooling/v1/tools` | 404 | ✅ Tooling approved | +| `tooling/v1/tool-assemblies` | 404 | ✅ Tooling approved | +| `tooling/v1/tool-inventory` | 404 | ✅ Tooling approved | + +**Pending IT action**: ask Courtney to also approve the `Fusion2Plex` app for **Common APIs**, **Purchasing**, and **Production Control** so we can read parts/suppliers, look up POs, and push workcenter docs. 
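The 401-vs-404 reading guide above can be applied mechanically once you have probed a handful of endpoints. A minimal sketch of that rule (`classify_access` is a hypothetical helper, not part of this repo; it assumes you already collected a `{path: http_status}` map, e.g. from `discover_all()`):

```python
# Sketch of the 401-vs-404 disambiguation rule described above.
# A 2xx or 404 anywhere proves the credentials themselves are good,
# so any remaining 401s must be unsubscribed products, not bad creds.
def classify_access(probes: dict[str, int]) -> dict[str, str]:
    creds_ok = any(
        status == 404 or 200 <= status < 300 for status in probes.values()
    )
    result = {}
    for path, status in probes.items():
        if 200 <= status < 300:
            result[path] = "subscribed"
        elif status == 404:
            result[path] = "subscribed (no resource at path)"
        elif status == 401 and creds_ok:
            result[path] = "not subscribed"
        elif status == 401:
            result[path] = "ambiguous: bad credentials OR not subscribed"
        else:
            result[path] = f"other ({status})"
    return result

matrix = classify_access({
    "mdm/v1/parts": 401,
    "tooling/v1/tools": 404,
})
assert matrix["tooling/v1/tools"].startswith("subscribed")
assert matrix["mdm/v1/parts"] == "not subscribed"
```

Note the all-401 case stays labeled ambiguous: with no 2xx/404 anywhere, there is no way to tell bad credentials from a fully unsubscribed app from outside the portal.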
--- diff --git a/TODO.md b/TODO.md index cd7659e..a695a87 100644 --- a/TODO.md +++ b/TODO.md @@ -25,7 +25,7 @@ This document outlines the step-by-step implementation plan for the Autodesk Fus - [ ] Implement API call to create/update Tool Assemblies, assigning the purchased consumable parts to them. → [#4](https://github.com/grace-shane/plex-api/issues/4) - [ ] Implement API call to link Tool Assemblies to Routings/Operations. → [#5](https://github.com/grace-shane/plex-api/issues/5) - [ ] Implement API call to update tooling within the specific Workcenter Document (`production/v1/control/workcenters`). → [#6](https://github.com/grace-shane/plex-api/issues/6) -- [ ] **BLOCKED**: Waiting on IT (Courtney) to complete tenant routing so credentials land on Grace Engineering instead of G5. Hypothesis: the 403s on `tooling/v1/*` endpoints will resolve once tenant access is fixed. → [#1](https://github.com/grace-shane/plex-api/issues/1) +- [ ] **PARTIALLY BLOCKED**: New `Fusion2Plex` app from Courtney is approved for **Tooling** and **Standalone MES** API products (those endpoints now return 404 instead of 403 — auth ok). Still waiting on Courtney to also approve **Common APIs**, **Purchasing**, and **Production Control** for the same app. The earlier "tenant routing" hypothesis was wrong; this was per-product subscription all along. → [#1](https://github.com/grace-shane/plex-api/issues/1) ## Phase 4: Data Mapping & Sync Logic From c88fb09d5c98f37857917a338676bd9fc3d81e23 Mon Sep 17 00:00:00 2001 From: grace-shane Date: Tue, 7 Apr 2026 14:44:36 -0400 Subject: [PATCH 05/56] feat: production write guard at the proxy (#17) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Adds a hard safety guard at the /api/plex/raw layer that refuses mutating HTTP methods (POST/PUT/PATCH/DELETE) when the server is running against a production Plex environment, unless the operator explicitly opts in by setting PLEX_ALLOW_WRITES=1. 
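The guard reduces to one small pure function. A standalone sketch of the decision rule (in `app.py` the inputs are module-level constants read at import time; here they are parameters so the rule is visible and testable at a glance):

```python
# Standalone sketch of the proxy write guard added by this PR.
WRITE_METHODS = frozenset({"POST", "PUT", "PATCH", "DELETE"})

def is_write_blocked(
    method: str, is_production: bool, writes_allowed: bool
) -> tuple[bool, str]:
    """Return (blocked, reason). Read methods are never blocked."""
    if method.upper() not in WRITE_METHODS:
        return False, ""
    if not is_production or writes_allowed:
        return False, ""
    return True, (
        f"Write blocked: {method} refused because the server is running "
        "against a production Plex environment and PLEX_ALLOW_WRITES is not set."
    )

# Reads always pass; writes pass only outside production or with opt-in.
assert is_write_blocked("GET", True, False) == (False, "")
assert is_write_blocked("POST", False, False) == (False, "")
assert is_write_blocked("post", True, False)[0] is True   # case-insensitive
assert is_write_blocked("DELETE", True, True) == (False, "")
```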
Why --- Empirical testing this hour established that the Fusion2Plex Consumer Key authenticates against connect.plex.com (PRODUCTION) on the real Grace Engineering tenant (58f781ba-…). A casual write — even one triggered by a stray click in the UI — could affect actual manufacturing operations. We currently have no test environment for this app. Changes ------- app.py - New module-level constants: WRITES_ALLOWED — true iff PLEX_ALLOW_WRITES env var is set IS_PRODUCTION — true iff client.base does not contain "test." WRITE_METHODS — frozenset of POST, PUT, PATCH, DELETE - New _is_write_blocked(method) helper returning (blocked, reason). GET is never blocked. Mutating methods are blocked iff IS_PRODUCTION and not WRITES_ALLOWED. - /api/plex/raw enforces the guard before any forwarding. Refused requests return HTTP 403 with a structured error envelope: { status, http_status: 0, method, url, message, guard: "PLEX_ALLOW_WRITES", is_production, writes_allowed } - /api/config exposes is_production and writes_allowed so the UI can render an appropriate banner. - __main__ prints a loud warning banner at startup when running against a production environment, indicating whether writes are blocked or enabled. Tests — 14 net new (119 total, all passing) - 8 tests under TestProductionWriteGuard: * GET always allowed in production * POST/PUT/PATCH/DELETE blocked in prod default * POST allowed in prod when writes enabled * POST allowed in test environment regardless * /api/config exposes guard state - 6 tests under TestIsWriteBlocked covering the helper directly, including method case-insensitivity How to enable writes when you actually need them ------------------------------------------------ $env:PLEX_ALLOW_WRITES = "1" # PowerShell export PLEX_ALLOW_WRITES=1 # bash py app.py Rotate the env var off as soon as you're done. Next PR: full migration to USE_TEST=False, new Grace tenant UUID, KNOWN_TENANTS update, and a doc rewrite to retract the I/l misread hypothesis chain. 
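For reference, the opt-in flag described under "How to enable writes" accepts a handful of truthy spellings, not just "1". A standalone sketch of the parse (taking the environment as a plain dict for testability; the truthy set mirrors the WRITES_ALLOWED constant this PR adds to app.py):

```python
# Sketch of the PLEX_ALLOW_WRITES opt-in parse. Whitespace is stripped
# and the comparison is case-insensitive; anything else reads as False.
def parse_writes_allowed(env: dict) -> bool:
    return env.get("PLEX_ALLOW_WRITES", "").strip().lower() in (
        "1", "true", "yes", "on", "enabled",
    )

assert parse_writes_allowed({"PLEX_ALLOW_WRITES": "1"}) is True
assert parse_writes_allowed({"PLEX_ALLOW_WRITES": " TRUE "}) is True
assert parse_writes_allowed({"PLEX_ALLOW_WRITES": "0"}) is False
assert parse_writes_allowed({}) is False
```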
This PR is intentionally tiny and lands first so the guard exists before any other code touches production. Co-authored-by: Claude Opus 4.6 (1M context) --- app.py | 76 ++++++++++++++++++- tests/test_app_routes.py | 159 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 233 insertions(+), 2 deletions(-) diff --git a/app.py b/app.py index d6256ee..1400582 100644 --- a/app.py +++ b/app.py @@ -31,6 +31,47 @@ use_test=USE_TEST, ) +# ───────────────────────────────────────────── +# Production write guard +# ───────────────────────────────────────────── +# Read-only methods are always allowed. Mutating methods (POST/PUT/PATCH/ +# DELETE) are blocked when running against a non-test Plex environment +# (connect.plex.com), unless the operator explicitly opts in by setting +# PLEX_ALLOW_WRITES=1 in the environment. +# +# This guard exists because the Fusion2Plex app currently has read access +# to real Grace Engineering production data. A casual write — even one +# triggered by a stray click in the UI — could affect actual manufacturing +# operations. +# +# To enable writes: +# $env:PLEX_ALLOW_WRITES = "1" # PowerShell +# export PLEX_ALLOW_WRITES=1 # bash +# Then restart the server. The /api/config endpoint will reflect the change. +WRITES_ALLOWED = os.environ.get("PLEX_ALLOW_WRITES", "").strip().lower() in ( + "1", "true", "yes", "on", "enabled", +) +IS_PRODUCTION = "test." not in client.base +WRITE_METHODS = {"POST", "PUT", "PATCH", "DELETE"} + + +def _is_write_blocked(method: str) -> tuple[bool, str]: + """ + Returns (blocked, reason). True if a write request should be refused. + """ + if method.upper() not in WRITE_METHODS: + return False, "" + if not IS_PRODUCTION: + return False, "" + if WRITES_ALLOWED: + return False, "" + return True, ( + f"Write blocked: {method} requests to {client.base} are refused " + f"because the server is running against a production Plex environment " + f"and PLEX_ALLOW_WRITES is not set. 
To enable writes, set " + f"PLEX_ALLOW_WRITES=1 in the environment and restart the server." + ) + @app.route('/') def index(): @@ -62,11 +103,26 @@ def api_plex_raw(): "message": "Missing required 'path' query param (e.g. mdm/v1/parts)", }), 400 + method = request.method.upper() + + # Production write guard — refuse mutating methods unless explicitly enabled + blocked, reason = _is_write_blocked(method) + if blocked: + return jsonify({ + "status": "error", + "http_status": 0, + "method": method, + "url": f"{client.base}/{path}", + "message": reason, + "guard": "PLEX_ALLOW_WRITES", + "is_production": IS_PRODUCTION, + "writes_allowed": WRITES_ALLOWED, + }), 403 + # Forward all query params EXCEPT our own 'path' marker. forwarded_params = {k: v for k, v in request.args.items() if k != 'path'} url = f"{client.base}/{path}" - method = request.method.upper() body = None if method in ('POST', 'PUT', 'PATCH'): @@ -229,6 +285,8 @@ def api_config(): return jsonify({ "base_url": client.base, "environment": "test" if USE_TEST else "production", + "is_production": IS_PRODUCTION, + "writes_allowed": WRITES_ALLOWED, "tenant_id": TENANT_ID, "has_key": bool(API_KEY), "has_secret": bool(API_SECRET), @@ -236,6 +294,20 @@ def api_config(): if __name__ == '__main__': - # Run the server on port 5000 + # Loud startup banner if we're connected to a production environment + if IS_PRODUCTION: + print() + print("=" * 70) + print(f" WARNING: Connected to PRODUCTION Plex environment") + print(f" {client.base}") + if WRITES_ALLOWED: + print(f" WRITES ARE ENABLED via PLEX_ALLOW_WRITES") + print(f" Every POST/PUT/PATCH/DELETE will hit real production data.") + else: + print(f" Writes are BLOCKED at the proxy. 
To enable, set") + print(f" PLEX_ALLOW_WRITES=1 in the environment and restart.") + print("=" * 70) + print() + print("Starting UX Test Server...") app.run(debug=True, host='0.0.0.0', port=5000) diff --git a/tests/test_app_routes.py b/tests/test_app_routes.py index 68f698f..35227f8 100644 --- a/tests/test_app_routes.py +++ b/tests/test_app_routes.py @@ -201,3 +201,162 @@ def test_calls_discover_all(self, client): body = rv.get_json() assert body["status"] == "success" assert body["data"] == [{"endpoint": "x", "status": 200}] + + +# ───────────────────────────────────────────── +# Production write guard +# ───────────────────────────────────────────── +class TestProductionWriteGuard: + """ + The /api/plex/raw proxy must refuse mutating methods (POST/PUT/PATCH/ + DELETE) when running against a production Plex environment unless + PLEX_ALLOW_WRITES is explicitly enabled. + + These tests temporarily flip the module-level IS_PRODUCTION and + WRITES_ALLOWED constants since they're computed at import time from + env vars (which conftest.py has already locked in). 
+ """ + + def test_get_always_allowed_in_production(self, client, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + + mock_response = MagicMock() + mock_response.status_code = 200 + mock_response.reason = "OK" + mock_response.ok = True + mock_response.content = b"{}" + mock_response.json.return_value = {} + mock_response.headers = {} + mock_response.url = "https://connect.plex.com/mdm/v1/tenants" + + with patch.object(app_module.requests, "request", return_value=mock_response): + rv = client.get("/api/plex/raw?path=mdm/v1/tenants") + assert rv.status_code == 200 + assert rv.get_json()["status"] == "success" + + def test_post_blocked_in_production_without_writes_allowed(self, client, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + + rv = client.post("/api/plex/raw?path=mdm/v1/parts", json={"foo": "bar"}) + assert rv.status_code == 403 + body = rv.get_json() + assert body["status"] == "error" + assert body["guard"] == "PLEX_ALLOW_WRITES" + assert body["is_production"] is True + assert body["writes_allowed"] is False + assert "PLEX_ALLOW_WRITES" in body["message"] + assert "POST" in body["message"] + + def test_put_blocked_in_production_without_writes_allowed(self, client, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + + rv = client.put("/api/plex/raw?path=mdm/v1/parts/x", json={"foo": "bar"}) + assert rv.status_code == 403 + assert rv.get_json()["guard"] == "PLEX_ALLOW_WRITES" + + def test_patch_blocked_in_production_without_writes_allowed(self, client, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + + rv = client.patch("/api/plex/raw?path=mdm/v1/parts/x", json={"foo": "bar"}) + assert rv.status_code == 403 + + def 
test_delete_blocked_in_production_without_writes_allowed(self, client, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + + rv = client.delete("/api/plex/raw?path=mdm/v1/parts/x") + assert rv.status_code == 403 + + def test_post_allowed_in_production_when_writes_enabled(self, client, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", True) + + mock_response = MagicMock() + mock_response.status_code = 201 + mock_response.reason = "Created" + mock_response.ok = True + mock_response.content = b'{"id":"new"}' + mock_response.json.return_value = {"id": "new"} + mock_response.headers = {} + mock_response.url = "https://connect.plex.com/mdm/v1/parts" + + with patch.object(app_module.requests, "request", return_value=mock_response): + rv = client.post("/api/plex/raw?path=mdm/v1/parts", json={"foo": "bar"}) + assert rv.status_code == 200 # envelope is 200; inner http_status is 201 + body = rv.get_json() + assert body["status"] == "success" + assert body["http_status"] == 201 + + def test_post_allowed_in_test_environment_regardless(self, client, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", False) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + + mock_response = MagicMock() + mock_response.status_code = 200 + mock_response.reason = "OK" + mock_response.ok = True + mock_response.content = b"{}" + mock_response.json.return_value = {} + mock_response.headers = {} + mock_response.url = "https://test.connect.plex.com/mdm/v1/parts" + + with patch.object(app_module.requests, "request", return_value=mock_response): + rv = client.post("/api/plex/raw?path=mdm/v1/parts", json={"foo": "bar"}) + assert rv.status_code == 200 + + def test_config_endpoint_exposes_guard_state(self, client): + rv = client.get("/api/config") + body = rv.get_json() + assert "is_production" in body + assert "writes_allowed" in 
body + assert isinstance(body["is_production"], bool) + assert isinstance(body["writes_allowed"], bool) + + +# ───────────────────────────────────────────── +# Helper function _is_write_blocked +# ───────────────────────────────────────────── +class TestIsWriteBlocked: + def test_get_never_blocked_in_production(self, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + blocked, reason = app_module._is_write_blocked("GET") + assert blocked is False + assert reason == "" + + def test_get_never_blocked_in_test(self, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", False) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + blocked, reason = app_module._is_write_blocked("GET") + assert blocked is False + + def test_post_blocked_in_production_default(self, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + blocked, reason = app_module._is_write_blocked("POST") + assert blocked is True + assert "PLEX_ALLOW_WRITES" in reason + + def test_post_unblocked_in_test(self, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", False) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + blocked, reason = app_module._is_write_blocked("POST") + assert blocked is False + + def test_post_unblocked_when_writes_enabled(self, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", True) + blocked, reason = app_module._is_write_blocked("POST") + assert blocked is False + + def test_method_case_insensitive(self, monkeypatch): + monkeypatch.setattr(app_module, "IS_PRODUCTION", True) + monkeypatch.setattr(app_module, "WRITES_ALLOWED", False) + blocked, _ = app_module._is_write_blocked("post") + assert blocked is True + blocked, _ = app_module._is_write_blocked("Delete") + assert blocked is True From 
d242980005ed9381b2478c17e3d9271249ae8215 Mon Sep 17 00:00:00 2001 From: grace-shane Date: Tue, 7 Apr 2026 14:58:10 -0400 Subject: [PATCH 06/56] feat: migrate to PROD Plex environment + verified Grace tenant (#18) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Switches the codebase to its actual operating reality after the debugging marathon: the Fusion2Plex Consumer Key authenticates against connect.plex.com (PRODUCTION) on the Grace tenant 58f781ba-1691-4f32-b1db-381cdb21300c. There is no test environment for this app. Reads work; writes are blocked by PR #17's proxy guard unless PLEX_ALLOW_WRITES=1. Code ---- plex_api.py - New module-level constant GRACE_TENANT_ID = the verified Grace UUID returned by GET mdm/v1/tenants on 2026-04-07 - TENANT_ID reads from PLEX_TENANT_ID env var, defaults to GRACE_TENANT_ID - USE_TEST reads from PLEX_USE_TEST env var, defaults to False - API_SECRET docstring clarified — Plex authenticates on the key alone for the Fusion2Plex app; the secret header is harmless to send but optional - __main__ banner now prints WARNING when running against PROD, including current writes state - explore_parts() call commented out in __main__ — that helper unconditionally pulls 19 MB of unfiltered parts data plex_diagnostics.py - KNOWN_TENANTS gains the verified GRACE_TENANT_ID and a GRACE_OLD_TENANT_ID labeled "Grace (stale UUID — replace with verified one)" so old configs surface a clear diagnostic instead of "unknown" UI - New env-chips container in templates/index.html holding two pills: the existing environment chip (TEST/PROD) and a new writes-chip (READ ONLY / WRITES ON) that's only visible in production - CSS: env-chip.prod gets a stronger red background, font-weight bumped. New .writes-chip styles for blocked (green) and allowed (red) states. 
- script.js loadConfig() reads is_production and writes_allowed from /api/config and renders the chips with helpful tooltips pointing at PLEX_ALLOW_WRITES .env.example - Documents PLEX_TENANT_ID, PLEX_USE_TEST, PLEX_ALLOW_WRITES - Points at developers.plex.com → My Apps → Fusion2Plex - Explains the production-by-default model Docs ---- BRIEFING.md — major rewrite - Current Situation reflects production reality, real Grace tenant ID, write guard, no-test-environment fact, 31-day key cycle - Tenants table reorganized — verified Grace UUID front and center, old wrong UUID kept with clear "stale" label, G5 marked as another company's old test data - Auth section updated — secret is OPTIONAL not "second factor" - Access matrix VERIFIED (200s on mdm/v1/tenants, parts, suppliers, purchase-orders; 404s on tooling/v1/*, manufacturing/v1/*, production/v1/control/workcenters) - New 401 vs 404 explainer - New section "History of incorrect hypotheses" — postmortem of the four wrong turns this debugging session took, all rooted in one cause: I misread `l` as `I` when reading the API key from a screenshot. Lessons documented so future-me doesn't repeat them. - Gotchas updated — every-read-hits-prod warning, no-pagination on mdm/v1/parts (19.6 MB) and purchasing/v1/purchase-orders (44 MB) empirically verified, write guard documented, l-vs-I image reading lesson Plex_API_Reference.md - Section 3 retitled "Verified Endpoints & Access Matrix" - Real PROD numbers replace the previous (wrong) tooling-subscribed table - Adds explicit 401-vs-404 reading guide - Adds the no-pagination gotcha as a permanent reference TODO.md - Phase 3 BLOCKED line corrected — IT blocker resolved, what remains is finding the right URL patterns for tooling/manufacturing/ production-control endpoints (those still 404) - Each Phase 3 item now reflects what's reachable vs. 
what isn't Tests — 128 pass locally, 7 net new - TestKnownTenants: GRACE_TENANT_ID is the verified UUID, GRACE_OLD_TENANT_ID is preserved with "stale" label, all known IDs are distinct - TestModuleDefaults: PLEX_TENANT_ID env-var pickup with default, PLEX_USE_TEST handling for "1", "true", garbage, and unset Co-authored-by: Claude Opus 4.6 (1M context) --- .env.example | 23 +- BRIEFING.md | 371 +++++++++++++++++++++++---------- Plex_API_Reference.md | 75 ++++--- TODO.md | 12 +- plex_api.py | 56 +++-- plex_diagnostics.py | 17 +- static/css/style.css | 25 ++- static/js/script.js | 36 +++- templates/index.html | 5 +- tests/test_plex_api.py | 51 ++++- tests/test_plex_diagnostics.py | 17 +- 11 files changed, 505 insertions(+), 183 deletions(-) diff --git a/.env.example b/.env.example index 59d70fb..d3995b5 100644 --- a/.env.example +++ b/.env.example @@ -2,10 +2,29 @@ # # Copy this file to .env.local (which is gitignored) and fill in real values. # bootstrap.py loads .env.local at startup so you don't have to set these -# variables in every shell. Real shell environment variables always win. +# variables in every shell. Real shell environment variables always win +# over .env.local via setdefault semantics. # # Get your Consumer Key and Consumer Secret from: -# https://developers.plex.com/ → My Apps → +# https://developers.plex.com/ → My Apps → Fusion2Plex → Key +# ── REQUIRED ──────────────────────────────────────────────────────── PLEX_API_KEY=your-consumer-key-here PLEX_API_SECRET=your-consumer-secret-here + +# ── OPTIONAL ──────────────────────────────────────────────────────── +# Override the target tenant. Defaults to the verified Grace Engineering +# production tenant. Set this only if you need to point at a different +# tenant for testing. +# PLEX_TENANT_ID=58f781ba-1691-4f32-b1db-381cdb21300c + +# Hit the test environment (test.connect.plex.com) instead of production +# (connect.plex.com). 
The Fusion2Plex app currently only exists in +# production, so leaving this unset is correct for normal use. +# PLEX_USE_TEST=1 + +# Allow mutating HTTP methods (POST/PUT/PATCH/DELETE) against production. +# OFF by default — every write to connect.plex.com affects real Grace +# manufacturing data. Set to 1 only when you intentionally want to write, +# and unset it as soon as you're done. +# PLEX_ALLOW_WRITES=1 diff --git a/BRIEFING.md b/BRIEFING.md index 97f09d0..8ab7c99 100644 --- a/BRIEFING.md +++ b/BRIEFING.md @@ -3,6 +3,11 @@ This is the primary context document for AI-assisted development sessions. Read this first, then read plex_api.py and tool_library_loader.py. +> **Read the "History of incorrect hypotheses" section at the bottom of this +> file before changing anything credential- or tenant-related.** It documents +> four wrong turns this project took that all came down to one root cause +> (see History §1). Do not repeat them. + --- ## What this project is @@ -20,46 +25,67 @@ Forked from just-shane/plex-api. Grace Engineering's working copy. --- -## Current situation - -- Courtney issued a new dev portal app: **Fusion2Plex** (April 2026) -- Key + Secret live in `.env.local` (gitignored). Loaded by `bootstrap.py`. -- The new key **expires every 31 days** — we need a rotation reminder -- The Fusion2Plex app has been approved for **Tooling** and **Standalone MES** API products only — Common APIs, Purchasing, and Production Control are still pending Courtney's approval -- We do not yet know which tenant the new app is bound to, because `mdm/v1/tenants` requires Common APIs (currently 401) -- Use https://test.connect.plex.com (test. prefix) for all development - -> **Earlier (now superseded) belief:** we thought the 403 → 401 errors on tooling endpoints were tenant scoping. They were not. The original `Plex_API_Reference.md` was right: it's per-product subscription approval in the dev portal. 
The `Fusion2Plex` access matrix (see Plex_API_Reference §3) confirms this empirically — tooling endpoints now return 404 (auth ok, no resource), MDM endpoints return 401 (not subscribed). +## Current situation (April 2026) + +- **App**: `Fusion2Plex` in the Plex Developer Portal +- **Environment**: `https://connect.plex.com` — **PRODUCTION**, real Grace data +- **Tenant**: `58f781ba-1691-4f32-b1db-381cdb21300c` (`Grace`) — verified + empirically by `GET /mdm/v1/tenants` +- **Credentials**: Consumer Key + (optional) Secret in `.env.local`, + loaded by `bootstrap.py` at startup. Gitignored. +- **Key expires every 31 days** — see issue #12 for rotation cadence +- **Reads work** — `mdm/v1/tenants`, `mdm/v1/parts`, `mdm/v1/suppliers`, + `purchasing/v1/purchase-orders` all return 200 +- **Writes are blocked** at the proxy by default (PR #17 production guard). + To enable: set `PLEX_ALLOW_WRITES=1` in the environment and restart +- **There is NO test environment for this app.** The Fusion2Plex Consumer + Key only authenticates against `connect.plex.com`, not `test.connect.plex.com`. + Every action you take is against real production data. --- ## Auth — header model -X-Plex-Connect-Api-Key: # identifies the app, scoped to subscribed API products -X-Plex-Connect-Api-Secret: # second factor, paired with the key -X-Plex-Connect-Tenant-Id: # optional — omit to use the app's default tenant -Keys and secrets are loaded from `.env.local` via `bootstrap.py` at startup. -Never hardcode credentials. Never commit credentials. +``` +X-Plex-Connect-Api-Key: # required — identifies the app +X-Plex-Connect-Tenant-Id: # required — selects the tenant +X-Plex-Connect-Api-Secret: # OPTIONAL — Plex authenticates on + # the key alone for this app +``` + +The Insomnia Generate Code output for a working request shows only the +key + tenant headers. The secret may be needed in some configurations +(future-proof, harmless to send), but is not currently required. 
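The header model above can be sketched as a small helper. This is illustrative only: the project's real implementation lives in `PlexClient`, and the helper name `plex_headers` is an assumption, not project code.

```python
import os

BASE_URL = "https://connect.plex.com"
GRACE_TENANT_ID = "58f781ba-1691-4f32-b1db-381cdb21300c"

def plex_headers() -> dict:
    """Build request headers per the model above: key and tenant are
    required; the secret is optional and only sent when present."""
    headers = {
        "X-Plex-Connect-Api-Key": os.environ["PLEX_API_KEY"],
        "X-Plex-Connect-Tenant-Id": os.environ.get("PLEX_TENANT_ID", GRACE_TENANT_ID),
    }
    secret = os.environ.get("PLEX_API_SECRET")
    if secret:
        # Harmless to send, not currently required by Plex for this app
        headers["X-Plex-Connect-Api-Secret"] = secret
    return headers

# Example read (remember: this hits PRODUCTION — see Gotchas):
# requests.get(f"{BASE_URL}/mdm/v1/tenants", headers=plex_headers(), timeout=30)
```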
+ +Credentials are loaded from `.env.local` via `bootstrap.py`. +**Never hardcode credentials. Never commit credentials.** -### Tenants (historical reference — may be re-verified once Common APIs is enabled) +### Tenants -| Name | Tenant ID | Status | -|-----------------|----------------------------------------|-------------------------------| -| Grace Eng. | a6af9c99-bce5-4938-a007-364dc5603d08 | Target tenant for sync writes | -| G5 | b406c8c4-cef0-4d62-862c-1758b702cd02 | Old app's bound tenant — read-only, another company | +| Name | Tenant ID | Status | +|-------------------|----------------------------------------|-------------------------------------| +| **Grace Eng.** | `58f781ba-1691-4f32-b1db-381cdb21300c` | **CURRENT** — verified live, prod | +| Grace (stale) | `a6af9c99-bce5-4938-a007-364dc5603d08` | Dead. Was in earlier docs. Wrong. | +| G5 | `b406c8c4-cef0-4d62-862c-1758b702cd02` | Another company. Old test app only. | + +Tenant IDs are not secrets — they are committed as defaults in +`plex_api.py` (`GRACE_TENANT_ID`) and `plex_diagnostics.py` +(`KNOWN_TENANTS`). 
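The committed tenant-ID defaults described above might look like the following sketch. The exact shape of `KNOWN_TENANTS` in `plex_diagnostics.py` may differ; the labels echo the ones from the commit message.

```python
# Tenant IDs are not secrets — safe to commit as defaults.
GRACE_TENANT_ID = "58f781ba-1691-4f32-b1db-381cdb21300c"
GRACE_OLD_TENANT_ID = "a6af9c99-bce5-4938-a007-364dc5603d08"
G5_TENANT_ID = "b406c8c4-cef0-4d62-862c-1758b702cd02"

KNOWN_TENANTS = {
    GRACE_TENANT_ID: "Grace Eng. (verified, production)",
    GRACE_OLD_TENANT_ID: "Grace (stale UUID — replace with verified one)",
    G5_TENANT_ID: "G5 (another company, old test app only)",
}

def describe_tenant(tenant_id: str) -> str:
    """Map a tenant UUID to a human-readable diagnostic label so stale
    configs surface a clear message instead of 'unknown'."""
    return KNOWN_TENANTS.get(tenant_id, f"unknown tenant {tenant_id}")
```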
--- ## Architecture -Fusion 360 .json (network share, via ADC) -└── tool_library_loader.py reads + validates JSON, stale-file guard -└── transform layer (build_part_payload, build_assembly_payload) -└── plex_api.py / PlexClient pushes to Plex REST API -├── mdm/v1/parts (consumable tools) -├── mdm/v1/suppliers (resolve vendor UUIDs) -├── tooling/v1/tool-assemblies (BLOCKED — see below) -└── production/v1/control/workcenters +``` +Fusion 360 .json (network share, via Autodesk Desktop Connector) + └── tool_library_loader.py reads + validates JSON, stale-file guard + └── transform layer build_part_payload, build_assembly_payload + └── plex_api.py / PlexClient pushes to Plex REST API + ├── mdm/v1/parts consumable tools + ├── mdm/v1/suppliers resolve vendor UUIDs + ├── tooling/v1/tool-assemblies see History §3 below + └── production/v1/control/workcenters see History §3 below +``` ### Industry hierarchy (Plex data model) @@ -71,42 +97,36 @@ Fusion 360 .json (network share, via ADC) --- -## Plex API endpoints - -### Working (test environment) - -| Endpoint | Notes | -|----------------------------------------|------------------------------------------------| -| GET mdm/v1/tenants | Returns tenants for credential. Currently G5. | -| GET mdm/v1/parts | NO pagination — always filter status=Active | -| GET mdm/v1/suppliers | Returns UUIDs, not supplier codes | -| GET purchasing/v1/purchase-orders | URL-encode spaces in filter values | -| GET production/v1/control/workcenters | Target for pocket/turret assignment pushes | - -### Access matrix — Fusion2Plex app (verified empirically) - -Plex returns **HTTP 401 `REQUEST_NOT_AUTHENTICATED`** for any endpoint -whose API product the app is NOT subscribed to. The same 401 also covers -genuinely bad credentials, so the only way to tell the two apart is by -comparing across endpoints. - -A subscribed-but-resource-missing endpoint returns **404 `RESOURCE_NOT_FOUND`**. 
- -| Path | Status | Notes | -|---------------------------------------|--------|-------| -| mdm/v1/tenants | 401 | Need Common APIs | -| mdm/v1/parts | 401 | Need Common APIs | -| mdm/v1/suppliers | 401 | Need Common APIs | -| purchasing/v1/purchase-orders | 401 | Need Purchasing | -| production/v1/control/workcenters | 401 | Need Production Control | -| manufacturing/v1/operations | 404 | ✅ Standalone MES enabled | -| tooling/v1/tools | 404 | ✅ Tooling enabled | -| tooling/v1/tool-assemblies | 404 | ✅ Tooling enabled | -| tooling/v1/tool-inventory | 404 | ✅ Tooling enabled | - -Pending IT actions: ask Courtney to also approve the `Fusion2Plex` app for -**Common APIs**, **Purchasing**, and **Production Control** in the Plex -developer portal. +## Plex API access matrix — Fusion2Plex on production + +Verified empirically against `connect.plex.com` with the Grace tenant. + +| Status | Path | Notes | +|--------|---------------------------------------|---------------------------------------------| +| **200**| `mdm/v1/tenants` | 62 B — tenant list | +| **200**| `mdm/v1/parts?limit=1` | **19.6 MB** — `limit` IGNORED, full DB dump | +| **200**| `mdm/v1/suppliers?limit=1` | 708 KB — same, no server-side pagination | +| **200**| `purchasing/v1/purchase-orders?limit=1` | **44 MB** — full PO history | +| 404 | `production/v1/control/workcenters` | Path doesn't exist on this app — see History §3 | +| 404 | `tooling/v1/tools` | Path doesn't exist — see History §3 | +| 404 | `tooling/v1/tool-assemblies` | Path doesn't exist — see History §3 | +| 404 | `tooling/v1/tool-inventory` | Path doesn't exist — see History §3 | +| 404 | `manufacturing/v1/operations` | Path doesn't exist — see History §3 | + +**The 404 endpoints either use a different URL pattern in this product +set, or aren't available to the Fusion2Plex app at all.** The user will +need to share working URLs from Insomnia for those endpoints to make +progress on issues #4, #5, #6. 
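Because a single 401 or 404 is ambiguous on its own, the usable signal is the pattern across many endpoints. A minimal sketch of that comparison logic (the function `diagnose` is hypothetical, not project code):

```python
def diagnose(results: dict) -> str:
    """Interpret an access matrix of {path: http_status}.

    - any 200 proves the credential is valid, so remaining 401s point
      at unsubscribed products and 404s at unknown routes
    - a uniform 401 is indistinguishable from a bad key: re-verify the
      credential against a known-good client (Insomnia) first
    """
    statuses = set(results.values())
    if 200 in statuses:
        return "credential OK; 401 = unsubscribed product, 404 = no route"
    if statuses == {401}:
        return "uniform 401: suspect the key value itself"
    return "mixed non-200s: gather more endpoints before concluding"
```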
+ +### How to read 401 vs 404 from Plex + +- **401 `REQUEST_NOT_AUTHENTICATED`** — bad credentials OR you're hitting + a recognized namespace your app isn't subscribed to. Same wire response. +- **404 `RESOURCE_NOT_FOUND`** — Plex's gateway has no route at that path. + Could mean unknown URL OR subscribed-but-no-resource. Same wire response. +- **The only way to tell them apart cleanly** is to compare across many endpoints + with the same auth, AND ideally compare against a known-good client + (Insomnia → Generate Code) for ground truth. --- @@ -122,53 +142,75 @@ Source file: BROTHER SPEEDIO ALUMINUM.json (28 entries, root "data" array) | product-id | Part number | Vendor part number, key for PO link| | vendor | Supplier (resolve to UUID first) | | | post-process.number | Pocket / turret number | Critical for workcenter doc update | -| geometry.DC | Cutting diameter | Blocked endpoint | -| geometry.OAL | Overall length | Blocked endpoint | -| geometry.NOF | Number of flutes | Blocked endpoint | -| holder (object) | Assembly component / BOM link | Blocked endpoint | +| geometry.DC | Cutting diameter | | +| geometry.OAL | Overall length | | +| geometry.NOF | Number of flutes | | +| holder (object) | Assembly component / BOM link | | Tool type distribution in active library: - flat end mill: 12 | holder: 6 | bull nose end mill: 4 | drill: 2 - face mill: 1 | form mill: 1 | slot mill: 1 | probe: 1 -Sync filter: include only type != "holder" AND type != "probe" +Sync filter: include only `type != "holder" AND type != "probe"` --- ## What's built ### plex_api.py -- PlexClient base class with throttling (200 calls/min rate limit) -- Constructor takes api_key, api_secret, tenant_id, use_test -- Sets X-Plex-Connect-Api-Key, X-Plex-Connect-Api-Secret, and - X-Plex-Connect-Tenant-Id headers -- Credentials read from PLEX_API_KEY / PLEX_API_SECRET env vars -- get() and get_paginated() methods -- Extraction functions: extract_purchase_orders, extract_parts, extract_workcenters - 
discover_all() endpoint probe utility +- `PlexClient` base class with throttling (200 calls/min rate limit) +- Constructor takes `api_key`, `api_secret`, `tenant_id`, `use_test` +- All four config values read from environment variables via `bootstrap.py` + (`PLEX_API_KEY`, `PLEX_API_SECRET`, `PLEX_TENANT_ID`, `PLEX_USE_TEST`) +- `TENANT_ID` defaults to `GRACE_TENANT_ID` (production Grace) +- `USE_TEST` defaults to `False` (production is the only environment we have) +- `get()` returns parsed JSON or None (legacy) +- `get_envelope()` returns a structured envelope so callers can see HTTP errors +- Extraction helpers: `extract_purchase_orders`, `extract_parts`, `extract_workcenters` +- `discover_all()` endpoint probe utility ### plex_diagnostics.py -- list_tenants(client) — GET /mdm/v1/tenants -- get_tenant(client, id) — GET /mdm/v1/tenants/{id} -- tenant_whoami(client, configured_id) — composite check that compares - visible tenants against the known Grace and G5 UUIDs and returns a - structured report. Run this first to verify tenant routing. +- `list_tenants(client)` — GET /mdm/v1/tenants +- `get_tenant(client, id)` — GET /mdm/v1/tenants/{id} +- `tenant_whoami(client, configured_id)` — composite check that compares + visible tenants against `KNOWN_TENANTS` and returns a structured report + with `match` enum (`grace`, `g5`, `auth_failed`, `request_failed`, + `no_data`, `configured`, `other`). Run this first to verify tenant routing. 
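The `match` enum described for `tenant_whoami` can be sketched roughly as below. The name `classify_tenants` and its parameters are illustrative; the real composite check in `plex_diagnostics.py` also performs the HTTP calls.

```python
GRACE_TENANT_ID = "58f781ba-1691-4f32-b1db-381cdb21300c"
G5_TENANT_ID = "b406c8c4-cef0-4d62-862c-1758b702cd02"

def classify_tenants(visible_ids, configured_id, http_status=200):
    """Reduce a GET /mdm/v1/tenants result to the `match` enum:
    grace / g5 / auth_failed / request_failed / no_data / configured / other."""
    if http_status == 401:
        return "auth_failed"
    if http_status != 200:
        return "request_failed"
    if not visible_ids:
        return "no_data"
    if GRACE_TENANT_ID in visible_ids:
        return "grace"
    if G5_TENANT_ID in visible_ids:
        return "g5"
    if configured_id in visible_ids:
        return "configured"
    return "other"
```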
### tool_library_loader.py -- load_library(path) — loads single .json, returns data array -- load_all_libraries(directory) — globs all .json files in CAMTools dir +- `load_library(path)` — loads single .json, returns data array +- `load_all_libraries(directory)` — globs all .json files in CAMTools dir - Stale file guard — aborts if files older than 25h (ADC sync stall detection) -- PermissionError and JSONDecodeError handling (ADC mid-sync file locks) -- report_library_contents() — diagnostic summary +- `PermissionError` and `JSONDecodeError` handling (ADC mid-sync file locks) +- `report_library_contents()` — diagnostic summary + +### bootstrap.py +- Loads `.env.local` (gitignored) into `os.environ` via `setdefault` + semantics — real shell env vars always win +- Imported at the very top of `plex_api.py` so credential reads happen + AFTER the file is loaded +- Tested in `tests/test_bootstrap.py` (16 tests) ### app.py + templates/static - Flask endpoint tester UI at http://localhost:5000 - Left rail: Diagnostics (run first), Plex presets, Extractors, Fusion local - Top: method selector + URL bar + query params + Send (Ctrl/Cmd+Enter) - Tabbed response pane (Body / Headers / Raw), copy and clear, history -- /api/plex/raw proxy lets the UI hit any Plex endpoint via PlexClient +- Env-chip in header shows TEST (amber) or **PROD (red)**, plus + **READ ONLY** / **WRITES ON** sub-pill +- `/api/plex/raw` proxy lets the UI hit any Plex endpoint via PlexClient without exposing credentials to the browser -- /api/diagnostics/tenant runs tenant_whoami from plex_diagnostics +- **Production write guard** in proxy refuses POST/PUT/PATCH/DELETE + against `connect.plex.com` unless `PLEX_ALLOW_WRITES=1` is set +- `/api/diagnostics/tenant` runs `tenant_whoami` +- `/api/config` exposes non-secret config including `is_production` and + `writes_allowed` + +### Tests +- `pytest` suite in `tests/`. CI on PRs to `master` via + `.github/workflows/test.yml`. 
Branch protection on master requires the + `pytest` check to pass before merge. Auto-merge enabled. +- Currently 119+ tests, all green. --- @@ -178,44 +220,59 @@ All items below are mirrored as GitHub Issues — see https://github.com/grace-shane/plex-api/issues for live status. 1. ~~Fix PlexClient constructor — add api_secret, include header~~ DONE -2. Read baseline tooling inventory from mdm/v1/parts — issue #2 - BLOCKED on Common APIs subscription (currently 401) -3. build_part_payload(tool: dict) -> dict — issue #3 - Maps Fusion tool object to mdm/v1/parts POST body. Blocked on Common APIs. -4. resolve_supplier_uuid(vendor_name: str) -> str — issue #3 - Looks up supplier UUID from mdm/v1/suppliers. Blocked on Common APIs. -5. build_assembly_payload(tool: dict, holder: dict) -> dict — issue #4 - tooling/v1/tool-assemblies is now reachable (Tooling API approved). - Need to figure out the correct paths/payloads. NO LONGER BLOCKED. -6. Core sync logic — upsert with guid-based dedup — issue #7 -7. Error handling + logging to network share text file — issue #8 +2. Read baseline tooling inventory from `mdm/v1/parts` — issue #2. + **Endpoint works** but `limit` is ignored (full DB pull is 19.6 MB). + Need to figure out the right filter parameter (`status=Active`, + maybe `type=...`) to get just consumable cutting tools. +3. `build_part_payload(tool: dict) -> dict` — issue #3. + Maps Fusion tool object to `mdm/v1/parts` POST body. Drafting can + start now since we can read existing parts to learn the schema. +4. `resolve_supplier_uuid(vendor_name: str) -> str` — issue #3. + Looks up supplier UUID from `mdm/v1/suppliers` (works on PROD now). +5. `build_assembly_payload(tool: dict, holder: dict) -> dict` — issue #4. + `tooling/v1/tool-assemblies` returns 404 on PROD — need working URL + pattern from Insomnia. +6. Core sync logic — upsert with guid-based dedup — issue #7. + Dry-run by default. Real writes require `PLEX_ALLOW_WRITES=1`. +7. 
Error handling + logging to network share text file — issue #8. --- ## Gotchas — read before touching anything -- **G5 is another company's data. Reads we got there were tied to the OLD - app key — not the current Fusion2Plex app. The old key is dead.** -- PLEX_API_KEY and PLEX_API_SECRET come from `.env.local` via `bootstrap.py`. - A real shell env var with the same name will OVERRIDE `.env.local` (by - design) — clear stale shell vars if you have them. -- **The previously hardcoded API key (k3SmLW3y…) is dead.** It's still in - git history but no longer authenticates. The current key is the - Fusion2Plex Consumer Key in `.env.local`, which expires every 31 days. - See issue #12 for the rotation cadence. +- **EVERY READ HITS PRODUCTION DATA.** There is no test environment for the + Fusion2Plex app. Be conscious of rate limits (200/min) and response sizes + (`mdm/v1/parts` is 19.6 MB unfiltered). +- **Writes are blocked at the proxy by default** (PR #17). To enable: + `PLEX_ALLOW_WRITES=1` env var. Unset it as soon as you're done. +- **`mdm/v1/parts` and `purchasing/v1/purchase-orders` IGNORE the `limit` + query param** — empirically verified. `?limit=1` returns the entire + database (19.6 MB and 44 MB respectively). Always include a real filter + like `status=Active` and a date range. +- **`PLEX_API_KEY` / `PLEX_API_SECRET` come from `.env.local`** via + `bootstrap.py`. A real shell env var with the same name always wins over + `.env.local` (`setdefault` never overwrites an existing variable) — clear stale shell vars if you + have them. (See History §4 for the painful version of this lesson.) +- **The previously hardcoded API key (`k3SmLW3y…`) is dead.** It's in git + history but no longer authenticates anywhere. 
- **Plex returns 401 `REQUEST_NOT_AUTHENTICATED` for both bad credentials AND endpoints under unsubscribed API products.** The only way to tell - them apart is to compare across multiple endpoints — if SOME calls - return 200/404 and OTHERS return 401, the 401s are subscription, not - auth. See the access matrix above. -- mdm/v1/parts has NO server-side pagination — unfiltered = entire DB pulled + them apart is to compare across multiple endpoints AND against a + known-good client like Insomnia. See History §2. +- **`l` (lowercase L) and `I` (uppercase i) are visually identical in many + fonts.** When reading credentials from images, treat them as ambiguous. + Always paste credentials as text, never read them from a screenshot. + See History §1. +- **Visible categories in the dev portal ≠ URL prefixes.** "Common APIs, + Platform APIs, Standalone MES, IIoT" don't 1:1 map to `mdm/`, `purchasing/`, + `tooling/` etc. The mapping is opaque. - supplierId in responses is a UUID, not a supplier code (MSC != "MSC001") -- URL-encode spaces in filter strings (MRO SUPPLIES -> MRO%20SUPPLIES) +- URL-encode spaces in filter strings (`MRO SUPPLIES` -> `MRO%20SUPPLIES`) - API key must be in header — URL parameter returns 401 -- PowerShell: use Invoke-RestMethod, not curl (alias doesn't pass headers) +- PowerShell: use `Invoke-RestMethod`, not `curl` (alias doesn't pass headers) - Fusion Tool objects from CAM API are copies, not references - ADC stale file guard will abort sync if network share files are > 25h old -- BROTHER SPEEDIO ALUMINUM.json is committed to repo for reference only — +- `BROTHER SPEEDIO ALUMINUM.json` is committed to repo for reference only — sync script must always read from network share, not this file --- @@ -229,3 +286,87 @@ https://github.com/grace-shane/plex-api/issues for live status. 
| Citizen / Tsugami | RS-232 → TCP | Moxa NPort 5150/5250 | | Haas VMCs | Ethernet | Sigma 5 native | +--- + +## History of incorrect hypotheses + +This is a postmortem of four wrong turns this project took, written here +so the next agent (or future-me) doesn't repeat them. All four trace back +to one root cause: I misread an API key from a screenshot. + +### §1 — The I-vs-l misread (root cause of everything below) + +When the user shared a screenshot of the Fusion2Plex Consumer Key from the +Plex Developer Portal, I read the 9th character as `I` (uppercase i) when +it was actually `l` (lowercase L). In most fonts these are visually +indistinguishable. I wrote `AEiK3tYoIfA15wt3x3t0qmILFGAG2NkK` into +`.env.local` instead of the correct `AEiK3tYolfA15wt3x3t0qmILFGAG2NkK`. + +Plex's gateway is case-sensitive on the key value, so it returned 401 +`REQUEST_NOT_AUTHENTICATED` for everything. That's an entirely generic +"bad credentials" response. From the outside, it looked exactly like a +subscription problem or a tenant scoping problem. + +**Lesson**: never read credentials from images. Always have the user paste +the value as text, or use Insomnia "Generate Code" output as ground truth. + +### §2 — The "tenant routing" / "subscription" / "more subscription" cycle + +Driven by the 401s from §1, I cycled through three wrong hypotheses about +why endpoints were failing: + +- **Hypothesis A** (initial): "Tooling endpoints return 403 because IT + hasn't enabled the Tooling API collection in the dev portal" — sourced + from the original `Plex_API_Reference.md` written by the previous + developer. **Plausible but unverified.** +- **Hypothesis B** (my correction in PR #16): "Actually it's tenant + scoping, not subscription. The 403s will resolve once Courtney completes + tenant routing." — based on a misread of BRIEFING. **Wrong.** +- **Hypothesis C** (my second correction): "Actually the Plex_API_Reference + was right, it IS per-product subscription. 
The Fusion2Plex app needs more + product approvals." — based on testing with the wrong key. **Also wrong.** + +The actual answer was: **the key value was wrong.** Once the right key +was loaded, every endpoint that was supposedly "blocked" started returning +200. There was no subscription problem and no tenant routing problem. +The whole investigation was an artifact of one character. + +**Lesson**: when you have a confusing 401 that resists every hypothesis, +the most likely explanation is that the credential value is wrong, even +if you "verified" it. Verify against a known-good client first. + +### §3 — Tooling/manufacturing/production-control 404s + +After fixing the key, the working endpoints (`mdm/`, `purchasing/`) all +returned 200. But `tooling/v1/tools`, `manufacturing/v1/operations`, and +`production/v1/control/workcenters` returned 404 `RESOURCE_NOT_FOUND`. + +These exact paths were in the original `Plex_API_Reference.md` and worked +for the previous developer with their old credentials on the test +environment. They don't work for the Fusion2Plex app on production. + +There are three possible explanations and we don't yet know which: +- The URL patterns are different in this product set +- Those endpoints aren't included in the Fusion2Plex app's product subscriptions +- The previous developer was on a fundamentally different Plex deployment + +**Status**: unresolved. The user will need to share a working Insomnia +URL for one of those endpoints to make progress. Issues #4, #5, #6 +remain blocked on this. + +### §4 — The stale shell env var + +While debugging §1, I wasted ~45 minutes because the user's shell had a +DIFFERENT, also-invalid `PLEX_API_KEY` set as a User-level Windows +environment variable in `HKCU\Environment`. Even when `.env.local` had the +correct value, `bootstrap.setdefault()` correctly refused to override the +shell value, and Flask kept using the wrong key. 
+ +The user's stale value was `uP4G8xgHdkoCFcJ00LPgfB5KYILsfdt6` — origin +unknown. Probably set via `setx` or System Properties at some earlier +point in the project's life. + +**Lesson**: the very first thing `tenant_whoami` should do is print which +key value (first 8 chars, length, and source: shell env var or .env.local) +is being used. We should also probably make `bootstrap.py` log when +`.env.local` is being shadowed by an existing env var. diff --git a/Plex_API_Reference.md b/Plex_API_Reference.md index 18082d6..f924194 100644 --- a/Plex_API_Reference.md +++ b/Plex_API_Reference.md @@ -27,41 +27,50 @@ X-Plex-Connect-Api-Key: --- -## 3. Discovered Endpoints & Subscription Status - -The target architecture requires pushing Fusion 360 data to the Tooling/Workcenter endpoints. Initial discovery revealed that certain API collections require activation by IT. - -### ✅ Working Endpoints - -| Collection | Endpoint | Purpose | -|---|---|---| -| Master Data | `mdm/v1/parts` | Returns master part records. Confirmed working. | -| Master Data | `mdm/v1/suppliers` | Returns supplier UUIDs (e.g., MSC Industrial). | -| Purchasing | `purchasing/v1/purchase-orders` | Returns full PO headers (e.g., tooling orders from MSC). | -| Production | `production/v1/control/workcenters` | Discovered on Dev Portal. Replaces old 404 manufacturing endpoint. | - -### API Product Subscription Model +## 3. Verified Endpoints & Access Matrix > [!IMPORTANT] -> Plex requires each Consumer Key to be **explicitly subscribed** to API products in the developer portal before any URI under that product is reachable. An unsubscribed product returns **HTTP 401 `REQUEST_NOT_AUTHENTICATED`** at the gateway, *not* 403 — same wire response as bad credentials, which makes diagnosing this without an access matrix surprisingly hard. 
->
-> Verified empirically against the Grace `Fusion2Plex` app (April 2026): `tooling/v1/*` returns `404 RESOURCE_NOT_FOUND` (auth ok, just no resource at that path), while unsubscribed products like `mdm/v1/*` return `401 REQUEST_NOT_AUTHENTICATED`. The 401-vs-404 distinction is the only way to tell from outside the portal whether a product is enabled.
-
-#### Current access matrix for the `Fusion2Plex` app
-
-| Path                                  | Status | Subscribed? |
-|---------------------------------------|--------|-------------|
-| `mdm/v1/tenants`                      | 401    | ❌ Common APIs not approved |
-| `mdm/v1/parts`                        | 401    | ❌ Common APIs not approved |
-| `mdm/v1/suppliers`                    | 401    | ❌ Common APIs not approved |
-| `purchasing/v1/purchase-orders`       | 401    | ❌ Purchasing not approved |
-| `production/v1/control/workcenters`   | 401    | ❌ Production Control not approved |
-| `manufacturing/v1/operations`         | 404    | ✅ Standalone MES approved |
-| `tooling/v1/tools`                    | 404    | ✅ Tooling approved |
-| `tooling/v1/tool-assemblies`          | 404    | ✅ Tooling approved |
-| `tooling/v1/tool-inventory`           | 404    | ✅ Tooling approved |
-
-**Pending IT action**: ask Courtney to also approve the `Fusion2Plex` app for **Common APIs**, **Purchasing**, and **Production Control** so we can read parts/suppliers, look up POs, and push workcenter docs.
+> All values below were verified empirically against `connect.plex.com`
+> (production) on **2026-04-07** with the `Fusion2Plex` Consumer Key on the
+> Grace tenant (`58f781ba-1691-4f32-b1db-381cdb21300c`). Reproduce by
+> running the diagnostic at `/api/diagnostics/tenant` from the local UI.
+
+### Current access matrix
+
+| Status | Path                                    | Notes |
+|--------|-----------------------------------------|-------|
+| **200**| `mdm/v1/tenants`                        | Returns tenant list (62 B). Used by `tenant_whoami`. |
+| **200**| `mdm/v1/parts?limit=1`                  | **19.6 MB** — `limit` IGNORED. Filter or pay the bill. |
+| **200**| `mdm/v1/suppliers?limit=1`              | 708 KB — same no-pagination behaviour. |
+| **200**| `purchasing/v1/purchase-orders?limit=1` | **44 MB** — full PO history. |
+| 404    | `tooling/v1/tools`                      | Path doesn't exist on this app's product set. |
+| 404    | `tooling/v1/tool-assemblies`            | Same. |
+| 404    | `tooling/v1/tool-inventory`             | Same. |
+| 404    | `manufacturing/v1/operations`           | Same. |
+| 404    | `production/v1/control/workcenters`     | Same. Issues #4, #5, #6 are blocked on finding the right URLs. |
+
+### Reading Plex's status codes
+
+- **200** — success.
+- **401 `REQUEST_NOT_AUTHENTICATED`** — bad credentials OR a recognized
+  namespace your app isn't subscribed to. Same wire response, indistinguishable
+  from outside.
+- **404 `RESOURCE_NOT_FOUND`** — Plex's gateway has no route at that path.
+  Could mean unknown URL OR subscribed-but-no-resource. Same wire response.
+- **403** — **never observed in practice on this app**, despite earlier docs
+  claiming we were getting 403 from `tooling/v1/*`. Treat any 403 as
+  unexpected.
+
+The 401-vs-404 distinction is **not** a clean signal. The only reliable way
+to disambiguate is to compare against a known-good client (Insomnia "Generate
+Code" output is the gold standard).
+
+### No server-side pagination
+
+`mdm/v1/parts` and `purchasing/v1/purchase-orders` **silently ignore** the
+`limit` query parameter. We learned this empirically — `?limit=1` returned
+19.6 MB and 44 MB respectively. Always use a real filter (`status=Active`,
+date range, etc.) before calling these endpoints.
 
 ---
 
diff --git a/TODO.md b/TODO.md
index a695a87..5a1a320 100644
--- a/TODO.md
+++ b/TODO.md
@@ -20,12 +20,12 @@ This document outlines the step-by-step implementation plan for the Autodesk Fus
 
 ## Phase 3: Plex API Source-of-Truth Implementation
 
-- [ ] Implement API call to retrieve current tooling inventory from Plex (master list) to prep for overwrite. → [#2](https://github.com/grace-shane/plex-api/issues/2)
-- [ ] Implement API call to update/create purchased parts (focused first on **consumables** like cutting tools) in Plex. → [#3](https://github.com/grace-shane/plex-api/issues/3)
-- [ ] Implement API call to create/update Tool Assemblies, assigning the purchased consumable parts to them. → [#4](https://github.com/grace-shane/plex-api/issues/4)
-- [ ] Implement API call to link Tool Assemblies to Routings/Operations. → [#5](https://github.com/grace-shane/plex-api/issues/5)
-- [ ] Implement API call to update tooling within the specific Workcenter Document (`production/v1/control/workcenters`). → [#6](https://github.com/grace-shane/plex-api/issues/6)
-- [ ] **PARTIALLY BLOCKED**: New `Fusion2Plex` app from Courtney is approved for **Tooling** and **Standalone MES** API products (those endpoints now return 404 instead of 403 — auth ok). Still waiting on Courtney to also approve **Common APIs**, **Purchasing**, and **Production Control** for the same app. The earlier "tenant routing" hypothesis was wrong; this was per-product subscription all along. → [#1](https://github.com/grace-shane/plex-api/issues/1)
+- [ ] Implement API call to retrieve current tooling inventory from Plex (master list) — `mdm/v1/parts` works on PROD now, but the `limit` param is ignored so we need a real filter (`status=Active`, etc.). → [#2](https://github.com/grace-shane/plex-api/issues/2)
+- [ ] Implement API call to update/create purchased parts — `mdm/v1/parts` and `mdm/v1/suppliers` are reachable, drafting can begin. Writes are blocked at the proxy by default; opt in with `PLEX_ALLOW_WRITES=1`. → [#3](https://github.com/grace-shane/plex-api/issues/3)
+- [ ] Implement API call to create/update Tool Assemblies — `tooling/v1/tool-assemblies` returns 404 on PROD with the Fusion2Plex app. Need a working URL pattern from Insomnia. → [#4](https://github.com/grace-shane/plex-api/issues/4)
+- [ ] Implement API call to link Tool Assemblies to Routings/Operations — `manufacturing/v1/operations` returns 404 on PROD. Same problem as #4. → [#5](https://github.com/grace-shane/plex-api/issues/5)
+- [ ] Implement API call to update tooling within the specific Workcenter Document — `production/v1/control/workcenters` returns 404 on PROD. Same problem. → [#6](https://github.com/grace-shane/plex-api/issues/6)
+- [x] **IT blocker resolved.** The Fusion2Plex app on production with the Grace tenant authenticates correctly. The earlier "tenant routing" / "subscription approvals" investigation was a red herring caused by a credential typo. See BRIEFING.md "History of incorrect hypotheses" for the postmortem. → [#1](https://github.com/grace-shane/plex-api/issues/1)
 
 ## Phase 4: Data Mapping & Sync Logic
 
diff --git a/plex_api.py b/plex_api.py
index d25c359..4739e94 100644
--- a/plex_api.py
+++ b/plex_api.py
@@ -21,18 +21,36 @@ from datetime import datetime
 
 # ─────────────────────────────────────────────
-# CONFIGURATION — fill these in
+# CONFIGURATION
 # ─────────────────────────────────────────────
-# Credentials come from environment variables — never hardcode/commit.
-#   PLEX_API_KEY    — Consumer Key from developers.plex.com → My Apps
-#   PLEX_API_SECRET — Consumer Secret, paired with the key
-API_KEY = os.environ.get("PLEX_API_KEY", "")
-API_SECRET = os.environ.get("PLEX_API_SECRET", "")
-# Tenant IDs are not secrets — safe to commit. G5 is what we currently have access to.
-TENANT_ID = "b406c8c4-cef0-4d62-862c-1758b702cd02"  # G5 (read-only) — Grace UUID = a6af9c99-bce5-4938-a007-364dc5603d08
-BASE_URL = "https://connect.plex.com"
-TEST_URL = "https://test.connect.plex.com"
-USE_TEST = True  # all dev work goes against test.connect.plex.com
+# All values come from environment variables (loaded via bootstrap.py
+# from .env.local). Credentials are never hardcoded or committed.
+#
+#   PLEX_API_KEY    — Consumer Key from the Plex Developer Portal
+#   PLEX_API_SECRET — Consumer Secret (currently optional — Plex
+#                     gateway authenticates on key alone)
+#   PLEX_TENANT_ID  — Target tenant UUID. Default is the Grace
+#                     Engineering production tenant. Tenant IDs
+#                     are not secrets, safe to commit as defaults.
+#   PLEX_USE_TEST   — "1" to hit test.connect.plex.com instead of
+#                     connect.plex.com (production). Default is False
+#                     because the current Fusion2Plex app only exists
+#                     in the production environment.
+#
+# History note: an earlier version of this file hardcoded an old
+# Consumer Key and the wrong Grace UUID (a6af9c99-...). Both are dead.
+# The verified-working configuration is what's defaulted below.
+GRACE_TENANT_ID = "58f781ba-1691-4f32-b1db-381cdb21300c"
+
+API_KEY = os.environ.get("PLEX_API_KEY", "")
+API_SECRET = os.environ.get("PLEX_API_SECRET", "")
+TENANT_ID = os.environ.get("PLEX_TENANT_ID", GRACE_TENANT_ID)
+
+BASE_URL = "https://connect.plex.com"
+TEST_URL = "https://test.connect.plex.com"
+USE_TEST = os.environ.get("PLEX_USE_TEST", "").strip().lower() in (
+    "1", "true", "yes", "on", "enabled",
+)
 
 OUTPUT_DIR = "C:/projects/plex-api/outputs"
 TOOL_LIB_DIR = "Z:\\Engineering\\Tooling\\Fusion_Libraries"  # Mapped drive path containing JSON files
@@ -421,9 +439,9 @@ def explore_parts(client):
 
 if __name__ == "__main__":
-    if not API_KEY or not API_SECRET:
+    if not API_KEY:
         raise SystemExit(
-            "Missing credentials. Set PLEX_API_KEY and PLEX_API_SECRET environment variables."
+            "Missing PLEX_API_KEY. Set it in the environment or in .env.local."
         )
 
     client = PlexClient(
@@ -435,10 +453,18 @@ def explore_parts(client):
 
     print(f"Plex API Client — {'TEST' if USE_TEST else 'PRODUCTION'}")
     print(f"Base URL: {client.base}")
-    print(f"Key: {API_KEY[:8]}{'*' * 20}")
+    print(f"Tenant: {TENANT_ID or '(default)'}")
+    print(f"Key:    {API_KEY[:8]}{'*' * 20}")
+    print(f"Secret: {'set' if API_SECRET else '(unset — Plex authenticates on key alone)'}")
+
+    if not USE_TEST:
+        print()
+        print("WARNING: Connected to PRODUCTION Plex environment.")
+        print("         Reads are safe. Writes are blocked at the proxy unless")
+        print("         PLEX_ALLOW_WRITES=1 is also set in the environment.")
 
     # ── Focus: Parts endpoint exploration
-    explore_parts(client)
+    # explore_parts(client)  # NOTE: pulls 19.6 MB unfiltered — leave commented
 
     # ── Other exploration (uncomment as needed)
     # discover_all(client)
diff --git a/plex_diagnostics.py b/plex_diagnostics.py
index 5941a82..f787079 100644
--- a/plex_diagnostics.py
+++ b/plex_diagnostics.py
@@ -16,13 +16,22 @@
 # Known tenants
 # Tenant IDs are not secrets — committing them is fine. These labels are
 # used to make the whoami report human-readable.
+#
+# History: an earlier version of BRIEFING.md listed a different Grace UUID
+# (a6af9c99-bce5-4938-a007-364dc5603d08). That value is dead — verified
+# empirically against the live API. The real Grace tenant ID is the one
+# below, which the Plex API itself returns when you GET mdm/v1/tenants
+# with the Fusion2Plex Consumer Key. The old UUID is kept here under a
+# stale-marked label so anyone hitting it gets a clear signal.
 # ─────────────────────────────────────────────
-GRACE_TENANT_ID = "a6af9c99-bce5-4938-a007-364dc5603d08"
-G5_TENANT_ID = "b406c8c4-cef0-4d62-862c-1758b702cd02"
+GRACE_TENANT_ID = "58f781ba-1691-4f32-b1db-381cdb21300c"      # verified Apr 2026
+GRACE_OLD_TENANT_ID = "a6af9c99-bce5-4938-a007-364dc5603d08"  # dead, kept for diagnostics
+G5_TENANT_ID = "b406c8c4-cef0-4d62-862c-1758b702cd02"
 
 KNOWN_TENANTS = {
-    GRACE_TENANT_ID: "Grace Engineering",
-    G5_TENANT_ID: "G5",
+    GRACE_TENANT_ID:     "Grace Engineering",
+    GRACE_OLD_TENANT_ID: "Grace (stale UUID — replace with verified one)",
+    G5_TENANT_ID:        "G5",
 }
diff --git a/static/css/style.css b/static/css/style.css
index faabb30..c61891e 100644
--- a/static/css/style.css
+++ b/static/css/style.css
@@ -120,7 +120,30 @@ button { cursor: pointer; }
 }
 
 .env-chip.test { color: var(--warn); border-color: rgba(234, 179, 8, 0.3); }
-.env-chip.prod { color: var(--err); border-color: rgba(239, 68, 68, 0.3); }
+.env-chip.prod {
+  color: var(--err);
+  border-color: rgba(239, 68, 68, 0.5);
+  background: rgba(239, 68, 68, 0.08);
+  font-weight: 600;
+}
+
+.env-chips {
+  display: flex;
+  align-items: center;
+  gap: 4px;
+}
+
+.writes-chip.hidden { display: none; }
+.writes-chip.allowed {
+  color: var(--err);
+  border-color: rgba(239, 68, 68, 0.5);
+  background: rgba(239, 68, 68, 0.08);
+}
+.writes-chip.blocked {
+  color: var(--ok);
+  border-color: rgba(34, 197, 94, 0.4);
+  background: rgba(34, 197, 94, 0.06);
+}
 
 .rail-section {
   padding: 12px 12px 16px;
diff --git a/static/js/script.js b/static/js/script.js
index 10cf6e3..6016ccc 100644
--- a/static/js/script.js
+++ b/static/js/script.js
@@ -15,6 +15,7 @@
   const urlHostEl = $("#url-host");
   const sendBtn = $("#btn-send");
   const envChipEl = $("#env-chip");
+  const writesChipEl = $("#writes-chip");
 
   const statusStripEl = $("#status-strip");
   const respPre = $("#resp-pre");
@@ -47,10 +48,39 @@
     const r = await fetch("/api/config");
     const cfg = await r.json();
     urlHostEl.textContent = `${cfg.base_url}/`;
-    envChipEl.textContent = cfg.environment;
+
+    // Environment chip
+    envChipEl.textContent = cfg.environment === "production" ? "PROD" : "TEST";
     envChipEl.classList.remove("test", "prod");
-    envChipEl.classList.add(cfg.environment === "test" ? "test" : "prod");
-    envChipEl.title = `Tenant ${cfg.tenant_id || "(default)"} · key:${cfg.has_key ? "✓" : "✗"} secret:${cfg.has_secret ? "✓" : "✗"}`;
+    envChipEl.classList.add(cfg.is_production ? "prod" : "test");
+    envChipEl.title =
+      `Tenant ${cfg.tenant_id || "(default)"} · ` +
+      `key:${cfg.has_key ? "✓" : "✗"} ` +
+      `secret:${cfg.has_secret ? "✓" : "✗"}`;
+
+    // Writes chip — only meaningful in production
+    if (cfg.is_production) {
+      writesChipEl.classList.remove("hidden");
+      if (cfg.writes_allowed) {
+        writesChipEl.textContent = "WRITES ON";
+        writesChipEl.classList.remove("blocked");
+        writesChipEl.classList.add("allowed");
+        writesChipEl.title =
+          "PLEX_ALLOW_WRITES is set. POST/PUT/PATCH/DELETE to " +
+          "production are ENABLED. Every mutating call hits real " +
+          "Grace Engineering production data.";
+      } else {
+        writesChipEl.textContent = "READ ONLY";
+        writesChipEl.classList.remove("allowed");
+        writesChipEl.classList.add("blocked");
+        writesChipEl.title =
+          "Production write guard active. POST/PUT/PATCH/DELETE " +
+          "to production are blocked at the proxy. To enable, set " +
+          "PLEX_ALLOW_WRITES=1 in the environment and restart.";
+      }
+    } else {
+      writesChipEl.classList.add("hidden");
+    }
   } catch (e) {
     envChipEl.textContent = "offline";
  }
diff --git a/templates/index.html b/templates/index.html
index d274acf..f90d38e 100644
--- a/templates/index.html
+++ b/templates/index.html
@@ -12,7 +12,10 @@
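Reviewer note: the production write guard threaded through this patch (the `PLEX_ALLOW_WRITES` opt-in, the READ ONLY / WRITES ON chip, and the proxy-side block on mutating methods) can be sketched in isolation. This is a minimal illustration, not the actual `app.py` implementation — the function names (`writes_allowed`, `guard_request`) and the exact truthy set are assumptions, mirroring the `PLEX_USE_TEST` parsing in `plex_api.py`:

```python
import os

# Truthy spellings, assumed to mirror how plex_api.py parses PLEX_USE_TEST.
_TRUTHY = ("1", "true", "yes", "on", "enabled")
_MUTATING = {"POST", "PUT", "PATCH", "DELETE"}


def writes_allowed(env=None):
    # Hypothetical helper: production writes are opt-in via PLEX_ALLOW_WRITES.
    env = os.environ if env is None else env
    return env.get("PLEX_ALLOW_WRITES", "").strip().lower() in _TRUTHY


def guard_request(method, is_production, env=None):
    """Return (allowed, reason) for a proposed proxied request."""
    if method.upper() not in _MUTATING:
        return True, "read-only method"           # GET/HEAD/OPTIONS always pass
    if not is_production:
        return True, "test environment"           # writes to test are unrestricted
    if writes_allowed(env):
        return True, "PLEX_ALLOW_WRITES is set"   # explicit opt-in
    return False, "production write guard active"
```

A `/api/plex/raw` handler would call something like `guard_request(method, not USE_TEST)` before forwarding, and return an error envelope instead of proxying when the guard refuses.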