Add ScopeBlind authorization receipts extension #459
vipul674 wants to merge 3 commits into GetBindu:main from
Conversation
📝 Walkthrough
Adds a new ScopeBlind extension that evaluates Cedar authorization policies, issues Ed25519-signed authorization receipts, wires middleware to evaluate requests and attach context, extends task workers to create and attach receipts to artifacts/metadata, and provides verification utilities and tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Middleware as ScopeBlindMiddleware
    participant Extension as ScopeBlindExtension
    participant Worker as ManifestWorker
    participant Storage as ArtifactStorage
    participant Verifier as Verifier
    rect rgba(135,206,250,0.5)
        Client->>Middleware: POST JSON-RPC request (method, params)
        Middleware->>Extension: evaluate_request(request, method, request_data)
        Extension-->>Middleware: ScopeBlindDecision (Allow/Deny, policy_hash, verification_key)
        Middleware->>Middleware: store decision in request.state.scopeblind_context
        alt decision Deny & mode=enforce
            Middleware-->>Client: JSON-RPC error (403)
        else proceed
            Middleware->>Worker: submit task (includes scopeblind_context)
            Worker->>Extension: create_receipt(authorization_context, artifacts, task_id, ...)
            Extension-->>Worker: ScopeBlindReceipt
            Worker->>Storage: attach receipt to artifacts & update task metadata
        end
    end
    rect rgba(144,238,144,0.5)
        Verifier->>Verifier: verify_receipt(receipt)
        Verifier->>Storage: optionally recompute artifact digest
        Verifier-->>Client: VerificationResult (valid/signature/integrity)
    end
```
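The verify_receipt step in the diagram hinges on signer and verifier computing the same payload digest. A minimal stdlib-only sketch of that canonical-hashing step (field names are hypothetical; the Ed25519 signature itself is computed over this digest and is not shown):

```python
import hashlib
import json

def payload_hash(payload: dict) -> str:
    """Digest a canonical JSON encoding so signer and verifier agree byte-for-byte."""
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical receipt payload; the real field set lives in ScopeBlindReceiptPayload.
receipt_payload = {"task_id": "t-1", "decision": "Allow", "policy_hash": "9f2c"}
digest = payload_hash(receipt_payload)

# Key order must not matter, but any value change must.
same = payload_hash({"policy_hash": "9f2c", "decision": "Allow", "task_id": "t-1"})
tampered = payload_hash({**receipt_payload, "decision": "Deny"})
```

Sorting keys and fixing separators makes the encoding deterministic, so a verifier can recompute the digest from the receipt payload and compare it against the signed `payload_hash`.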
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
bindu/server/endpoints/agent_card.py (1)
47-57: ⚠️ Potential issue | 🟡 Minor — Silent fall-through when `agent_extension` is not a dict.
If an extension exposes an `agent_extension` attribute whose value is not a `dict`, the new branch neither returns a serialized value nor falls through to the `isinstance(ext, dict)` / unknown-type branches (those are `elif`). The function then returns `None` implicitly with no warning log, making misconfigurations hard to diagnose. Also note the ordering: an object that both has an `agent_extension` attr and is a `dict` will take this branch first, which is fine in practice but worth being explicit about.
Minor Ruff B009 nit on line 48: `getattr(ext, "agent_extension")` with a constant name can just be `ext.agent_extension`.
♻️ Proposed fix
```diff
-    elif hasattr(ext, "agent_extension"):
-        serialized = getattr(ext, "agent_extension")
-        if isinstance(serialized, dict):
-            return serialized
-    elif isinstance(ext, dict):
+    elif hasattr(ext, "agent_extension"):
+        serialized = ext.agent_extension
+        if isinstance(serialized, dict):
+            return serialized
+        logger.warning(
+            f"Extension {type(ext).__name__} exposes non-dict agent_extension "
+            f"({type(serialized).__name__}), skipping"
+        )
+        return None
+    elif isinstance(ext, dict):
         # Already in correct format
         return ext
     else:
         # Unknown extension type
         logger.warning(f"Unknown extension type: {type(ext)}, skipping")
         return None
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/endpoints/agent_card.py` around lines 47 - 57, The branch handling ext.agent_extension silently returns None when that attribute exists but is not a dict; update the logic around ext and agent_extension to explicitly handle non-dict values: use ext.agent_extension (not getattr) to retrieve the attribute, if it's a dict return it, otherwise log a warning via logger.warning that the agent_extension is present but not a dict (including its actual type/value) and return None; ensure the dict-vs-attribute checks are ordered so dict ext still returns immediately and that any case with a non-dict agent_extension produces the warning instead of falling through silently.
🧹 Nitpick comments (8)
tests/unit/server/endpoints/test_agent_card.py (1)
43-55: LGTM — good coverage for the new `agent_extension` branch.
Test correctly exercises the new branch in `_serialize_extension`. The Ruff RUF012 hint about the mutable class attribute is a false positive here — the attribute is only read and asserted for identity equality, never mutated.
Optional: consider also adding a negative test where `ext.agent_extension` is a non-dict (e.g. `None` or a string) to lock in the current fall-through behavior (returns `None`), which is currently untested and — as noted on `agent_card.py` — a bit subtle.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/server/endpoints/test_agent_card.py` around lines 43 - 55, Add a negative test exercising the fall-through when an extension exposes a non-dict agent_extension: create a test (e.g. test_serialize_extension_agent_extension_non_dict) that defines a MockExtension with agent_extension set to None and another case with a string, call _serialize_extension(MockExtension()) for each, and assert the result is None to lock in current behavior in _serialize_extension.
bindu/server/handlers/message_handlers.py (1)
155-162: LGTM — context extraction and forwarding look correct.
`_scopeblind_context` is popped from metadata (so it doesn't leak to stored message metadata) and forwarded into `scheduler_params["scopeblind_context"]`, matching the new `TaskSendParams` field and consumed by `ManifestWorker.run_task`.
Nit: a blank line between the payment-context and scopeblind-context blocks would aid readability (they're currently visually glued together), and the stale "✅ SAFE payment context handling" comment on line 155 now also covers scopeblind — consider generalizing it to "extension-provided context handling".
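The pop-then-forward pattern the comment approves of can be sketched as follows (function name and dict shapes are hypothetical):

```python
def extract_extension_context(metadata: dict, scheduler_params: dict) -> None:
    """Move extension-provided context out of message metadata into scheduler params."""
    # pop() removes the key, so it never reaches stored message metadata
    scopeblind_context = metadata.pop("_scopeblind_context", None)
    if scopeblind_context is not None:
        scheduler_params["scopeblind_context"] = scopeblind_context

metadata = {"user_tag": "demo", "_scopeblind_context": {"decision": "Allow"}}
scheduler_params: dict = {}
extract_extension_context(metadata, scheduler_params)
```

After the call, the reserved key is gone from `metadata` and the context travels only through the scheduler parameters.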
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/handlers/message_handlers.py` around lines 155 - 162, Update the comment and spacing around the context-extraction blocks: replace the stale "✅ SAFE payment context handling" comment with a more general description like "extension-provided context handling", insert a blank line between the payment_context pop/block and the scopeblind_context pop/block for readability, and ensure the extracted keys (payment_context and scopeblind_context) are still forwarded into scheduler_params before calling self.scheduler.run_task so they match TaskSendParams and ManifestWorker.run_task.
bindu/settings.py (1)
346-368: Consider an `enabled` flag and a signing-key passphrase setting.
`ScopeBlindSettings` follows the `X402Settings` pattern cleanly and correctly exposes values via `app_settings.scopeblind`. Two optional improvements worth considering:
- Add an `enabled: bool = False` toggle (like `AuthSettings`/`HydraSettings`) so operators can disable ScopeBlind globally without mutating the extension manifest. This also avoids a surprise runtime cost if the extension is auto-loaded.
- The Ed25519 signing key is persisted to disk at `pki_dir/private_key_filename`. If the key is stored unencrypted, add a `private_key_passphrase_env: str = ""` (or equivalent secret-sourced field) so deployments can encrypt it at rest — matching the "key separation / receipt non-repudiation" goal stated in the PR objectives. This won't hardcode secrets; it just makes the setting available via `app_settings.scopeblind`.
No functional bug; defaults look reasonable.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/settings.py` around lines 346 - 368, Add an optional enabled toggle and a private key passphrase field to ScopeBlindSettings so operators can disable ScopeBlind without changing the manifest and can supply an encryption passphrase for the persisted Ed25519 key; specifically add enabled: bool = False (matching AuthSettings/HydraSettings pattern) and private_key_passphrase_env: str = "" (or similarly named secret-sourced setting) as attributes on the ScopeBlindSettings class so they are available via app_settings.scopeblind and populated from the existing SCOPEBLIND__ env_prefix.
pyproject.toml (1)
36-36: cedar-python==0.1.4 is verified on PyPI with correct specifications.
Distribution exists as `cedar-python` on PyPI, version 0.1.4 is the latest, requires Python >=3.12, and is Apache-2.0 licensed. Wheels are available for Python 3.12–3.14 across macOS (x86_64, arm64), Linux (x86_64, aarch64), and Windows (x64). No security vulnerabilities detected.
Since `cedar-python` is exclusively imported within the ScopeBlind extension and ScopeBlind is entirely optional/configurable, consider moving it to an optional dependency group: `scopeblind = ["cedar-python==0.1.4"]`. This prevents unnecessary installation costs for users who don't enable ScopeBlind authorization.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pyproject.toml` at line 36, Move the pinned dependency "cedar-python==0.1.4" out of the main dependencies and add it to an optional extras group named "scopeblind" in pyproject.toml; specifically remove the line containing cedar-python==0.1.4 from the top-level dependencies and add an extras entry like scopeblind = ["cedar-python==0.1.4"] so ScopeBlind can opt-in the package without forcing installation for all users.
tests/unit/penguin/test_bindufy.py (1)
185-203: Add a regression test for inline Cedar policies.
The path-based case is covered, but `_setup_scopeblind_extension` also needs to preserve inline `cedar_policies` strings so they are not rewritten as caller-relative paths.
Suggested test addition
```diff
     def test_setup_scopeblind_extension(self, tmp_path):
         """Test creating ScopeBlind extension from config."""
         policy_dir = tmp_path / "policies"
         policy_dir.mkdir(parents=True, exist_ok=True)
         (policy_dir / "policy.cedar").write_text(
             'permit(principal, action == Action::"message/send", resource);',
             encoding="utf-8",
         )

         extension = _setup_scopeblind_extension(
             {
                 "mode": "shadow",
                 "cedar_policies": str(policy_dir),
             },
             caller_dir=tmp_path,
         )

         assert extension.mode == "shadow"
         assert extension.cedar_policies == str(policy_dir)
+
+    def test_setup_scopeblind_extension_preserves_inline_policy(self, tmp_path):
+        """Test inline Cedar policies are not treated as filesystem paths."""
+        policy = 'permit(principal, action == Action::"message/send", resource);'
+
+        extension = _setup_scopeblind_extension(
+            {
+                "mode": "shadow",
+                "cedar_policies": policy,
+            },
+            caller_dir=tmp_path,
+        )
+
+        assert extension.mode == "shadow"
+        assert extension.cedar_policies == policy
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/penguin/test_bindufy.py` around lines 185 - 203, The test currently only verifies path-based cedar_policies are preserved; add a regression test that passes an inline Cedar policy string to _setup_scopeblind_extension and assert it is left unchanged (not rewritten to a caller-relative path). In the tests/unit/penguin/test_bindufy.py file add a test (e.g., test_setup_scopeblind_extension_inline_policy) that calls _setup_scopeblind_extension with {"mode":"shadow", "cedar_policies": 'permit(principal, action == Action::"message/send", resource);'} (or similar inline policy text) and asserts extension.mode == "shadow" and extension.cedar_policies equals the exact inline string; this will ensure _setup_scopeblind_extension preserves inline cedar_policies.
bindu/server/middleware/scopeblind.py (1)
47-50: ASGI receive callable won't emit `http.disconnect` on repeated calls.
`receive` returns the same `http.request` frame on every invocation. Starlette's `Request.stream()` guards against this via `_stream_consumed`, so the immediate path works, but any downstream middleware that calls `receive()` to detect a client disconnect (e.g. long-polling / SSE helpers) will hang/loop. Consider yielding a one-shot body then an `http.disconnect`:
♻️ Suggested one-shot receive
```diff
-        async def receive():
-            return {"type": "http.request", "body": body}
+        sent = False
+
+        async def receive():
+            nonlocal sent
+            if not sent:
+                sent = True
+                return {"type": "http.request", "body": body, "more_body": False}
+            return {"type": "http.disconnect"}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/server/middleware/scopeblind.py` around lines 47 - 50, The custom ASGI receive callable defined for StarletteRequest always returns the same {"type": "http.request", "body": body} frame which can cause downstream code that polls receive() (e.g., for http.disconnect) to hang; change the receive implementation (the async def receive used to construct StarletteRequest) to be one-shot: keep a local boolean/flag (e.g., consumed) and on first call return the http.request frame with body, and on subsequent calls return {"type": "http.disconnect"} (or an empty body then http.disconnect) so downstream middleware expecting a disconnect will not loop indefinitely.
tests/unit/extensions/scopeblind/test_scopeblind_extension.py (1)
122-132: Consider using the production serializer to keep test and production receipt serialization synchronized.
Manually spreading `receipt.payload.__dict__` and mapping artifact digests risks drifting from `receipt_to_dict` (the actual serialization used in `attach_receipt_to_artifacts` and `build_task_receipt_metadata`). If `ScopeBlindReceipt` or `ScopeBlindReceiptPayload` gains/renames a field, this test will silently continue passing while production breaks or vice versa. Using the same helper exercises the verifier against the same shape actually carried through the middleware/worker path.
Note: `receipt_to_dict` is not part of the public API, so you'll need to either import from the private module or have it exported from `bindu.extensions.scopeblind.__init__.py`.
♻️ Suggested refactor
```diff
+    from bindu.extensions.scopeblind.receipt import receipt_to_dict

-    receipt_dict = {
-        "payload": {
-            **receipt.payload.__dict__,
-            "artifacts": [digest.__dict__ for digest in receipt.payload.artifacts],
-        },
-        "payload_hash": receipt.payload_hash,
-        "verification_key": receipt.verification_key,
-        "signature": receipt.signature,
-        "algorithm": receipt.algorithm,
-        "issuer": receipt.issuer,
-    }
+    receipt_dict = receipt_to_dict(receipt)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/extensions/scopeblind/test_scopeblind_extension.py` around lines 122 - 132, Replace the manual construction of the receipt dictionary with the production serializer: import and call receipt_to_dict(...) (from bindu.extensions.scopeblind or the private module where it's implemented) instead of manually spreading receipt.payload.__dict__ and mapping receipt.payload.artifacts; this ensures the test uses the same shape as attach_receipt_to_artifacts and build_task_receipt_metadata and stays in sync with ScopeBlindReceipt/ScopeBlindReceiptPayload field changes.
bindu/extensions/scopeblind/extension.py (1)
123-138: `policy_source` can raise on long inline policies and inconsistently re-uses the unstripped value.
Two small issues:
- `raw_value = self.cedar_policies.strip()` is used to construct `Path(os.path.expanduser(raw_value))`, but a multi-line inline Cedar policy (`permit(...);\nforbid(...);`) becomes a `Path` whose `.is_file()`/`.is_dir()` calls may raise `OSError` (ENAMETOOLONG) on some filesystems (e.g. Linux path limit ≈ 4096 bytes, component limit 255), instead of just returning `False`.
- When falling through to the inline branch, the function returns `self.cedar_policies` (unstripped) even though the file/dir branches operated on the stripped/expanded value. This inconsistency will leak leading/trailing whitespace into the `policy_hash` and `PolicySet` parsing.
♻️ Proposed tweak
```diff
 @cached_property
 def policy_source(self) -> str:
     """Load Cedar policy text from a string, file, or directory."""
     raw_value = self.cedar_policies.strip()
-    expanded = Path(os.path.expanduser(raw_value))
-    if expanded.is_file():
-        return expanded.read_text(encoding="utf-8")
-    if expanded.is_dir():
-        policy_parts = [
-            path.read_text(encoding="utf-8")
-            for path in sorted(expanded.glob("*.cedar"))
-        ]
-        if not policy_parts:
-            raise ValueError(f"No Cedar policy files found in {expanded}")
-        return "\n".join(policy_parts)
-    return self.cedar_policies
+    # Heuristic: only treat as filesystem path when it looks like one and fits path limits.
+    looks_like_path = (
+        "\n" not in raw_value
+        and len(raw_value) < 4096
+        and ("/" in raw_value or raw_value.endswith(".cedar") or raw_value.startswith("~"))
+    )
+    if looks_like_path:
+        try:
+            expanded = Path(os.path.expanduser(raw_value))
+            if expanded.is_file():
+                return expanded.read_text(encoding="utf-8")
+            if expanded.is_dir():
+                policy_parts = [
+                    path.read_text(encoding="utf-8")
+                    for path in sorted(expanded.glob("*.cedar"))
+                ]
+                if not policy_parts:
+                    raise ValueError(f"No Cedar policy files found in {expanded}")
+                return "\n".join(policy_parts)
+        except OSError:
+            pass
+    return raw_value
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/extensions/scopeblind/extension.py` around lines 123 - 138, policy_source currently always treats the stripped cedar_policies string as a filesystem path which can raise OSError on very long inline policies and then returns the unstripped value on the inline branch; update policy_source to first compute stripped = self.cedar_policies.strip(), and only attempt Path expansion/IO when stripped does not contain newlines and its length is reasonable (or wrap Path/os calls in a try/except OSError) to avoid ENAMETOOLONG; when falling back to inline return stripped (not self.cedar_policies) so the same normalized value is used for hashing/parsing, and ensure any Path-related failures are caught and treated as “not a file/dir” rather than bubbling the OSError.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bindu/extensions/scopeblind/extension.py`:
- Around line 170-178: generate_and_save_key_pair currently creates _key_dir and
writes the private key without setting strict permissions; update it so after
creating the directory (self._key_dir.mkdir(...)) you enforce owner-only
permissions (e.g. os.chmod(self._key_dir, 0o700)), and after writing the key
bytes from _generate_key_pair_data() call os.chmod(self.private_key_path, 0o600)
to ensure the private key is owner-readable/writable only; also set a reasonable
permission for the public key (e.g. os.chmod(self.public_key_path, 0o644)) so
the public key remains readable while the private key is protected.
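A stdlib-only sketch of the permission hardening this prompt describes (directory and filenames are hypothetical stand-ins for `pki_dir`/`private_key_filename`, and the key bytes are placeholders):

```python
import os
import tempfile
from pathlib import Path

# Hypothetical key directory; the real paths come from app_settings.scopeblind.
key_dir = Path(tempfile.mkdtemp()) / "pki"
key_dir.mkdir(parents=True, exist_ok=True)
os.chmod(key_dir, 0o700)  # owner-only directory

private_key_path = key_dir / "scopeblind_ed25519.key"
public_key_path = key_dir / "scopeblind_ed25519.pub"

private_key_path.write_bytes(b"<private key bytes>")  # placeholder, not a real key
os.chmod(private_key_path, 0o600)  # owner read/write only

public_key_path.write_bytes(b"<public key bytes>")
os.chmod(public_key_path, 0o644)  # world-readable is fine for the public half
```

Calling `os.chmod` after the write (rather than relying on the process umask) guarantees the final modes regardless of environment.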
In `@bindu/extensions/scopeblind/receipt.py`:
- Around line 116-128: attach_receipt_to_artifacts currently appends the same
receipt_dict instance to every artifact and silently skips artifacts whose
metadata or receipts shape is unexpected; change it so each artifact gets its
own copy of the receipt (e.g., dict(receipt_dict) or a shallow copy) before
appending, and add explicit logging warnings when metadata is not a dict or when
the existing app_settings.scopeblind.meta_receipts_key value exists but is not a
list (in that case log and replace it with a new list containing the copied
receipt so the artifact is not left without a receipt). Use the existing symbols
attach_receipt_to_artifacts, receipt_to_dict,
app_settings.scopeblind.meta_receipts_key, Artifact and ScopeBlindReceipt to
locate where to make the change and keep behavior in-place for artifacts.
In `@bindu/server/endpoints/a2a_protocol.py`:
- Around line 89-103: In _attach_scopeblind_context, scrub any client-supplied
reserved key before attaching the middleware-produced context: locate the
metadata on a2a_request["params"]["message"] (msg_obj), remove any existing
"_scopeblind_context" (pop it) from msg_obj["metadata"], then, if
request.state.scopeblind_context is present, set
msg_obj["metadata"]["_scopeblind_context"] = request.state.scopeblind_context so
only the middleware value is forwarded; keep the early returns for non-target
methods and missing params intact.
In `@bindu/server/middleware/scopeblind.py`:
- Around line 40-56: The middleware currently swallows all exceptions when
parsing the body (await request.body() / json.loads) and forwards the request
even in enforce mode; update ScopeBlind so parsing errors are handled by denying
the request when mode == "enforce" (return a 4xx/403 response) and only allow
fail-open in non-enforce modes; narrow the except clause to (UnicodeDecodeError,
json.JSONDecodeError) to avoid hiding unexpected errors; when rebuilding the
request use the existing Request class (remove the redundant inline
StarletteRequest import) and ensure the ASGI receive() returns
{"type":"http.request","body": body, "more_body": False} so downstream reads
correctly; keep the logger.warning but include the error string.
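A runnable, framework-free sketch of the one-shot receive pattern the prompt describes:

```python
import asyncio

def make_one_shot_receive(body: bytes):
    """Build an ASGI receive callable that replays a buffered body exactly once,
    then reports http.disconnect on every later call."""
    sent = False

    async def receive():
        nonlocal sent
        if not sent:
            sent = True
            return {"type": "http.request", "body": body, "more_body": False}
        return {"type": "http.disconnect"}

    return receive

async def demo():
    receive = make_one_shot_receive(b'{"method": "message/send"}')
    first = await receive()   # the buffered request body
    second = await receive()  # any further poll sees a disconnect
    return first, second

first, second = asyncio.run(demo())
```

Downstream code that polls `receive()` to detect client disconnects now terminates instead of looping on the same `http.request` frame.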
---
Outside diff comments:
In `@bindu/server/endpoints/agent_card.py`:
- Around line 47-57: The branch handling ext.agent_extension silently returns
None when that attribute exists but is not a dict; update the logic around ext
and agent_extension to explicitly handle non-dict values: use
ext.agent_extension (not getattr) to retrieve the attribute, if it's a dict
return it, otherwise log a warning via logger.warning that the agent_extension
is present but not a dict (including its actual type/value) and return None;
ensure the dict-vs-attribute checks are ordered so dict ext still returns
immediately and that any case with a non-dict agent_extension produces the
warning instead of falling through silently.
---
Nitpick comments:
In `@bindu/extensions/scopeblind/extension.py`:
- Around line 123-138: policy_source currently always treats the stripped
cedar_policies string as a filesystem path which can raise OSError on very long
inline policies and then returns the unstripped value on the inline branch;
update policy_source to first compute stripped = self.cedar_policies.strip(),
and only attempt Path expansion/IO when stripped does not contain newlines and
its length is reasonable (or wrap Path/os calls in a try/except OSError) to
avoid ENAMETOOLONG; when falling back to inline return stripped (not
self.cedar_policies) so the same normalized value is used for hashing/parsing,
and ensure any Path-related failures are caught and treated as “not a file/dir”
rather than bubbling the OSError.
In `@bindu/server/handlers/message_handlers.py`:
- Around line 155-162: Update the comment and spacing around the
context-extraction blocks: replace the stale "✅ SAFE payment context handling"
comment with a more general description like "extension-provided context
handling", insert a blank line between the payment_context pop/block and the
scopeblind_context pop/block for readability, and ensure the extracted keys
(payment_context and scopeblind_context) are still forwarded into
scheduler_params before calling self.scheduler.run_task so they match
TaskSendParams and ManifestWorker.run_task.
In `@bindu/server/middleware/scopeblind.py`:
- Around line 47-50: The custom ASGI receive callable defined for
StarletteRequest always returns the same {"type": "http.request", "body": body}
frame which can cause downstream code that polls receive() (e.g., for
http.disconnect) to hang; change the receive implementation (the async def
receive used to construct StarletteRequest) to be one-shot: keep a local
boolean/flag (e.g., consumed) and on first call return the http.request frame
with body, and on subsequent calls return {"type": "http.disconnect"} (or an
empty body then http.disconnect) so downstream middleware expecting a disconnect
will not loop indefinitely.
In `@bindu/settings.py`:
- Around line 346-368: Add an optional enabled toggle and a private key
passphrase field to ScopeBlindSettings so operators can disable ScopeBlind
without changing the manifest and can supply an encryption passphrase for the
persisted Ed25519 key; specifically add enabled: bool = False (matching
AuthSettings/HydraSettings pattern) and private_key_passphrase_env: str = "" (or
similarly named secret-sourced setting) as attributes on the ScopeBlindSettings
class so they are available via app_settings.scopeblind and populated from the
existing SCOPEBLIND__ env_prefix.
In `@pyproject.toml`:
- Line 36: Move the pinned dependency "cedar-python==0.1.4" out of the main
dependencies and add it to an optional extras group named "scopeblind" in
pyproject.toml; specifically remove the line containing cedar-python==0.1.4 from
the top-level dependencies and add an extras entry like scopeblind =
["cedar-python==0.1.4"] so ScopeBlind can opt-in the package without forcing
installation for all users.
In `@tests/unit/extensions/scopeblind/test_scopeblind_extension.py`:
- Around line 122-132: Replace the manual construction of the receipt dictionary
with the production serializer: import and call receipt_to_dict(...) (from
bindu.extensions.scopeblind or the private module where it's implemented)
instead of manually spreading receipt.payload.__dict__ and mapping
receipt.payload.artifacts; this ensures the test uses the same shape as
attach_receipt_to_artifacts and build_task_receipt_metadata and stays in sync
with ScopeBlindReceipt/ScopeBlindReceiptPayload field changes.
In `@tests/unit/penguin/test_bindufy.py`:
- Around line 185-203: The test currently only verifies path-based
cedar_policies are preserved; add a regression test that passes an inline Cedar
policy string to _setup_scopeblind_extension and assert it is left unchanged
(not rewritten to a caller-relative path). In the
tests/unit/penguin/test_bindufy.py file add a test (e.g.,
test_setup_scopeblind_extension_inline_policy) that calls
_setup_scopeblind_extension with {"mode":"shadow", "cedar_policies":
'permit(principal, action == Action::"message/send", resource);'} (or similar
inline policy text) and asserts extension.mode == "shadow" and
extension.cedar_policies equals the exact inline string; this will ensure
_setup_scopeblind_extension preserves inline cedar_policies.
In `@tests/unit/server/endpoints/test_agent_card.py`:
- Around line 43-55: Add a negative test exercising the fall-through when an
extension exposes a non-dict agent_extension: create a test (e.g.
test_serialize_extension_agent_extension_non_dict) that defines a MockExtension
with agent_extension set to None and another case with a string, call
_serialize_extension(MockExtension()) for each, and assert the result is None to
lock in current behavior in _serialize_extension.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cb9c8f32-57eb-425c-bf28-d759f01e09ad
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)
README.md
bindu/common/protocol/types.py
bindu/extensions/scopeblind/__init__.py
bindu/extensions/scopeblind/extension.py
bindu/extensions/scopeblind/receipt.py
bindu/extensions/scopeblind/verifier.py
bindu/penguin/bindufy.py
bindu/penguin/config_validator.py
bindu/server/applications.py
bindu/server/endpoints/a2a_protocol.py
bindu/server/endpoints/agent_card.py
bindu/server/handlers/message_handlers.py
bindu/server/middleware/__init__.py
bindu/server/middleware/scopeblind.py
bindu/server/workers/manifest_worker.py
bindu/settings.py
bindu/utils/__init__.py
bindu/utils/capabilities.py
docs/SCOPEBLIND.md
pyproject.toml
tests/unit/extensions/scopeblind/test_scopeblind_extension.py
tests/unit/penguin/test_bindufy.py
tests/unit/server/endpoints/test_agent_card.py
tests/unit/server/middleware/test_scopeblind.py
tests/unit/server/workers/test_manifest_worker.py
tests/unit/utils/test_capabilities.py
```python
def attach_receipt_to_artifacts(
    artifacts: list[Artifact],
    receipt: ScopeBlindReceipt,
) -> list[Artifact]:
    """Attach the receipt to every artifact's metadata in-place."""
    receipt_dict = receipt_to_dict(receipt)
    for artifact in artifacts:
        metadata = artifact.setdefault("metadata", {})
        if isinstance(metadata, dict):
            receipts = metadata.setdefault(app_settings.scopeblind.meta_receipts_key, [])
            if isinstance(receipts, list):
                receipts.append(receipt_dict)
    return artifacts
```
`attach_receipt_to_artifacts` aliases the same `receipt_dict` across every artifact and silently drops receipts when the metadata shape is unexpected.
- The same `receipt_dict` object is appended to every artifact's `scopeblind.receipts` list (line 127). Any downstream code that mutates one artifact's attached receipt (e.g. attaches extra fields, sorts keys) will unexpectedly mutate it for all other artifacts as well.
- The `isinstance(metadata, dict)` / `isinstance(receipts, list)` guards silently skip attachment if the shapes don't match, so a pre-existing non-list `scopeblind.receipts` value causes receipts to be dropped without any warning — the artifact then looks valid but carries no receipt, which is exactly the tampering signal verifiers rely on.
♻️ Proposed fix: copy the dict per artifact and log on unexpected shapes
def attach_receipt_to_artifacts(
artifacts: list[Artifact],
receipt: ScopeBlindReceipt,
) -> list[Artifact]:
"""Attach the receipt to every artifact's metadata in-place."""
receipt_dict = receipt_to_dict(receipt)
for artifact in artifacts:
metadata = artifact.setdefault("metadata", {})
- if isinstance(metadata, dict):
- receipts = metadata.setdefault(app_settings.scopeblind.meta_receipts_key, [])
- if isinstance(receipts, list):
- receipts.append(receipt_dict)
+ if not isinstance(metadata, dict):
+ raise TypeError(
+ f"Artifact metadata must be a dict, got {type(metadata).__name__}"
+ )
+ receipts = metadata.setdefault(app_settings.scopeblind.meta_receipts_key, [])
+ if not isinstance(receipts, list):
+ raise TypeError(
+ f"Artifact {app_settings.scopeblind.meta_receipts_key} must be a list, "
+ f"got {type(receipts).__name__}"
+ )
+ receipts.append(dict(receipt_dict)) # per-artifact copy to avoid aliasing
    return artifacts
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/extensions/scopeblind/receipt.py` around lines 116 - 128,
attach_receipt_to_artifacts currently appends the same receipt_dict instance to
every artifact and silently skips artifacts whose metadata or receipts shape is
unexpected; change it so each artifact gets its own copy of the receipt (e.g.,
dict(receipt_dict) or a shallow copy) before appending, and add explicit logging
warnings when metadata is not a dict or when the existing
app_settings.scopeblind.meta_receipts_key value exists but is not a list (in
that case log and replace it with a new list containing the copied receipt so
the artifact is not left without a receipt). Use the existing symbols
attach_receipt_to_artifacts, receipt_to_dict,
app_settings.scopeblind.meta_receipts_key, Artifact and ScopeBlindReceipt to
locate where to make the change and keep behavior in-place for artifacts.
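A quick demonstration of the aliasing hazard, and of why `dict(receipt_dict)` is only a shallow remedy (the receipt shape and metadata key are hypothetical stand-ins):

```python
receipt_dict = {"signature": "abc123", "payload": {"task_id": "t-1"}}
artifacts = [{"metadata": {}}, {"metadata": {}}]

# Buggy: every artifact shares the same dict object.
for artifact in artifacts:
    artifact["metadata"].setdefault("scopeblind.receipts", []).append(receipt_dict)
artifacts[0]["metadata"]["scopeblind.receipts"][0]["extra"] = "mutated"
shared = artifacts[1]["metadata"]["scopeblind.receipts"][0]["extra"]  # mutation leaks across

# Fixed: shallow-copy per artifact. Note the nested "payload" dict is still shared,
# so a copy.deepcopy would be needed if nested values may be mutated downstream.
artifacts2 = [{"metadata": {}}, {"metadata": {}}]
for artifact in artifacts2:
    artifact["metadata"].setdefault("scopeblind.receipts", []).append(dict(receipt_dict))
artifacts2[0]["metadata"]["scopeblind.receipts"][0]["extra2"] = "local"
isolated = "extra2" not in artifacts2[1]["metadata"]["scopeblind.receipts"][0]
```

The shallow copy isolates top-level keys per artifact, which covers the mutation patterns the review mentions (adding fields, sorting keys).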
```python
def _attach_scopeblind_context(request: Request, a2a_request: Any, method: str) -> None:
    """Attach ScopeBlind authorization context to message metadata if available."""
    if method not in ("message/send", "message/stream"):
        return

    scopeblind_context = getattr(request.state, "scopeblind_context", None)
    if scopeblind_context is None:
        return

    if "params" not in a2a_request or "message" not in a2a_request["params"]:
        return

    msg_obj = a2a_request["params"]["message"]
    msg_obj.setdefault("metadata", {})
    msg_obj["metadata"]["_scopeblind_context"] = scopeblind_context
```
Scrub client-supplied _scopeblind_context before forwarding.
metadata is client-controlled. If request.state.scopeblind_context is absent, an incoming _scopeblind_context remains in the request and can be consumed downstream as internal authorization context. Pop the reserved key first, then attach only the middleware-produced value.
Proposed fix
def _attach_scopeblind_context(request: Request, a2a_request: Any, method: str) -> None:
"""Attach ScopeBlind authorization context to message metadata if available."""
if method not in ("message/send", "message/stream"):
return
- scopeblind_context = getattr(request.state, "scopeblind_context", None)
- if scopeblind_context is None:
- return
-
if "params" not in a2a_request or "message" not in a2a_request["params"]:
return
msg_obj = a2a_request["params"]["message"]
- msg_obj.setdefault("metadata", {})
- msg_obj["metadata"]["_scopeblind_context"] = scopeblind_context
+ metadata = msg_obj.setdefault("metadata", {})
+ if not isinstance(metadata, dict):
+ return
+
+ metadata.pop("_scopeblind_context", None)
+
+ scopeblind_context = getattr(request.state, "scopeblind_context", None)
+ if scopeblind_context is None:
+ return
+
+ metadata["_scopeblind_context"] = scopeblind_context🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/server/endpoints/a2a_protocol.py` around lines 89 - 103, In
_attach_scopeblind_context, scrub any client-supplied reserved key before
attaching the middleware-produced context: locate the metadata on
a2a_request["params"]["message"] (msg_obj), remove any existing
"_scopeblind_context" (pop it) from msg_obj["metadata"], then, if
request.state.scopeblind_context is present, set
msg_obj["metadata"]["_scopeblind_context"] = request.state.scopeblind_context so
only the middleware value is forwarded; keep the early returns for non-target
methods and missing params intact.
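The scrub-then-attach ordering described above can be sketched in isolation (standalone; `_scopeblind_context` is the reserved key, and `middleware_ctx` stands in for `request.state.scopeblind_context`):

```python
# Sketch of the scrub-then-attach ordering for client-controlled metadata.
def scrub_and_attach(metadata: dict, middleware_ctx=None) -> dict:
    # Always drop any client-supplied value for the reserved key first.
    metadata.pop("_scopeblind_context", None)
    # Only the middleware-produced context is ever forwarded.
    if middleware_ctx is not None:
        metadata["_scopeblind_context"] = middleware_ctx
    return metadata

# Client tried to smuggle a context and the middleware produced none:
assert scrub_and_attach({"_scopeblind_context": {"decision": "Allow"}}) == {}

# The middleware context always replaces the client-supplied one:
out = scrub_and_attach({"_scopeblind_context": "spoofed"}, {"decision": "Deny"})
assert out["_scopeblind_context"] == {"decision": "Deny"}
```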
try:
    body = await request.body()
    request_data = json.loads(body.decode("utf-8"))
    method = request_data.get("method", "")

    from starlette.requests import Request as StarletteRequest

    async def receive():
        return {"type": "http.request", "body": body}

    request = StarletteRequest(request.scope, receive)
except Exception as error:
    logger.warning(
        "ScopeBlind middleware could not parse request body",
        error=str(error),
    )
    return await call_next(request)
Fail-open on body parse failure bypasses enforce-mode authorization.
If await request.body() or json.loads(...) raises, the middleware logs a warning and forwards the request downstream without any Cedar evaluation — even when mode="enforce". Downstream JSON-RPC parsing will typically reject a malformed body as well, so real-world exposure is limited, but in enforce mode the safer default is to deny rather than pass through on an unknown parse error.
Also: except Exception is intentionally broad here (Ruff BLE001), which is acceptable for a middleware boundary, but consider narrowing to (UnicodeDecodeError, json.JSONDecodeError) so unexpected errors (e.g. a misbehaving receive in a test/harness) aren't swallowed silently.
🛡️ Suggested tighter handling
try:
body = await request.body()
request_data = json.loads(body.decode("utf-8"))
method = request_data.get("method", "")
- from starlette.requests import Request as StarletteRequest
-
async def receive():
- return {"type": "http.request", "body": body}
+ return {"type": "http.request", "body": body, "more_body": False}
- request = StarletteRequest(request.scope, receive)
- except Exception as error:
+ request = Request(request.scope, receive)
+ except (UnicodeDecodeError, json.JSONDecodeError, AttributeError) as error:
logger.warning(
"ScopeBlind middleware could not parse request body",
error=str(error),
)
- return await call_next(request)
+ if self.scopeblind_ext.mode == "enforce":
+ code, message = extract_error_fields(InsufficientPermissionsError)
+ return jsonrpc_error(
+ code,
+ message,
+ "ScopeBlind could not evaluate the request body.",
+ request_id=None,
+ status=400,
+ )
+        return await call_next(request)

This also addresses the redundant inline import at line 45 (already imported at line 9 as Request) and the missing more_body: False on the ASGI receive dict.
🧰 Tools
🪛 Ruff (0.15.10)
[warning] 51-51: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/server/middleware/scopeblind.py` around lines 40 - 56, The middleware
currently swallows all exceptions when parsing the body (await request.body() /
json.loads) and forwards the request even in enforce mode; update ScopeBlind so
parsing errors are handled by denying the request when mode == "enforce" (return
a 4xx/403 response) and only allow fail-open in non-enforce modes; narrow the
except clause to (UnicodeDecodeError, json.JSONDecodeError) to avoid hiding
unexpected errors; when rebuilding the request use the existing Request class
(remove the redundant inline StarletteRequest import) and ensure the ASGI
receive() returns {"type":"http.request","body": body, "more_body": False} so
downstream reads correctly; keep the logger.warning but include the error
string.
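The `more_body: False` point is easy to demonstrate standalone: an ASGI `receive` that replays an already-read body should mark the stream complete so downstream readers don't await a second message. A minimal sketch (in the real middleware `body` would come from `await request.body()`):

```python
# Sketch: a replayable ASGI receive() for a body the middleware already consumed.
import asyncio

def make_replay_receive(body: bytes):
    async def receive():
        # more_body=False signals that the request body stream is complete,
        # so downstream consumers stop reading after this message.
        return {"type": "http.request", "body": body, "more_body": False}
    return receive

receive = make_replay_receive(b'{"method": "message/send"}')
message = asyncio.run(receive())
assert message == {
    "type": "http.request",
    "body": b'{"method": "message/send"}',
    "more_body": False,
}
```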
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bindu/penguin/bindufy.py`:
- Around line 161-171: The code currently treats any missing file path like
"policies/authz.cedar" as inline Cedar by falling back to the raw string; change
this to fail-fast for path-like inputs: detect path-like raw_cedar_policies
(e.g., contains os.path.sep, startswith "./" or "../", or endswith a Cedar
extension) and if resolved_policy_path.exists() is False then raise a clear
exception (or log and exit) instead of assigning cedar_policies =
raw_cedar_policies; update the logic around raw_cedar_policies / policy_path /
resolved_policy_path / cedar_policies (and use caller_dir) so only true inline
policy text is accepted and missing files produce an immediate error.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
raw_cedar_policies = scopeblind_config["cedar_policies"].strip()
policy_path = Path(os.path.expanduser(raw_cedar_policies))
resolved_policy_path = policy_path
if not resolved_policy_path.is_absolute() and caller_dir is not None:
    resolved_policy_path = (caller_dir / resolved_policy_path).resolve()

cedar_policies = (
    str(resolved_policy_path)
    if resolved_policy_path.exists()
    else raw_cedar_policies
)
Fail fast for missing path-like Cedar policy sources.
This correctly preserves inline Cedar, but a typo like policies/authz.cedar now falls back to the literal string instead of failing. Since the extension constructor only validates non-empty policy text, path mistakes can survive startup and fail during authorization.
Suggested guard for path-like inputs
raw_cedar_policies = scopeblind_config["cedar_policies"].strip()
policy_path = Path(os.path.expanduser(raw_cedar_policies))
resolved_policy_path = policy_path
if not resolved_policy_path.is_absolute() and caller_dir is not None:
resolved_policy_path = (caller_dir / resolved_policy_path).resolve()
+ looks_like_policy_path = (
+ policy_path.suffix == ".cedar"
+ or "/" in raw_cedar_policies
+ or "\\" in raw_cedar_policies
+ )
+ if looks_like_policy_path and not resolved_policy_path.exists():
+ raise FileNotFoundError(
+ f"ScopeBlind Cedar policy path does not exist: {resolved_policy_path}"
+ )
+
cedar_policies = (
str(resolved_policy_path)
if resolved_policy_path.exists()
else raw_cedar_policies
)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/penguin/bindufy.py` around lines 161 - 171, The code currently treats
any missing file path like "policies/authz.cedar" as inline Cedar by falling
back to the raw string; change this to fail-fast for path-like inputs: detect
path-like raw_cedar_policies (e.g., contains os.path.sep, startswith "./" or
"../", or endswith a Cedar extension) and if resolved_policy_path.exists() is
False then raise a clear exception (or log and exit) instead of assigning
cedar_policies = raw_cedar_policies; update the logic around raw_cedar_policies
/ policy_path / resolved_policy_path / cedar_policies (and use caller_dir) so
only true inline policy text is accepted and missing files produce an immediate
error.
Actionable comments posted: 3
🧹 Nitpick comments (1)
bindu/extensions/scopeblind/extension.py (1)
355-356: Signing the hex digest string rather than raw bytes.
sha256_digest(...) returns a hex string; self.private_key.sign(payload_hash.encode("utf-8")) therefore signs 64 ASCII bytes instead of the 32-byte raw digest. This is cryptographically fine as long as the verifier reconstructs the hex string identically, but it's non-standard (Ed25519 already hashes internally) and couples verifiers to the exact textual encoding. If you ever move payload_hash to bytes/base64 in the wire format, signatures will silently stop verifying.
Consider signing the deterministic JSON bytes directly (Ed25519 accepts arbitrary-length messages) and keeping payload_hash purely as a content identifier, or at minimum add a comment pinning the "sign the lowercase-hex digest" contract so verifier implementers don't diverge.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bindu/extensions/scopeblind/extension.py` around lines 355 - 356, Currently the code signs the hex string returned by sha256_digest by calling self.private_key.sign(payload_hash.encode("utf-8")), which signs 64 ASCII hex bytes instead of the 32 raw digest and couples verifiers to that textual encoding; change this to sign deterministic message bytes (preferably the canonical JSON bytes of the payload) or sign the raw 32-byte digest (decode the hex to bytes before calling self.private_key.sign), and update the code around sha256_digest, payload_hash and self.private_key.sign to reflect this; if you must keep the hex-string contract, add a clear comment next to sha256_digest/payload_hash and the sign call explicitly stating “we sign the lowercase hex-encoded SHA-256 string” so verifier implementers are pinned to the exact encoding.
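The encoding mismatch behind this nitpick can be shown with the standard library alone (hashlib stands in for the extension's sha256_digest helper; the payload is illustrative):

```python
# Sketch of the pitfall: signing the hex string vs. the raw digest.
import hashlib
import json

payload = {"task_id": "t-1", "decision": "Allow"}
canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()

payload_hash = hashlib.sha256(canonical).hexdigest()  # 64-char lowercase hex

hex_message = payload_hash.encode("utf-8")  # what the code currently signs
raw_message = bytes.fromhex(payload_hash)   # the actual 32-byte digest

assert len(hex_message) == 64      # ASCII hex encoding, not the digest itself
assert len(raw_message) == 32
assert hex_message != raw_message  # a verifier must commit to one convention
```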
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bindu/extensions/scopeblind/extension.py`:
- Around line 219-222: The issuer property currently returns a value derived
only from policy_hash (issuer in method issuer), so different signing keys with
the same policy produce identical issuers; change issuer to incorporate the
signer's public key (e.g., verification_key or the exported public_key_base58)
by hashing policy_hash concatenated with the public key and taking the first 16
hex chars (e.g., sha256(policy_hash + public_key_base58)[:16]) to produce a
stable identifier per (policy, key) pair; update the cached_property issuer
implementation to reference the class field that holds the verification/public
key (e.g., verification_key) and compute the combined hash accordingly.
- Around line 273-284: The _build_context function assumes Starlette Request
attributes and can raise AttributeError when request lacks .client or .url;
change the reads to safe getattr/getattr-like checks: replace direct access to
request.client.host and request.url.path with guarded retrieval using
getattr(request, "client", None) and getattr(request, "url", None) and then fall
back to defaults like "unknown" or "" (e.g., client = getattr(request, "client",
None); client_ip = client.host if client and getattr(client, "host", None) else
"unknown"; path = getattr(getattr(request, "url", None), "path", "")). Keep
other fields (user, authenticated, request_data.get("id")) unchanged and ensure
this logic lives inside _build_context to avoid AttributeError before
evaluate_request's try/except.
---
Nitpick comments:
In `@bindu/extensions/scopeblind/extension.py`:
- Around line 355-356: Currently the code signs the hex string returned by
sha256_digest by calling self.private_key.sign(payload_hash.encode("utf-8")),
which signs 64 ASCII hex bytes instead of the 32 raw digest and couples
verifiers to that textual encoding; change this to sign deterministic message
bytes (preferably the canonical JSON bytes of the payload) or sign the raw
32-byte digest (decode the hex to bytes before calling self.private_key.sign),
and update the code around sha256_digest, payload_hash and self.private_key.sign
to reflect this; if you must keep the hex-string contract, add a clear comment
next to sha256_digest/payload_hash and the sign call explicitly stating “we sign
the lowercase hex-encoded SHA-256 string” so verifier implementers are pinned to
the exact encoding.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f9edcd76-8078-4fd3-8b57-6e753c8f200b
📒 Files selected for processing (1)
bindu/extensions/scopeblind/extension.py
@cached_property
def policy_source(self) -> str:
    """Load Cedar policy text from a string, file, or directory."""
    raw_value = self.cedar_policies.strip()
    expanded = Path(os.path.expanduser(raw_value))
    if expanded.is_file():
        return expanded.read_text(encoding="utf-8")
    if expanded.is_dir():
        policy_parts = [
            path.read_text(encoding="utf-8")
            for path in sorted(expanded.glob("*.cedar"))
        ]
        if not policy_parts:
            raise ValueError(f"No Cedar policy files found in {expanded}")
        return "\n".join(policy_parts)
    return self.cedar_policies
Silent fallback to inline policy text masks path typos.
When cedar_policies looks like a path but expanded is neither a file nor a directory (e.g. typo, missing mount, wrong working dir), the method silently returns the original string and PolicySet(...) will later try to parse the path as Cedar source and fail with a confusing parse error. Since misconfigured policies in enforce mode will block traffic, it's worth being explicit:
- If the string contains path separators or ends in .cedar, treat a non-existent target as an error rather than inline source.
- Or log (at info) which branch was taken so operators can diagnose misconfiguration.
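A path-likeness check along those lines can be sketched with the standard library (the heuristic and its thresholds are assumptions, not the extension's actual rules):

```python
# Sketch of a path-likeness guard for the cedar_policies config value.
import os

def looks_like_policy_path(raw: str) -> bool:
    # Treat values with separators, relative prefixes, or a .cedar suffix
    # as intended file paths rather than inline Cedar source.
    return (
        raw.endswith(".cedar")
        or os.sep in raw
        or raw.startswith(("./", "../", "~"))
    )

assert looks_like_policy_path("policies/authz.cedar")
assert looks_like_policy_path("./authz.cedar")
assert not looks_like_policy_path("permit(principal, action, resource);")
```

With this in place, a non-existent target that looks path-like can raise FileNotFoundError at startup instead of failing later inside policy parsing.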
@cached_property
def issuer(self) -> str:
    """Stable issuer identifier for receipt metadata."""
    return f"scopeblind:{self.policy_hash[:16]}"
Issuer identifier isn't bound to the signing key.
issuer = f"scopeblind:{policy_hash[:16]}" derives solely from the policy set. Two deployments with identical policies but different Ed25519 keys will publish the same issuer string, and rotating the signing key leaves the issuer unchanged. For "issuer-blind" verification the verification_key field is what ultimately matters, but downstream verifiers/indexers that key off issuer (enterprise verification, OTel spans) will conflate distinct signers.
Consider mixing the public key into the issuer, e.g. f"scopeblind:{sha256(policy_hash + public_key_base58)[:16]}", so the identifier is stable per (policy, key) pair.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/extensions/scopeblind/extension.py` around lines 219 - 222, The issuer
property currently returns a value derived only from policy_hash (issuer in
method issuer), so different signing keys with the same policy produce identical
issuers; change issuer to incorporate the signer's public key (e.g.,
verification_key or the exported public_key_base58) by hashing policy_hash
concatenated with the public key and taking the first 16 hex chars (e.g.,
sha256(policy_hash + public_key_base58)[:16]) to produce a stable identifier per
(policy, key) pair; update the cached_property issuer implementation to
reference the class field that holds the verification/public key (e.g.,
verification_key) and compute the combined hash accordingly.
def _build_context(self, request: Any, method: str, request_data: dict[str, Any]) -> dict[str, Any]:
    """Build the JSON context passed to Cedar and receipts."""
    user_info = getattr(request.state, "user", None)
    return {
        "http_method": request.method,
        "jsonrpc_method": method,
        "path": request.url.path,
        "client_ip": request.client.host if request.client else "unknown",
        "authenticated": bool(getattr(request.state, "authenticated", False)),
        "token_scopes": (user_info.get("scope", []) if isinstance(user_info, dict) else []),
        "request_id": request_data.get("id"),
    }
request.client / request.url.path assume Starlette and a live connection.
request.client.host and request.url.path work for Starlette/FastAPI requests but will AttributeError if this is ever invoked against a plain ASGI scope or during testing with a synthetic request object. The request.client if request.client else "unknown" guard covers only the None-client case, not the "no client attribute" case. Same for request.url. Since evaluate_request already wraps Cedar evaluation in try/except, but _build_context runs before the try, a missing attribute here will bubble up and crash the middleware entirely.
Consider getattr-based access consistent with how user and authenticated are already read on lines 275/281.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bindu/extensions/scopeblind/extension.py` around lines 273 - 284, The
_build_context function assumes Starlette Request attributes and can raise
AttributeError when request lacks .client or .url; change the reads to safe
getattr/getattr-like checks: replace direct access to request.client.host and
request.url.path with guarded retrieval using getattr(request, "client", None)
and getattr(request, "url", None) and then fall back to defaults like "unknown"
or "" (e.g., client = getattr(request, "client", None); client_ip = client.host
if client and getattr(client, "host", None) else "unknown"; path =
getattr(getattr(request, "url", None), "path", "")). Keep other fields (user,
authenticated, request_data.get("id")) unchanged and ensure this logic lives
inside _build_context to avoid AttributeError before evaluate_request's
try/except.
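The getattr-guarded pattern survives a bare synthetic request with neither attribute (field names mirror the review; this is not the real class):

```python
# Sketch of getattr-guarded field access against a minimal request object.
class Bare:  # simulates a synthetic request used in tests/harnesses
    method = "POST"

def safe_fields(request):
    client = getattr(request, "client", None)
    url = getattr(request, "url", None)
    return {
        "client_ip": client.host if client and getattr(client, "host", None) else "unknown",
        "path": getattr(url, "path", ""),
    }

# No AttributeError even though Bare has neither .client nor .url:
assert safe_fields(Bare()) == {"client_ip": "unknown", "path": ""}
```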
Standalone Bindu extension emitting Ed25519-signed authorization receipts in the Veritas Acta format (draft-farley-acta-signed-receipts-02).

Responds to three concerns raised on GetBindu/Bindu#459 review:

1. Embedded-key rejection. sign_receipt()/verify_receipt() refuse payloads containing verification_key / issuer_key / signer_public_key. require_conformance_check=True runs a negative-conformance vector at init.
2. Policy content anchoring. Cedar policies live in a file directory; policy_digest is sha256 of the concatenated source. Inline policy strings are not supported (avoids silent fallback to literal-string on path typos).
3. VOPRF scope clarification. This extension emits Ed25519 receipts (tier T1) only. VOPRF issuer-blind tokens (tier T4) are a separate ScopeBlind product and are not in scope for this extension.

Plus the maintainer-requested package boundary: ships as a separate installable, not inside bindu/extensions/ core.

Design posture:
- Shadow mode is the DEFAULT. Enforcement requires explicit opt-in.
- Enforce mode requires a non-empty cedar_policy_dir; empty is a configuration error.
- Verification key source is external only (PinnedTrustAnchor, JwksKeySource, DidDocumentKeySource, AgentCardKeySource).
- Agent card extension block published by ScopeBlindExtension so verifiers can resolve the issuer pubkey without ever seeing it in the receipt body.

Tests: 19 passing, covering default posture (4), signing and chain linkage (4), embedded-key rejection (4), tamper detection (1), key sources (4), agent card extension (1), enforce mode (1).

Package files:
- pyproject.toml, README.md, DESIGN.md, CALL-AGENDA.md
- bindu_scopeblind/{__init__, extension, middleware, receipts, key_sources, cedar_bridge, conformance}.py
- tests/test_extension.py

CALL-AGENDA.md is the pre-call artifact for the design review with @raahulrahl scheduled for week of 2026-04-22.
Summary
ScopeBlindExtension with Cedar policy evaluation, middleware enforcement/shadow modes, and deterministic signed receipts attached to task/artifact lifecycle.
Change Type (select all that apply)
Scope (select all touched areas)
Linked Issue/PR
User-Visible / Behavior Changes
New optional extension:
ScopeBlindExtension
New config:
mode: "enforce" (default strict) or "shadow"
cedar_policies: policy definition string
In enforce mode: unauthorized actions are blocked
In shadow mode: unauthorized actions are allowed but logged
Task results and artifacts now include signed authorization receipts when extension is enabled
Security Impact (required)
Risk + Mitigation:
Risk: Incorrect Cedar policies may unintentionally deny or allow actions
Risk: Key misuse for signing receipts
Risk: Receipt tampering
Verification
Environment
Steps to Test
ScopeBlindExtension with a Cedar policy
Expected Behavior
Allowed actions execute normally with attached receipt
Denied actions:
Receipts are deterministic and verifiable
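The determinism claim above rests on canonical serialization: sorted keys and fixed separators make the digest independent of dict insertion order. A stdlib sketch (hashlib/json stand in for the extension's own helpers):

```python
# Sketch: canonical JSON serialization yields order-independent digests.
import hashlib
import json

def canonical_digest(payload: dict) -> str:
    data = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(data).hexdigest()

a = {"task_id": "t-1", "decision": "Allow", "policy_hash": "abc"}
b = {"policy_hash": "abc", "decision": "Allow", "task_id": "t-1"}

assert canonical_digest(a) == canonical_digest(b)  # insertion-order independent
assert canonical_digest(a) != canonical_digest({**a, "decision": "Deny"})
```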
Actual Behavior
Evidence (attach at least one)
(See: .local/test-results.json, .local/scopeblind-pytest.txt)
Human Verification (required)
Verified scenarios:
Edge cases checked:
What you did NOT verify:
Compatibility / Migration
If yes, exact upgrade steps:
Add ScopeBlindExtension to configuration (shadow recommended initially)
How to disable/revert this change quickly:
Remove ScopeBlindExtension from config
Files/config to restore:
Known bad symptoms reviewers should watch for:
Risks and Mitigations
Risk: Policy misconfiguration blocks valid workflows
Risk: Increased latency due to signing/verification
Checklist
uv run pytest)uv run pre-commit run --all-files)Summary by CodeRabbit
New Features
Documentation
Configuration
Tests