Add ScopeBlind authorization receipts extension#459

Open
vipul674 wants to merge 3 commits into GetBindu:main from vipul674:codex/scopeblind-receipts

Conversation

@vipul674

@vipul674 vipul674 commented Apr 18, 2026

Summary

  • Problem: No standardized mechanism to enforce authorization policies and produce verifiable audit artifacts for agent actions.
  • Why it matters: Without cryptographically verifiable receipts, agent actions lack trust, auditability, and external verification—critical for multi-agent and cross-domain systems.
  • What changed: Introduced ScopeBlindExtension with Cedar policy evaluation, middleware enforcement/shadow modes, and deterministic signed receipts attached to task/artifact lifecycle.
  • What did NOT change (scope boundary): Existing extensions (DID, x402), task execution model, and middleware architecture remain unchanged.

Change Type (select all that apply)

  • Bug fix
  • Feature
  • Refactor
  • Documentation
  • Security hardening
  • Tests
  • Chore/infra

Scope (select all touched areas)

  • Server / API endpoints
  • Extensions (DID, x402, etc.)
  • Storage backends
  • Scheduler backends
  • Observability / monitoring
  • Authentication / authorization
  • CLI / utilities
  • Tests
  • Documentation
  • CI/CD / infra

Linked Issue/PR


User-Visible / Behavior Changes

  • New optional extension: ScopeBlindExtension

  • New config:

    • mode: "enforce" (default strict) or "shadow"
    • cedar_policies: policy definition string
  • In enforce mode: unauthorized actions are blocked

  • In shadow mode: unauthorized actions are allowed but logged

  • Task results and artifacts now include signed authorization receipts when extension is enabled
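The new configuration can be sketched as a minimal example; `mode` and `cedar_policies` are the keys described above, the inline policy mirrors the one used in this PR's unit tests, and the surrounding dict shape is illustrative:

```python
# Minimal ScopeBlind configuration sketch (shape is illustrative).
scopeblind_config = {
    # "enforce" blocks denied actions (strict default); "shadow" only logs them.
    "mode": "shadow",
    # Inline Cedar policy string; a file or directory path is also accepted.
    "cedar_policies": 'permit(principal, action == Action::"message/send", resource);',
}
```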


Security Impact (required)

  • New permissions/capabilities? Yes
  • Secrets/credentials handling changed? Yes
  • New/changed network calls? No
  • Database schema/migration changes? No
  • Authentication/authorization changes? Yes

Risk + Mitigation:

  • Risk: Incorrect Cedar policies may unintentionally deny or allow actions

    • Mitigation: Shadow mode allows safe validation before enforcement
  • Risk: Key misuse for signing receipts

    • Mitigation: Dedicated ScopeBlind key separation (not tied to DID identity)
  • Risk: Receipt tampering

    • Mitigation: SHA-256 hashing + Ed25519 signature verification ensures integrity
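The hashing half of the tamper-detection mitigation can be sketched with the standard library. The exact serialization details here are assumptions; the real receipts additionally carry an Ed25519 signature over the digest:

```python
import hashlib
import json

def payload_hash(payload: dict) -> str:
    # Deterministic serialization: sorted keys + compact separators mean
    # equal payloads always produce the same SHA-256 digest.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

receipt_payload = {"task_id": "t-1", "decision": "Allow"}
digest = payload_hash(receipt_payload)

# Any modification changes the digest, so tampering is detectable
# even before the signature check.
tampered = {**receipt_payload, "decision": "Deny"}
assert payload_hash(tampered) != digest
```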

Verification

Environment

  • OS: Linux
  • Python version: 3.11
  • Storage backend: Default (local/in-memory)
  • Scheduler backend: Default worker-based execution

Steps to Test

  1. Enable ScopeBlindExtension with a Cedar policy
  2. Trigger agent action (allowed and denied cases)
  3. Inspect task/artifact metadata for attached receipt
  4. Verify receipt signature and integrity using verifier

Expected Behavior

  • Allowed actions execute normally with attached receipt

  • Denied actions:

    • Blocked in enforce mode
    • Logged but executed in shadow mode
  • Receipts are deterministic and verifiable
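The enforce/shadow branching above amounts to the following; this is a hypothetical sketch of the decision logic, not the extension's actual code:

```python
import logging

logger = logging.getLogger("scopeblind")

def may_proceed(decision: str, mode: str) -> bool:
    """Illustrative handling of the two modes described above."""
    if decision == "Allow":
        return True
    if mode == "enforce":
        return False  # denied actions are blocked
    # shadow mode: denied actions run anyway, but leave an audit trail
    logger.warning("scopeblind shadow: action would have been denied")
    return True

assert may_proceed("Deny", "enforce") is False
assert may_proceed("Deny", "shadow") is True
```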

Actual Behavior

  • Matches expected behavior across all tested scenarios

Evidence (attach at least one)

  • Test output / logs
  • Failing test before + passing after

(See: .local/test-results.json, .local/scopeblind-pytest.txt)


Human Verification (required)

  • Verified scenarios:

    • Allow policy → execution succeeds with receipt
    • Deny policy (enforce) → execution blocked
    • Deny policy (shadow) → execution allowed with warning
    • Receipt verification (valid signature)
    • Receipt tampering detection
  • Edge cases checked:

    • Empty/invalid policies
    • Deterministic serialization consistency
    • Missing signature / corrupted payload
  • What you did NOT verify:

    • Large-scale distributed verification across external systems
    • Performance under extreme throughput

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? Yes
  • Database migration needed? No

If yes, exact upgrade steps:

  1. Add ScopeBlindExtension to configuration
  2. Provide Cedar policy string
  3. Choose mode (shadow recommended initially)

Failure Recovery (if this breaks)

  • How to disable/revert this change quickly:

    • Remove/disable ScopeBlindExtension from config
  • Files/config to restore:

    • Extension registration in application config
  • Known bad symptoms reviewers should watch for:

    • Unexpected action denials
    • Missing receipts in metadata
    • Signature verification failures

Risks and Mitigations

  • Risk: Policy misconfiguration blocks valid workflows

    • Mitigation: Use shadow mode before enforce
  • Risk: Increased latency due to signing/verification

    • Mitigation: Lightweight hashing + Ed25519 (minimal overhead)

Checklist

  • Tests pass (uv run pytest)
  • Pre-commit hooks pass (uv run pre-commit run --all-files)
  • Documentation updated (if needed)
  • Security impact assessed
  • Human verification completed
  • Backward compatibility considered

Summary by CodeRabbit

  • New Features

    • Added ScopeBlind authorization receipts with verifiable signatures and artifact digests.
    • Middleware evaluates policies (enforce/shadow) on protected requests, records decisions, and attaches receipts to completed tasks/artifacts.
    • Agent metadata now exposes ScopeBlind extension and verification info.
  • Documentation

    • Added comprehensive ScopeBlind docs covering configuration, modes, receipts, and verification.
  • Configuration

    • New ScopeBlind settings for policies, mode, keys, and metadata keys.
  • Tests

    • Added unit tests for middleware, extension, verification, and task receipt attachment.

@coderabbitai

coderabbitai bot commented Apr 18, 2026

📝 Walkthrough

Adds a new ScopeBlind extension that evaluates Cedar authorization policies, issues Ed25519-signed authorization receipts, wires middleware to evaluate requests and attach context, extends task/workers to create and attach receipts to artifacts/metadata, and provides verification utilities and tests.

Changes

  • Extension core & public surface — bindu/extensions/scopeblind/__init__.py, bindu/extensions/scopeblind/extension.py: new ScopeBlindExtension, public exports, decision datatypes, key management, policy loading, request evaluation, receipt creation, and agent-extension exposure.
  • Receipts & verification — bindu/extensions/scopeblind/receipt.py, bindu/extensions/scopeblind/verifier.py: deterministic JSON serialization, payload hashing, artifact digest computation, receipt dataclasses, attach/metadata helpers, and verification APIs (verify_receipt, verify_artifact_receipt) with VerificationResult.
  • Middleware & HTTP handling — bindu/server/middleware/scopeblind.py, bindu/server/middleware/__init__.py, bindu/server/endpoints/a2a_protocol.py: ScopeBlindMiddleware added and exported; the middleware evaluates requests, populates request.state.scopeblind_context, enforces/shadows decisions, and sets span attributes; the A2A endpoint can attach scopeblind context to message metadata.
  • Application wiring — bindu/server/applications.py, bindu/penguin/bindufy.py, bindu/penguin/config_validator.py: detects/initializes ScopeBlindExtension from manifest/config, validates scopeblind config, and integrates the extension into bindufy setup and application middleware wiring.
  • Task processing & workers — bindu/server/workers/manifest_worker.py, bindu/server/handlers/message_handlers.py, bindu/server/endpoints/agent_card.py: task send params accept scopeblind_context; workers create and attach receipts to artifacts/metadata at terminal states; the message handler forwards context; agent card serialization handles agent_extension.
  • Settings & types — bindu/settings.py, bindu/common/protocol/types.py: added ScopeBlindSettings to Settings; TaskSendParams gains an optional scopeblind_context field.
  • Utilities & capability detection — bindu/utils/capabilities.py, bindu/utils/__init__.py: added the get_scopeblind_extension_from_capabilities helper and re-exported it.
  • Docs, README & deps — docs/SCOPEBLIND.md, README.md, pyproject.toml: new ScopeBlind documentation; README tagline updated to list verifiable authorization receipts; added the cedar-python==0.1.4 dependency.
  • Tests — tests/unit/extensions/scopeblind/..., tests/unit/server/middleware/test_scopeblind.py, tests/unit/server/workers/test_manifest_worker.py, tests/unit/penguin/test_bindufy.py, tests/unit/server/endpoints/test_agent_card.py, tests/unit/utils/test_capabilities.py: new unit tests for extension setup, middleware enforce/shadow behavior, receipt creation/verification, manifest worker receipt attachment, the bindufy helper, extension serialization, and capability lookup.

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client
  participant Middleware as ScopeBlindMiddleware
  participant Extension as ScopeBlindExtension
  participant Worker as ManifestWorker
  participant Storage as ArtifactStorage
  participant Verifier as Verifier

  rect rgba(135,206,250,0.5)
    Client->>Middleware: POST JSON-RPC request (method, params)
    Middleware->>Extension: evaluate_request(request, method, request_data)
    Extension-->>Middleware: ScopeBlindDecision (Allow/Deny, policy_hash, verification_key)
    Middleware->>Middleware: store decision in request.state.scopeblind_context
    alt decision Deny & mode=enforce
      Middleware-->>Client: JSON-RPC error (403)
    else proceed
      Middleware->>Worker: submit task (includes scopeblind_context)
      Worker->>Extension: create_receipt(authorization_context, artifacts, task_id, ...)
      Extension-->>Worker: ScopeBlindReceipt
      Worker->>Storage: attach receipt to artifacts & update task metadata
    end
  end

  rect rgba(144,238,144,0.5)
    Verifier->>Verifier: verify_receipt(receipt)
    Verifier->>Storage: optionally recompute artifact digest
    Verifier-->>Client: VerificationResult (valid/signature/integrity)
  end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I hopped through Cedar, nibbling rules so neat,

Signed receipts with tiny paws and quickened feet.
Enforce or shadow, I thump with delight,
Artifacts now bear proof, all shiny and bright.
A rabbit’s wink: authorization done just right.

🚥 Pre-merge checks — ✅ 5 passed

  • Title check — Passed: The PR title "Add ScopeBlind authorization receipts extension" clearly and concisely summarizes the main feature addition: a new ScopeBlind extension for authorization receipts.
  • Description check — Passed: The PR description is comprehensive and follows the template structure, with all major sections completed: summary with problem/solution, change type and scope, linked issues, user-visible changes, security impact with mitigations, verification steps, compatibility assessment, failure recovery, risks and mitigations, and checklist items marked as completed.
  • Linked Issues check — Passed: The PR successfully implements all coding requirements from issue #439: ScopeBlindExtension with Cedar policy evaluation, enforce/shadow middleware modes, deterministic SHA-256 hashing with Ed25519 signatures, issuer-blind verification, separate signing keys, receipt attachment to artifacts, OpenTelemetry integration, and comprehensive test coverage with verification helpers.
  • Out of Scope Changes check — Passed: All changes are directly scoped to the ScopeBlind authorization receipts feature as defined in issue #439. Changes touch extensions, middleware, configuration, settings, and utilities while explicitly preserving the existing DID and X402 extensions, task execution model, and storage and scheduler backends as required.
  • Docstring Coverage — Passed: Docstring coverage is 94.94%, which exceeds the required threshold of 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
bindu/server/endpoints/agent_card.py (1)

47-57: ⚠️ Potential issue | 🟡 Minor

Silent fall-through when agent_extension is not a dict.

If an extension exposes an agent_extension attribute whose value is not a dict, the new branch neither returns a serialized value nor falls through to the isinstance(ext, dict) / unknown-type branches (those are elif). The function then returns None implicitly with no warning log, making misconfigurations hard to diagnose. Also note the ordering: an object that both has an agent_extension attr and is a dict will take this branch first, which is fine in practice but worth being explicit about.

Minor Ruff B009 nit on line 48: getattr(ext, "agent_extension") with a constant name can just be ext.agent_extension.

♻️ Proposed fix
-    elif hasattr(ext, "agent_extension"):
-        serialized = getattr(ext, "agent_extension")
-        if isinstance(serialized, dict):
-            return serialized
-    elif isinstance(ext, dict):
+    elif hasattr(ext, "agent_extension"):
+        serialized = ext.agent_extension
+        if isinstance(serialized, dict):
+            return serialized
+        logger.warning(
+            f"Extension {type(ext).__name__} exposes non-dict agent_extension "
+            f"({type(serialized).__name__}), skipping"
+        )
+        return None
+    elif isinstance(ext, dict):
         # Already in correct format
         return ext
     else:
         # Unknown extension type
         logger.warning(f"Unknown extension type: {type(ext)}, skipping")
         return None
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindu/server/endpoints/agent_card.py` around lines 47 - 57, The branch
handling ext.agent_extension silently returns None when that attribute exists
but is not a dict; update the logic around ext and agent_extension to explicitly
handle non-dict values: use ext.agent_extension (not getattr) to retrieve the
attribute, if it's a dict return it, otherwise log a warning via logger.warning
that the agent_extension is present but not a dict (including its actual
type/value) and return None; ensure the dict-vs-attribute checks are ordered so
dict ext still returns immediately and that any case with a non-dict
agent_extension produces the warning instead of falling through silently.
🧹 Nitpick comments (8)
tests/unit/server/endpoints/test_agent_card.py (1)

43-55: LGTM — good coverage for the new agent_extension branch.

Test correctly exercises the new branch in _serialize_extension. The Ruff RUF012 hint about the mutable class attribute is a false positive here — the attribute is only read and asserted for identity equality, never mutated.

Optional: consider also adding a negative test where ext.agent_extension is a non-dict (e.g. None or a string) to lock in the current fall-through behavior (returns None), which is currently untested and — as noted on agent_card.py — a bit subtle.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/server/endpoints/test_agent_card.py` around lines 43 - 55, Add a
negative test exercising the fall-through when an extension exposes a non-dict
agent_extension: create a test (e.g.
test_serialize_extension_agent_extension_non_dict) that defines a MockExtension
with agent_extension set to None and another case with a string, call
_serialize_extension(MockExtension()) for each, and assert the result is None to
lock in current behavior in _serialize_extension.
bindu/server/handlers/message_handlers.py (1)

155-162: LGTM — context extraction and forwarding look correct.

_scopeblind_context is popped from metadata (so it doesn't leak to stored message metadata) and forwarded into scheduler_params["scopeblind_context"], matching the new TaskSendParams field and consumed by ManifestWorker.run_task.

Nit: a blank line between the payment-context and scopeblind-context blocks would aid readability (they're currently visually glued together), and the stale "✅ SAFE payment context handling" comment on line 155 now also covers scopeblind — consider generalizing it to "extension-provided context handling".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindu/server/handlers/message_handlers.py` around lines 155 - 162, Update the
comment and spacing around the context-extraction blocks: replace the stale "✅
SAFE payment context handling" comment with a more general description like
"extension-provided context handling", insert a blank line between the
payment_context pop/block and the scopeblind_context pop/block for readability,
and ensure the extracted keys (payment_context and scopeblind_context) are still
forwarded into scheduler_params before calling self.scheduler.run_task so they
match TaskSendParams and ManifestWorker.run_task.
bindu/settings.py (1)

346-368: Consider an enabled flag and a signing-key passphrase setting.

ScopeBlindSettings follows the X402Settings pattern cleanly and correctly exposes values via app_settings.scopeblind. Two optional improvements worth considering:

  1. Add an enabled: bool = False toggle (like AuthSettings/HydraSettings) so operators can disable ScopeBlind globally without mutating the extension manifest. This also avoids a surprise runtime cost if the extension is auto-loaded.
  2. The Ed25519 signing key is persisted to disk at pki_dir/private_key_filename. If the key is stored unencrypted, add a private_key_passphrase_env: str = "" (or equivalent secret-sourced field) so deployments can encrypt it at rest — matching the "key separation / receipt non-repudiation" goal stated in the PR objectives. This won't hardcode secrets; it just makes the setting available via app_settings.scopeblind.

No functional bug; defaults look reasonable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindu/settings.py` around lines 346 - 368, Add an optional enabled toggle and
a private key passphrase field to ScopeBlindSettings so operators can disable
ScopeBlind without changing the manifest and can supply an encryption passphrase
for the persisted Ed25519 key; specifically add enabled: bool = False (matching
AuthSettings/HydraSettings pattern) and private_key_passphrase_env: str = "" (or
similarly named secret-sourced setting) as attributes on the ScopeBlindSettings
class so they are available via app_settings.scopeblind and populated from the
existing SCOPEBLIND__ env_prefix.
pyproject.toml (1)

36-36: cedar-python==0.1.4 is verified on PyPI with correct specifications.

Distribution exists as cedar-python on PyPI, version 0.1.4 is the latest, requires Python >=3.12, and is Apache-2.0 licensed. Wheels are available for Python 3.12–3.14 across macOS (x86_64, arm64), Linux (x86_64, aarch64), and Windows (x64). No security vulnerabilities detected.

Since cedar-python is exclusively imported within the ScopeBlind extension and ScopeBlind is entirely optional/configurable, consider moving it to an optional dependency group:

scopeblind = ["cedar-python==0.1.4"]

This prevents unnecessary installation costs for users who don't enable ScopeBlind authorization.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@pyproject.toml` at line 36, Move the pinned dependency "cedar-python==0.1.4"
out of the main dependencies and add it to an optional extras group named
"scopeblind" in pyproject.toml; specifically remove the line containing
cedar-python==0.1.4 from the top-level dependencies and add an extras entry like
scopeblind = ["cedar-python==0.1.4"] so ScopeBlind can opt-in the package
without forcing installation for all users.
tests/unit/penguin/test_bindufy.py (1)

185-203: Add a regression test for inline Cedar policies.

The path-based case is covered, but _setup_scopeblind_extension also needs to preserve inline cedar_policies strings so they are not rewritten as caller-relative paths.

Suggested test addition
     def test_setup_scopeblind_extension(self, tmp_path):
         """Test creating ScopeBlind extension from config."""
         policy_dir = tmp_path / "policies"
         policy_dir.mkdir(parents=True, exist_ok=True)
         (policy_dir / "policy.cedar").write_text(
             'permit(principal, action == Action::"message/send", resource);',
             encoding="utf-8",
         )
 
         extension = _setup_scopeblind_extension(
             {
                 "mode": "shadow",
                 "cedar_policies": str(policy_dir),
             },
             caller_dir=tmp_path,
         )
 
         assert extension.mode == "shadow"
         assert extension.cedar_policies == str(policy_dir)
+
+    def test_setup_scopeblind_extension_preserves_inline_policy(self, tmp_path):
+        """Test inline Cedar policies are not treated as filesystem paths."""
+        policy = 'permit(principal, action == Action::"message/send", resource);'
+
+        extension = _setup_scopeblind_extension(
+            {
+                "mode": "shadow",
+                "cedar_policies": policy,
+            },
+            caller_dir=tmp_path,
+        )
+
+        assert extension.mode == "shadow"
+        assert extension.cedar_policies == policy
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/penguin/test_bindufy.py` around lines 185 - 203, The test
currently only verifies path-based cedar_policies are preserved; add a
regression test that passes an inline Cedar policy string to
_setup_scopeblind_extension and assert it is left unchanged (not rewritten to a
caller-relative path). In the tests/unit/penguin/test_bindufy.py file add a test
(e.g., test_setup_scopeblind_extension_inline_policy) that calls
_setup_scopeblind_extension with {"mode":"shadow", "cedar_policies":
'permit(principal, action == Action::"message/send", resource);'} (or similar
inline policy text) and asserts extension.mode == "shadow" and
extension.cedar_policies equals the exact inline string; this will ensure
_setup_scopeblind_extension preserves inline cedar_policies.
bindu/server/middleware/scopeblind.py (1)

47-50: ASGI receive callable won't emit http.disconnect on repeated calls.

receive returns the same http.request frame on every invocation. Starlette's Request.stream() guards against this via _stream_consumed, so the immediate path works, but any downstream middleware that calls receive() to detect a client disconnect (e.g. long-polling / SSE helpers) will hang/loop. Consider yielding a one-shot body then an http.disconnect:

♻️ Suggested one-shot receive
-            async def receive():
-                return {"type": "http.request", "body": body}
+            sent = False
+            async def receive():
+                nonlocal sent
+                if not sent:
+                    sent = True
+                    return {"type": "http.request", "body": body, "more_body": False}
+                return {"type": "http.disconnect"}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindu/server/middleware/scopeblind.py` around lines 47 - 50, The custom ASGI
receive callable defined for StarletteRequest always returns the same {"type":
"http.request", "body": body} frame which can cause downstream code that polls
receive() (e.g., for http.disconnect) to hang; change the receive implementation
(the async def receive used to construct StarletteRequest) to be one-shot: keep
a local boolean/flag (e.g., consumed) and on first call return the http.request
frame with body, and on subsequent calls return {"type": "http.disconnect"} (or
an empty body then http.disconnect) so downstream middleware expecting a
disconnect will not loop indefinitely.
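The proposed one-shot receive can be exercised standalone; this is a sketch of the review's suggestion, independent of Starlette:

```python
import asyncio

def make_one_shot_receive(body: bytes):
    """Build an ASGI receive callable that yields the body once,
    then reports http.disconnect on every later call."""
    sent = False

    async def receive():
        nonlocal sent
        if not sent:
            sent = True
            return {"type": "http.request", "body": body, "more_body": False}
        return {"type": "http.disconnect"}

    return receive

async def demo():
    receive = make_one_shot_receive(b'{"method":"message/send"}')
    first = await receive()
    second = await receive()
    return first["type"], second["type"]

kinds = asyncio.run(demo())
assert kinds == ("http.request", "http.disconnect")
```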
tests/unit/extensions/scopeblind/test_scopeblind_extension.py (1)

122-132: Consider using the production serializer to keep test and production receipt serialization synchronized.

Manually spreading receipt.payload.__dict__ and mapping artifact digests risks drifting from receipt_to_dict (the actual serialization used in attach_receipt_to_artifacts and build_task_receipt_metadata). If ScopeBlindReceipt or ScopeBlindReceiptPayload gains/renames a field, this test will silently continue passing while production breaks or vice versa. Using the same helper exercises the verifier against the same shape actually carried through the middleware/worker path.

Note: receipt_to_dict is not part of the public API, so you'll need to either import from the private module or have it exported from bindu.extensions.scopeblind.__init__.py.

♻️ Suggested refactor
+        from bindu.extensions.scopeblind.receipt import receipt_to_dict
-        receipt_dict = {
-            "payload": {
-                **receipt.payload.__dict__,
-                "artifacts": [digest.__dict__ for digest in receipt.payload.artifacts],
-            },
-            "payload_hash": receipt.payload_hash,
-            "verification_key": receipt.verification_key,
-            "signature": receipt.signature,
-            "algorithm": receipt.algorithm,
-            "issuer": receipt.issuer,
-        }
+        receipt_dict = receipt_to_dict(receipt)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/extensions/scopeblind/test_scopeblind_extension.py` around lines
122 - 132, Replace the manual construction of the receipt dictionary with the
production serializer: import and call receipt_to_dict(...) (from
bindu.extensions.scopeblind or the private module where it's implemented)
instead of manually spreading receipt.payload.__dict__ and mapping
receipt.payload.artifacts; this ensures the test uses the same shape as
attach_receipt_to_artifacts and build_task_receipt_metadata and stays in sync
with ScopeBlindReceipt/ScopeBlindReceiptPayload field changes.
bindu/extensions/scopeblind/extension.py (1)

123-138: policy_source can raise on long inline policies and inconsistently re-uses the unstripped value.

Two small issues:

  • raw_value = self.cedar_policies.strip() is used to construct Path(os.path.expanduser(raw_value)), but a multi-line inline Cedar policy (permit(...);\nforbid(...);) becomes a Path whose .is_file() / .is_dir() calls may raise OSError(ENAMETOOLONG) on some filesystems (e.g. Linux path limit ≈ 4096 bytes, component limit 255), instead of just returning False.
  • When falling through to the inline branch, the function returns self.cedar_policies (unstripped) even though the file/dir branches operated on the stripped/expanded value. This inconsistency will leak leading/trailing whitespace into the policy_hash and PolicySet parsing.
♻️ Proposed tweak
     `@cached_property`
     def policy_source(self) -> str:
         """Load Cedar policy text from a string, file, or directory."""
         raw_value = self.cedar_policies.strip()
-        expanded = Path(os.path.expanduser(raw_value))
-        if expanded.is_file():
-            return expanded.read_text(encoding="utf-8")
-        if expanded.is_dir():
-            policy_parts = [
-                path.read_text(encoding="utf-8")
-                for path in sorted(expanded.glob("*.cedar"))
-            ]
-            if not policy_parts:
-                raise ValueError(f"No Cedar policy files found in {expanded}")
-            return "\n".join(policy_parts)
-        return self.cedar_policies
+        # Heuristic: only treat as filesystem path when it looks like one and fits path limits.
+        looks_like_path = (
+            "\n" not in raw_value
+            and len(raw_value) < 4096
+            and ("/" in raw_value or raw_value.endswith(".cedar") or raw_value.startswith("~"))
+        )
+        if looks_like_path:
+            try:
+                expanded = Path(os.path.expanduser(raw_value))
+                if expanded.is_file():
+                    return expanded.read_text(encoding="utf-8")
+                if expanded.is_dir():
+                    policy_parts = [
+                        path.read_text(encoding="utf-8")
+                        for path in sorted(expanded.glob("*.cedar"))
+                    ]
+                    if not policy_parts:
+                        raise ValueError(f"No Cedar policy files found in {expanded}")
+                    return "\n".join(policy_parts)
+            except OSError:
+                pass
+        return raw_value
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindu/extensions/scopeblind/extension.py` around lines 123 - 138,
policy_source currently always treats the stripped cedar_policies string as a
filesystem path which can raise OSError on very long inline policies and then
returns the unstripped value on the inline branch; update policy_source to first
compute stripped = self.cedar_policies.strip(), and only attempt Path
expansion/IO when stripped does not contain newlines and its length is
reasonable (or wrap Path/os calls in a try/except OSError) to avoid
ENAMETOOLONG; when falling back to inline return stripped (not
self.cedar_policies) so the same normalized value is used for hashing/parsing,
and ensure any Path-related failures are caught and treated as “not a file/dir”
rather than bubbling the OSError.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bindu/extensions/scopeblind/extension.py`:
- Around line 170-178: generate_and_save_key_pair currently creates _key_dir and
writes the private key without setting strict permissions; update it so after
creating the directory (self._key_dir.mkdir(...)) you enforce owner-only
permissions (e.g. os.chmod(self._key_dir, 0o700)), and after writing the key
bytes from _generate_key_pair_data() call os.chmod(self.private_key_path, 0o600)
to ensure the private key is owner-readable/writable only; also set a reasonable
permission for the public key (e.g. os.chmod(self.public_key_path, 0o644)) so
the public key remains readable while the private key is protected.
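The permission hardening described above can be sketched with the standard library; the directory and filenames here are placeholders (the real code uses pki_dir/private_key_filename):

```python
import os
import stat
import tempfile

# Illustrative key directory; paths are placeholders.
key_dir = os.path.join(tempfile.mkdtemp(), "scopeblind")
os.makedirs(key_dir, exist_ok=True)
os.chmod(key_dir, 0o700)             # owner-only directory

private_key_path = os.path.join(key_dir, "scopeblind_ed25519.key")
with open(private_key_path, "wb") as fh:
    fh.write(b"<private key bytes>")  # placeholder content
os.chmod(private_key_path, 0o600)    # private key: owner read/write only

public_key_path = os.path.join(key_dir, "scopeblind_ed25519.pub")
with open(public_key_path, "wb") as fh:
    fh.write(b"<public key bytes>")
os.chmod(public_key_path, 0o644)     # public key stays world-readable

mode = stat.S_IMODE(os.stat(private_key_path).st_mode)
```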

In `@bindu/extensions/scopeblind/receipt.py`:
- Around line 116-128: attach_receipt_to_artifacts currently appends the same
receipt_dict instance to every artifact and silently skips artifacts whose
metadata or receipts shape is unexpected; change it so each artifact gets its
own copy of the receipt (e.g., dict(receipt_dict) or a shallow copy) before
appending, and add explicit logging warnings when metadata is not a dict or when
the existing app_settings.scopeblind.meta_receipts_key value exists but is not a
list (in that case log and replace it with a new list containing the copied
receipt so the artifact is not left without a receipt). Use the existing symbols
attach_receipt_to_artifacts, receipt_to_dict,
app_settings.scopeblind.meta_receipts_key, Artifact and ScopeBlindReceipt to
locate where to make the change and keep behavior in-place for artifacts.
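The per-artifact copy the review asks for can be sketched as follows; the Artifact/metadata shapes here are simplified assumptions, and the real metadata key comes from app_settings.scopeblind.meta_receipts_key:

```python
receipt_dict = {"payload_hash": "abc123", "signature": "deadbeef"}
artifacts = [{"metadata": {}}, {"metadata": {}}]

for artifact in artifacts:
    metadata = artifact.setdefault("metadata", {})
    receipts = metadata.setdefault("receipts", [])  # key name is illustrative
    receipts.append(dict(receipt_dict))  # shallow copy: one instance per artifact

# Mutating one artifact's receipt no longer affects the others.
artifacts[0]["metadata"]["receipts"][0]["signature"] = "tampered"
assert artifacts[1]["metadata"]["receipts"][0]["signature"] == "deadbeef"
```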

In `@bindu/server/endpoints/a2a_protocol.py`:
- Around line 89-103: In _attach_scopeblind_context, scrub any client-supplied
reserved key before attaching the middleware-produced context: locate the
metadata on a2a_request["params"]["message"] (msg_obj), remove any existing
"_scopeblind_context" (pop it) from msg_obj["metadata"], then, if
request.state.scopeblind_context is present, set
msg_obj["metadata"]["_scopeblind_context"] = request.state.scopeblind_context so
only the middleware value is forwarded; keep the early returns for non-target
methods and missing params intact.

In `@bindu/server/middleware/scopeblind.py`:
- Around line 40-56: The middleware currently swallows all exceptions when
parsing the body (await request.body() / json.loads) and forwards the request
even in enforce mode; update ScopeBlind so parsing errors are handled by denying
the request when mode == "enforce" (return a 4xx/403 response) and only allow
fail-open in non-enforce modes; narrow the except clause to (UnicodeDecodeError,
json.JSONDecodeError) to avoid hiding unexpected errors; when rebuilding the
request use the existing Request class (remove the redundant inline
StarletteRequest import) and ensure the ASGI receive() returns
{"type":"http.request","body": body, "more_body": False} so downstream reads
correctly; keep the logger.warning but include the error string.

---

Outside diff comments:
In `@bindu/server/endpoints/agent_card.py`:
- Around line 47-57: The branch handling ext.agent_extension silently returns
None when that attribute exists but is not a dict; update the logic around ext
and agent_extension to explicitly handle non-dict values: use
ext.agent_extension (not getattr) to retrieve the attribute, if it's a dict
return it, otherwise log a warning via logger.warning that the agent_extension
is present but not a dict (including its actual type/value) and return None;
ensure the dict-vs-attribute checks are ordered so dict ext still returns
immediately and that any case with a non-dict agent_extension produces the
warning instead of falling through silently.

---

Nitpick comments:
In `@bindu/extensions/scopeblind/extension.py`:
- Around line 123-138: policy_source currently always treats the stripped
cedar_policies string as a filesystem path which can raise OSError on very long
inline policies and then returns the unstripped value on the inline branch;
update policy_source to first compute stripped = self.cedar_policies.strip(),
and only attempt Path expansion/IO when stripped does not contain newlines and
its length is reasonable (or wrap Path/os calls in a try/except OSError) to
avoid ENAMETOOLONG; when falling back to inline return stripped (not
self.cedar_policies) so the same normalized value is used for hashing/parsing,
and ensure any Path-related failures are caught and treated as “not a file/dir”
rather than bubbling the OSError.

In `@bindu/server/handlers/message_handlers.py`:
- Around line 155-162: Update the comment and spacing around the
context-extraction blocks: replace the stale "✅ SAFE payment context handling"
comment with a more general description like "extension-provided context
handling", insert a blank line between the payment_context pop/block and the
scopeblind_context pop/block for readability, and ensure the extracted keys
(payment_context and scopeblind_context) are still forwarded into
scheduler_params before calling self.scheduler.run_task so they match
TaskSendParams and ManifestWorker.run_task.

In `@bindu/server/middleware/scopeblind.py`:
- Around line 47-50: The custom ASGI receive callable defined for
StarletteRequest always returns the same {"type": "http.request", "body": body}
frame which can cause downstream code that polls receive() (e.g., for
http.disconnect) to hang; change the receive implementation (the async def
receive used to construct StarletteRequest) to be one-shot: keep a local
boolean/flag (e.g., consumed) and on first call return the http.request frame
with body, and on subsequent calls return {"type": "http.disconnect"} (or an
empty body then http.disconnect) so downstream middleware expecting a disconnect
will not loop indefinitely.
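The one-shot pattern can be sketched as a plain closure; only the ASGI message shapes are assumed from the spec, nothing from Bindu's middleware:

```python
import asyncio

def make_one_shot_receive(body: bytes):
    """ASGI receive that yields the buffered body once, then disconnects."""
    consumed = False

    async def receive() -> dict:
        nonlocal consumed
        if not consumed:
            consumed = True
            return {"type": "http.request", "body": body, "more_body": False}
        # Downstream code polling receive() for http.disconnect terminates
        # here instead of looping on repeated http.request frames.
        return {"type": "http.disconnect"}

    return receive

async def demo() -> tuple[dict, dict]:
    receive = make_one_shot_receive(b'{"method": "message/send"}')
    return await receive(), await receive()

first, second = asyncio.run(demo())
```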

In `@bindu/settings.py`:
- Around line 346-368: Add an optional enabled toggle and a private key
passphrase field to ScopeBlindSettings so operators can disable ScopeBlind
without changing the manifest and can supply an encryption passphrase for the
persisted Ed25519 key; specifically add enabled: bool = False (matching
AuthSettings/HydraSettings pattern) and private_key_passphrase_env: str = "" (or
similarly named secret-sourced setting) as attributes on the ScopeBlindSettings
class so they are available via app_settings.scopeblind and populated from the
existing SCOPEBLIND__ env_prefix.

In `@pyproject.toml`:
- Line 36: Move the pinned dependency "cedar-python==0.1.4" out of the main
dependencies and add it to an optional extras group named "scopeblind" in
pyproject.toml; specifically remove the line containing cedar-python==0.1.4 from
the top-level dependencies and add an extras entry like scopeblind =
["cedar-python==0.1.4"] so ScopeBlind can opt-in the package without forcing
installation for all users.

In `@tests/unit/extensions/scopeblind/test_scopeblind_extension.py`:
- Around line 122-132: Replace the manual construction of the receipt dictionary
with the production serializer: import and call receipt_to_dict(...) (from
bindu.extensions.scopeblind or the private module where it's implemented)
instead of manually spreading receipt.payload.__dict__ and mapping
receipt.payload.artifacts; this ensures the test uses the same shape as
attach_receipt_to_artifacts and build_task_receipt_metadata and stays in sync
with ScopeBlindReceipt/ScopeBlindReceiptPayload field changes.

In `@tests/unit/penguin/test_bindufy.py`:
- Around line 185-203: The test currently only verifies path-based
cedar_policies are preserved; add a regression test that passes an inline Cedar
policy string to _setup_scopeblind_extension and assert it is left unchanged
(not rewritten to a caller-relative path). In the
tests/unit/penguin/test_bindufy.py file add a test (e.g.,
test_setup_scopeblind_extension_inline_policy) that calls
_setup_scopeblind_extension with {"mode":"shadow", "cedar_policies":
'permit(principal, action == Action::"message/send", resource);'} (or similar
inline policy text) and asserts extension.mode == "shadow" and
extension.cedar_policies equals the exact inline string; this will ensure
_setup_scopeblind_extension preserves inline cedar_policies.

In `@tests/unit/server/endpoints/test_agent_card.py`:
- Around line 43-55: Add a negative test exercising the fall-through when an
extension exposes a non-dict agent_extension: create a test (e.g.
test_serialize_extension_agent_extension_non_dict) that defines a MockExtension
with agent_extension set to None and another case with a string, call
_serialize_extension(MockExtension()) for each, and assert the result is None to
lock in current behavior in _serialize_extension.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cb9c8f32-57eb-425c-bf28-d759f01e09ad

📥 Commits

Reviewing files that changed from the base of the PR and between 5cf0a20 and 6a55bb9.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (26)
  • README.md
  • bindu/common/protocol/types.py
  • bindu/extensions/scopeblind/__init__.py
  • bindu/extensions/scopeblind/extension.py
  • bindu/extensions/scopeblind/receipt.py
  • bindu/extensions/scopeblind/verifier.py
  • bindu/penguin/bindufy.py
  • bindu/penguin/config_validator.py
  • bindu/server/applications.py
  • bindu/server/endpoints/a2a_protocol.py
  • bindu/server/endpoints/agent_card.py
  • bindu/server/handlers/message_handlers.py
  • bindu/server/middleware/__init__.py
  • bindu/server/middleware/scopeblind.py
  • bindu/server/workers/manifest_worker.py
  • bindu/settings.py
  • bindu/utils/__init__.py
  • bindu/utils/capabilities.py
  • docs/SCOPEBLIND.md
  • pyproject.toml
  • tests/unit/extensions/scopeblind/test_scopeblind_extension.py
  • tests/unit/penguin/test_bindufy.py
  • tests/unit/server/endpoints/test_agent_card.py
  • tests/unit/server/middleware/test_scopeblind.py
  • tests/unit/server/workers/test_manifest_worker.py
  • tests/unit/utils/test_capabilities.py

Comment thread bindu/extensions/scopeblind/receipt.py
Comment on lines +116 to +128
def attach_receipt_to_artifacts(
    artifacts: list[Artifact],
    receipt: ScopeBlindReceipt,
) -> list[Artifact]:
    """Attach the receipt to every artifact's metadata in-place."""
    receipt_dict = receipt_to_dict(receipt)
    for artifact in artifacts:
        metadata = artifact.setdefault("metadata", {})
        if isinstance(metadata, dict):
            receipts = metadata.setdefault(app_settings.scopeblind.meta_receipts_key, [])
            if isinstance(receipts, list):
                receipts.append(receipt_dict)
    return artifacts

⚠️ Potential issue | 🟡 Minor

attach_receipt_to_artifacts aliases the same receipt_dict across every artifact and silently drops receipts when metadata shape is unexpected.

  • The same receipt_dict object is appended to every artifact's scopeblind.receipts list (line 127). Any downstream code that mutates one artifact's attached receipt (e.g. attaches extra fields, sorts keys) will unexpectedly mutate it for all other artifacts as well.
  • The isinstance(metadata, dict) / isinstance(receipts, list) guards silently skip attachment if the shapes don't match, so a pre-existing non-list scopeblind.receipts value causes receipts to be dropped without any warning — the artifact then looks valid but carries no receipt, which is exactly the tampering signal verifiers rely on.
♻️ Proposed fix: copy the dict per artifact and log on unexpected shapes
 def attach_receipt_to_artifacts(
     artifacts: list[Artifact],
     receipt: ScopeBlindReceipt,
 ) -> list[Artifact]:
     """Attach the receipt to every artifact's metadata in-place."""
     receipt_dict = receipt_to_dict(receipt)
     for artifact in artifacts:
         metadata = artifact.setdefault("metadata", {})
-        if isinstance(metadata, dict):
-            receipts = metadata.setdefault(app_settings.scopeblind.meta_receipts_key, [])
-            if isinstance(receipts, list):
-                receipts.append(receipt_dict)
+        if not isinstance(metadata, dict):
+            raise TypeError(
+                f"Artifact metadata must be a dict, got {type(metadata).__name__}"
+            )
+        receipts = metadata.setdefault(app_settings.scopeblind.meta_receipts_key, [])
+        if not isinstance(receipts, list):
+            raise TypeError(
+                f"Artifact {app_settings.scopeblind.meta_receipts_key} must be a list, "
+                f"got {type(receipts).__name__}"
+            )
+        receipts.append(dict(receipt_dict))  # per-artifact copy to avoid aliasing
     return artifacts

Comment thread bindu/server/endpoints/a2a_protocol.py Outdated
Comment on lines +89 to +103
def _attach_scopeblind_context(request: Request, a2a_request: Any, method: str) -> None:
    """Attach ScopeBlind authorization context to message metadata if available."""
    if method not in ("message/send", "message/stream"):
        return

    scopeblind_context = getattr(request.state, "scopeblind_context", None)
    if scopeblind_context is None:
        return

    if "params" not in a2a_request or "message" not in a2a_request["params"]:
        return

    msg_obj = a2a_request["params"]["message"]
    msg_obj.setdefault("metadata", {})
    msg_obj["metadata"]["_scopeblind_context"] = scopeblind_context

⚠️ Potential issue | 🟠 Major

Scrub client-supplied _scopeblind_context before forwarding.

metadata is client-controlled. If request.state.scopeblind_context is absent, an incoming _scopeblind_context remains in the request and can be consumed downstream as internal authorization context. Pop the reserved key first, then attach only the middleware-produced value.

Proposed fix
 def _attach_scopeblind_context(request: Request, a2a_request: Any, method: str) -> None:
     """Attach ScopeBlind authorization context to message metadata if available."""
     if method not in ("message/send", "message/stream"):
         return
 
-    scopeblind_context = getattr(request.state, "scopeblind_context", None)
-    if scopeblind_context is None:
-        return
-
     if "params" not in a2a_request or "message" not in a2a_request["params"]:
         return
 
     msg_obj = a2a_request["params"]["message"]
-    msg_obj.setdefault("metadata", {})
-    msg_obj["metadata"]["_scopeblind_context"] = scopeblind_context
+    metadata = msg_obj.setdefault("metadata", {})
+    if not isinstance(metadata, dict):
+        return
+
+    metadata.pop("_scopeblind_context", None)
+
+    scopeblind_context = getattr(request.state, "scopeblind_context", None)
+    if scopeblind_context is None:
+        return
+
+    metadata["_scopeblind_context"] = scopeblind_context

Comment on lines +40 to +56
try:
    body = await request.body()
    request_data = json.loads(body.decode("utf-8"))
    method = request_data.get("method", "")

    from starlette.requests import Request as StarletteRequest

    async def receive():
        return {"type": "http.request", "body": body}

    request = StarletteRequest(request.scope, receive)
except Exception as error:
    logger.warning(
        "ScopeBlind middleware could not parse request body",
        error=str(error),
    )
    return await call_next(request)

⚠️ Potential issue | 🟠 Major

Fail-open on body parse failure bypasses enforce-mode authorization.

If await request.body() or json.loads(...) raises, the middleware logs a warning and forwards the request downstream without any Cedar evaluation — even when mode="enforce". Downstream JSON-RPC parsing will typically reject a malformed body as well, so real-world exposure is limited, but in enforce mode the safer default is to deny rather than pass through on an unknown parse error.

Also: except Exception is intentionally broad here (Ruff BLE001), which is acceptable for a middleware boundary, but consider narrowing to (UnicodeDecodeError, json.JSONDecodeError) so unexpected errors (e.g. a misbehaving receive in a test/harness) aren't swallowed silently.

🛡️ Suggested tighter handling
         try:
             body = await request.body()
             request_data = json.loads(body.decode("utf-8"))
             method = request_data.get("method", "")

-            from starlette.requests import Request as StarletteRequest
-
             async def receive():
-                return {"type": "http.request", "body": body}
+                return {"type": "http.request", "body": body, "more_body": False}

-            request = StarletteRequest(request.scope, receive)
-        except Exception as error:
+            request = Request(request.scope, receive)
+        except (UnicodeDecodeError, json.JSONDecodeError, AttributeError) as error:
             logger.warning(
                 "ScopeBlind middleware could not parse request body",
                 error=str(error),
             )
-            return await call_next(request)
+            if self.scopeblind_ext.mode == "enforce":
+                code, message = extract_error_fields(InsufficientPermissionsError)
+                return jsonrpc_error(
+                    code,
+                    message,
+                    "ScopeBlind could not evaluate the request body.",
+                    request_id=None,
+                    status=400,
+                )
+            return await call_next(request)

This also addresses the redundant inline import at line 45 (already imported at line 9 as Request) and the missing more_body: False on the ASGI receive dict.

🧰 Tools
🪛 Ruff (0.15.10)

[warning] 51-51: Do not catch blind exception: Exception

(BLE001)


vipul674 and others added 2 commits April 19, 2026 00:36
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bindu/penguin/bindufy.py`:
- Around line 161-171: The code currently treats any missing file path like
"policies/authz.cedar" as inline Cedar by falling back to the raw string; change
this to fail-fast for path-like inputs: detect path-like raw_cedar_policies
(e.g., contains os.path.sep, startswith "./" or "../", or endswith a Cedar
extension) and if resolved_policy_path.exists() is False then raise a clear
exception (or log and exit) instead of assigning cedar_policies =
raw_cedar_policies; update the logic around raw_cedar_policies / policy_path /
resolved_policy_path / cedar_policies (and use caller_dir) so only true inline
policy text is accepted and missing files produce an immediate error.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0d22e53a-0e6a-476b-a9a2-7c1df2081e69

📥 Commits

Reviewing files that changed from the base of the PR and between 6a55bb9 and 53f82a9.

📒 Files selected for processing (1)
  • bindu/penguin/bindufy.py

Comment thread bindu/penguin/bindufy.py
Comment on lines +161 to +171
raw_cedar_policies = scopeblind_config["cedar_policies"].strip()
policy_path = Path(os.path.expanduser(raw_cedar_policies))
resolved_policy_path = policy_path
if not resolved_policy_path.is_absolute() and caller_dir is not None:
    resolved_policy_path = (caller_dir / resolved_policy_path).resolve()

cedar_policies = (
    str(resolved_policy_path)
    if resolved_policy_path.exists()
    else raw_cedar_policies
)

⚠️ Potential issue | 🟠 Major

Fail fast for missing path-like Cedar policy sources.

This correctly preserves inline Cedar, but a typo like policies/authz.cedar now falls back to the literal string instead of failing. Since the extension constructor only validates non-empty policy text, path mistakes can survive startup and fail during authorization.

Suggested guard for path-like inputs
     raw_cedar_policies = scopeblind_config["cedar_policies"].strip()
     policy_path = Path(os.path.expanduser(raw_cedar_policies))
     resolved_policy_path = policy_path
     if not resolved_policy_path.is_absolute() and caller_dir is not None:
         resolved_policy_path = (caller_dir / resolved_policy_path).resolve()
 
+    looks_like_policy_path = (
+        policy_path.suffix == ".cedar"
+        or "/" in raw_cedar_policies
+        or "\\" in raw_cedar_policies
+    )
+    if looks_like_policy_path and not resolved_policy_path.exists():
+        raise FileNotFoundError(
+            f"ScopeBlind Cedar policy path does not exist: {resolved_policy_path}"
+        )
+
     cedar_policies = (
         str(resolved_policy_path)
         if resolved_policy_path.exists()
         else raw_cedar_policies
     )


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
bindu/extensions/scopeblind/extension.py (1)

355-356: Signing the hex digest string rather than raw bytes.

sha256_digest(...) returns a hex string; self.private_key.sign(payload_hash.encode("utf-8")) therefore signs 64 ASCII bytes instead of the 32-byte raw digest. This is cryptographically fine as long as the verifier reconstructs the hex string identically, but it's non-standard (Ed25519 already hashes internally) and couples verifiers to the exact textual encoding. If you ever move payload_hash to bytes/base64 in the wire format, signatures will silently stop verifying.

Consider signing the deterministic JSON bytes directly (Ed25519 accepts arbitrary-length messages) and keeping payload_hash purely as a content identifier, or at minimum add a comment pinning the "sign the lowercase-hex digest" contract so verifier implementers don't diverge.
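The encoding coupling is easy to see with stdlib hashlib; the payload below is a stand-in, not the receipt's real canonical JSON:

```python
import hashlib

payload = b'{"action":"message/send","decision":"allow"}'

hex_digest = hashlib.sha256(payload).hexdigest()  # 64-char lowercase hex string
raw_digest = hashlib.sha256(payload).digest()     # 32 raw bytes

# Signing hex_digest.encode() signs 64 ASCII bytes; a verifier must rebuild
# the identical lowercase-hex text or signature checks fail. Signing
# raw_digest (or the payload bytes themselves) removes that textual coupling.
signed_message_hex = hex_digest.encode("utf-8")
signed_message_raw = raw_digest
```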

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bindu/extensions/scopeblind/extension.py`:
- Around line 219-222: The issuer property currently returns a value derived
only from policy_hash (issuer in method issuer), so different signing keys with
the same policy produce identical issuers; change issuer to incorporate the
signer's public key (e.g., verification_key or the exported public_key_base58)
by hashing policy_hash concatenated with the public key and taking the first 16
hex chars (e.g., sha256(policy_hash + public_key_base58)[:16]) to produce a
stable identifier per (policy, key) pair; update the cached_property issuer
implementation to reference the class field that holds the verification/public
key (e.g., verification_key) and compute the combined hash accordingly.
- Around line 273-284: The _build_context function assumes Starlette Request
attributes and can raise AttributeError when request lacks .client or .url;
change the reads to safe getattr/getattr-like checks: replace direct access to
request.client.host and request.url.path with guarded retrieval using
getattr(request, "client", None) and getattr(request, "url", None) and then fall
back to defaults like "unknown" or "" (e.g., client = getattr(request, "client",
None); client_ip = client.host if client and getattr(client, "host", None) else
"unknown"; path = getattr(getattr(request, "url", None), "path", "")). Keep
other fields (user, authenticated, request_data.get("id")) unchanged and ensure
this logic lives inside _build_context to avoid AttributeError before
evaluate_request's try/except.

---

Nitpick comments:
In `@bindu/extensions/scopeblind/extension.py`:
- Around line 355-356: Currently the code signs the hex string returned by
sha256_digest by calling self.private_key.sign(payload_hash.encode("utf-8")),
which signs 64 ASCII hex bytes instead of the 32 raw digest and couples
verifiers to that textual encoding; change this to sign deterministic message
bytes (preferably the canonical JSON bytes of the payload) or sign the raw
32-byte digest (decode the hex to bytes before calling self.private_key.sign),
and update the code around sha256_digest, payload_hash and self.private_key.sign
to reflect this; if you must keep the hex-string contract, add a clear comment
next to sha256_digest/payload_hash and the sign call explicitly stating “we sign
the lowercase hex-encoded SHA-256 string” so verifier implementers are pinned to
the exact encoding.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f9edcd76-8078-4fd3-8b57-6e753c8f200b

📥 Commits

Reviewing files that changed from the base of the PR and between 53f82a9 and bf81ea7.

📒 Files selected for processing (1)
  • bindu/extensions/scopeblind/extension.py

Comment on lines +123 to +138
@cached_property
def policy_source(self) -> str:
    """Load Cedar policy text from a string, file, or directory."""
    raw_value = self.cedar_policies.strip()
    expanded = Path(os.path.expanduser(raw_value))
    if expanded.is_file():
        return expanded.read_text(encoding="utf-8")
    if expanded.is_dir():
        policy_parts = [
            path.read_text(encoding="utf-8")
            for path in sorted(expanded.glob("*.cedar"))
        ]
        if not policy_parts:
            raise ValueError(f"No Cedar policy files found in {expanded}")
        return "\n".join(policy_parts)
    return self.cedar_policies

⚠️ Potential issue | 🟡 Minor

Silent fallback to inline policy text masks path typos.

When cedar_policies looks like a path but expanded is neither a file nor a directory (e.g. typo, missing mount, wrong working dir), the method silently returns the original string and PolicySet(...) will later try to parse the path as Cedar source and fail with a confusing parse error. Since misconfigured policies in enforce mode will block traffic, it's worth being explicit:

  • If the string contains path separators or ends in .cedar, treat a non-existent target as an error rather than inline source.
  • Or log (at info) which branch was taken so operators can diagnose misconfiguration.
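One possible shape for that guard, with the heuristic chosen so inline Cedar containing slashes (e.g. Action::"message/send") is not misclassified as a path; function names are illustrative:

```python
from pathlib import Path

def looks_like_policy_path(raw: str) -> bool:
    # A bare '"/" in raw' check would misfire on inline policies whose
    # action names contain slashes, e.g. Action::"message/send".
    return raw.endswith(".cedar") or raw.startswith(("./", "../", "/", "~"))

def resolve_policies(raw: str, base: Path) -> str:
    """Existing path -> resolved path string; path-like but missing ->
    explicit error; anything else -> inline Cedar source text."""
    candidate = Path(raw).expanduser()
    if not candidate.is_absolute():
        candidate = (base / candidate).resolve()
    if candidate.exists():
        return str(candidate)
    if looks_like_policy_path(raw):
        raise FileNotFoundError(f"Cedar policy path does not exist: {candidate}")
    return raw  # genuine inline policy text

inline = 'permit(principal, action == Action::"message/send", resource);'
```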

Comment on lines +219 to +222
@cached_property
def issuer(self) -> str:
    """Stable issuer identifier for receipt metadata."""
    return f"scopeblind:{self.policy_hash[:16]}"

⚠️ Potential issue | 🟡 Minor

Issuer identifier isn't bound to the signing key.

issuer = f"scopeblind:{policy_hash[:16]}" derives solely from the policy set. Two deployments with identical policies but different Ed25519 keys will publish the same issuer string, and rotating the signing key leaves the issuer unchanged. For "issuer-blind" verification the verification_key field is what ultimately matters, but downstream verifiers/indexers that key off issuer (enterprise verification, OTel spans) will conflate distinct signers.

Consider mixing the public key into the issuer, e.g. f"scopeblind:{sha256(policy_hash + public_key_base58)[:16]}", so the identifier is stable per (policy, key) pair.
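The suggested derivation can be sketched with stdlib hashing; public_key_base58 stands in for whatever exported key representation the extension already holds:

```python
import hashlib

def derive_issuer(policy_hash: str, public_key_base58: str) -> str:
    """Issuer bound to the (policy, signing key) pair, not the policy alone."""
    combined = (policy_hash + public_key_base58).encode("utf-8")
    return f"scopeblind:{hashlib.sha256(combined).hexdigest()[:16]}"

policy_hash = hashlib.sha256(b'permit(principal, action, resource);').hexdigest()
issuer_a = derive_issuer(policy_hash, "keyA-base58")
issuer_b = derive_issuer(policy_hash, "keyB-base58")  # same policy, new key
```

Key rotation now changes the issuer string, so downstream indexers that key off issuer no longer conflate distinct signers.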


Comment on lines +273 to +284
def _build_context(self, request: Any, method: str, request_data: dict[str, Any]) -> dict[str, Any]:
    """Build the JSON context passed to Cedar and receipts."""
    user_info = getattr(request.state, "user", None)
    return {
        "http_method": request.method,
        "jsonrpc_method": method,
        "path": request.url.path,
        "client_ip": request.client.host if request.client else "unknown",
        "authenticated": bool(getattr(request.state, "authenticated", False)),
        "token_scopes": (user_info.get("scope", []) if isinstance(user_info, dict) else []),
        "request_id": request_data.get("id"),
    }

⚠️ Potential issue | 🟡 Minor

request.client / request.url.path assume Starlette and a live connection.

request.client.host and request.url.path work for Starlette/FastAPI requests but will raise AttributeError if this is ever invoked against a plain ASGI scope or a synthetic request object in tests. The request.client if request.client else "unknown" guard covers only the None-client case, not the missing-attribute case; the same applies to request.url. evaluate_request already wraps Cedar evaluation in try/except, but _build_context runs before the try, so a missing attribute here bubbles up and crashes the middleware entirely.

Consider getattr-based access consistent with how user and authenticated are already read on lines 275/281.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bindu/extensions/scopeblind/extension.py` around lines 273 - 284, The
_build_context function assumes Starlette Request attributes and can raise
AttributeError when request lacks .client or .url; change the reads to safe
getattr/getattr-like checks: replace direct access to request.client.host and
request.url.path with guarded retrieval using getattr(request, "client", None)
and getattr(request, "url", None) and then fall back to defaults like "unknown"
or "" (e.g., client = getattr(request, "client", None); client_ip = client.host
if client and getattr(client, "host", None) else "unknown"; path =
getattr(getattr(request, "url", None), "path", "")). Keep other fields (user,
authenticated, request_data.get("id")) unchanged and ensure this logic lives
inside _build_context to avoid AttributeError before evaluate_request's
try/except.

tomjwxf pushed a commit to ScopeBlind/bindu-scopeblind that referenced this pull request Apr 20, 2026
Standalone Bindu extension emitting Ed25519-signed authorization
receipts in the Veritas Acta format
(draft-farley-acta-signed-receipts-02).

Responds to three concerns raised on GetBindu/Bindu#459 review:

1. Embedded-key rejection. sign_receipt()/verify_receipt() refuse
   payloads containing verification_key / issuer_key /
   signer_public_key. require_conformance_check=True runs a negative-
   conformance vector at init.

2. Policy content anchoring. Cedar policies live in a policy directory;
   policy_digest is the sha256 of the concatenated sources. Inline policy
   strings are not supported, which avoids silently treating a mistyped
   path as a literal policy string.

3. VOPRF scope clarification. This extension emits Ed25519 receipts
   (tier T1) only. VOPRF issuer-blind tokens (tier T4) are a separate
   ScopeBlind product and are not in scope for this extension.
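Points 1 and 2 above might be sketched as follows. The names check_no_embedded_keys, policy_digest, and the *.cedar glob are assumptions for illustration, not the actual bindu_scopeblind API:

```python
import hashlib
from pathlib import Path

# Fields named in point 1 that must never appear inside a receipt payload.
FORBIDDEN_KEY_FIELDS = {"verification_key", "issuer_key", "signer_public_key"}


def check_no_embedded_keys(payload: dict) -> None:
    """Reject receipt payloads that embed the verification key (point 1)."""
    present = FORBIDDEN_KEY_FIELDS & payload.keys()
    if present:
        raise ValueError(f"embedded key fields not allowed: {sorted(present)}")


def policy_digest(policy_dir: str) -> str:
    """sha256 over concatenated Cedar policy sources, sorted for determinism (point 2)."""
    sources = sorted(Path(policy_dir).glob("*.cedar"))
    blob = b"".join(p.read_bytes() for p in sources)
    return hashlib.sha256(blob).hexdigest()
```

Because the digest is anchored to file contents rather than an inline string, a mistyped directory path fails loudly instead of silently hashing the path itself as if it were policy text.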

Plus the maintainer-requested package boundary: ships as a separate
installable, not inside bindu/extensions/ core.

Design posture:

- Shadow mode is the DEFAULT. Enforcement requires explicit opt-in.
- Enforce mode requires a non-empty cedar_policy_dir; empty is a
  configuration error.
- Verification key source is external only (PinnedTrustAnchor,
  JwksKeySource, DidDocumentKeySource, AgentCardKeySource).
- Agent card extension block published by ScopeBlindExtension so
  verifiers can resolve the issuer pubkey without ever seeing it in
  the receipt body.

Tests: 19 passing, covering default posture (4), signing and chain
linkage (4), embedded-key rejection (4), tamper detection (1), key
sources (4), agent card extension (1), enforce mode (1).

Package files:
- pyproject.toml, README.md, DESIGN.md, CALL-AGENDA.md
- bindu_scopeblind/{__init__, extension, middleware, receipts,
  key_sources, cedar_bridge, conformance}.py
- tests/test_extension.py

CALL-AGENDA.md is the pre-call artifact for the design review with
@raahulrahl scheduled for week of 2026-04-22.

Development

Successfully merging this pull request may close these issues.

[Feature]: ScopeBlind Receipt Extension for Verifiable Agent Actions
