
Add orynq-ai-auditability community ability #249

Closed

realdecimalist wants to merge 3 commits into openhome-dev:dev from realdecimalist:add-orynq-ai-auditability

Conversation

@realdecimalist

Summary

Adds a community ability that creates tamper-proof, blockchain-anchored audit trails for AI conversations using Orynq's Proof-of-Inference protocol.

  • Builds SHA-256 rolling hash chains where each entry links to the previous one, making any tampering detectable
  • Uploads trace data to the Materios blob gateway (permissionless — no API key required)
  • The cert daemon committee (10 independent attestors) verifies data availability and certifies the receipt
  • Certified receipts are batched into Cardano mainnet anchor transactions (metadata label 8746)

No API key or wallet setup required. Works out of the box.
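The rolling hash chain described above can be sketched in a few lines. This is a minimal illustration with `hashlib`, not the PR's actual main.py — the field names and genesis value are assumptions:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder previous hash for the first entry (assumed)

def chain_entry(prev_hash: str, role: str, content: str) -> dict:
    """Build one link: the entry hash covers the previous entry's hash,
    so editing or reordering any earlier message breaks every later link."""
    content_hash = hashlib.sha256(content.encode("utf-8")).hexdigest()
    payload = json.dumps(
        {"prev": prev_hash, "role": role, "content_hash": content_hash},
        sort_keys=True,
    )
    return {
        "prev": prev_hash,
        "role": role,
        "content_hash": content_hash,
        "hash": hashlib.sha256(payload.encode("utf-8")).hexdigest(),
    }

def verify_chain(entries: list) -> bool:
    """Recompute every link from the genesis hash; any tampering surfaces
    as a prev-pointer mismatch or a hash mismatch."""
    prev = GENESIS
    for e in entries:
        payload = json.dumps(
            {"prev": e["prev"], "role": e["role"],
             "content_hash": e["content_hash"]},
            sort_keys=True,
        )
        if e["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode("utf-8")).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

Note that only hashes, never raw message content, are needed to verify the chain, which is what keeps the on-chain footprint privacy-preserving.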

Files

| File | Purpose |
|------|---------|
| main.py | Python ability — builds hash chain, uploads to Materios gateway |
| README.md | Docs with setup, example conversation, technical details |
| __init__.py | Package init |

Live Infrastructure

Checklist

  • PR targets dev branch
  • Files in community/orynq-ai-auditability/
  • main.py extends MatchingCapability with register_capability + call
  • README.md with description, trigger words, and setup
  • resume_normal_flow() called on every exit path (in finally block)
  • No print() statements (uses editor_logging_handler)
  • No blocked imports
  • No hardcoded API keys (gateway API key is optional, empty by default)
  • Error handling on all API calls
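A rough illustration of the shape the checklist asks for. The `MatchingCapability` base class and `editor_logging_handler` here are local stand-ins so the sketch runs standalone — the real OpenHome SDK classes and their exact signatures are not shown in this PR:

```python
import logging

# Stand-ins, purely so this sketch runs by itself; the real ability would
# import MatchingCapability and editor_logging_handler from the OpenHome SDK.
editor_logging_handler = logging.StreamHandler()

class MatchingCapability:
    @classmethod
    def register_capability(cls):
        return cls()

logger = logging.getLogger("orynq-ai-auditability")
logger.addHandler(editor_logging_handler)  # no print() statements, per checklist

class OrynqAuditability(MatchingCapability):
    def call(self, session):
        try:
            logger.info("building hash chain and uploading trace")
            # ... hash-chain build + Materios gateway upload would go here,
            # each API call wrapped in its own error handling ...
        except Exception as exc:
            logger.error("audit trail failed: %s", exc)
        finally:
            # resume_normal_flow() on every exit path, per the checklist
            session.resume_normal_flow()
```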

🤖 Generated with Claude Code

@realdecimalist realdecimalist requested a review from a team as a code owner April 13, 2026 16:05

github-actions bot commented Apr 13, 2026

🔀 Branch Merge Check

PR direction: add-orynq-ai-auditability → dev

Passed: add-orynq-ai-auditability → dev is a valid merge direction


github-actions bot commented Apr 13, 2026

✅ Community PR Path Check — Passed

All changed files are inside the community/ folder. Looks good!

@github-actions github-actions bot added the community-ability Community-contributed ability label Apr 13, 2026

github-actions bot commented Apr 13, 2026

✅ Ability Validation Passed

📋 Validating: community/orynq-ai-auditability
  ✅ All checks passed!


github-actions bot commented Apr 13, 2026

🔍 Lint Results

__init__.py — Empty as expected

Files linted: community/orynq-ai-auditability/main.py

✅ Flake8 — Passed

✅ All checks passed!

Creates tamper-proof, blockchain-anchored audit trails for AI
conversations using Orynq's Proof-of-Inference protocol.

- Builds SHA-256 rolling hash chains where each entry links to the
  previous one, making any tampering detectable
- Uploads trace data to the Materios blob gateway (permissionless,
  no API key required)
- The cert daemon committee (10 independent attestors) verifies
  data availability and certifies the receipt
- Certified receipts are batched into Cardano mainnet anchor
  transactions (metadata label 8746)

Trigger phrases: "audit my AI", "run orynq", "anchor this session",
"proof of inference", "blockchain audit", etc.

Docs: https://docs.fluxpointstudios.com/materios-partner-chain
Explorer: https://materios.fluxpointstudios.com/explorer
SDK: https://github.com/flux-point-studios/orynq-sdk

@uzair401 uzair401 left a comment


Suggestion: Refactor as a Background Daemon

The hash chain logic and Materios/orynq-sdk integration look great — the rolling SHA-256 chain with content hashing and the privacy-first approach (only hashes on-chain, never raw content) are exactly right for AI auditability.

However, I think this would work much better as a background daemon instead of the current conversational loop pattern. Right now the user has to activate the trigger and dictate messages one by one for them to be hashed — but the real power of orynq's Proof-of-Inference is capturing AI agent actions passively and continuously. A background daemon would let the SDK do what it's designed for: building cryptographic receipts of every interaction automatically.

Suggested approach

Split into background.py + main.py:

  • background.py — Runs silently in a while True loop, polls get_full_message_history() every 60-90 s, and hashes new messages into the rolling chain. This gives you continuous process tracing: every conversation turn gets a cryptographic receipt without the user doing anything.

  • main.py — Triggered when the user says one of the trigger words. Reads the persisted chain, summarizes it, and handles the Materios upload / blockchain anchoring with user consent. This is where the user interacts with the orynq-sdk's anchoring options.

This way the hash chain builds up passively throughout the session, the chain is persisted across sessions via file storage, and the user can verify or anchor it anytime through the foreground ability. Your existing logic (_build_hash_chain_entry, _build_trace_content, _anchor_to_materios) is reusable as-is — it just needs to move into the right files.
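The split could look roughly like this. It is a sketch only: `get_full_message_history`, the `load_chain`/`save_chain` storage helpers, and the 60-second interval are illustrative names, not the actual OpenHome SDK API:

```python
import hashlib
import json
import time

def _entry(prev_hash, role, content):
    """One rolling-chain link (same scheme the foreground ability verifies)."""
    content_hash = hashlib.sha256(content.encode("utf-8")).hexdigest()
    body = json.dumps({"prev": prev_hash, "role": role, "h": content_hash},
                      sort_keys=True)
    return {"prev": prev_hash, "role": role, "h": content_hash,
            "hash": hashlib.sha256(body.encode("utf-8")).hexdigest()}

def poll_once(chain, history):
    """Hash any history messages not yet chained; a no-op between new turns,
    so repeated polls are idempotent."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    for msg in history[len(chain):]:
        entry = _entry(prev, msg["role"], msg["content"])
        chain.append(entry)
        prev = entry["hash"]
    return chain

def run_daemon(get_full_message_history, load_chain, save_chain, interval_s=60):
    """background.py loop: build the chain passively; main.py would read the
    persisted chain and handle Materios upload / anchoring on demand."""
    while True:
        chain = load_chain()                       # persisted across sessions
        save_chain(poll_once(chain, get_full_message_history()))
        time.sleep(interval_s)
```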

Relevant docs

Happy to help if you have questions on the daemon pattern.

@uzair401

Hello @realdecimalist, please review the suggestions and implement the changes. Once done, we’ll reopen the PR and I’ll review it.

Also, please use the following audit prompt to improve voice naturalness, UX, and robustness for real-world spoken interactions:

"You are auditing a voice ability built for native US English speakers. The ability runs on a smart speaker and all user interaction is spoken out loud — not typed. Review the code below and find every place where the ability is brittle to natural spoken English, has problematic voice output, or has UX issues specific to a voice device. Specifically look for:

1. HARDCODED STRING MATCHING — any place that checks if a specific word or phrase appears in user input (e.g. if "reschedule" in lower, if "yes" in response). For each one, list 6-8 natural spoken English alternatives a native US speaker might say instead.

2. LLM CLASSIFIER PROMPT EXAMPLES — any prompt that provides example phrases to teach the LLM what the user might say. For each example phrase that sounds too formal or textbook, suggest a natural spoken replacement.

3. EXIT / CONFIRMATION WORD LISTS — any hardcoded list used to detect "stop", "yes", "no" or similar. For each list, add the missing natural spoken variants.

4. VOICE OUTPUT PROBLEMS — any speak() string (or LLM system prompt whose output goes directly into speak()) that contains: markdown formatting (**, *, #, --), bullet points or numbered lists, emojis, URLs, or stage directions like (pauses) or (laughs). Also flag any LLM system prompt that generates spoken output but does not explicitly instruct the model to use plain spoken English with no formatting. For each issue, quote the string and explain the fix.

5. RESPONSE LENGTH — any speak() string that exceeds 30 words, or any LLM system prompt that does not set a response length limit for spoken output. Suggest a tighter version of the string or the length instruction to add to the prompt. Target: confirmations and acknowledgements under 10 words, standard replies 1-2 sentences under 15 words, result delivery 2-3 sentences max.

6. MENU-DRIVEN FLOW — if the ability contains three or more sequential yes/no prompts asked one after another with no LLM routing between them (e.g. repeated run_io_loop or speak + user_response calls each asking a yes/no question), flag it. Note how many sequential prompts were found and suggest collapsing them into a single open-ended question with LLM-based routing.

For each issue found, return: the exact line or variable from the code, what is wrong or missing, and the suggested fix or addition. Format your response as a numbered list, one issue per entry. Be specific — quote the actual code. If the ability is already well-written across all six areas, say so."
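For instance, item 3 of the prompt targets checks like the hypothetical one below; widening the word list (or better, routing through an LLM classifier) is the kind of fix it asks for. Nothing here is code from this PR:

```python
# Hypothetical example of the pattern the audit prompt flags. A bare
# `if "yes" in response` misses most natural spoken replies.
SPOKEN_YES = {
    "yes", "yeah", "yep", "sure", "okay", "ok",
    "sounds good", "go ahead", "do it", "uh huh",
}

def is_affirmative(utterance: str) -> bool:
    """Naive keyword check over spoken variants; an LLM classifier
    is more robust still, since it handles phrasings no list covers."""
    text = utterance.lower().strip()
    return any(variant in text for variant in SPOKEN_YES)
```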

@uzair401 uzair401 closed this Apr 15, 2026