syntheriqlabs/EMS


I am not an expert coder, but I know the ins and outs of it. I used AI to help me write most of the code. The core concept is my own novel idea, which I developed in collaboration with AI. I would really appreciate feedback on both the idea and the code. Thank you all in advance!

I. Philosophical Foundations

The EMS is built on pluralistic deontology. Rather than following a single rule, it balances four competing Core Values (CVL):

• Non-Maleficence: prevention of harm.
• Autonomy: respect for user agency.
• Veracity: commitment to truth.
• Transparency: clarity of process.

It resolves "Value Tensions" (e.g., when truth conflicts with safety) using a Tension Analyzer that quantifies philosophical friction as the standard deviation of the value scores.

II. Technical Specifications

The system is composed of five autonomous layers:
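To make the Tension Analyzer idea concrete, here is a minimal sketch: friction is the standard deviation across the four Core Value scores. The score names and the 0.0–1.0 scale are assumptions for illustration, not the project's actual schema.

```python
import statistics

def value_tension(scores: dict) -> float:
    """Population standard deviation across the Core Value scores.

    Hypothetical sketch of the Tension Analyzer: low spread means the
    values agree; high spread signals a Value Tension to resolve.
    """
    return statistics.pstdev(scores.values())

# All four values agree -> no tension.
aligned = {"non_maleficence": 0.9, "autonomy": 0.9,
           "veracity": 0.9, "transparency": 0.9}
# Truth conflicts with safety -> high tension.
conflicted = {"non_maleficence": 0.2, "autonomy": 0.8,
              "veracity": 0.95, "transparency": 0.7}

print(value_tension(aligned))  # 0.0
print(value_tension(conflicted))
```

A population (rather than sample) standard deviation is used here because the four values are the whole set being compared, not a sample from a larger one.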

  1. Perception Layer (TagClassifier): Uses semantic pattern matching to extract the "What" (Values), "Why" (Context), and "Who" (Identity).
  2. Scoring Engine (EMSEngine): A weighted mathematical processor that calculates an Overall Ethical Score (OES).
  3. Relational Layer (RIL): A dynamic trust-mapping system that adjusts thresholds based on user roles (Admin, Researcher, Child, etc.).
  4. Self-Correction Layer (MetaEthicalReflector): Monitors "Ethical Drift" and system stability, calculating a Stability Index to prevent logic degradation.
  5. Action Layer (Guarded Wrapper): Intercepts LLM calls to inject ethical "steerage" or enforce blocks.

To deploy the full stack, ensure engine.py and ems_agent.py are in your directory, then run:

1. Initialize Sensory & Scoring Layers

cvl, cel, ril, vta = CoreValuesLayer(), ContextualEthicsLayer(), RelationalIdentityLayer(), ValueTensionAnalyzer()

2. Build the Autonomous Engine

engine = EMSEngine(cvl, cel, ril, vta)

3. Instantiate the Master Agent

agent = EMSAgent(engine, GenerativeRewriteLayer(), DialogueManager(), PolicyLearner(engine), TagClassifier())

4. Define a Guarded Interaction (Mocking an LLM call)

result = guarded_llm_call(agent, "Researching firewall bypass for academic study.", llm_client, mode="strict")

5. Output Results & System Health

print(f"Action: {result['decision']} | Stability: {result['system_health']['stability']}")
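The guarded call in the steps above can be sketched end to end with stubs. Everything below is illustrative: `StubAgent` stands in for EMSAgent, and the `evaluate()`/`health()` methods, the 0.5 OES threshold, and the keyword scoring are assumptions rather than the real engine's behavior.

```python
class StubAgent:
    """Stand-in for EMSAgent: scores a prompt and reports health."""

    def evaluate(self, prompt: str) -> float:
        # Toy OES via keyword check; the real EMSEngine instead weighs
        # values, context, and identity (the CVL/CEL/RIL layers).
        return 0.2 if "exploit" in prompt.lower() else 0.9

    def health(self) -> dict:
        return {"stability": 0.82}

def guarded_llm_call(agent, prompt, llm_client, mode="strict"):
    """Intercept the prompt, score it, then block or pass it through."""
    oes = agent.evaluate(prompt)
    if mode == "strict" and oes < 0.5:
        # Blocked: the LLM is never called.
        return {"decision": "blocked", "oes": oes,
                "system_health": agent.health()}
    return {"decision": "allowed", "oes": oes,
            "response": llm_client(prompt),
            "system_health": agent.health()}

result = guarded_llm_call(StubAgent(),
                          "Summarize this paper.",
                          lambda p: "mock LLM response",
                          mode="strict")
print(f"Action: {result['decision']} | "
      f"Stability: {result['system_health']['stability']}")
```

The key design point is that in strict mode a blocked prompt never reaches `llm_client` at all, rather than being filtered after generation.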

Operational Modes for Records:

| Mode | Behavior | Use Case |
| --- | --- | --- |
| Strict | Blocks or modifies prompts based on OES. | Production safety. |
| Transparent | Passes EMS audit logs directly to the LLM. | Developer debugging / self-aware AI. |
| Audit Only | Logs decisions without altering LLM behavior. | Initial deployment / baseline testing. |
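The three modes above could be routed roughly as follows. The mode names follow the table; the audit-dict fields and the 0.5 threshold are hypothetical.

```python
def apply_mode(mode: str, prompt: str, audit: dict, llm) -> str:
    """Sketch of per-mode dispatch (field names are assumptions)."""
    if mode == "audit_only":
        # Log-only: the decision is recorded elsewhere; prompt untouched.
        return llm(prompt)
    if mode == "transparent":
        # Pass the EMS audit log to the LLM alongside the prompt.
        return llm(f"[EMS audit: {audit}] {prompt}")
    # "strict": block low-scoring prompts before they reach the LLM.
    if audit.get("oes", 1.0) < 0.5:
        return "[blocked by EMS]"
    return llm(prompt)

echo = lambda p: p  # stand-in LLM that echoes its input
print(apply_mode("audit_only", "hello", {"oes": 0.3}, echo))  # hello
print(apply_mode("strict", "hello", {"oes": 0.3}, echo))      # [blocked by EMS]
```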

Audit Note: The system is currently frozen at Build V29. The Stability Index is the primary KPI for monitoring system health. If stability falls below 0.6, a manual audit of the TagClassifier regex patterns is recommended.

EMS is a modular, provider‑agnostic ethical governance layer for Large Language Models. It intercepts user prompts and evaluates their ethical risk before they reach the model. The system is designed for research, safety engineering, and controlled LLM deployments.
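The audit rule above, expressed as code: a Stability Index below 0.6 should trigger a manual review of the TagClassifier patterns. The 0.6 floor comes from the text; the function name is hypothetical.

```python
STABILITY_FLOOR = 0.6  # threshold from the Audit Note

def stability_check(stability_index: float) -> str:
    """Flag the TagClassifier for manual audit when stability degrades."""
    if stability_index < STABILITY_FLOOR:
        return "AUDIT: review TagClassifier regex patterns"
    return "OK"

print(stability_check(0.82))  # OK
print(stability_check(0.55))  # AUDIT: review TagClassifier regex patterns
```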

About

Another novel AI agent I have been working on. Testers needed!
