Gyro Governance

AI Research & Development

Open source research for AI alignment, evaluation, and governance


About

Gyro Governance is an independent research lab founded in 2013, focused on artificial intelligence alignment, evaluation, and governance.

We build methods and systems that make AI more measurable, more inspectable, and easier to govern responsibly. Our work connects mathematical physics, practical engineering, and governance design across four areas:

  • AI safety evaluation and alignment
  • Verifiable AI systems
  • Governance infrastructure
  • Mathematical foundations

Rather than treating safety as a filter added after deployment, we study how observability, accountability, and coordination can be built directly into the structure of intelligent systems.


🔬 Labs

| Lab | Focus | Repository |
| --- | --- | --- |
| Mathematical Physics Science | Gyroscopic Alignment Research | `science` |
| ❤️ ASI/AGI Architecture | Gyroscopic Alignment Models | `superintelligence` |
| 🌟 AI Safety Diagnostics | Gyroscopic Alignment Evaluation | `diagnostics` |
| 🧭 AI Quality Governance | Gyroscopic Alignment Behaviour | `tools` |

🧭 Projects

AI Safety Epistemological Framework and Taxonomy for Risk Detection and Mitigation. Maps AI safety failures to four structural displacement risks: Governance Traceability (GTD), Information Variety (IVD), Inference Accountability (IAD), and Intelligence Integrity (IID). Applications include jailbreak testing, control evaluations, alignment detection, research funding, and regulatory compliance. Validated on 90+ million sparse autoencoder features across sixteen language models.

📄 Interactive NotebookLM · Claude Opus 4.6 Report · ChatGPT 5.2 Report

🌟 GyroGem: AI Safety Agent

A tailored AI safety assistant built on The Human Mark that explains AI and mitigates the risks of technological illiteracy. It supports the practical ability to use technology well, question outputs critically, and understand where tools help, where they fail, and how they affect people and society.

💬 Instagram · Google Gemini

🕵️ AI Inspector

A browser extension for evaluating and governing AI outputs. Includes gadgets for rapid testing, policy auditing, AI infection sanitization, and THM meta-evaluation, plus a full evaluation suite with a Quality Index, Superintelligence Index, Alignment Rate, and 20+ metrics. Local-first storage; works with ChatGPT, Claude, and Gemini; no API keys required.

🧩 Chrome Web Store

Quantum Computing Advantage on Standard Silicon. The aQPU is a compact, finite-state kernel for AGI with verified quantum speedups, 33% holographic compression, and intrinsic error detection. QuBEC is its Bose-Einstein byte medium, enabling quantum properties on standard CPUs/GPUs without qubits or cryogenics. 1.26B ops/s, 499 passing tests, 4,096 states, zero qubits.

📊 Strategic Significance · SDK Spec · Climate Brief

Intelligence-Agnostic Meta-Computing. GyroLabe is a substitutional execution layer that upgrades neural models by swapping their internal engine, with native C/C++ backends and llama.cpp integration. GyroGraph is a multicellular quantum AI that coordinates distributed computation through four bridge domains. Verified: 100% native matmul routing, 284× faster encoding than softmax, zero transcendental functions.

📄 GyroLabe Spec · GyroGraph Spec

AI Safety Capacity-Building Stack for Human-AI Coordination and Governance. Routes human capacity into paid work with fully replayable provenance, removing institutional gates. Contributors map their work to four governance capacities (Intelligence Cooperation, Inference Interaction, Information Curation, Governance Management). For labs and funders: verifiable outcomes and auditable compliance with ISO 42001 and AI legislation. Coordinates across Economy, Employment, Education, and Ecology.

📄 AIR Brief · AIR Logistics

Attentiveness-based monetary system for Post-AGI Transformative AI Risk Mitigation. A civil governance framework in which coordination capacity is physically abundant and verifiable, anchored in the SI second and the aQPU state space. Total capacity: 7.94 × 10²⁶ Moment-Units. Unconditional High Income baseline of 240 MU/day. Native commodity: AI-Generated Tokens as verified inference events. No debt issuance, no discretionary monetary policy.

📄 Whitepaper · Specification

A post-AGI/ASI governance sandbox modeling human-AI system alignment across Economy, Employment, Education, and Ecology. Shows robust convergence to a stable equilibrium under seven coordination strategies: poverty resolves through surplus distribution, unemployment becomes alignment work, miseducation shifts to epistemic literacy, and ecological degradation appears as upstream displacement.

📊 Interactive Results

Physics-grounded evaluation and pathology detection for AI safety and alignment. A production-ready suite with 5 targeted challenges, a 20-metric assessment, and pathology detection (hallucination, sycophancy, goal drift, semantic instability). The first framework to operationalize superintelligence measurement from axiomatic principles. Latest evaluations: ChatGPT 5 (73.92% Quality, SUPERFICIAL) and Claude Sonnet 4.5 (82.00% Quality, VALID).

An LLM alignment protocol that makes AI 30-50% smarter and safer by adding structured reasoning to each response. Proven gains: ChatGPT +32.9% quality, +50.9% structural reasoning, +62.7% accountability; Claude Sonnet +37.7% quality, +67.1% structural reasoning, +92.6% traceability. Works with any AI model without retraining.


📚 Resources

Newsletter

The Walk - A Journey of Self-Discovery, Augmented Intelligence (AI) & Good Governance. Weekly insights on AI adoption, alignment, and ethical governance. 🔗 LinkedIn Newsletter

Foundational Theory

⚗️ Common Governance Model (CGM) - The mathematical physics foundation for all of our research: formal proofs, geometric analyses, and the axioms grounding the AI safety and governance work.

Datasets

Guides

Publications

  • 📄 AI Quality Governance - Human Data Evaluation and Responsible AI Behavior Alignment
  • 📄 AI Canon - Sensory Ethics for Biological and Artificial Entities

Experiments

Media


👤 Founder & AI Governance Lead

Basil Korompilias - AI Governance Lead with over two decades of multidisciplinary experience spanning product design, change management, and applied research.

🌐 Website · LinkedIn · ✉️ basilkorompilias@gmail.com


All repositories are open source and actively maintained.
Contributions welcome from researchers, developers, and AI safety enthusiasts.

