PRISM

Decentralized neural architecture search for frontier-model research

Overview • Architecture • Submissions • Scoring • Scaling • Security • Operators • API




Overview

PRISM is a Platform Network challenge for decentralized neural architecture search. Miners submit Python projects that define model architectures, training recipes, loss functions, optimizer setup, training steps, and inference hooks. PRISM evaluates those projects in isolated GPU containers on smaller proxy models, then rewards the ideas that produce better architecture and training behavior.

The goal is not to train frontier models directly inside the challenge. Instead, PRISM searches the design space around frontier-model building blocks using compact evaluations that are fast enough for a subnet, but rich enough to surface useful architecture, optimizer, loss, inference, and scaling-law signals.

What PRISM Rewards

  • Architecture discovery: first discovery of a meaningful architecture family earns architecture ownership.
  • Training and inference improvement: later miners can improve optimizer setup, inference logits, loss computation, or train-step code for an existing architecture and earn training ownership.
  • Robust improvements: dynamic thresholds and noise checks prevent tiny random metric changes from stealing rewards.
  • Scaling-aware signals: PRISM emphasizes smooth loss curves, stable gradients, activation stability, and consistent improvements across model size, depth, sequence length, and batch scaling.
  • Secure execution: submitted code is reviewed statically and by optional LLM policy checks, then executed only inside isolated containers through the Platform Docker broker.

System Flow

```mermaid
flowchart LR
    Miner[Miner] --> Platform[Platform]
    Platform --> Prism[PRISM]
    Prism --> Review[Review]
    Review --> Broker[Docker Broker]
    Broker --> GPU[GPU Eval]
    GPU --> Scale[Scaling Signals]
    Scale --> Scores[Scores]
    Scores --> Weights[Weights]
```
```mermaid
sequenceDiagram
    participant M as Miner
    participant P as Platform
    participant R as PRISM
    participant D as Docker
    participant W as Weights
    M->>P: signed ZIP upload
    P->>R: verified hotkey submission
    R->>R: static and LLM review
    R->>D: isolated GPU evaluation
    D-->>R: q_arch, q_recipe, hook, stability metrics
    R->>R: scaling-aware attribution
    R->>W: split component rewards
```

Quick Start

```bash
git clone https://github.com/PlatformNetwork/prism.git
cd prism
python -m venv .venv
.venv/bin/python -m pip install -e ".[dev]"
.venv/bin/pytest
```

Run the API locally with a development shared token:

```bash
PRISM_SHARED_TOKEN=dev-secret \
PRISM_DATABASE_URL=sqlite+aiosqlite:///./prism.sqlite3 \
.venv/bin/uvicorn prism_challenge.app:app --host 0.0.0.0 --port 8000
```

Validate the project:

```bash
.venv/bin/ruff check src tests
.venv/bin/ruff format --check src tests
.venv/bin/mypy --config-file pyproject.toml src
.venv/bin/pytest tests
```

Miner Project Contract

Miners submit a .zip project with Python code and an optional prism.yaml manifest.

```yaml
kind: full
architecture:
  entrypoint: src/model.py
training:
  entrypoint: src/train.py
```
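As an illustration, a loader could validate a parsed manifest dict along these lines. This is a hedged sketch: the function name, the set of allowed `kind` values, and the error strings are assumptions, not PRISM's actual loader.

```python
# Hypothetical manifest validator; PRISM's real loader may differ.
ALLOWED_KINDS = {"full"}  # assumption: other kinds may exist


def validate_manifest(manifest: dict) -> list[str]:
    """Return a list of problems found in a parsed prism.yaml dict."""
    problems = []
    if manifest.get("kind") not in ALLOWED_KINDS:
        problems.append(f"unknown kind: {manifest.get('kind')!r}")
    for section in ("architecture", "training"):
        entry = manifest.get(section, {}).get("entrypoint")
        if not entry:
            problems.append(f"{section}.entrypoint missing")
        elif not entry.endswith(".py"):
            problems.append(f"{section}.entrypoint must be a Python file")
    return problems
```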

The architecture entrypoint must expose:

```python
def build_model(ctx):
    ...

def get_recipe(ctx):
    ...
```
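For illustration only, a `get_recipe` implementation might return a plain hyperparameter mapping. The `ctx` keys, recipe keys, and default values below are assumptions chosen for the sketch, not part of PRISM's documented contract.

```python
# Hypothetical recipe; key names and defaults are illustrative assumptions.
def get_recipe(ctx):
    """Return training hyperparameters for the proxy evaluation."""
    return {
        "optimizer": "adamw",
        "learning_rate": 3e-4,
        "warmup_steps": 100,
        "batch_size": ctx.get("batch_size", 32),
        "sequence_length": ctx.get("sequence_length", 1024),
    }
```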

Optional hooks are first-class signals for training and inference attribution:

  • configure_optimizer(model, recipe, ctx)
  • inference_logits(model, batch, ctx) or infer(model, batch, ctx); inference_logits takes precedence when both exist.
  • compute_loss(model, batch, ctx)
  • train_step(model, batch, optimizer, ctx)

PRISM records hook presence and usage metrics, fingerprints hook-bearing files, and attributes training/inference ownership to the miner whose code produces a meaningful, scalable improvement. See Submission Format for complete examples.
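The fingerprinting of hook-bearing files could, for instance, hash normalized source bytes so that trivial whitespace edits do not produce a "new" file. This sketch is an assumption about the approach, not PRISM's actual implementation:

```python
import hashlib


def fingerprint_source(source: str) -> str:
    """Hash a source file after stripping trailing whitespace per line,
    so cosmetic edits map to the same fingerprint."""
    normalized = "\n".join(line.rstrip() for line in source.splitlines())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
```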


Scaling-Law Evaluation Philosophy

PRISM is designed to avoid rewarding signals that often fail at scale. Weak predictors include early MMLU-style benchmarks, subjective chat quality, final perplexity alone, single-seed results, and very short training runs without extrapolation.

The strongest proxy signals are:

  • smooth loss curves without oscillation;
  • stable gradient norms without silent explosion;
  • absence of activation spikes, especially for paths that could scale beyond 10B parameters;
  • coherent improvements across model sizes, such as similar gains at 125M, 350M, and 1B proxy scales;
  • depth, sequence, and batch scaling tests that expose residual-stream drift, MoE routing collapse, KV-cache degradation, normalization failures, overflow, NaNs, and gradient-noise problems.
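Two of these signals are easy to make concrete. A toy sketch, assuming simple definitions (stdev of step-to-step loss deltas for smoothness; a median-ratio cutoff for gradient explosion) that are illustrative rather than PRISM's actual metrics:

```python
import statistics


def loss_curve_smoothness(losses: list[float]) -> float:
    """Lower is smoother: population stdev of step-to-step loss changes."""
    deltas = [b - a for a, b in zip(losses, losses[1:])]
    return statistics.pstdev(deltas)


def gradient_norms_stable(norms: list[float], max_ratio: float = 10.0) -> bool:
    """Flag silent explosion: any norm far above the median norm."""
    median = statistics.median(norms)
    return all(n <= max_ratio * median for n in norms)
```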

See Scaling Evaluation for the complete scaling policy.


Challenge Contract

PRISM is a standard Platform challenge. It exposes:

  • GET /health
  • GET /version
  • POST /v1/submissions
  • GET /v1/submissions/{submission_id}
  • GET /v1/leaderboard
  • GET /v1/architectures
  • GET /v1/training-variants
  • GET /internal/v1/get_weights

Platform also forwards verified uploads to:

  • POST /internal/v1/bridge/submissions

Validators can use internal assignment routes when PRISM is run in validator-assignment mode.
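As a usage sketch, a client could construct a submission request against the routes above. Only the route paths come from the contract; the Bearer-token auth scheme, the `application/zip` content type, and the base URL (taken from the Quick Start `uvicorn` command) are assumptions.

```python
import urllib.request

BASE = "http://localhost:8000"  # assumption: the Quick Start dev server


def build_submission_request(zip_bytes: bytes, token: str) -> urllib.request.Request:
    """Construct (but do not send) a POST to /v1/submissions.
    Auth scheme and content type are assumptions, not the documented API."""
    return urllib.request.Request(
        f"{BASE}/v1/submissions",
        data=zip_bytes,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/zip",
        },
    )
```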


Repository Layout

```
prism/
  assets/                     # README and documentation images
  docs/                       # Project documentation
  src/prism_challenge/        # FastAPI app, repository, evaluator, SDK helpers
  src/prism_challenge/evaluator/
    components.py             # Architecture/training manifest parsing and fingerprints
    container.py              # Isolated Docker/GPU evaluation runner
  tests/                      # API, scoring, broker, executor, and safety tests
  config.example.yaml         # Production-oriented example config
  Dockerfile                  # Challenge image
```

Current Status

PRISM currently supports:

  • Platform bridge uploads with verified miner hotkeys.
  • ZIP multi-file Python projects.
  • GPU-only remote evaluation through the Platform Docker broker.
  • Static source checks, optional LLM review, plagiarism review, and ZIP hardening.
  • Architecture-family ownership.
  • Training-variant ownership for existing architectures.
  • Semantic agent review for architecture and training attribution, including holds for low-confidence cases.
  • Hook usage metrics for optimizer, inference, loss, and train-step customization.
  • Scaling-aware evaluation guidance covering loss curves, gradients, activations, size/depth/sequence/batch extrapolation.
  • Dynamic absolute, relative, and z-score improvement thresholds.
  • Standard Platform get_weights integration.
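The z-score threshold among these can be sketched as follows: a candidate counts as an improvement only if it beats the baseline by more than the baseline's own run-to-run noise. The cutoff of 2.0 is an assumed illustration, not PRISM's configured value:

```python
import statistics


def is_significant_improvement(baseline_scores: list[float],
                               candidate_score: float,
                               z_threshold: float = 2.0) -> bool:
    """Accept a candidate only if it exceeds the baseline mean by more
    than z_threshold baseline standard deviations."""
    mean = statistics.mean(baseline_scores)
    stdev = statistics.stdev(baseline_scores)
    if stdev == 0:
        return candidate_score > mean
    return (candidate_score - mean) / stdev > z_threshold
```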

License

Apache-2.0
