Android 16 fork. AI as a platform primitive. Twelve capabilities, one shared runtime, every app. OEM-pluggable. Apache 2.0.
Updated Apr 23, 2026 - Shell
Open source Node.js runtime for local LLM inference, on-device AI, and private model execution.
Multi-provider LLM runtime core: routing, key management, and resilient fallback execution for agent orchestration.
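The "resilient fallback execution" pattern this description names can be sketched as a router that tries providers in priority order and falls through on failure. This is a minimal illustration, not that repository's API; the `Provider` type and function names here are hypothetical.

```typescript
// Hypothetical sketch of multi-provider fallback routing.
// A Provider wraps one backend's completion call behind a common interface.
type Provider = {
  name: string;
  complete: (prompt: string) => Promise<string>;
};

// Try each provider in order; return the first success,
// collecting per-provider errors so the final failure is diagnosable.
async function completeWithFallback(
  providers: Provider[],
  prompt: string,
): Promise<string> {
  const errors: string[] = [];
  for (const p of providers) {
    try {
      return await p.complete(prompt);
    } catch (e) {
      errors.push(`${p.name}: ${(e as Error).message}`);
    }
  }
  throw new Error(`All providers failed: ${errors.join("; ")}`);
}
```

A real runtime would layer routing policy (cost, latency, model capability) and per-provider key management on top of this loop; the fallback core stays the same.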
Shared JavaScript runtime core for local LLM inference across Node.js, browser, and React Native.
A domain-agnostic, production-oriented, high-performance Rust video perception runtime built on GStreamer.