Production-grade Kotlin agent runtime for the JVM.
kore gives Kotlin and Java teams everything they need to run AI agents in production: a coroutine-based agent loop, multi-LLM backends, MCP protocol support, budget enforcement, and first-class testing utilities — wired together with a clean Kotlin DSL.
LangChain4j provides an LLM call abstraction but does not give you:
- A reactive execution model (coroutines, structured concurrency, cancellation propagation)
- A built-in skills system with YAML-driven pattern activation
- Token budget governance integrated into the agent loop
- An event bus for agent lifecycle events (Kotlin Flows by default; Kafka opt-in)
- OpenTelemetry spans and Micrometer metrics on every LLM call and tool use
- A testing module with scripted mock backends for deterministic agent tests
kore fills that gap for enterprise Java/Kotlin teams.
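Token budget governance, for instance, amounts to tracking cumulative usage inside the agent loop and halting when the ceiling is crossed. A minimal sketch of the idea (the `TokenBudget` class below is illustrative only, not kore's API):

```kotlin
// Illustrative sketch of token-budget governance in an agent loop;
// names are hypothetical, not kore's internals. Each LLM call reports
// its usage, and the loop stops once the configured ceiling is crossed.
class TokenBudget(private val maxTokens: Int) {
    var totalTokens: Int = 0
        private set

    fun record(inputTokens: Int, outputTokens: Int) {
        totalTokens += inputTokens + outputTokens
    }

    fun exceeded(): Boolean = totalTokens > maxTokens
}

fun main() {
    val budget = TokenBudget(maxTokens = 100)
    budget.record(inputTokens = 60, outputTokens = 30)
    println(budget.exceeded()) // false: 90 tokens used, ceiling is 100
    budget.record(inputTokens = 20, outputTokens = 10)
    println(budget.exceeded()) // true: 120 tokens used
}
```

In kore this check surfaces as the `AgentResult.BudgetExceeded` result type shown in the quick-start below, rather than as an exception.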
Add the Spring Boot starter to your Gradle build:
```kotlin
implementation("io.github.unityinflow:kore-spring-boot-starter:0.0.1-SNAPSHOT")
```

Write an agent:

```kotlin
import io.github.unityinflow.kore.core.AgentResult
import io.github.unityinflow.kore.core.AgentTask
import io.github.unityinflow.kore.core.dsl.agent
import io.github.unityinflow.kore.llm.claude
import io.github.unityinflow.kore.mcp.mcp
import java.util.UUID

val runner = agent("my-agent") {
    model = claude(apiKey = System.getenv("ANTHROPIC_API_KEY"))
    tools(mcp("github", "npx", "-y", "@github/github-mcp-server"))
    budget(maxTokens = 10_000)
}

val result = runner.run(AgentTask(id = UUID.randomUUID().toString(), input = "Create a GitHub issue")).await()

when (result) {
    is AgentResult.Success -> println("Done: ${result.output}")
    is AgentResult.BudgetExceeded -> println("Budget exceeded after ${result.tokenUsage.totalTokens} tokens")
    is AgentResult.ToolError -> println("Tool failed: ${result.message}")
    is AgentResult.LLMError -> println("LLM error: ${result.message}")
}
```

The kore-spring starter is the fastest way to get an agent running in a real
Spring Boot app. Add one Gradle dependency, write one @Bean, visit /kore.
1. Add the starter to build.gradle.kts:
```kotlin
implementation("io.github.unityinflow:kore-spring:0.0.1-SNAPSHOT")
```

2. Configure your LLM provider in application.yml — never commit API keys:

```yaml
kore:
  llm:
    claude:
      api-key: ${KORE_CLAUDE_API_KEY} # set as an environment variable
  skills:
    directory: ./kore-skills # YAML skills auto-loaded from here
```

3. Drop a YAML skill in ./kore-skills/code-review.yaml:
```yaml
name: code-review
description: Activates on review requests and injects review heuristics
version: 1.0.0
activation:
  task_matches:
    - "(?i)review"
    - "(?i)pull request"
prompt: |
  When reviewing code, check correctness, clarity, security and tests.
```

4. Register an agent as a Spring @Bean:
```kotlin
@SpringBootApplication
class MyApp {
    @Bean
    fun reviewAgent(
        claude: io.github.unityinflow.kore.llm.ClaudeBackend,
    ): AgentRunner = agent("review-agent") {
        model = claude
        budget(maxTokens = 10_000)
    }
}
```

5. Run the app and visit:

- http://localhost:8090/kore — live HTMX dashboard (active agents, recent runs, cost summary)
- http://localhost:8080/actuator/kore — Spring Actuator health endpoint
That's the entire developer experience. kore-spring auto-wires the event bus,
budget enforcer, audit log, skill registry, observability, and dashboard from
classpath detection — your agent inherits all of it for free.
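The `task_matches` patterns in the skill YAML from step 3 are plain regular expressions evaluated against the incoming task input. As an illustrative sketch only (the `Skill` class and `activeSkills` function below are hypothetical, not kore's internals), pattern-based activation boils down to:

```kotlin
// Hypothetical sketch of pattern-based skill activation; kore's actual
// implementation may differ. A skill activates when any of its regexes
// matches somewhere in the task input (case-insensitivity comes from
// the (?i) flag inside the pattern itself).
data class Skill(val name: String, val patterns: List<Regex>, val prompt: String)

fun activeSkills(skills: List<Skill>, taskInput: String): List<Skill> =
    skills.filter { skill -> skill.patterns.any { it.containsMatchIn(taskInput) } }

fun main() {
    val codeReview = Skill(
        name = "code-review",
        patterns = listOf(Regex("(?i)review"), Regex("(?i)pull request")),
        prompt = "When reviewing code, check correctness, clarity, security and tests.",
    )
    // Matches on the word "review" regardless of case.
    println(activeSkills(listOf(codeReview), "Please REVIEW PR #42").map { it.name })
}
```

An activated skill's `prompt` block is then injected into the agent's context for that run.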
```kotlin
import io.github.unityinflow.kore.core.dsl.agent
import io.github.unityinflow.kore.core.internal.fallbackTo
import io.github.unityinflow.kore.llm.claude
import io.github.unityinflow.kore.llm.gpt

val runner = agent("resilient-agent") {
    model = claude(apiKey = System.getenv("ANTHROPIC_API_KEY")) fallbackTo
        gpt(apiKey = System.getenv("OPENAI_API_KEY"))
    budget(maxTokens = 5_000)
}
```

If the Claude backend fails (rate limit, network error, API outage), the agent automatically retries with exponential backoff, then falls back to GPT-4o without any changes to your application code.
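The retry-then-fallback behaviour can be sketched generically with coroutines. This is a minimal illustration of the strategy, assuming a simple doubling backoff; the function below is hypothetical and not kore's actual internals:

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// Illustrative sketch of retry-with-exponential-backoff plus fallback;
// kore's real implementation may differ. The primary call is retried a
// fixed number of times with a doubling delay, then the fallback runs.
suspend fun <T> withFallback(
    attempts: Int = 3,
    initialDelayMs: Long = 200,
    primary: suspend () -> T,
    fallback: suspend () -> T,
): T {
    var delayMs = initialDelayMs
    repeat(attempts) {
        try {
            return primary()
        } catch (e: Exception) { // a production version would not swallow CancellationException
            delay(delayMs) // back off before the next attempt
            delayMs *= 2   // exponential growth: 200ms, 400ms, 800ms, ...
        }
    }
    return fallback()
}

fun main() = runBlocking {
    var primaryCalls = 0
    val answer = withFallback(
        attempts = 2,
        initialDelayMs = 1,
        primary = { primaryCalls++; error("primary backend down") },
        fallback = { "answer from fallback backend" },
    )
    println("primary tried $primaryCalls times, got: $answer")
}
```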
```kotlin
import io.github.unityinflow.kore.core.dsl.agent
import io.github.unityinflow.kore.llm.ollama

val runner = agent("local-agent") {
    model = ollama(baseUrl = "http://localhost:11434", model = "llama3")
}
```

Zero API keys, zero cloud dependencies. The same `agent { }` DSL works with any backend.
```kotlin
import io.github.unityinflow.kore.core.AgentResult
import io.github.unityinflow.kore.core.AgentTask
import io.github.unityinflow.kore.core.LLMChunk
import io.github.unityinflow.kore.core.dsl.agent
import io.github.unityinflow.kore.test.MockLLMBackend
import io.kotest.matchers.shouldBe
import io.kotest.matchers.types.shouldBeInstanceOf
import kotlinx.coroutines.test.runTest
import org.junit.jupiter.api.Test

class MyAgentTest {
    @Test
    fun `agent returns scripted response without network`() = runTest {
        val runner = agent("test-agent") {
            model = MockLLMBackend("mock")
                .whenCalled(
                    LLMChunk.Text("Hello from kore!"),
                    LLMChunk.Usage(inputTokens = 20, outputTokens = 10),
                    LLMChunk.Done,
                )
            budget(maxTokens = 1_000)
        }

        val result = runner.run(AgentTask(id = "test-1", input = "say hello")).await()

        val success = result.shouldBeInstanceOf<AgentResult.Success>()
        success.output shouldBe "Hello from kore!"
        success.tokenUsage.inputTokens shouldBe 20
    }
}
```

MockLLMBackend scripts exact LLMChunk sequences. No network, no flakiness, no API costs in CI. This is a key differentiator — most JVM agent frameworks have no testing story at all.
| Module | Maven Artifact | What it provides |
|---|---|---|
| `kore-core` | `io.github.unityinflow:kore-core` | Agent loop, DSL, sealed result types, port interfaces, in-memory stubs. No external dependencies except kotlinx-coroutines-core. |
| `kore-llm` | `io.github.unityinflow:kore-llm` | LLM backend adapters: Claude, GPT-4, Ollama, Gemini. DSL factory functions: `claude()`, `gpt()`, `ollama()`, `gemini()`. |
| `kore-mcp` | `io.github.unityinflow:kore-mcp` | MCP protocol client (stdio + SSE) and server. DSL factory functions: `mcp()`, `mcpSse()`. |
| `kore-test` | `io.github.unityinflow:kore-test` | MockLLMBackend, MockToolProvider, session recording and replay for deterministic agent tests. |
| `kore-observability` | `io.github.unityinflow:kore-observability` | OpenTelemetry spans + Micrometer metrics on every LLM call and tool use. |
| `kore-storage` | `io.github.unityinflow:kore-storage` | PostgreSQL audit log via Exposed R2DBC + Flyway migrations. |
| `kore-skills` | `io.github.unityinflow:kore-skills` | YAML skill definitions (`./kore-skills/*.yaml`) with pattern-based auto-activation. |
| `kore-dashboard` | `io.github.unityinflow:kore-dashboard` | Embedded HTMX dashboard (Ktor 3.2 CIO): active agents, recent runs, cost summary. |
| `kore-spring` | `io.github.unityinflow:kore-spring` | Spring Boot 4 auto-configuration starter that wires every module above from a single Gradle dependency. |
- Kotlin 2.0+ / JVM 21+
- Gradle 9.x (Kotlin DSL)
For LLM backends, you will need the relevant API keys (see LlmBackends.kt for environment variable names).
```shell
./gradlew build         # compile + test all modules
./gradlew test          # run all tests
./gradlew lintKotlin    # ktlint check
./gradlew formatKotlin  # ktlint format
```

See CONTRIBUTING.md.
MIT — 2026 Jiří Hermann / UnityInFlow contributors