Releases: aleksanderpalamar/julia-code
Publish Release
User-Invocable Skills via Slash Commands
Skills can now be invoked directly from the chat input using slash commands. Any skill file with user_invocable: true in its frontmatter is automatically
registered as a /skill-name <args> command. The argument is interpolated into the skill content via the $ARGUMENTS placeholder and injected into the
system prompt for the first iteration.
YAML Frontmatter for Skills
All skill and temperament files now support a YAML frontmatter block with the following fields:
| Field | Description |
|---|---|
| name | Skill identifier |
| description | Shown in the slash command list |
| argument_hint | Usage hint displayed on the TUI |
| user_invocable | Exposes the skill as a slash command |
| always_load | Controls whether the skill loads automatically |
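Putting the fields from the table together, a minimal skill file might look like the sketch below. The skill name, description, and body text are illustrative, not taken from the repository:

```yaml
---
name: summarize
description: Summarize a file or diff
argument_hint: <path-or-ref>
user_invocable: true
always_load: false
---
Summarize the following target for the user: $ARGUMENTS
```

With `user_invocable: true`, this file would be registered as `/summarize <args>`, and the argument would replace `$ARGUMENTS` before injection into the system prompt.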
Improved Default Skills
- coder: Rewritten with detailed guidelines covering pre-edit checklist, style fidelity, verification steps, and per-language practices.
- memory: Added Immediate Save Rule (save before responding when the user says "remember X") and Conflict Resolution (always overwrite stale facts).
- subagent: Added Context Passing (self-contained subtask descriptions) and Result Validation (verify results before reporting to the user).
- Temperaments: All three (auto, sharp, warm) received specific guidelines for code review requests.
v0.5.1 - Long-term memory fixes
Bug fixes
- memory recall failed to find obvious entries on natural-language queries. The search used a plain
  LIKE '%query%', so a query like "user name identity who" was matched as a single substring and returned
  0 results, even when memories such as user-name and user-fullname clearly existed. The query is now
  tokenized (Unicode-aware, case-insensitive, dropping tokens shorter than 2 characters) and each token
  becomes a LIKE clause OR'd together, with a fallback to the raw query when tokenization yields nothing
  (preserving literal punctuation lookups).
- Julia answered specific questions with unrequested extra context. Asking "what is my name?" returned
  name + role + employer + country. Added an "Answer ONLY What Was Asked" rule to the memory skill: a
  specific question gets a specific answer. Includes EN/PT examples ("what is my name?", "what OS do I
  use?") and the single legitimate exception ("who am I?", where aggregating multiple identity memories is
  appropriate).
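The tokenize-then-OR approach can be sketched as below. This is a minimal illustration of the behavior described in the fix, not the actual source; the function names and the column names in the generated SQL are assumptions:

```typescript
// Unicode-aware, case-insensitive tokenizer that drops tokens
// shorter than 2 characters, as described in the release notes.
function tokenize(query: string): string[] {
  return query
    .toLowerCase()
    .split(/[^\p{L}\p{N}]+/u) // split on runs of non-letter/non-digit characters
    .filter((t) => t.length >= 2);
}

// Build a WHERE fragment: one LIKE clause per token, OR'd together,
// falling back to the raw query when tokenization yields nothing
// (preserves literal punctuation lookups).
function buildWhere(query: string): { sql: string; params: string[] } {
  const tokens = tokenize(query);
  const terms = tokens.length > 0 ? tokens : [query];
  return {
    sql: terms.map(() => "key LIKE ? OR content LIKE ?").join(" OR "),
    params: terms.flatMap((t) => [`%${t}%`, `%${t}%`]),
  };
}
```

Under this scheme, "user name identity who" yields four independent LIKE clauses, so `user-name` and `user-fullname` each match at least one token.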
Tests
- New test/memory-search.test.ts with a direct regression for the production case ("user name identity
  who" must hit user-name, user-fullname, user-os), plus coverage for single-word queries,
  case-insensitivity, category filtering, and the punctuation fallback.
- Full suite: 256/256 green.
Semantic memory pipeline
Turns Julia's long-term memory from a flat recency-ordered key/value store into a semantic retrieval pipeline with
embeddings, ranking, layered gating, and graceful degradation — without breaking any existing memory, tool call, or
config.
Highlights
- Embeddings first, keyword as fallback. When enabled, Julia picks the top memories by cosine similarity +
  importance + exponential-decay recency instead of dumping the 30 most recent into the system prompt.
- Zero-break rollout. The whole semantic path sits behind memory.semantic.enabled (default false). With the
  flag off, the system prompt is byte-identical to 0.4.0.
- Degradation is a first-class citizen. If Ollama is offline, the embedding model is missing, or any embed
  call fails, Julia silently falls back to the legacy recent-memories injection; it never errors out
  because of a missing vector.
- Resumable backfill. Existing memories get their embeddings populated by juju memory backfill, idempotent
  by design (WHERE embedding IS NULL), with per-item retry and abort-after-N-consecutive-failures safety.
Added
- New src/memory/ domain folder:
  - embeddings/: EmbeddingProvider interface, OllamaEmbeddingProvider, NullEmbeddingProvider sentinel, and
    a barrel with provider cache + TTL capability detection.
  - similarity.ts: cosine similarity and recencyScore(createdAt, halflifeDays) using exponential decay
    (fixes the 1/Δms bug that would silently zero the recency term).
  - retrieval.ts: retrieveRelevantMemories(input, deps) with an injectable candidate loader; degrades to []
    when the provider is unavailable, the embed call fails, no embedded candidates exist, or vector
    dimensions mismatch.
  - context-builder.ts: buildContextBlock(ranked, budget) that respects the existing token budget.
  - gating.ts: layered gating (pure-greeting regex + significant-token count). Fixes the length<15
    heuristic that wrongly rejected useful short queries like "qual meu OS?" ("what is my OS?"). Exposes a
    customGate hook for future LLM gates or classifiers.
  - pipeline.ts: prepareMemoryContext(sessionId, input, budget) that composes gating + availability +
    retrieval with fallback at every failure point, plus legacyRecentMemoriesBlock(budget) reproducing the
    0.4.0 format byte-for-byte.
  - embed-writer.ts: ensureEmbedding(key) for fire-and-forget write-behind from the save path.
  - backfill.ts: backfillMissingEmbeddings(opts) with onProgress, AbortSignal, and a
    maxConsecutiveFailures safety net.
- memory.semantic.* section in config (schema + settings) with conservative defaults:
  - enabled: false, provider: 'ollama', embeddingModel: 'nomic-embed-text'
  - rankingWeights: { similarity: 0.6, importance: 0.3, recency: 0.1 }
  - recencyHalflifeDays: 30, maxMemories: 5, availabilityCheckTtlMs: 30_000
  - autoBackfillOnStart: false
- CLI subcommand juju memory backfill with a stdout progress indicator and non-zero exit on failure/abort.
- The memory tool gained an optional semantic?: boolean parameter on recall (auto-picks semantic when the
  provider is available; explicit false always forces keyword). The JSON schema change is strictly
  additive.
- New DB helpers in src/session/manager.ts: updateMemoryEmbedding, getEmbeddedMemories,
  getMemoriesWithoutEmbedding, countMemoriesWithoutEmbedding, getLatestUserMessage.
- 8 new Vitest suites covering schema migration, embeddings with capability TTL, similarity math, semantic
  retrieval, context building, gating corner cases, pipeline fallback decision tree, tool behavior,
  embed-writer, and resumable backfill.
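The ranking math can be sketched from what the notes state: cosine similarity, exponential-decay recency with a configurable half-life, and the documented default weights. This is an illustration of the described formulas, not the actual similarity.ts:

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Exponential-decay recency: 1.0 when fresh, halved every `halflifeDays`.
// (Contrast with the fixed 1/Δms bug, which collapsed to ~0 immediately.)
function recencyScore(createdAt: Date, now: Date, halflifeDays: number): number {
  const ageDays = (now.getTime() - createdAt.getTime()) / 86_400_000;
  return Math.pow(0.5, ageDays / halflifeDays);
}

// Combined score using the default rankingWeights from the config section.
function rank(similarity: number, importance: number, recency: number): number {
  return 0.6 * similarity + 0.3 * importance + 0.1 * recency;
}
```

A memory saved 30 days ago with the default recencyHalflifeDays of 30 contributes a recency term of 0.5.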
Changed
- The memories table gained four additive columns: embedding BLOB, embedding_model TEXT, importance REAL,
  last_accessed_at TEXT. The migration follows the existing conditional pragma('table_info') + ALTER TABLE
  ADD COLUMN pattern and is idempotent: re-running initSchema on a legacy DB is a no-op.
- saveMemory(key, content, category, sourceSessionId?) now also accepts an options object
  { sourceSessionId?, embedding?, embeddingModel?, importance? }. Both call shapes work. Re-saving text
  without passing an embedding preserves the stored vector via SQL COALESCE.
- src/agent/context.ts replaces its 30-line inline memories block with a single call to
  prepareMemoryContext. With the flag off, the emitted system prompt is unchanged.
- README.md and README.pt-BR.md document the new settings block, the rollout steps, and the degradation
  guarantee.
Tests
- 251 tests passing (164 → 251, 87 new cases).
- No regressions in the 164 pre-existing tests, including the full ACP orchestration suite.
Migration notes
Upgrading from 0.4.0 is a no-op:
- Start juju normally; the schema migration runs automatically on the first DB open.
- The semantic pipeline stays off until you flip memory.semantic.enabled.
To turn semantic memory on:
1. Pull the embedding model: ollama pull nomic-embed-text
2. Enable the flag in ~/.juliacode/settings.json: "memory": { "semantic": { "enabled": true } }
3. Backfill embeddings for existing memories: juju memory backfill
Set memory.semantic.autoBackfillOnStart: true if you want future boots to resume the backfill
automatically.
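Assembling the defaults listed in the Added section, a fully spelled-out settings block would plausibly look like this (only `enabled` needs to change; the rest are the documented defaults):

```json
{
  "memory": {
    "semantic": {
      "enabled": true,
      "provider": "ollama",
      "embeddingModel": "nomic-embed-text",
      "rankingWeights": { "similarity": 0.6, "importance": 0.3, "recency": 0.1 },
      "recencyHalflifeDays": 30,
      "maxMemories": 5,
      "availabilityCheckTtlMs": 30000,
      "autoBackfillOnStart": false
    }
  }
}
```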
Commits
b5555f3 feat(memory): migrate memories table with embedding columns (additive)
f6e8206 feat(memory): add embedding provider abstraction with capability detection
1130dc4 feat(memory): add semantic retrieval with graceful degradation
eb7ead5 feat(memory): add layered memory gating without char-length bias
8eaacff feat(memory): integrate memory pipeline into context builder with legacy fallback
3e7c962 feat(memory): evolve memory tool with async embedding and semantic recall flag
67e46fd feat(memory): add resumable embedding backfill job
b578c9f docs(memory): document semantic memory flag and backfill workflow
Highlights
Deterministic orchestration with the LLM as planner/tool; no behavior change when acpEnabled=false.
New
- Structured JSONL observability (~/.juliacode/logs/events.jsonl) tracking planner decisions, subagent
  lifecycle, tool calls, retries, and loop ends.
- /stats command summarizing planner hit-rate, orchestration runs, subagents, loop iterations, and top
  tools.
- Shared-context snapshot (read-only, ~500 token cap) propagated from the parent session to spawned
  subagents.
- Per-model concurrency sub-limits (acp.modelLimits) with a per-model FIFO queue on top of the existing
  global cap.
- Deterministic retry hints: on a tool failure matching "not found"/ENOENT, Julia probes the project with
  glob("**/<basename>") and appends candidates to the error the LLM sees.
- LLM decomposition cache: per-session, 5-min TTL, 128-entry cap; skips redundant planner round-trips on
  near-identical prompts.
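A cache with those two limits (TTL plus entry cap) can be sketched as below. The class name, key type, and eviction policy (oldest insertion first) are assumptions for illustration; only the 5-minute TTL and 128-entry cap come from the release notes:

```typescript
type Entry<T> = { value: T; expiresAt: number };

class DecompositionCache<T> {
  private map = new Map<string, Entry<T>>();
  constructor(private ttlMs = 5 * 60_000, private maxEntries = 128) {}

  // Returns the cached plan, or undefined if absent or past its TTL.
  get(key: string, now = Date.now()): T | undefined {
    const e = this.map.get(key);
    if (!e) return undefined;
    if (e.expiresAt <= now) {
      this.map.delete(key); // lazily expire stale entries
      return undefined;
    }
    return e.value;
  }

  set(key: string, value: T, now = Date.now()): void {
    if (this.map.size >= this.maxEntries && !this.map.has(key)) {
      // Evict the oldest insertion to stay under the entry cap.
      const oldest = this.map.keys().next().value;
      if (oldest !== undefined) this.map.delete(oldest);
    }
    this.map.set(key, { value, expiresAt: now + this.ttlMs });
  }
}
```

A cache hit skips the LLM decomposer entirely, which is what the "planner hit-rate" in /stats would measure.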
Changed
- The orchestration planner is now gated by a deterministic complexity heuristic (regex-based, EN/PT) that
  short-circuits before invoking the LLM decomposer.
- Single-subtask verdicts skip the subagent spawn and fall through to the parent loop.
Internal
- The subagent manager now exports its class for testable isolation.
- PlannerVia telemetry adds a 'cache' variant.
TUI layout fixes
This release fixes two long-standing visual glitches in the Julia Code TUI
that made the startup screen look broken on typical terminal widths.
What's fixed
No more ghost banner frames at startup. The ── Julia Code vX.Y.Z
header used to appear stamped multiple times at the top of the screen
before the welcome box settled in. This was caused by raw
process.stderr.write calls in the MCP transport leaking child-process
stderr into Ink's render stream, which broke Ink's line tracking and left
stale frames visible. All MCP, config, and skills diagnostics are now
routed through a dedicated file logger, so nothing bypasses Ink anymore.
Welcome box is now side-by-side on normal terminals. The two-column
layout (logo + model info on the left, tips + recent activity on the
right) previously only kicked in at 120+ columns, so most users saw
everything stacked vertically. The breakpoint has been lowered to 90
columns, the panels now use flex grow instead of fixed 50% widths, and
long cwd paths are collapsed to ~/.../last-two-segments so they fit
cleanly inside the left column.
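The release names a formatCwd() helper for that path collapsing; a minimal sketch of the described behavior (this exact logic is an assumption, not the shipped implementation) might be:

```typescript
// Collapse a long cwd to ~/.../last-two-segments so it fits in the
// left column of the welcome box.
function formatCwd(cwd: string, home: string): string {
  // Replace the home-directory prefix with ~ first.
  const tilded = cwd.startsWith(home) ? "~" + cwd.slice(home.length) : cwd;
  const parts = tilded.split("/").filter(Boolean);
  if (parts.length <= 3) return tilded; // already short enough
  // Keep only the last two path segments.
  const prefix = parts[0] === "~" ? "~" : "";
  return `${prefix}/.../${parts.slice(-2).join("/")}`;
}
```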
Where MCP logs live now
MCP server output (including lines like Context7 Documentation MCP Server v2.1.8 running on stdio and connection success/failure messages)
is now written to:
~/.juliacode/logs/mcp.log
You can tail it with tail -f ~/.juliacode/logs/mcp.log when debugging
MCP servers. In-TUI observability is unchanged — /mcp list still shows
the connection state and tool count for each configured server.
Changed files
- src/mcp/logger.ts (new): file-based logger for MCP and related diagnostics.
- src/mcp/transport.ts, src/mcp/manager.ts, src/config/settings-io.ts, src/skills/loader.ts: stderr writes
  replaced with logMcp().
- src/tui/responsive.ts: wide breakpoint 120 → 90, medium 80 → 70.
- src/tui/components/StatusBar.tsx: flex-based two-column layout, tightened indentation, new formatCwd()
  helper.
Upgrade notes
No configuration changes required. Existing ~/.juliacode/settings.json
is untouched. The new log directory is created lazily on first MCP
activity.
Critical data loss fix in settings.json
Bug fixes
- Prevents loss of mcpServers in ~/.juliacode/settings.json. Manually configured fields (especially
  mcpServers and trustedDirectories) could be silently deleted in several situations: when Zod stripped
  unknown fields, when an MCP server with HTTP transport (without command) crashed the schema parsing, or
  when a juju subprocess was recursively spawned via the exec tool and rewrote the file in a child without
  a TTY.
- Reading and writing settings.json now operates on pure JSON (src/config/settings-io.ts). If parsing
  fails, the operation is aborted without overwriting the file, so it can never again "turn into {} out of
  nowhere".
- initMcpServers skips servers without command (HTTP/SSE transport is not yet supported) instead of
  letting the Zod schema fail and bring down all MCP servers. Valid stdio servers continue connecting
  normally.
- Guard in the exec tool against recursive spawning of juju. Prevents a hallucinating model from executing
  juju --help or similar and triggering Julia's own recursive bootstrap in a child without a TTY, which
  rewrote settings.json before crashing.
- Guard in the write/edit tools against direct editing of ~/.juliacode/settings.json. The file is managed
  by Julia via slash commands (/mcp, /model, /trust); direct edits by the model are blocked to prevent
  unintentional omission of fields.
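The abort-on-parse-fail contract can be illustrated with a small sketch. The function name is hypothetical, not the settings-io API; the point is that a parse failure returns a sentinel instead of an empty object that could be written back over user data:

```typescript
// Parse settings text as raw JSON. On corrupt input, return null so the
// caller aborts the write path instead of persisting {} over real config.
function parseSettings(text: string): Record<string, unknown> | null {
  try {
    return JSON.parse(text) as Record<string, unknown>;
  } catch {
    return null; // abort signal: never rewrite the file from this state
  }
}
```

Callers that previously defaulted to `{}` on error are exactly how mcpServers could vanish; returning null forces them to handle the failure explicitly.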
Compatibility
No API changes. Existing settings.json files remain valid; fields unknown to the schema are now preserved
via .passthrough().
For a shorter version (for the inline GitHub changelog):
Fixed
- The mcpServers and trustedDirectories fields were being deleted from ~/.juliacode/settings.json in
  multiple scenarios (silent Zod parse, MCP HTTP server without command, recursive juju subprocess).
  settings.json is now managed via raw JSON with abort-on-parse-fail, invalid MCP servers are skipped with
  a warning, and the exec/write/edit tools have specific guards against these situations.
Orchestration progress in the Input bar
Highlights
The Input footer now surfaces live progress for parallel subagent orchestration (ACP), so you can track what your
agents are doing without leaving the prompt.
What's new
- ACP progress indicator: while subagents are running, the Input bar shows
  ACP [completed/total done, N running, N queued, N failed] with a spinner.
- No more duplicate spinners: the generic "thinking" indicator is hidden while orchestration is active,
  keeping the footer clean.
Under the hood
- Input gained an optional orchestrationProgress prop (total, completed, failed, running, queued).
- App now forwards the existing orchestrationProgress session state down to Input: no new orchestration
  logic, just visibility.
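Given the prop shape and the indicator format described above, the footer label can be sketched as a pure function (the function name is illustrative; the exact formatting in the TUI may differ):

```typescript
type OrchestrationProgress = {
  total: number;
  completed: number;
  failed: number;
  running: number;
  queued: number;
};

// Render the ACP footer label from the progress prop.
function acpLabel(p: OrchestrationProgress): string {
  return `ACP [${p.completed}/${p.total} done, ${p.running} running, ` +
    `${p.queued} queued, ${p.failed} failed]`;
}
```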
Files touched
- src/tui/app.tsx
- src/tui/components/Input.tsx
Upgrade notes
No breaking changes. Update and restart the TUI to see the new indicator.
Worktree Isolation for Parallel Subagents
Problem
When multiple subagents ran in parallel (via ACP orchestration), they all shared the same filesystem. If
two subagents edited the same file simultaneously, the last write would silently overwrite the other: no
warning, no conflict detection, no recovery.
Solution
Each subagent now operates in its own git worktree: a lightweight, isolated copy of the repository that
shares the .git history. When the subagent finishes, its changes are merged back to the main branch with
full conflict detection.
What changed
New: Filesystem isolation via git worktree
- Each subagent gets a dedicated worktree at /tmp/juju-wt-<id>/
- All tool calls (read, write, edit, exec, glob, grep) automatically resolve paths to the subagent's
  worktree via AsyncLocalStorage
- No global state mutation: the main agent and other subagents are unaffected
New: Safe merge with conflict detection
- When a subagent completes, its changes are committed and merged back with --no-ff
- If two subagents modify the same file and conflict, the merge is aborted and the branch is preserved for
  manual resolution
- An async Mutex serializes merges to prevent git corruption from concurrent merge attempts
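An async mutex of the kind described can be built by chaining promises; this is a generic sketch, and the real src/agent/mutex.ts may differ:

```typescript
// Serializes async critical sections: each runExclusive() call waits for
// the previous one to settle before its callback runs.
class Mutex {
  private tail: Promise<void> = Promise.resolve();

  runExclusive<T>(fn: () => Promise<T>): Promise<T> {
    const result = this.tail.then(fn);
    // Keep the chain alive even if fn rejects, so one failed merge
    // does not block all subsequent merges.
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }
}
```

Wrapping each merge in `mutex.runExclusive(...)` guarantees git never sees two concurrent merge attempts, which is the corruption scenario the release notes call out.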
New: Automatic cleanup
- Orphaned worktrees from crashed sessions are cleaned up on startup
- Signal handlers (SIGINT, SIGTERM) ensure cleanup on unexpected exit
- Temporary branches (subagent/*) are pruned automatically
New: Configuration
- acp.worktreeIsolation (default: true): disable to revert to shared-filesystem behavior
- Graceful fallback: if the project is not a git repo or worktree creation fails, subagents run in the
  shared directory as before
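The AsyncLocalStorage path-resolution idea can be sketched as follows. The store shape and helper names are assumptions; the point is that tool code needs no changes, because the ambient store decides whether a path lands in the shared root or in a subagent's worktree:

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import * as path from "node:path";

// Ambient context: set while a subagent's task runs, absent otherwise.
const worktreeStore = new AsyncLocalStorage<{ root: string }>();

// Tool-side path resolution: falls back to the shared project root when
// no worktree context is active (main agent, or isolation disabled).
function resolveToolPath(projectRoot: string, relPath: string): string {
  const ctx = worktreeStore.getStore();
  return path.join(ctx?.root ?? projectRoot, relPath);
}

// Subagent runner: everything awaited inside fn sees the worktree root,
// without mutating any global state.
function runInWorktree<T>(root: string, fn: () => T): T {
  return worktreeStore.run({ root }, fn);
}
```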
Files affected
| Type | Files |
|---|---|
| New | src/agent/mutex.ts, src/agent/worktree.ts |
| Core | src/agent/subagent.ts, src/tools/registry.ts, src/security/paths.ts |
| Tools | exec.ts, glob.ts, read.ts, write.ts, edit.ts, grep.ts, fetch.ts, memory.ts, sessions.ts, subagent.ts |
| Config | src/config/types.ts, src/config/index.ts |
| Entry | juju.ts |
How to test
# Prompt that triggers parallel orchestration with no conflicts:
"Create 3 independent TypeScript utilities: src/utils/slugify.ts, src/utils/debounce.ts, src/utils/retry.ts"
# Prompt that triggers a merge conflict (same file, two subagents):
"Make 2 changes to src/utils/slugify.ts in parallel:
1. Add removeAccents() 2. Add truncateSlug()"
# Verify after:
git log --oneline -10 # should show merge commits
git worktree list # should show only main worktree
git branch --list "subagent/*" # should be empty
Breaking changes
None. The ToolDefinition.execute() signature now accepts an optional second parameter
context?: ToolContext, which is backwards-compatible. Existing tools and MCP integrations continue to work
without modification.
Update package-lock.json
Context Persistence Across Model Switches
Fixes
- Agent no longer stops responding after tool calls: fixed a bug where the error handler returned without
  signaling completion, leaving the UI frozen in the thinking state. Added retry logic for errors and
  empty responses after tool execution.
- Conversation context is now preserved when switching models: switching via /model no longer loses the
  conversation history. Each message now tracks which model generated it, and the new model receives a
  context injection explaining it's continuing an existing session.
- Memories are checked before executing tools: the memory prompt is now directive, instructing the model
  to answer from saved memories instead of making unnecessary tool calls for information already known.
Technical Details
- New model column in the messages table tracks the source model per message
- Model-switch context is injected as a system message when the active model differs from the previous one
- Retry mechanism (1 attempt) for empty or errored responses after tool results
- The done event is always emitted on error paths, preventing UI deadlocks
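The single-attempt retry can be sketched as a small wrapper. The function name and return contract are illustrative, not the shipped code; the invariant from the release is that the caller always reaches a terminal state so the done event fires:

```typescript
// Call the model once, and retry exactly once on an error or an empty
// response after tool results; the caller then always emits `done`.
async function completeWithRetry(call: () => Promise<string>): Promise<string> {
  for (let attempt = 0; attempt < 2; attempt++) {
    try {
      const text = await call();
      if (text.trim().length > 0) return text; // non-empty response: finished
    } catch {
      // swallow and fall through to the single retry
    }
  }
  return ""; // both attempts failed; caller still signals completion
}
```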