Multi-stage AI translation pipeline for scholars
A desktop application that chains multiple LLM passes — draft, refinement, audit — to produce publication-quality translations. Built for philologists, classicists, and translators who need precision over speed.
Glossa runs your source text through a configurable pipeline of LLM stages, each with its own prompt, model, and provider. An AI judge then audits the final translation against your glossary and instructions, scoring it on accuracy, fluency, glossary adherence, and grammar.
```
Source text
     │
     ├─► Stage 1: Initial Pass (Gemini / Ollama / ...)
     │        ↓
     ├─► Stage 2: Refinement (OpenAI / Anthropic / ...)
     │        ↓
     ├─► Stage N: (add as many as you need)
     │        ↓
     └─► AI Judge: audit score + issues + suggested fixes
```
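Conceptually, each stage is a small config object and the pipeline feeds one stage's output into the next. The sketch below is a hypothetical model of that flow — the field names, model ids, and `runPipeline` helper are illustrative, not Glossa's actual store schema:

```typescript
// Hypothetical shape of a pipeline stage; fields are illustrative only.
interface PipelineStage {
  name: string;        // e.g. "Initial Pass"
  provider: "gemini" | "openai" | "anthropic" | "deepseek" | "ollama";
  model: string;       // provider-specific model id
  prompt: string;      // system prompt for this stage
}

// Each stage receives the previous stage's output as its input text.
function runPipeline(
  source: string,
  stages: PipelineStage[],
  call: (stage: PipelineStage, input: string) => string
): string {
  return stages.reduce((text, stage) => call(stage, text), source);
}

const stages: PipelineStage[] = [
  { name: "Initial Pass", provider: "gemini", model: "gemini-1.5-pro", prompt: "Translate literally." },
  { name: "Refinement", provider: "openai", model: "gpt-4o", prompt: "Refine for fluency." },
];
```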
Translations stream token-by-token in real time. You can edit the candidate translation manually before auditing, re-run only the audit, and iterate until the quality meets your standards.
| Category | Details |
|---|---|
| 5 LLM providers | Gemini, OpenAI, Anthropic, DeepSeek, Ollama (local models) |
| Streaming | Real-time token display during translation |
| Multi-stage pipeline | Add/remove/reorder stages, each with its own model and prompt |
| AI Judge | LLM-as-a-judge audit with score (0–100), categorized issues, and fixes |
| Glossary | Keyword registry enforced across all stages and the audit |
| Auto-segmentation | Splits source text by paragraphs for chunk-by-chunk processing |
| Project management | Save/load projects with full pipeline config and translations |
| File I/O | Import .txt/.md, export as plain text or bilingual Markdown |
| Secure keys | API keys stored in OS keychain (GNOME Keyring / macOS Keychain / Windows Credential Manager) |
| i18n | English and Italian interface |
| Desktop native | Tauri v2 — lightweight binaries, no browser runtime |
- Node.js ≥ 18
- Rust ≥ 1.77
- System libraries for Tauri (Linux only):
```sh
sudo apt install libwebkit2gtk-4.1-dev libgtk-3-dev libayatana-appindicator3-dev librsvg2-dev libsecret-1-dev
```
```sh
git clone https://github.com/nikazzio/glossa.git
cd glossa
npm install
npm run tauri:dev    # development mode with hot reload
```

To build production bundles:

```sh
npm run tauri:build
```

Outputs `.deb`, `.rpm`, and `.AppImage` on Linux; `.dmg` on macOS; `.msi` on Windows.
Bundles are in src-tauri/target/release/bundle/.
Open Settings (⚙️ icon) and paste your API keys. They are stored in your operating system's keychain — never in plain text, never sent anywhere except to the provider's API.
| Provider | Get a key |
|---|---|
| Gemini | ai.google.dev |
| OpenAI | platform.openai.com |
| Anthropic | console.anthropic.com |
| DeepSeek | platform.deepseek.com |
For fully offline, private translation with models running on your own hardware:
- Install Ollama: ollama.com/download
- Pull a model: `ollama pull llama3.2` (or `mistral`, `gemma2`, etc.)
- Start the server: `ollama serve`
- In Glossa Settings, the Ollama section will show connected status and available models.
No API key is needed. All data stays on your machine.
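Ollama's local server listens on port 11434 and lists installed models via `GET /api/tags`, which returns JSON shaped like `{"models":[{"name":"llama3.2:latest", ...}]}`. A minimal sketch of parsing that response — the endpoint and response shape are from Ollama's public REST API; the helper name is ours:

```typescript
// Response shape per Ollama's REST API: { "models": [{ "name": "...", ... }] }
interface OllamaTagsResponse {
  models: { name: string }[];
}

// Extract model names from the JSON body of GET /api/tags.
function parseOllamaModels(body: string): string[] {
  const parsed = JSON.parse(body) as OllamaTagsResponse;
  return (parsed.models ?? []).map((m) => m.name);
}

// In the app, the body would come from something like:
//   const res = await fetch("http://localhost:11434/api/tags");
const sample = '{"models":[{"name":"llama3.2:latest"},{"name":"mistral:latest"}]}';
```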
In the left panel (Global Setup):
- Choose source and target languages
- Configure pipeline stages:
- Each stage has its own provider, model, and prompt
- Stage 1 typically does a literal draft; Stage 2 refines for fluency
- Add more stages for specialized tasks (terminology, register, etc.)
- Set up the Audit Guard with a judge model and audit instructions
- Add terms to the Keyword Registry (glossary) to enforce consistent terminology
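One way a keyword registry can be enforced across stages is by appending the glossary pairs to each stage's prompt. The sketch below is a hypothetical illustration of that idea, not Glossa's actual prompt format:

```typescript
// Append glossary term pairs to a stage prompt so every LLM pass
// sees the same mandatory terminology. Wording is illustrative.
function buildStagePrompt(
  basePrompt: string,
  glossary: Record<string, string> // source term → required translation
): string {
  const entries = Object.entries(glossary);
  if (entries.length === 0) return basePrompt;
  const lines = entries.map(([src, tgt]) => `- "${src}" must be rendered as "${tgt}"`);
  return `${basePrompt}\n\nGlossary (mandatory):\n${lines.join("\n")}`;
}
```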
In the center panel (Production Stream):
- Paste or import your source text
- Click "Stage Content to Stream" to segment the text
- Click "Begin Pipeline" — tokens stream in real time for each stage
- Review the candidate translation, edit it manually if needed
- The AI Judge automatically scores the result
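The "Stage Content to Stream" step's paragraph-based auto-segmentation can be sketched as splitting on blank lines; Glossa's actual segmenter may differ:

```typescript
// Split source text into chunks on blank lines, trimming whitespace
// and dropping empty segments. A sketch of paragraph segmentation.
function segmentByParagraphs(text: string): string[] {
  return text
    .split(/\n\s*\n/)       // one or more blank lines ends a paragraph
    .map((p) => p.trim())
    .filter((p) => p.length > 0);
}
```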
In the right panel (Audit Logs):
- Composite score (0–100) across all chunks
- Issues categorized by type (glossary, fluency, accuracy, grammar) and severity
- Suggested fixes for each issue
- Click "Re-Evaluate Drafts" after manual edits to get an updated score
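The composite score aggregates per-chunk judge scores into one 0–100 number. The aggregation below (a plain mean, clamped and rounded) is an assumption for illustration; the real weighting may differ:

```typescript
// Aggregate per-chunk audit scores into a single 0–100 composite.
// A simple mean is assumed here, not necessarily Glossa's weighting.
function compositeScore(chunkScores: number[]): number {
  if (chunkScores.length === 0) return 0;
  const mean = chunkScores.reduce((a, b) => a + b, 0) / chunkScores.length;
  return Math.min(100, Math.max(0, Math.round(mean)));
}
```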
- 📂 Projects: Save your entire pipeline config + translations. Reload anytime.
- ⬆ Import: Load `.txt` or `.md` files via native OS dialog
- ⬇ Export: Save as plain `.txt` (translation only) or bilingual `.md` (source + translation + audit)
- 💾 Save: Persist the current project state to SQLite
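The bilingual Markdown export pairs each source chunk with its translation. A hypothetical sketch of that layout — the segment headings and blockquote style are illustrative, not Glossa's exact output format:

```typescript
// Render chunk pairs as bilingual Markdown: source quoted, then
// translation, one section per segment. Layout is illustrative.
interface ChunkPair {
  source: string;
  translation: string;
}

function toBilingualMarkdown(pairs: ChunkPair[]): string {
  return pairs
    .map((p, i) => `## Segment ${i + 1}\n\n> ${p.source}\n\n${p.translation}`)
    .join("\n\n");
}
```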
```
┌──────────────────────────────────────────┐
│ Frontend (React 19 + Zustand + Vite)     │
│ ├── PipelineConfig (left panel)          │
│ ├── ProductionStream (center panel)      │
│ ├── AuditPanel (right panel)             │
│ ├── SettingsModal (API keys, Ollama)     │
│ └── ProjectPanel (CRUD projects)         │
├──────────────────────────────────────────┤
│ Tauri IPC (invoke / events)              │
├──────────────────────────────────────────┤
│ Rust Backend                             │
│ ├── LLM calls (reqwest + SSE stream)     │
│ ├── API keys (OS keyring)                │
│ └── Plugins (SQLite, FS, Dialog)         │
└──────────────────────────────────────────┘
```
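Token streaming over Tauri IPC amounts to the Rust backend emitting token events that the frontend appends to a per-stage buffer. The payload shape (`{ stageId, token }`) and wiring below are assumptions for illustration; in the app the events would arrive via `listen()` from `@tauri-apps/api/event`:

```typescript
// Accumulate streamed tokens into per-stage buffers, immutably,
// as a Zustand-style reducer might. Payload shape is hypothetical.
type StreamBuffers = Record<string, string>;

function appendToken(
  buffers: StreamBuffers,
  payload: { stageId: string; token: string }
): StreamBuffers {
  return {
    ...buffers,
    [payload.stageId]: (buffers[payload.stageId] ?? "") + payload.token,
  };
}
```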
| Layer | Tech |
|---|---|
| Desktop shell | Tauri v2 (webview + Rust sidecar) |
| Frontend | React 19, TypeScript, Tailwind CSS, Zustand |
| LLM integration | Rust reqwest with SSE streaming |
| Storage | SQLite via tauri-plugin-sql |
| API key security | OS keychain via keyring crate |
| i18n | react-i18next with bundled JSON |
```
glossa/
├── src/                  # React frontend
│   ├── components/       # UI components (pipeline, audit, settings, projects)
│   ├── hooks/            # usePipeline (execution logic)
│   ├── services/         # llmService, projectService, fileService, dbService
│   ├── stores/           # Zustand stores (pipeline, project)
│   ├── i18n/             # en.json, it.json
│   └── utils/            # retry logic, helpers
├── src-tauri/            # Rust backend
│   ├── src/
│   │   ├── lib.rs        # Tauri app entry, plugin registration
│   │   └── llm.rs        # All LLM providers, streaming, Ollama, keychain
│   ├── Cargo.toml
│   └── tauri.conf.json
└── package.json
```
See CONTRIBUTING.md for development setup, commit conventions, and the release process.
MIT — see LICENSE for details.