Local voice I/O for Claude Code and LLM workflows. Built for eyes and hands that need a break.
readback is a small macOS CLI that gives any LLM tool a voice. It uses Kokoro TTS to read responses aloud through a high-quality local voice model, and faster-whisper to turn your spoken prompts back into text. It ships with a Claude Code Stop hook that speaks every assistant reply automatically, with per-session pinning so only the chats you choose get read.
Everything runs locally. No API keys. No cloud calls. No subscription. No audio leaves your machine.
If you spend hours a day reading Claude Code responses, long ChatGPT outputs, or any other long-form LLM text — and you want to give your eyes, wrists, or attention a rest — readback gives you two things:
- **Automatic voice output for Claude Code.** When Claude Code finishes a response, `readback` speaks it aloud through Kokoro, one of the best open-source text-to-speech models available. A free, local alternative to ElevenLabs, OpenAI TTS, and macOS's built-in voices, with no per-token billing and no internet dependency.
- **Voice input via Whisper.** A single `readback listen` command records your mic, transcribes locally with `faster-whisper`, and either prints the text, copies it to your clipboard, or pastes it directly into whatever app has focus.
Think of it as a tiny voice layer for LLM CLIs: Claude Code today, Cursor and Codex next.
- **People with eye strain, low vision, or visual fatigue.** If reading a 2,000-word LLM response is painful but you can still use your hands, readback lets you listen while you look away from the screen.
- **People with RSI or typing limits.** If typing a long prompt hurts, `readback listen --paste` lets you speak it and have it appear in the LLM's input box.
- **Developers who want hands-free LLM workflows.** Long code reviews, architectural discussions, debugging sessions: it's often easier to listen while walking around or glancing at code than to sit staring at a wall of text.
- **Anyone who's tired of tab-switching.** Claude Code replies, readback speaks it, you keep your eyes on your editor or your hands off the keyboard. The loop just works.
This project does not try to be Superwhisper or a macOS accessibility suite. It's a small, focused CLI you can install in five minutes and audit in an hour.
- Kokoro-82M for TTS — a genuinely excellent open-source voice model that runs locally via ONNX. Far better than the macOS default voices.
- faster-whisper for STT — fast, accurate, local speech recognition.
- A Claude Code Stop hook that reads responses aloud automatically, with per-session pinning so only the conversations you want get read.
- A single `readback` CLI with all the commands you'll use daily.
All of it is shell + a couple of small Python scripts. No daemons, no background services, no database.
Prerequisites: macOS, Homebrew, `uv` (`brew install uv`), `jq` (`brew install jq`).
```sh
git clone https://github.com/shivaam/readback.git
cd readback
./install.sh
```

The installer will:

- Install Homebrew deps: `brew install espeak-ng portaudio`
- Create a Python venv inside the repo
- Download ~340MB of Kokoro model files
- Install a small shim at `~/.local/bin/readback`
- Create default config at `~/.config/readback/config`
- Run a smoke test (you'll hear "Readback is ready")
It does not automatically wire up the Claude Code hook — that's explicit, see below.
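For reference, `readback install-hook` adds a Stop hook entry to `~/.claude/settings.json`. The exact entry readback writes isn't shown in this README, but a Claude Code Stop hook registration generally takes this shape (the command string below is illustrative, not readback's actual hook command):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "readback <hook-subcommand>" }
        ]
      }
    ]
  }
}
```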
```sh
# Speak some text
readback say "hello world"

# Record a voice note and print the transcript
readback listen

# Record and paste into whatever app has focus
readback listen --paste
```

```sh
# One-time: register the Stop hook in ~/.claude/settings.json
readback install-hook

# Inside an active Claude Code chat, pin that session:
readback pin

# That's it. From now on, every response in that session is spoken aloud.
```

To pin another session, just run `readback pin` from inside it. To stop auto-read for a session, `readback unpin`. To see what's currently pinned, `readback list`.
To interrupt playback:

```sh
readback stop
```

Or just send a new message: readback auto-kills prior playback whenever a new response arrives.
Config lives at `~/.config/readback/config` as a plain env file:

```sh
VOICE=af_heart
SPEED=1.0
LANG=en-us
WHISPER_MODEL=base.en
WHISPER_DEVICE=cpu
WHISPER_COMPUTE=int8
```
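Because the config is a flat KEY=VALUE file, any script can read it without going through readback. A minimal sketch (the parsing rules here are an assumption; readback itself may treat comments or quoting differently):

```python
from pathlib import Path

def read_config(path):
    """Parse a flat KEY=VALUE env file into a dict, skipping blanks and comments."""
    config = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        config[key.strip()] = value.strip()
    return config

# Demo with the default values shown above:
sample = "VOICE=af_heart\nSPEED=1.0\nWHISPER_MODEL=base.en\n"
Path("/tmp/readback-config-demo").write_text(sample)
print(read_config("/tmp/readback-config-demo"))
# → {'VOICE': 'af_heart', 'SPEED': '1.0', 'WHISPER_MODEL': 'base.en'}
```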
Edit directly, or use the CLI:
```sh
readback config set VOICE bf_emma           # switch to a British female voice
readback config set SPEED 1.2               # speed up playback
readback config set WHISPER_MODEL small.en  # better STT accuracy
readback voices                             # list all available voices
readback models                             # list all STT models
```

| Command | What it does |
|---|---|
| `readback say [text]` | Speak text (or stdin) through Kokoro |
| `readback stop` | Kill in-progress playback |
| `readback listen [--paste\|--copy]` | Record + transcribe |
| `readback voices` | List TTS voices |
| `readback models` | List STT models |
| `readback pin [session]` | Pin a Claude Code session for auto-read |
| `readback unpin [session\|--all]` | Unpin |
| `readback list` | List pinned sessions |
| `readback config get/set/show` | Manage config |
| `readback install-hook` | Register Claude Code Stop hook |
| `readback uninstall-hook` | Remove Claude Code Stop hook |
| `readback update-hook` | Refresh the installed hook from the repo |
| `readback doctor` | Sanity check all pieces |
| `readback version` | Print version |
**I hear the wrong response, or a stale one.**
The hook walks the transcript backwards to find the current turn's assistant text. If Claude Code's transcript writer is slow, it retries for up to 2 seconds. Check `/tmp/readback.log`: you should see `extracted on attempt N` for every response.
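The backwards walk is roughly this shape. A sketch only: the JSONL transcript layout assumed here (one object per line, with `type` and `message.content` fields) is an assumption about Claude Code's transcript files, not code taken from readback:

```python
import json

def last_assistant_text(transcript_path):
    """Walk a JSONL transcript backwards; return the newest assistant text, or None."""
    with open(transcript_path) as f:
        lines = f.readlines()
    for line in reversed(lines):
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip partially written or blank lines
        if entry.get("type") != "assistant":
            continue
        # Assumed shape: message.content is a list of {"type": "text", "text": ...} parts.
        parts = entry.get("message", {}).get("content", [])
        texts = [p.get("text", "") for p in parts
                 if isinstance(p, dict) and p.get("type") == "text"]
        if texts:
            return "\n".join(texts)
    return None

# Toy transcript: one user turn, then one assistant turn.
sample = [
    {"type": "user", "message": {"content": [{"type": "text", "text": "hi"}]}},
    {"type": "assistant", "message": {"content": [{"type": "text", "text": "Hello!"}]}},
]
with open("/tmp/readback-transcript-demo.jsonl", "w") as f:
    for entry in sample:
        f.write(json.dumps(entry) + "\n")

print(last_assistant_text("/tmp/readback-transcript-demo.jsonl"))
# → Hello!
```

Walking from the end means a slow transcript writer simply yields no assistant entry yet, which is why the real hook retries instead of failing.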
**Nothing happens when Claude Code responds.**
Run `readback doctor`. The most likely cause: the session isn't pinned; run `readback pin` inside the active chat. Second most likely: the hook isn't registered; run `readback install-hook`.

**Mic permission denied.**
macOS asks for microphone permission the first time you run `readback listen`. Approve it in System Settings → Privacy & Security → Microphone.

**`readback listen --paste` doesn't paste.**
macOS asks for Accessibility permission the first time. Approve it in System Settings → Privacy & Security → Accessibility.

**The voice sounds robotic.**
You're probably hearing a macOS built-in voice. readback uses Kokoro, not macOS. If Kokoro isn't installed properly, `readback doctor` will tell you.
- `docs/claude-code.md`: Claude Code integration in depth
- `docs/voices.md`: voice tuning for long-form technical content
- `docs/accessibility.md`: the "why" story
- `docs/roadmap.md`: what's next
```sh
readback uninstall-hook      # remove the Claude Code hook
rm ~/.local/bin/readback     # remove the shim
rm -rf ~/.config/readback    # remove config + session pins
rm -rf ~/workspace/readback  # remove the repo
```

Brew deps (`espeak-ng`, `portaudio`) are kept; they're useful for other tools too.
| | readback | ElevenLabs | OpenAI TTS | macOS Speak Selection | Superwhisper |
|---|---|---|---|---|---|
| Runs locally | ✓ | ✗ | ✗ | ✓ | ✓ |
| Zero cost | ✓ | from $5/mo | per-char billing | ✓ | $8/mo |
| Voice quality | Kokoro (top open-source) | best-in-class | very good | fair | n/a |
| Claude Code auto-read | ✓ | ✗ | ✗ | ✗ | ✗ |
| Voice input | ✓ (Whisper) | ✗ | ✗ | ✗ | ✓ |
| Open source | ✓ (MIT) | ✗ | ✗ | ✗ | ✗ |
| Accessibility-first framing | ✓ | ✗ | ✗ | ✓ | ✗ |
readback is not trying to replace ElevenLabs for voiceover work or Superwhisper for global push-to-talk dictation. It fills a specific gap: voice I/O for the LLM-CLI workflow, running entirely on your machine.
MIT. See LICENSE.
For people searching: Claude Code text-to-speech, Claude Code voice response, Claude Code TTS plugin, Kokoro TTS Claude Code, local Whisper speech-to-text CLI, ElevenLabs alternative open source, free local TTS macOS, accessibility tool for LLM, screen reader for AI responses, hands-free Claude Code, voice interface for Claude, faster-whisper CLI, text-to-speech for developers, offline TTS no API key, voice I/O for LLM workflows.