Stop losing track of the important stuff you screenshot.
Afterthought watches your screenshot folder, reads them with AI, and automatically pulls out tasks, notes, and anything worth remembering. Everything stays on your computer.
You take a screenshot of something important. Maybe it's a to-do list, a recipe, a calendar invite, or just something you want to remember later. Afterthought automatically:
- Spots the new screenshot
- Sends it to OpenAI's latest cost-efficient multimodal model (gpt-5.4-nano) to read and understand it
- Extracts any text
- Figures out what category it belongs to (you define these)
- Pulls out actionable tasks if there are any
- Stores everything locally so you can search and manage it
No more forgetting about screenshots buried in your folders. No more manually uploading images to tools. It just works in the background.
I screenshot everything. Meeting notes, bugs I need to fix, random ideas, things to buy. But then I never look at them again because they just sit there in a folder. This solves that problem. The AI reads them for me and makes them actually useful.
Also: I don't trust cloud services with my screenshots. They could have anything in them. So everything in Afterthought stays on your machine.
Main interface - Your vault and tasks side by side
The left panel shows all your captured images organized by category. The right panel shows tasks that were automatically extracted. Click anything to see details.
Settings - Custom categories
Define your own categories with custom prompts. Want to track receipts differently than meeting notes? Just tell the AI what to look for.
Download the latest .msi installer from Releases
The app will watch your Screenshots folder by default (C:\Users\YourName\Pictures\Screenshots)
Download the latest .dmg from Releases
Default folder: ~/Pictures/Screenshots
Download the .deb package or .AppImage from Releases
Default folder: ~/Pictures/Screenshots
- Open Afterthought
- Complete the short onboarding flow
- Add your OpenAI API key when prompted (get one here)
- Take a screenshot and watch it move through the review prompt into the vault automatically.
Note: You can replay onboarding, change watched folders, or enable launch on login later in Settings.
The app is free. You just need an OpenAI API key.
Processing cost depends on screenshot size and output length, but the default model is gpt-5.4-nano because OpenAI describes it as its cheapest GPT-5.4-class model for simple high-volume tasks. If you want stronger reasoning for harder screenshots, you can switch to gpt-5.4-mini in Settings.
That's way cheaper than most productivity tools. And you're not locked into a subscription.
Point it at your screenshot folder and forget about it. New screenshots get detected and processed automatically.
If you want it ready right after sign-in, Settings can also register Afterthought to launch on login.
The app uses gpt-5.4-nano to:
- Read text from your screenshots (OCR)
- Categorize them based on rules you define
- Extract actionable tasks with priority levels
- Pull out deadlines and create reminders
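For those steps to work reliably, the model's reply has to come back in a machine-readable shape. Below is a hedged sketch of how such a reply could be parsed defensively; the field names (`text`, `category`, `tasks`) and the fence-stripping step are illustrative assumptions, not Afterthought's actual schema.

```typescript
// Hypothetical shape of the structured analysis requested from the model.
// Field names are illustrative, not Afterthought's real API.
interface Analysis {
  text: string;     // OCR'd text from the screenshot
  category: string; // one of the user-defined categories
  tasks: { title: string; priority: "low" | "medium" | "high"; due?: string }[];
}

// Defensive parse of the model's JSON reply: multimodal models sometimes
// wrap JSON in markdown fences, so strip those before parsing, and fall
// back to safe defaults for any missing field.
function parseAnalysis(raw: string): Analysis {
  const stripped = raw
    .replace(/^```(?:json)?\s*/m, "")
    .replace(/```\s*$/m, "");
  const data = JSON.parse(stripped);
  return {
    text: typeof data.text === "string" ? data.text : "",
    category: typeof data.category === "string" ? data.category : "Uncategorized",
    tasks: Array.isArray(data.tasks) ? data.tasks : [],
  };
}
```

The defaults mean a partially malformed reply degrades gracefully instead of crashing the processing queue.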
Out of the box, the AI will categorize screenshots for you. But you can define your own categories with custom prompts:
- Receipts: "Images containing receipts, invoices, or purchase confirmations"
- Meetings: "Screenshots of calendars, meeting invites, or agenda items"
- Code: "Programming code, terminal output, or error messages"
Each category can have its own color, and you can choose whether it should auto-generate tasks.
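To illustrate how such definitions might be folded into the instruction sent with each screenshot, here is a sketch; the `Category` fields mirror the options above (prompt, color, auto-generated tasks), but the names and the prompt wording are hypothetical.

```typescript
// Illustrative only: field names are assumptions, not the app's real schema.
interface Category {
  name: string;
  prompt: string;     // the user's description of what belongs here
  color: string;      // display color in the vault
  autoTasks: boolean; // whether tasks should be extracted for this category
}

// Turn the user's category list into one instruction block for the model.
function buildCategoryInstruction(categories: Category[]): string {
  const lines = categories.map((c) => `- ${c.name}: ${c.prompt}`);
  return [
    "Assign the screenshot to exactly one of these categories:",
    ...lines,
    'If none fit, answer "Uncategorized".',
  ].join("\n");
}
```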
Tasks extracted from screenshots show up in the Tasks panel. You can:
- Mark them as complete
- Set reminders
- Edit titles and descriptions
- Add your own manual tasks
- Filter by status (pending, done, today, overdue)
The AI figures out priority and due dates when it can.
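For a concrete picture of how those status filters can behave, here is a minimal sketch; the `Task` shape and the ISO-string date comparison are assumptions for illustration, not the app's real data model.

```typescript
// Hypothetical task shape; the real schema may differ.
interface Task {
  title: string;
  done: boolean;
  due?: string; // ISO date, e.g. "2025-01-31"
}

type Filter = "pending" | "done" | "today" | "overdue";

// "today" is passed in as an ISO date string so the function stays pure.
function filterTasks(tasks: Task[], filter: Filter, today: string): Task[] {
  switch (filter) {
    case "pending":
      return tasks.filter((t) => !t.done);
    case "done":
      return tasks.filter((t) => t.done);
    case "today":
      return tasks.filter((t) => !t.done && t.due === today);
    case "overdue":
      // ISO dates sort correctly as plain strings
      return tasks.filter((t) => !t.done && t.due !== undefined && t.due < today);
  }
}
```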
Press Ctrl+K to search across all your screenshots and tasks. It searches:
- Extracted text
- Categories
- Task titles and descriptions
- Your own notes
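Search like this is typically backed by the app's SQLite FTS5 index (mentioned in the tech section below), and FTS5's MATCH syntax treats quotes and some punctuation as operators. Here is a sketch of how raw user input could be quoted safely before querying; the helper name is hypothetical, not the app's actual query builder.

```typescript
// FTS5 treats characters like '"' and '-' as query syntax, so raw user
// input should be quoted before being passed to MATCH. Each word becomes
// a quoted string, with internal double quotes doubled per SQL rules.
function toFtsPhrase(input: string): string {
  return input
    .split(/\s+/)
    .filter((w) => w.length > 0)
    .map((w) => `"${w.replace(/"/g, '""')}"`)
    .join(" ");
}

// e.g. toFtsPhrase("fix login-bug") -> '"fix" "login-bug"'
```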
Because clicking is slow:
- Ctrl+K - Search
- N - Create new task
- J/K or ↑/↓ - Navigate tasks
- D or Space - Toggle task completion
- Enter - Open selected item
- Delete - Delete selected item
- Esc - Close modals / cancel actions
- ? - Show all shortcuts
- All screenshots and data stored locally in SQLite
- No cloud sync, no servers, no tracking
- Your OpenAI API key is stored securely using your OS keychain
- Images are only sent to OpenAI's API when processing (and only to analyze, not stored there)
- You control everything
For anyone curious about how this works:
Frontend: React + TypeScript + TailwindCSS
Backend: Rust + Tauri 2.0
Database: SQLite with FTS5 (full-text search)
AI: OpenAI multimodal API with gpt-5.4-nano by default
File watching uses Tauri's native fs-watch plugin. Processing happens in a background queue so it doesn't block the UI. The database schema is simple and stored in your app data directory.
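The background-queue idea is simple enough to sketch. This is an illustrative TypeScript version, not the app's Rust implementation: items are handled strictly one at a time, so a slow API call never blocks new screenshots from being enqueued.

```typescript
// Minimal sequential processing queue, as a sketch of the design described
// above. Names are illustrative, not the app's real API.
class ProcessingQueue<T> {
  private items: T[] = [];
  private running = false;

  constructor(private handler: (item: T) => Promise<void>) {}

  enqueue(item: T): void {
    this.items.push(item);
    void this.drain(); // kick the worker; no-op if already draining
  }

  private async drain(): Promise<void> {
    if (this.running) return;
    this.running = true;
    while (this.items.length > 0) {
      const next = this.items.shift()!;
      try {
        await this.handler(next);
      } catch {
        // a failed item shouldn't stall the rest of the queue
      }
    }
    this.running = false;
  }
}
```

Because `drain` returns early when a worker is already running, callers can enqueue from anywhere without worrying about concurrent processing.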
See docs/ARCHITECTURE.md and docs/DEVELOPMENT.md if you want the implementation details.
Want to build this yourself or contribute?
# Clone the repo
git clone https://github.com/deepunyk/afterthought.git
cd afterthought
# Install dependencies
npm install
# Run in development mode
npm run tauri:dev
# Build for production
npm run tauri:build

Linux:
# Debian/Ubuntu
sudo apt-get update
sudo apt-get install -y libgtk-3-dev libwebkit2gtk-4.1-dev libappindicator3-dev librsvg2-dev patchelf
# Arch
sudo pacman -S webkit2gtk base-devel
# Fedora
sudo dnf install webkit2gtk4.1-devel openssl-devel

macOS:
- Xcode Command Line Tools:
xcode-select --install
Windows:
- No additional dependencies required
The project includes GitHub Actions workflows that automatically build for Windows, macOS (Intel + Apple Silicon), and Linux. You can also build locally:
# Build for your current platform
npm run tauri:build
# The output will be in src-tauri/target/release/bundle/
# - Windows: .msi and .exe files
# - macOS: .dmg and .app files
# - Linux: .deb and .AppImage files

If you're using an AI coding agent, start with AGENTS.md. Claude users should also load CLAUDE.md, and Codex users can use the repo-local skill at ./.codex/skills/afterthought-feature-builder/SKILL.md.
Before opening a pull request, read CONTRIBUTING.md, CODE_OF_CONDUCT.md, and SECURITY.md.
- AGENTS.md - Primary build guidance for AI coding agents
- CLAUDE.md - Claude entrypoint that points at the shared agent docs
- docs/ARCHITECTURE.md - System structure and data flow
- docs/DEVELOPMENT.md - Setup, validation, and common change recipes
- Repo-local Codex Skill - Project-specific feature-building workflow
- Windows only shows basic notifications - Action buttons on notifications don't work on Windows (Tauri limitation)
- Processing takes a few seconds - Image analysis still depends on screenshot size and API latency
- Only supports OpenAI for now - Planning to add support for local LLMs and other providers
- No mobile app - It's a desktop-only tool
Q: Does this work offline? A: Partially. You can browse your existing screenshots and tasks offline, but processing new ones requires an internet connection to reach OpenAI's API. Screenshots taken while offline are queued and processed once you're back online.
Q: Can I import existing screenshots? A: Yes. Just add the folder containing your screenshots in Settings > Watched Folders. The app will detect them and you can process them individually or in bulk.
Q: What image formats are supported? A: PNG, JPG, JPEG, BMP, GIF, and WebP.
Q: Can I use a different AI provider? A: Not yet, but it's planned. OpenAI is the only option right now.
Q: Where is my data stored? A: Screenshots stay where you saved them. The database is stored in:
- Windows: C:\Users\YourName\AppData\Roaming\com.afterthought.app\
- macOS: ~/Library/Application Support/com.afterthought.app/
- Linux: ~/.config/com.afterthought.app/
Q: Can I export my data? A: Not built-in yet, but since everything is in SQLite, you can access the database file directly if needed.
Found a bug? Have a feature request?
- Issues: github.com/deepunyk/afterthought/issues
- Discussions: github.com/deepunyk/afterthought/discussions
Contributions are welcome. For setup, validation commands, and pull request expectations, start with CONTRIBUTING.md.
Security-sensitive reports should follow SECURITY.md instead of public issues.
MIT License - see LICENSE for details.
Use it however you want. Build on it, sell it, whatever. Just don't blame me if something breaks.