# Pods, Prompts & Prototypes

Three containerized AI applications built for the Pods, Prompts & Prototypes Hackathon at The Open Accelerator, Boston.
All apps run entirely in Podman containers and showcase different approaches to AI integration: two fully local with Ollama, one cloud-powered via Anthropic Claude.
| Level | App | Description | Port |
|---|---|---|---|
| Beginner | AI Story Forge | Choose-your-own-adventure with local AI | 8503 |
| Intermediate | RAG Document Q&A | Upload docs & chat with them, fully local | 8501 |
| Intermediate/Advanced | AI Code Reviewer | Expert code reviews powered by Claude | 8502 |
## RAG Document Q&A
Upload documents and chat with them using a fully local AI pipeline: no data ever leaves your machine.
Drop in PDFs, text files, or source code, and the app chunks, embeds, and indexes them into a local vector store. Ask questions in a chat interface and get grounded answers with source citations. A minimal sketch of this flow follows the stack table below.
| Component | Technology |
|---|---|
| LLM | Ollama (Granite 3.1 8B, local) |
| Embeddings | all-MiniLM-L6-v2 (local) |
| Vector Store | ChromaDB |
| Orchestration | LangChain |
| Frontend | Streamlit |
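
The retrieval flow looks roughly like this. Below is a minimal sketch assuming the `langchain-community` integrations for Ollama, Chroma, and HuggingFace embeddings; chunk sizes, the persist directory, and prompt wording are illustrative rather than the app's exact code.

```python
# Minimal RAG sketch: split, embed, index, retrieve, generate.
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.llms import Ollama

raw_text = "...text extracted from an uploaded PDF, text file, or source file..."
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.create_documents([raw_text])

# Embed chunks locally and index them in a persistent Chroma store.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
store = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# Answer a question grounded in the retrieved chunks.
llm = Ollama(model="granite3.1-dense:8b", base_url="http://localhost:11434")
question = "What does the document say about deployment?"
hits = store.similarity_search(question, k=4)
context = "\n\n".join(doc.page_content for doc in hits)
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
# `hits` carry chunk metadata that the UI can surface as source citations.
```

Because both the embedder and the LLM run locally, the only network traffic is between the containers on your machine.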
```bash
cd rag-doc-qa
podman compose up -d
# → http://localhost:8501
```

Note: The first run downloads the Granite 3.1 8B model (~4.9 GB). This can take 5–10 minutes depending on your connection.
## AI Code Reviewer
Paste code and get expert-level reviews with a quality score, categorized findings, an improved version of your code, and plain-English explanations.
Supports Python, JavaScript, TypeScript, Java, Go, Rust, C/C++, and Ruby. Choose focus areas like security, performance, readability, and best practices. A sketch of the underlying Claude call follows the stack table below.
| Component | Technology |
|---|---|
| AI Model | Anthropic Claude Sonnet 4 |
| Syntax Highlighting | Pygments |
| Frontend | Streamlit |
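
Under the hood, a review is one structured prompt to Claude. The snippet below is a minimal sketch using the official `anthropic` Python SDK; the system prompt, JSON schema, and variable names are illustrative assumptions, not the app's exact implementation.

```python
# Minimal sketch of a structured code review request via the Anthropic SDK.
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

code = "def add(a, b):\n    return a+b"   # code pasted by the user
focus = ["security", "readability"]       # focus areas selected in the UI

message = client.messages.create(
    model=os.getenv("ANTHROPIC_MODEL", "claude-sonnet-4-20250514"),
    max_tokens=2048,
    system=(
        "You are an expert code reviewer. Respond with JSON containing: "
        "score (0-100), findings (line, category, severity, message), "
        "improved_code, and a plain-English explanation."
    ),
    messages=[{
        "role": "user",
        "content": f"Review this Python code, focusing on {', '.join(focus)}:\n\n{code}",
    }],
)
print(message.content[0].text)  # structured review, parsed and rendered by the Streamlit UI
```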
```bash
cd ai-code-reviewer
ANTHROPIC_API_KEY=sk-ant-... podman compose up -d
# → http://localhost:8502
```

Tip: You can also enter your API key via the sidebar UI after launch. The key is session-only and never stored to disk; a sketch of that pattern follows below.
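
One way to keep a key session-only in Streamlit is `st.session_state`, which lives in server memory for the current browser session and is discarded when it ends. A minimal sketch (the widget label and state key are illustrative, not the app's actual names):

```python
# Sketch: accept an API key in the sidebar and hold it only in session state.
import streamlit as st

key = st.sidebar.text_input("Anthropic API key", type="password")  # masked input
if key:
    st.session_state["anthropic_api_key"] = key  # kept in memory only, never written to disk
```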
## AI Story Forge
Choose your own adventure with local AI: pick a genre, make choices, and watch an interactive story unfold in real time. Great for beginners! A sketch of a single story turn follows the stack table below.
| Component | Technology |
|---|---|
| LLM | Ollama (Granite 3.1 8B, local) |
| Frontend | Streamlit |
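
Each turn sends the local model one prompt containing the chosen genre, the story so far, and the player's latest choice, and gets back the next scene plus fresh choices. A minimal sketch against Ollama's HTTP generate endpoint (prompt wording and variable names are illustrative):

```python
# Minimal sketch of a single story turn against the Ollama HTTP API.
import requests

history = "You stand at the gates of a rain-soaked cyberpunk city..."
choice = "Sneak in through the maintenance tunnel"

prompt = (
    "You are narrating an interactive sci-fi adventure.\n"
    f"Story so far:\n{history}\n\n"
    f"The player chose: {choice}\n"
    "Continue the story in two or three paragraphs, then offer three numbered choices."
)

resp = requests.post(
    "http://localhost:11434/api/generate",  # inside compose this is http://ollama:11434
    json={"model": "granite3.1-dense:8b", "prompt": prompt, "stream": False},
    timeout=120,
)
print(resp.json()["response"])  # next scene plus choices, rendered by the Streamlit UI
```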
```bash
cd ai-story-forge
podman compose up -d
# → http://localhost:8503
```

Note: Shares the same Ollama model as the RAG app. If you've already run that app, startup is instant.
All three applications are built on a common foundation:
- Containers – Podman (rootless, daemonless, OCI-compliant)
- Compose – `podman compose` with standard `compose.yml`
- Frontend – Streamlit with custom dark-themed UIs
- Language – Python 3.11
```bash
git clone <repo-url>
cd PodmanAI-Hackathon
```

RAG Document Q&A (fully local, no API key needed):

```bash
cd rag-doc-qa
podman compose up -d
# Open http://localhost:8501
```

AI Code Reviewer (requires an Anthropic API key):

```bash
cd ai-code-reviewer
ANTHROPIC_API_KEY=sk-ant-... podman compose up -d
# Open http://localhost:8502
```

AI Story Forge (fully local, no API key needed):

```bash
cd ai-story-forge
podman compose up -d
# Open http://localhost:8503
```

To stop an app, run from its directory:

```bash
podman compose down
```

To also remove volumes (model cache, vector store data):
```bash
podman compose down -v
```

Configuration is handled through environment variables:

| Variable | App | Default | Description |
|---|---|---|---|
| `ANTHROPIC_API_KEY` | Code Reviewer | (none, required) | Your Anthropic API key |
| `ANTHROPIC_MODEL` | Code Reviewer | `claude-sonnet-4-20250514` | Claude model to use for reviews |
| `OLLAMA_MODEL` | RAG Doc Q&A, Story Forge | `granite3.1-dense:8b` | Ollama model for generation |
| `OLLAMA_BASE_URL` | RAG Doc Q&A, Story Forge | `http://ollama:11434` | Ollama server URL (set automatically by compose) |
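
Inside the apps, these variables can be read with the defaults shown above, roughly like this (a sketch; the actual configuration handling may differ):

```python
# Sketch: pick up configuration from the environment with the documented defaults.
import os

OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://ollama:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "granite3.1-dense:8b")
ANTHROPIC_MODEL = os.getenv("ANTHROPIC_MODEL", "claude-sonnet-4-20250514")
ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")  # no default; required by the Code Reviewer only
```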
Repository layout:

```
PodmanAI-Hackathon/
│
├── rag-doc-qa/              # App 1 – Local RAG pipeline
│   ├── Containerfile
│   ├── compose.yml          # Ollama + Streamlit app
│   ├── .gitignore
│   ├── README.md
│   └── app/
│       ├── app.py
│       └── requirements.txt
│
├── ai-code-reviewer/        # App 2 – Claude-powered reviewer
│   ├── Containerfile
│   ├── compose.yml
│   ├── .gitignore
│   ├── README.md
│   └── app/
│       ├── app.py
│       └── requirements.txt
│
├── ai-story-forge/          # App 3 – Choose-your-own-adventure (Beginner)
│   ├── Containerfile
│   ├── compose.yml
│   ├── .gitignore
│   ├── README.md
│   └── app/
│       ├── app.py
│       └── requirements.txt
│
├── LICENSE
└── README.md                # ← You are here
```
These projects span Beginner, Intermediate, and Intermediate/Advanced challenge tiers of the Pods, Prompts & Prototypes hackathon:
- The Local AI API – containerized AI microservice (Story Forge, Beginner)
- RAG on Your Data – local document retrieval and generation (RAG Doc Q&A, Intermediate)
- AI-Powered Code Reviewer – cloud-powered code analysis (Code Reviewer, Intermediate/Advanced)
All three are designed around the hackathon judging rubric:
| Criterion | How We Address It |
|---|---|
| User Flow & UI/UX | Dark-themed Streamlit UIs with intuitive chat, paste-to-review, and interactive story flows |
| Engineering Quality | Clean separation of concerns, structured prompts, proper RAG pipeline |
| Grounding & Verifiability | Source citations (RAG), line-specific findings with diffs (Reviewer), choice-driven narrative (Story Forge) |
| Creativity & Innovation | Local-first zero-cloud RAG; structured code review with scoring; interactive fiction with AI |
| Real-World Fit | Enterprise doc Q&A without data leaving the network; democratized code review; engaging AI demo for newcomers |
Prerequisites:

- Podman Desktop (or Podman CLI)
- ~10 GB free disk space (Granite 3.1 8B model is ~4.9 GB, plus container images)
- An Anthropic API key (for the Code Reviewer only)
| Issue | Solution |
|---|---|
| Port 8501, 8502, or 8503 already in use | Stop the conflicting process, or edit the ports in the relevant `compose.yml` |
| Ollama model download is slow | The Granite 3.1 8B model is ~4.9 GB. On slower connections, the initial pull may take 10+ minutes. Check progress with `podman logs rag-ollama-pull -f` |
| RAG app says "Ollama not reachable" | Ensure the Ollama container is healthy: `podman ps` should show `rag-ollama` as running. If not, check the logs: `podman logs rag-ollama` |
| Code Reviewer returns an auth error | Verify that your `ANTHROPIC_API_KEY` is valid. You can also enter it via the sidebar UI |
| Switching Ollama models | Set `OLLAMA_MODEL=mistral:7b` (or another supported model) before `podman compose up`. The model is pulled automatically on first run |
To contribute:

- Fork the repository
- Create a feature branch: `git checkout -b my-feature`
- Make your changes
- Test locally with `podman compose up -d`
- Submit a pull request
To run an app directly on your machine (outside containers) for development:

```bash
# RAG Doc Q&A
cd rag-doc-qa/app
pip install -r requirements.txt
# Requires a running Ollama instance at http://localhost:11434
OLLAMA_BASE_URL=http://localhost:11434 streamlit run app.py
```

```bash
# AI Code Reviewer
cd ai-code-reviewer/app
pip install -r requirements.txt
ANTHROPIC_API_KEY=sk-ant-... streamlit run app.py --server.port 8502
```

```bash
# AI Story Forge
cd ai-story-forge/app
pip install -r requirements.txt
# Requires a running Ollama instance at http://localhost:11434
OLLAMA_BASE_URL=http://localhost:11434 streamlit run app.py --server.port 8503
```

This project is licensed under the MIT License. See LICENSE for details.