
🚀 PodmanAI-Hackathon

Pods, Prompts & Prototypes — Three containerized AI applications built for the Pods, Prompts & Prototypes Hackathon at The Open Accelerator, Boston.

All apps run entirely in Podman containers and showcase different approaches to AI integration: two fully local with Ollama, one cloud-powered via Anthropic Claude.

📋 Quick Index

| Level | App | Description | Port |
|---|---|---|---|
| Beginner | 📖 AI Story Forge | Choose-your-own-adventure with local AI | 8503 |
| Intermediate | 📚 RAG Document Q&A | Upload docs & chat with them — fully local | 8501 |
| Intermediate/Advanced | 🔍 AI Code Reviewer | Expert code reviews powered by Claude | 8502 |

📦 Projects

📚 RAG Document Q&A

Upload documents and chat with them using a fully local AI pipeline — no data ever leaves your machine.

Drop in PDFs, text files, or source code, and the app chunks, embeds, and indexes them into a local vector store. Ask questions in a chat interface and get grounded answers with source citations.

| Component | Technology |
|---|---|
| LLM | Ollama — Granite 3.1 8B (local) |
| Embeddings | all-MiniLM-L6-v2 (local) |
| Vector Store | ChromaDB |
| Orchestration | LangChain |
| Frontend | Streamlit |

```shell
cd rag-doc-qa
podman compose up -d
# → http://localhost:8501
```

Note: The first run downloads the Granite 3.1 8B model (~4.9 GB). This can take 5–10 minutes depending on your connection.
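The chunk step of the chunk-embed-index flow described above can be sketched in a few lines. This is a minimal illustration only; the app itself uses LangChain's text splitters, so the function name and parameters here are hypothetical:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, the typical first step of a RAG
    pipeline. Overlap preserves context that would otherwise be cut at
    chunk boundaries. (Illustrative sketch, not the app's actual splitter.)"""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

Each chunk is then embedded (here, with all-MiniLM-L6-v2) and stored in ChromaDB, so that questions can be answered from the most similar chunks.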


🔍 AI Code Reviewer

Paste code and get expert-level reviews with a quality score, categorized findings, an improved version of your code, and plain-English explanations.

Supports Python, JavaScript, TypeScript, Java, Go, Rust, C/C++, and Ruby. Choose focus areas like security, performance, readability, and best practices.

| Component | Technology |
|---|---|
| AI Model | Anthropic Claude Sonnet 4 |
| Syntax Highlighting | Pygments |
| Frontend | Streamlit |

```shell
cd ai-code-reviewer
ANTHROPIC_API_KEY=sk-ant-... podman compose up -d
# → http://localhost:8502
```

Tip: You can also enter your API key via the sidebar UI after launch. The key is session-only and never stored to disk.
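A review request of this kind is essentially a structured prompt sent to Claude. A minimal sketch of how one might be assembled (the function name and output format are illustrative, not the app's actual prompt):

```python
def build_review_prompt(code: str, language: str, focus_areas: list[str]) -> str:
    """Assemble a structured code-review prompt. The requested response
    sections mirror what the app displays: score, findings, improved code,
    and explanations. (Hypothetical sketch, not the app's real prompt.)"""
    focus = ", ".join(focus_areas) or "general quality"
    return (
        f"Review the following {language} code, focusing on {focus}.\n"
        "Respond with: (1) a 0-100 quality score, (2) categorized findings, "
        "(3) an improved version of the code, (4) plain-English explanations.\n\n"
        f"```{language}\n{code}\n```"
    )
```

Structuring the expected response up front is what lets the app render a consistent score and categorized findings for every submission.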


📖 AI Story Forge

Choose your own adventure with local AI — pick a genre, make choices, and watch an interactive story unfold in real time. Great for beginners!

| Component | Technology |
|---|---|
| LLM | Ollama — Granite 3.1 8B (local) |
| Frontend | Streamlit |

```shell
cd ai-story-forge
podman compose up -d
# → http://localhost:8503
```

Note: Shares the same Ollama model as the RAG app. If you've already run that app, startup is instant.
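The choice-driven loop boils down to re-prompting the local model with the story so far plus the reader's latest choice. A sketch under that assumption (names and wording are illustrative, not the app's actual prompt):

```python
def story_prompt(genre: str, history: list[str], choice: str) -> str:
    """Build the next-turn prompt from accumulated story state.

    Each turn, the model receives the full narrative so far and the
    reader's choice, keeping the story coherent across turns.
    (Hypothetical sketch of the app's prompt construction.)"""
    so_far = "\n".join(history) if history else "(story begins)"
    return (
        f"You are narrating an interactive {genre} story.\n"
        f"Story so far:\n{so_far}\n"
        f"The reader chose: {choice}\n"
        "Continue the story, then offer three new choices."
    )
```

The returned prompt would be sent to the shared Ollama container, and the model's reply appended to `history` for the next turn.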


🛠️ Shared Stack

All three applications are built on a common foundation:

  • Containers — Podman (rootless, daemonless, OCI-compliant)
  • Compose — podman compose with standard compose.yml
  • Frontend — Streamlit with custom dark-themed UIs
  • Language — Python 3.11

🚀 Getting Started

1. Clone the repository

```shell
git clone <repo-url>
cd PodmanAI-Hackathon
```

2. Start an app

RAG Document Q&A (fully local — no API key needed):

```shell
cd rag-doc-qa
podman compose up -d
# Open http://localhost:8501
```

AI Code Reviewer (requires Anthropic API key):

```shell
cd ai-code-reviewer
ANTHROPIC_API_KEY=sk-ant-... podman compose up -d
# Open http://localhost:8502
```

AI Story Forge (fully local — no API key needed):

```shell
cd ai-story-forge
podman compose up -d
# Open http://localhost:8503
```

3. Stop an app

```shell
podman compose down
```

To also remove volumes (model cache, vector store data):

```shell
podman compose down -v
```

⚙️ Environment Variables

| Variable | App | Default | Description |
|---|---|---|---|
| `ANTHROPIC_API_KEY` | Code Reviewer | (none, required) | Your Anthropic API key |
| `ANTHROPIC_MODEL` | Code Reviewer | `claude-sonnet-4-20250514` | Claude model to use for reviews |
| `OLLAMA_MODEL` | RAG Doc Q&A, Story Forge | `granite3.1-dense:8b` | Ollama model for generation |
| `OLLAMA_BASE_URL` | RAG Doc Q&A, Story Forge | `http://ollama:11434` | Ollama server URL (set automatically by compose) |
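The Ollama-backed apps can fall back to the documented defaults when these variables are unset. A minimal sketch of that pattern (the function name is illustrative; the variable names and defaults are the documented ones):

```python
import os

def ollama_config() -> dict:
    """Resolve Ollama settings from the environment, falling back to the
    documented defaults. (Sketch only; how the apps actually read their
    config may differ.)"""
    return {
        "model": os.getenv("OLLAMA_MODEL", "granite3.1-dense:8b"),
        "base_url": os.getenv("OLLAMA_BASE_URL", "http://ollama:11434"),
    }
```

Inside compose, `OLLAMA_BASE_URL` points at the `ollama` service hostname rather than `localhost`, which is why compose sets it automatically.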

πŸ“ Repository Structure

PodmanAI-Hackathon/
β”‚
β”œβ”€β”€ rag-doc-qa/                  # App 1 β€” Local RAG pipeline
β”‚   β”œβ”€β”€ Containerfile
β”‚   β”œβ”€β”€ compose.yml              # Ollama + Streamlit app
β”‚   β”œβ”€β”€ .gitignore
β”‚   β”œβ”€β”€ README.md
β”‚   └── app/
β”‚       β”œβ”€β”€ app.py
β”‚       └── requirements.txt
β”‚
β”œβ”€β”€ ai-code-reviewer/            # App 2 β€” Claude-powered reviewer
β”‚   β”œβ”€β”€ Containerfile
β”‚   β”œβ”€β”€ compose.yml
β”‚   β”œβ”€β”€ .gitignore
β”‚   β”œβ”€β”€ README.md
β”‚   └── app/
β”‚       β”œβ”€β”€ app.py
β”‚       └── requirements.txt
β”‚
β”œβ”€β”€ ai-story-forge/              # App 3 β€” Choose-your-own-adventure (Beginner)
β”‚   β”œβ”€β”€ Containerfile
β”‚   β”œβ”€β”€ compose.yml
β”‚   β”œβ”€β”€ .gitignore
β”‚   β”œβ”€β”€ README.md
β”‚   └── app/
β”‚       β”œβ”€β”€ app.py
β”‚       └── requirements.txt
β”‚
β”œβ”€β”€ LICENSE
└── README.md                    # ← You are here

πŸ† Hackathon Alignment

These projects span Beginner, Intermediate, and Intermediate/Advanced challenge tiers of the Pods, Prompts & Prototypes hackathon:

  • The Local AI API β€” containerized AI microservice (Story Forge β€” Beginner)
  • RAG on Your Data β€” local document retrieval and generation (RAG Doc Q&A β€” Intermediate)
  • AI-Powered Code Reviewer β€” cloud-powered code analysis (Code Reviewer β€” Intermediate/Advanced)

All three are designed around the hackathon judging rubric:

Criterion How We Address It
User Flow & UI/UX Dark-themed Streamlit UIs with intuitive chat, paste-to-review, and interactive story flows
Engineering Quality Clean separation of concerns, structured prompts, proper RAG pipeline
Grounding & Verifiability Source citations (RAG), line-specific findings with diffs (Reviewer), choice-driven narrative (Story Forge)
Creativity & Innovation Local-first zero-cloud RAG; structured code review with scoring; interactive fiction with AI
Real-World Fit Enterprise doc Q&A without data leaving the network; democratized code review; engaging AI demo for newcomers

📋 Prerequisites

  • Podman Desktop (or Podman CLI)
  • ~10 GB free disk space (Granite 3.1 8B model is ~4.9 GB, plus container images)
  • An Anthropic API key (for the Code Reviewer only)

🔧 Troubleshooting

| Issue | Solution |
|---|---|
| Port 8501, 8502, or 8503 already in use | Stop the conflicting process, or edit ports in the relevant `compose.yml` |
| Ollama model download is slow | The Granite 3.1 8B model is ~4.9 GB; on slower connections the initial pull may take 10+ minutes. Check progress with `podman logs rag-ollama-pull -f` |
| RAG app says "Ollama not reachable" | Ensure the Ollama container is healthy: `podman ps` should show `rag-ollama` as running. If not, check logs: `podman logs rag-ollama` |
| Code Reviewer returns auth error | Verify your `ANTHROPIC_API_KEY` is valid. You can also enter it via the sidebar UI |
| Switching Ollama models | Set `OLLAMA_MODEL=mistral:7b` (or another supported model) before `podman compose up`; the model is pulled automatically on first run |

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch: git checkout -b my-feature
  3. Make your changes
  4. Test locally with podman compose up -d
  5. Submit a pull request

Local Development (without containers)

```shell
# RAG Doc Q&A
cd rag-doc-qa/app
pip install -r requirements.txt
# Requires a running Ollama instance at http://localhost:11434
OLLAMA_BASE_URL=http://localhost:11434 streamlit run app.py

# AI Code Reviewer
cd ai-code-reviewer/app
pip install -r requirements.txt
ANTHROPIC_API_KEY=sk-ant-... streamlit run app.py --server.port 8502

# AI Story Forge
cd ai-story-forge/app
pip install -r requirements.txt
# Requires a running Ollama instance at http://localhost:11434
OLLAMA_BASE_URL=http://localhost:11434 streamlit run app.py --server.port 8503
```

📄 License

This project is licensed under the MIT License. See LICENSE for details.
