
DevInsight AI 🚀


DevInsight AI is a full-stack Retrieval-Augmented Generation (RAG) developer assistant. Upload a codebase or technical document, then ask practical engineering questions such as:

  • "Explain this project"
  • "Find bugs"
  • "What does this function do?"
  • "Where are the API routes defined?"

The app reads your files, chunks the content, embeds the chunks into FAISS, retrieves the most relevant context for each question, and sends that grounded context to an LLM for a structured developer-focused answer.
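The chunk → embed → retrieve flow described above can be sketched in plain Python. This is an illustrative toy, not the repository's code: a fixed-size character window stands in for LangChain's RecursiveCharacterTextSplitter, a bag-of-words counter stands in for Gemini embeddings, and a brute-force cosine ranking stands in for FAISS.

```python
import math
from collections import Counter

def chunk_text(text: str, chunk_size: int = 200, overlap: int = 40) -> list[str]:
    """Split text into overlapping character windows (a crude stand-in
    for LangChain's structure-aware splitter)."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; the real app calls Gemini embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by similarity to the question (FAISS does this at scale)."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = chunk_text("FastAPI routes live in app/routes. FAISS stores vectors. " * 5)
context = retrieve("Where are the API routes defined?", chunks, k=2)
```

The retrieved `context` is what gets stitched into the LLM prompt so the answer stays grounded in the uploaded files.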

Screenshots

Upload UI

Chat UI

Results

Tech Stack

Layer             Technology
Backend           Python, FastAPI, Uvicorn
RAG               LangChain, RecursiveCharacterTextSplitter
Vector Store      FAISS
LLM + Embeddings  Gemini API via langchain-google-genai
Frontend          React, Vite, TailwindCSS
HTTP Client       Axios
DevOps            Dockerfile, GitHub Actions CI

Note: The original architecture is provider-agnostic through LangChain. This repository is currently configured for Gemini because it works without an OpenAI API key. You can swap the embedding/chat services to OpenAI if preferred.

Architecture

User -> Upload -> Chunk -> Embed -> FAISS -> Query -> Retrieve -> LLM -> Answer
React + Tailwind UI
        |
        | Axios
        v
FastAPI API
        |
        | Load zip/source/doc files
        v
RecursiveCharacterTextSplitter
        |
        | Gemini embeddings
        v
FAISS vector index
        |
        | similarity_search(top-k)
        v
Prompt + Gemini chat model
        |
        v
Answer + retrieved source context

Features

  • Upload .zip, .py, .js, .jsx, .ts, .tsx, .txt, .md, .json, .yaml, .css, and .html files.
  • In-memory zip processing to avoid Windows/OneDrive upload file locks.
  • Structure-aware chunking with LangChain.
  • Local FAISS vector storage.
  • Senior-developer system prompt for architecture explanations and bug analysis.
  • Optional retrieved-context toggle in the chat UI.
  • Dark, responsive, recruiter-friendly interface.
  • Error sanitization so API keys are not leaked in frontend messages.
  • Dockerfile for backend deployment.
  • GitHub Actions CI for backend import and frontend build.
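The in-memory zip handling called out above can be sketched with the standard library (a sketch under assumptions, not the repo's exact code): wrap the upload bytes in a BytesIO and read entries directly, so no temporary file is ever created on disk.

```python
import io
import os
import zipfile

# Extensions from the supported-upload list above.
SUPPORTED = {".py", ".js", ".jsx", ".ts", ".tsx", ".txt", ".md",
             ".json", ".yaml", ".css", ".html"}

def extract_supported(zip_bytes: bytes) -> dict[str, str]:
    """Read supported text files straight from zip bytes, never touching
    the filesystem (sidesteps Windows/OneDrive file locks)."""
    files = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            ext = os.path.splitext(info.filename)[1].lower()
            if ext in SUPPORTED:
                files[info.filename] = zf.read(info).decode("utf-8", errors="replace")
    return files
```

Each returned path/text pair can then be handed to the chunker for indexing.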

Project Structure

devinsight-ai/
β”œβ”€β”€ backend/
β”‚   β”œβ”€β”€ app/
β”‚   β”‚   β”œβ”€β”€ routes/
β”‚   β”‚   β”œβ”€β”€ services/
β”‚   β”‚   β”œβ”€β”€ utils/
β”‚   β”‚   β”œβ”€β”€ config.py
β”‚   β”‚   └── main.py
β”‚   β”œβ”€β”€ Dockerfile
β”‚   β”œβ”€β”€ requirements.txt
β”‚   └── .env.example
β”œβ”€β”€ frontend/
β”‚   β”œβ”€β”€ public/screenshots/
β”‚   β”œβ”€β”€ src/
β”‚   β”‚   β”œβ”€β”€ components/
β”‚   β”‚   β”œβ”€β”€ api.js
β”‚   β”‚   └── App.jsx
β”‚   └── package.json
β”œβ”€β”€ .github/workflows/ci.yml
β”œβ”€β”€ .gitignore
β”œβ”€β”€ LICENSE
└── README.md

Setup

1. Clone

git clone https://github.com/pun33th45/RAG-Based-Developer-Assistant.git
cd RAG-Based-Developer-Assistant

2. Backend

cd backend
python -m venv .venv
.venv\Scripts\activate         # Windows
source .venv/bin/activate      # macOS/Linux
pip install -r requirements.txt

Create backend/.env:

GOOGLE_API_KEY=your_gemini_api_key_here

Run the API:

uvicorn app.main:app --reload

Backend:

http://localhost:8000

Health check:

http://localhost:8000/health

Swagger docs:

http://localhost:8000/docs

3. Frontend

cd frontend
npm install
npm run dev

Frontend:

http://localhost:5173

API Endpoints

POST /upload

Uploads and indexes a supported source/document file or zip archive.

{
  "message": "Upload indexed successfully.",
  "filename": "repo.zip",
  "documents": 12,
  "chunks": 48
}

POST /query

Asks a question against the indexed project context.

{
  "question": "Find potential bugs",
  "k": 5
}

Response:

{
  "answer": "The project has a possible authentication issue...",
  "sources": [
    {
      "source": "app/auth.py",
      "file_type": ".py",
      "content": "..."
    }
  ]
}
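The request and response shapes above can be exercised from any HTTP client. A minimal sketch of the client-side plumbing in stdlib Python (the helper names are hypothetical; only the JSON shapes come from the endpoint examples):

```python
import json

def build_query_payload(question: str, k: int = 5) -> bytes:
    """Serialize the POST /query body shown above."""
    return json.dumps({"question": question, "k": k}).encode()

def summarize_response(body: bytes) -> str:
    """Pull the answer and its source files out of a /query response."""
    data = json.loads(body)
    files = ", ".join(s["source"] for s in data.get("sources", []))
    return f"{data['answer']} (sources: {files})"
```

Send `build_query_payload(...)` as the request body with a `Content-Type: application/json` header, then feed the raw response bytes to `summarize_response`.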

Docker

cd backend
docker build -t devinsight-ai-backend .
docker run --env-file .env -p 8000:8000 devinsight-ai-backend

Sample Prompts

  • "Explain this project"
  • "Explain the architecture"
  • "Find bugs"
  • "What does this function do?"
  • "How can this codebase be improved?"

Future Improvements

  • Multi-project indexes and index reset controls.
  • Streaming responses.
  • Authentication and per-user workspaces.
  • Pinecone or hosted vector database option.
  • GitHub repository ingestion by URL.
  • Syntax-highlighted retrieved context.

Author

Built by Puneeth Raj.

About

An AI-powered, RAG-based developer assistant that lets users upload entire codebases or documents and interact with them in natural language. The system combines embeddings, FAISS vector search, and LLMs to explain architecture, detect bugs, and provide contextual insights.
