⭐ If this roadmap helps you build practical AI systems, star the repository and share it with another developer.
Build real AI systems, RAG pipelines, and AI agents in 90 days.
A practical roadmap for developers who want to become AI Engineers by building real projects instead of studying theory.
Most AI learning paths focus only on theory.
This roadmap focuses on building, shipping, and evaluating real AI systems across a clear 90-day path.
This repository is designed for developers who:
- know basic Python
- want hands-on projects instead of only theory
- want to understand how production LLM systems work
- want a practical transition into AI engineering
By completing this roadmap, you will learn how to:
- build LLM-powered applications with clean architecture
- design and implement RAG pipelines
- work with embeddings and semantic retrieval
- build AI agents that use tools and multi-step workflows
- evaluate AI system quality and reliability
- deploy AI systems as APIs and production services
Core topics in this roadmap:
- prompt engineering
- embeddings
- retrieval augmented generation (RAG)
- vector databases
- AI agents
- evaluation and monitoring
- API design and deployment
After finishing this roadmap, you should be able to:
- ship an end-to-end AI chatbot with retrieval support
- build and test a RAG search system over custom documents
- implement an agent loop that plans and executes tasks
- expose AI functionality through a production-style API
- evaluate output quality using practical metrics and traces
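The "agent loop that plans and executes tasks" outcome can be sketched in a few lines. The tool names and the fixed plan below are hypothetical stand-ins; in a real agent, an LLM planner chooses the next step from the transcript instead of following a hard-coded plan.

```python
def search_tool(query: str) -> str:
    # Stand-in for a real search/retrieval tool.
    return f"results for: {query}"

def calc_tool(expr: str) -> str:
    # Stand-in for a calculator tool (restricted eval for the toy case only).
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"search": search_tool, "calc": calc_tool}

def run_agent(plan: list[tuple[str, str]]) -> list[str]:
    # Execute each (tool, input) step in order, collecting observations.
    # A real agent would ask an LLM for the next step after each observation.
    observations = []
    for tool_name, tool_input in plan:
        observations.append(TOOLS[tool_name](tool_input))
    return observations

plan = [("search", "RAG evaluation metrics"), ("calc", "2 + 2")]
print(run_agent(plan))
```

The separation between a tool registry and the loop that dispatches to it is the core pattern; everything else in week 07 builds on it.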
A simple weekly workflow that works well for most developers:
- Monday: read the weekly README and review core concepts
- Tuesday: run the example scripts and inspect code paths
- Wednesday: implement one small extension or refactor
- Thursday: build or improve the weekly mini-project
- Friday: write notes, document learnings, and share progress
- Weekend (optional): revisit weak points or contribute a PR
```mermaid
graph TD
    A[Python for AI] --> B[Machine Learning Basics]
    B --> C[Embeddings]
    C --> D[Large Language Models]
    D --> E[RAG Systems]
    E --> F[Vector Databases]
    F --> G[AI Agents]
    G --> H[Evaluation and Monitoring]
    H --> I[Deploy AI Systems]
```
| Days | Track | Focus | Target Outcome |
|---|---|---|---|
| 01-09 | week01_python_for_ai | Python for AI | Data structures, scripts, APIs |
| 10-18 | week02_machine_learning | Machine Learning Basics | Train and evaluate ML models |
| 19-27 | week03_embeddings | Embeddings | Build vector representations |
| 28-36 | week04_llms | Large Language Models | Prompting and LLM usage |
| 37-45 | week05_rag | Retrieval Augmented Generation | Build RAG pipeline |
| 46-54 | week06_vector_databases | Vector Databases | Indexing and search |
| 55-63 | week07_agents | AI Agents | Tool use and multi-step reasoning |
| 64-72 | week08_ai_tools | Evaluation and Monitoring | Metrics and tracing |
| 73-81 | week09_build_projects | Build Projects | Implement real systems |
| 82-90 | week10_deploy_ai | Deploy AI Systems | Ship to cloud |
During the roadmap you will build systems similar to real AI products:
- AI chatbot for conversational UX
- RAG knowledge base / search engine for grounded answers
- AI code assistant for developer workflows
- AI research assistant (stretch goal) for literature and synthesis
- AI API for integration and serving
- AI document analyzer for document-level tasks
The capstone project combines the full AI engineering stack into one production-style system:
- RAG
- AI agents
- vector search
- API
- deployment
Capstone objective: build an assistant that retrieves trusted knowledge, reasons across tools, and serves answers through an API endpoint ready for deployment.
Question: How do I evaluate a RAG system?
Answer: To evaluate a RAG system you should measure:
- retrieval precision
- context relevance
- answer faithfulness
- latency
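Two of the metrics above can be sketched as simple functions. These are toy versions: real evaluations use labeled relevance data and LLM-based judges, and the token-overlap faithfulness check here is only a crude stand-in for proper faithfulness scoring.

```python
def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    # Fraction of the top-k retrieved document ids that are relevant.
    return sum(1 for doc_id in retrieved[:k] if doc_id in relevant) / k

def token_faithfulness(answer: str, context: str) -> float:
    # Fraction of answer tokens that appear in the retrieved context.
    # A crude proxy; production systems use NLI models or LLM judges.
    answer_tokens = answer.lower().split()
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 1.0
    supported = sum(1 for t in answer_tokens if t in context_tokens)
    return supported / len(answer_tokens)

print(precision_at_k(["d1", "d7", "d3"], {"d1", "d3"}, k=3))  # 2 of 3 relevant
print(token_faithfulness("rag combines retrieval and generation",
                         "rag combines retrieval with generation steps"))
```

Latency is the easy one to measure (wrap the pipeline call in a timer); the metrics above are where most of the evaluation effort goes.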
```bash
git clone https://github.com/your-username/ai-engineer-in-90-days.git
cd ai-engineer-in-90-days
python3 examples/embeddings.py
python3 examples/vector_search.py
python3 examples/rag_pipeline.py
python3 examples/agent_loop.py
```

```python
from examples.rag_pipeline import retrieve, generate_answer

question = "How do I evaluate a RAG system?"
chunks = retrieve(question)
print(generate_answer(question, chunks))
```

Use this learning flow:
roadmap -> examples -> exercises -> projects
Exercises for practice:
Use these interview prep guides to practice practical AI engineering questions:
- AI Engineer Interview Questions
- RAG Interview Questions
- LLM Interview Questions
- AI System Design Questions
Practical failure patterns and mitigations for production AI systems:
Practical debugging runbook for retrieval, prompts, hallucinations, tools, agents, and latency:
Practical trade-offs for architecture and tooling decisions in production AI systems:
Practical checklists for shipping and operating production AI systems:
Practical evaluation guides for improving AI system quality in production:
- Retrieval Evaluation
- Answer Quality Evaluation
- Faithfulness Checking
- Prompt Comparison
- Model Comparison
- Agent Evaluation
Common AI engineering architecture patterns with practical trade-offs:
- Simple LLM App
- RAG Pipeline
- Ingestion Pipeline
- Tool-Calling Assistant
- Planner-Executor Agent
- Batch Evaluation Pipeline
Practical AI engineering case studies from problem framing to implementation path:
- Support Assistant Case Study
- Documentation Search Assistant Case Study
- Internal Knowledge Base Assistant Case Study
- AI Document Analyzer Case Study
Practical definitions of core AI engineering terms:
Choose a path based on your background and goal:
Practical engineering comparisons for common AI tooling choices:
- Vector Databases Comparison
- LLM Frameworks Comparison
- Evaluation Tools Comparison
- Observability Tools Comparison
Lightweight benchmark-style experiments for core retrieval decisions:
Practical implementation guide for hardening AI apps from notebook to production:
Full reference list: resources/tools.md
Common tools used throughout the projects:
- Python
- Jupyter
- uv / pip / poetry
- OpenAI API
- Anthropic API
- Google Gemini API
- LangChain
- LlamaIndex
- DSPy
- FAISS
- Chroma
- Qdrant
- Pinecone
- FastAPI
- Docker
- GitHub Actions
- Langfuse
- Helicone
- Promptfoo
- Ragas
- Week 01: Python for AI
- Week 02: Machine Learning Basics
- Week 03: Embeddings
- Week 04: LLMs
- Week 05: RAG
- Week 06: Vector Databases
- Week 07: Agents
- Week 08: Evaluation and Monitoring
- Week 09: Build Projects
- Week 10: Deploy AI Systems
| Project | Status |
|---|---|
| ai_chatbot | MVP - Ship |
| rag_search_engine | MVP - Ship |
| ai_code_assistant | MVP - Ship |
| ai_document_analyzer | MVP - Ship |
| ai_api | MVP |
ai-engineer-in-90-days/
├── README.md
├── CONTRIBUTING.md
├── LICENSE
├── weeks/
├── exercises/
├── interview-prep/
├── checklists/
├── evaluation-recipes/
├── architecture-patterns/
├── case-studies/
├── tool-comparisons/
├── benchmarks/
├── projects/
├── diagrams/
├── examples/
└── resources/
Contributions, ideas, and constructive feedback are welcome.
- Open an issue to suggest improvements or report gaps
- Open a pull request for fixes, examples, or docs upgrades
- Share progress and lessons learned in GitHub Discussions (or issues if discussions are not enabled)
See CONTRIBUTING.md for contribution guidelines.
Companion repositories can be linked here as the ecosystem grows:
- ai-engineer-in-90-days-starter (coming soon)
- ai-engineer-in-90-days-projects (coming soon)
- ai-engineer-in-90-days-evals (coming soon)
Use these repository topics for better discoverability:
`ai` `ai-engineering` `machine-learning` `rag` `llm` `ai-agents` `vector-database` `prompt-engineering` `ai-roadmap`
Distributed under the MIT License. See LICENSE.



