ganesh44we/api-reliability-rl

---
title: API Reliability RL Environment
emoji: 🚀
colorFrom: purple
colorTo: blue
sdk: docker
app_file: app.py
pinned: false
---

# 🚀 Cost-Aware API Reliability RL Environment

## 🧠 Overview

This project implements a reinforcement learning (RL) environment that simulates real-world API reliability challenges in backend systems.

Agents must make decisions under uncertainty to balance:

- ✅ Success rate
- ⏱️ Latency
- 💰 Cost

## 🎯 Objective

Enable agents to learn optimal strategies for handling unreliable APIs using the OpenEnv framework.


## 🧩 State Space (Observation)

| Feature | Description |
| --- | --- |
| `api_status` | `success` / `slow` / `failed` |
| `latency` | Response time in ms |
| `retry_count` | Number of retries performed |
| `api_cost` | Cost of API usage |
| `system_load` | `low` / `medium` / `high` |
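
The observation above can be modeled in Python roughly as follows (field names mirror the table; the actual schema lives in `models.py`, so treat this as a sketch):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    api_status: str    # "success" / "slow" / "failed"
    latency: float     # response time in ms
    retry_count: int   # retries performed so far
    api_cost: float    # accumulated cost of API usage
    system_load: str   # "low" / "medium" / "high"
```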

## ⚡ Action Space

| Action | Description |
| --- | --- |
| `accept` | Accept current API response |
| `retry` | Retry the same API |
| `switch_api` | Switch to backup API |
| `use_cache` | Use cached response (fast, cheap) |
| `return_error` | Stop and return failure |
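
The five actions can be captured as a Python enum (names follow the table; whether the server expects these exact strings is an assumption — see `models.py` for the canonical definitions):

```python
from enum import Enum

class Action(str, Enum):
    ACCEPT = "accept"            # accept current API response
    RETRY = "retry"              # retry the same API
    SWITCH_API = "switch_api"    # switch to backup API
    USE_CACHE = "use_cache"      # use cached response (fast, cheap)
    RETURN_ERROR = "return_error"  # stop and return failure
```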

## 🏆 Reward Function

- **+8** → successful response
- **−0.02 × latency** (latency penalty)
- **−5 × api_cost** (cost penalty)
- **−2 × retry_count** (retry penalty)
- **−8** → failure
- Bonus/penalty for decision quality
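
The shaping terms above combine roughly as follows (a sketch: the function name is illustrative and the decision-quality bonus/penalty is omitted — see `server/environment.py` for the real logic):

```python
def compute_reward(success: bool, latency_ms: float,
                   api_cost: float, retry_count: int) -> float:
    """Combine the base outcome with latency, cost, and retry penalties."""
    reward = 8.0 if success else -8.0   # +8 success / -8 failure
    reward -= 0.02 * latency_ms         # latency penalty
    reward -= 5.0 * api_cost            # cost penalty
    reward -= 2.0 * retry_count         # retry penalty
    return reward
```

For example, a failed response after 100 ms, 0.5 units of cost, and 2 retries yields −8 − 2 − 2.5 − 4 = −16.5.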

## 🧪 Tasks (Difficulty Levels)

| Task | Description |
| --- | --- |
| Easy | Low failure probability (~30%) |
| Medium | Moderate failure probability (~50%) |
| Hard | High failure probability plus cascading effects |

## 📁 Project Structure

```text
api-reliability-rl/
├── server/
│   ├── __init__.py        # Makes server a Python package
│   ├── app.py             # FastAPI server (OpenEnv)
│   └── environment.py     # RL environment logic
├── app.py                 # Gradio UI
├── models.py              # Action, Observation, State models
├── inference.py           # Agent inference script
├── requirements.txt       # Python dependencies
├── openenv.yaml           # OpenEnv config
├── Dockerfile             # Docker deployment
└── README.md
```

## 📦 Local Setup

```bash
pip install -r requirements.txt

# Terminal 1 - start the FastAPI server
uvicorn server.app:app --host 0.0.0.0 --port 8000

# Terminal 2 - start the Gradio UI
python app.py
```

## 🧪 Inference Script

```bash
python inference.py
```

## 🐳 Docker Setup

```bash
docker build -t api-env .
docker run -p 8000:8000 -p 7860:7860 api-env
```

## 🌐 API Endpoints

| Endpoint | Method | Description |
| --- | --- | --- |
| `/reset` | POST | Reset the environment |
| `/step` | POST | Take an action |
| `/state` | GET | Get the current state |

## 🛠️ Tech Stack

- OpenEnv
- FastAPI
- Gradio
- Docker
- Hugging Face Spaces
- OpenAI-compatible API (Qwen via HF Router)

## 👥 Team
