
CrowdVision - AI Crowd Counting

CrowdVision Logo

AI-powered crowd counting using deep learning models

Python FastAPI PyTorch License

Live Demo • Documentation • Deploy Guide


🎯 Overview

CrowdVision is an end-to-end web application for crowd counting using state-of-the-art deep learning models. It offers two complementary approaches:

Method               Model    Description
Density Map          CSRNet   Generates a heat map showing crowd density distribution
Point Localization   P2PNet   Detects and marks individual head positions

✨ Features

  • 🎨 Modern Web Interface - Clean, responsive UI with mint green theme
  • πŸ“€ Drag & Drop Upload - Easy image upload or use sample images
  • πŸ”₯ Dual Detection Methods - Choose between density map or point localization
  • ⚑ Optimized Models - Dynamic quantization for efficient CPU inference
  • πŸŽ›οΈ Adjustable Threshold - Fine-tune detection sensitivity for P2PNet
  • πŸ“Š Visual Results - Interactive visualization with download option

πŸš€ Quick Start

Prerequisites

  • Python 3.10+
  • pip package manager

Local Development

# Clone the repository
git clone https://github.com/RedEye1605/CrowdCounting.git
cd CrowdCounting

# Install dependencies
pip install -r requirements.txt

# Run server
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8000

Open http://localhost:8000 in your browser.

Docker

# Build image
docker build -t crowdvision .

# Run container
docker run -p 8080:8080 crowdvision

🌍 Deployment

Fly.io (Recommended for FastAPI)

# Install flyctl
# Windows: powershell -Command "irm https://fly.io/install.ps1 | iex"
# macOS/Linux: curl -L https://fly.io/install.sh | sh

# Login and deploy
flyctl auth login
flyctl launch --no-deploy
flyctl deploy

Hugging Face Spaces

  1. Create a new Space at huggingface.co/spaces
  2. Select Gradio as the SDK
  3. Clone your Space and copy files:
git clone https://huggingface.co/spaces/YOUR_USERNAME/crowdvision
cd crowdvision

# Copy required files from your CrowdCounting checkout
# (adjust the path if the repo is cloned elsewhere)
cp -r ../CrowdCounting/app ../CrowdCounting/weights ../CrowdCounting/app_gradio.py .
cp ../CrowdCounting/requirements_hf.txt requirements.txt
cp ../CrowdCounting/README_HF.md README.md

# Push to deploy
git add .
git commit -m "Initial deployment"
git push

πŸ’‘ Tip: HF Spaces offers free CPU hosting; GPU hardware can be added for faster inference.

πŸ“ Project Structure

CrowdCounting/
├── app/
│   ├── main.py                    # FastAPI application
│   ├── models/
│   │   ├── csrnet.py              # CSRNet architecture
│   │   └── p2pnet.py              # P2PNet architecture
│   ├── inference/
│   │   ├── density_inference.py   # Density map inference
│   │   └── localization_inference.py  # Point detection inference
│   ├── static/
│   │   ├── css/style.css
│   │   ├── js/main.js
│   │   ├── images/
│   │   └── samples/               # Sample images for testing
│   └── templates/
│       └── index.html
├── weights/
│   ├── densitymap_model.pth       # CSRNet weights (~65MB)
│   └── p2pnet_model.pth           # P2PNet weights (~86MB)
├── notebooks/                     # Training notebooks
├── app_gradio.py                  # Gradio app for HF Spaces
├── requirements.txt               # Fly.io dependencies
├── requirements_hf.txt            # HF Spaces dependencies
├── Dockerfile                     # Docker configuration
└── fly.toml                       # Fly.io configuration

πŸ“Š Models

CSRNet (Density Map)

  • Architecture: VGG16 frontend + Dilated convolutional backend
  • Output: Density map where sum of pixels = crowd count
  • Best for: Large crowds, density distribution analysis

P2PNet (Point Localization)

  • Architecture: VGG16_bn backbone + FPN decoder + Regression/Classification heads
  • Output: Point locations of detected heads
  • Best for: Precise head counting, sparse to medium density crowds
  • Parameters:
    • Confidence Threshold: 0.4 (40%)
    • NMS Distance: 8% of image diagonal

πŸ“ API Reference

Endpoints

Method   Endpoint                 Description
GET      /                        Web interface
GET      /health                  Health check
POST     /predict/density         Density map prediction
POST     /predict/localization    Point detection prediction

Example Request

# Density Map
curl -X POST -F "file=@image.jpg" http://localhost:8000/predict/density

# Point Localization
curl -X POST -F "file=@image.jpg" "http://localhost:8000/predict/localization?threshold=0.4"

Response Format

{
  "count": 25,
  "method": "localization",
  "visualization": "base64_encoded_image",
  "points": [
    {"x": 100.5, "y": 200.3, "confidence": 0.42}
  ],
  "threshold": 0.4
}
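
The endpoints can also be called programmatically. The client sketch below mirrors the curl examples above; the `requests` library, host, and function name are assumptions for illustration, not code shipped with the project:

```python
import requests  # third-party: pip install requests

def predict_localization(image_path: str, threshold: float = 0.4,
                         base_url: str = "http://localhost:8000") -> dict:
    # POST the image as multipart form data, passing the P2PNet
    # confidence threshold as a query parameter.
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{base_url}/predict/localization",
            params={"threshold": threshold},
            files={"file": f},
        )
    resp.raise_for_status()
    return resp.json()

# Example (requires a running server):
# result = predict_localization("image.jpg", threshold=0.4)
# print(result["count"], len(result["points"]))
```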

πŸ”§ Configuration

Environment Variables

Variable   Default   Description
PORT       8080      Server port
HOST       0.0.0.0   Server host

Tuning P2PNet

  • Lower threshold (0.3): More detections, more false positives
  • Higher threshold (0.5): Fewer detections, more accurate
  • Recommended: 0.4 for balanced results

πŸ“š Documentation

Training

The models were trained on custom crowd counting datasets. See the notebooks/ directory for:

  • densitymap.ipynb - CSRNet training
  • p2pnet.ipynb - P2PNet training

Preprocessing

  • Image normalized with ImageNet mean/std
  • P2PNet images resized to multiple of 128 pixels
  • Point NMS applied to filter duplicate detections

🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

πŸ“„ License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments
