This application uses Streamlit and a local DuckDB database to manage FRC scouting data.
1. Install uv (Python package manager).
2. Clone the repo and check out the active branch:

   ```bash
   git clone <repo-url>
   cd Data2026
   git checkout 2026-season-migration
   ```

3. Install dependencies:

   ```bash
   uv sync
   ```

   This creates `.venv/` and installs all packages from `uv.lock`.
The TBA API key must not be stored in files that are checked into git. Create `.streamlit/secrets.toml` in the repo root:

```toml
[tba]
auth_key = "your-tba-api-key-here"

[cache]
cache_path = "."
enabled = "False"
```

Get a free TBA API key at [thebluealliance.com/account](https://www.thebluealliance.com/account).

Note: A `[motherduck]` token is no longer required. The app now uses a local DuckDB file at `data/frc2026.duckdb`.
Initialize the database schema (scouting tables) before syncing data:

```bash
uv run python -m frc_data_281.db
```

This creates:

- `scouting.pit` — pit scouting form submissions
- `scouting.tags` — team tags/notes
- `scouting.test` — test table
The database file doesn't exist yet on a fresh checkout — the pipeline creates it automatically:

```bash
uv run python -m frc_data_281.the_blue_alliance.pipeline
```

This will:

- Create `data/frc2026.duckdb`
- Populate `tba.teams`, `tba.matches`, `tba.event_rankings`, and `tba.oprs` for all configured events
- Also initialize the `scouting` schema if not already created
```bash
uv run frc-scouting
```

Or alternatively:

```bash
streamlit run frc_data_281/app/Home.py
```
To pull the latest match data from TBA:

- Click "Refresh Data from TBA" on the Data Refresh page in the app, or
- Run directly:

  ```bash
  uv run python -m frc_data_281.the_blue_alliance.pipeline
  ```
The pipeline uses merge disposition — re-running is always safe and won't duplicate records.
This app can be deployed to Render.io using the included render.yaml configuration.
- A Render account (free tier available)
- This GitHub repository connected to Render
- A TBA API key (see Setup on a New Machine)
1. Push your code to your GitHub repository on the branch you want to deploy.

2. Create a new Web Service on Render:
   - Go to the Render Dashboard
   - Click "New +" → "Web Service"
   - Connect your GitHub repository
   - Select the branch (e.g., `2026-season-migration`)
   - Render will auto-detect the `render.yaml` file

3. Add environment variables:
   - In the Render dashboard, go to Environment for your service
   - Add `TBA_KEY` with your Blue Alliance API key value
   - The `render.yaml` also sets Streamlit-specific environment variables automatically

4. Create a persistent disk:
   - Render will auto-create the disk defined in `render.yaml` (`scouting-db`, 5 GB)
   - This persists the local DuckDB file (`data/frc2026.duckdb`) across deployments

5. Deploy:
   - Click "Create Web Service"
   - Render will:
     - Install `uv` and dependencies (`uv sync`)
     - Initialize the scouting schema (`python -m frc_data_281.db`)
     - Sync TBA data (`python -m frc_data_281.the_blue_alliance.pipeline`)
     - Start the Streamlit app

6. Your app will be live at `https://<your-service-name>.onrender.com`
- Database persistence: DuckDB file is stored on the persistent disk and survives redeploys
- Data refresh: The pipeline runs during every build. To force a data refresh without redeploying, SSH into the Render instance and run the pipeline manually
- Cold starts: Free tier Render instances spin down after 15 minutes of inactivity; there may be a delay on first access after a period of inactivity
```
Data2026/
├── frc_data_281/                  # Main application package
│   ├── __main__.py                # Entry point for running the app (frc-scouting)
│   ├── app/                       # Streamlit web application
│   │   ├── Home.py                # Landing page for the Streamlit app
│   │   ├── run.py                 # Helper module to run app programmatically
│   │   ├── components/            # Reusable UI components (event selector, team stats, styling)
│   │   └── pages/                 # Streamlit pages (match scouting, team analysis, data entry, etc.)
│   ├── the_blue_alliance/         # The Blue Alliance API integration
│   │   ├── client.py              # API client for fetching FRC data
│   │   └── pipeline.py            # Data pipeline for syncing TBA data to database
│   ├── db/                        # Database layer
│   │   ├── connection.py          # DuckDB connection management (local file)
│   │   ├── schema.py              # Database schema definitions
│   │   └── cached_queries.py      # Cached query functions for performance
│   ├── analysis/                  # Data analysis modules
│   │   ├── opr.py                 # OPR (Offensive Power Rating) calculations
│   │   ├── season_specific/       # Season-specific analysis logic
│   │   │   ├── season_2025.py     # 2025 game: Reefscape (coral/reef/barge)
│   │   │   └── season_2026.py     # 2026 game: Hub scoring, Tower, Energized/Supercharged/Traversal RPs
│   │   ├── numerizer.py           # Dataset numeric transformation utilities
│   │   └── dataset_tools.py       # Data manipulation and analysis helpers
│   ├── jobs/                      # Background job scheduling
│   │   └── scheduler.py           # Scheduled tasks (TBA sync, etc.)
│   └── utils/                     # Utility functions
│       └── helpers.py             # General helper functions
├── tests/                         # Test suite
├── example_pages/                 # Example Streamlit pages for reference
├── utilities/                     # Development utilities and scripts
├── data/                          # Local data storage (frc2026.duckdb — not committed to git)
└── pyproject.toml                 # Project dependencies and configuration
```
OPR (Offensive Power Rating)
- A statistical measure of how many points a team contributes to their alliance's score
- Calculated using linear regression on match data to isolate individual team contributions
- Higher OPR indicates stronger offensive performance
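The regression can be sketched in a few lines: build a matrix with one row per alliance appearance, put a 1 in the column of each team on that alliance, and least-squares solve against the alliance scores. The teams and scores below are made up for illustration:

```python
import numpy as np

# One row per alliance appearance; column j is 1 if team j was on that alliance.
# Three hypothetical two-team alliances with known totals (made-up data).
participation = np.array([
    [1, 1, 0],  # teams A + B scored 50
    [0, 1, 1],  # teams B + C scored 70
    [1, 0, 1],  # teams A + C scored 60
], dtype=float)
alliance_scores = np.array([50.0, 70.0, 60.0])

# Least squares isolates each team's estimated point contribution (its OPR).
opr, *_ = np.linalg.lstsq(participation, alliance_scores, rcond=None)
print(opr)  # approximately [20, 30, 40]
```

Real event data is overdetermined (many more alliance rows than teams), which is exactly the case least squares handles.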
DPR (Defensive Power Rating)
- A measure of how many points a team prevents the opposing alliance from scoring
- Calculated similarly to OPR but focused on defensive impact
- Higher DPR indicates stronger defensive capabilities
CCWM (Calculated Contribution to Winning Margin)
- A measure of a team's contribution to their alliance's margin of victory
- Provided by The Blue Alliance API as a standard FRC metric
- Accounts for both offensive and defensive contributions
CCM (Component Contribution Metrics)
- Extended analysis that applies OPR-style calculations to individual game components
- Breaks down performance into granular metrics (hub scoring, tower points, auto points, etc.)
- Provides detailed insights into team strengths and weaknesses across all game elements
Z-Score (Standard Score)
- A statistical measure indicating how many standard deviations a value is from the mean
- Formula: `z = (value - mean) / standard_deviation`
- Allows comparison of different metrics on the same scale:
  - `z = 0`: Average performance
  - `z > 0`: Above average (z = 1 means one standard deviation above)
  - `z < 0`: Below average (z = -1 means one standard deviation below)
  - `|z| > 2`: Statistically significant outlier
- Used throughout the app to normalize and compare team performance across different metrics
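The formula translates directly to code; a small self-contained sketch using only the standard library:

```python
from statistics import mean, stdev

def z_scores(values):
    """Standardize values: how many (sample) standard deviations each is from the mean."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Three evenly spaced scores: mean 20, sample standard deviation 10.
zs = z_scores([10, 20, 30])
print(zs)  # [-1.0, 0.0, 1.0]
```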