feat(costs): cost analytics page (#557) · PR #590
Conversation
Add a /costs page that surfaces total AI spend from the workspace's token_usage table. Includes summary cards (total spend, tasks run, average cost per task), a 7/30/90-day time-range selector, and a pure SVG bar chart for daily spend.

Backend
- TokenRepository.get_costs_summary(days) aggregates token_usage into daily buckets with zero-fill for missing days. Excludes NULL task_ids from the task count.
- New v2 router GET /api/v2/costs/summary?days=30 returns the summary. Opens the workspace DB directly to avoid the pre-existing schema conflict between the v2 workspace schema and SchemaManager; returns an empty-state payload when token_usage doesn't yet exist.

Frontend
- /costs page mirrors the workspace page pattern (useSWR, workspace selector guard, loading skeleton, error banner).
- SpendBarChart is a pure-SVG/CSS chart (no Recharts) — empty state, proportional bar heights, hover tooltips, date-fns labels.
- Sidebar entry between Proof and Settings.

Tests: 11 backend + 11 frontend.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (3)
🚧 Files skipped from review as they are similar to previous changes (3)
Walkthrough

Adds TokenRepository.get_costs_summary(days), a v2 FastAPI endpoint (/api/v2/costs/summary) that reads workspace SQLite and returns totals plus daily zero-filled buckets, wires the router into the server, and implements a typed frontend Costs page, SpendBarChart, API client, mocks, and tests.

Changes

Cost Analytics Implementation
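The zero-filled daily buckets the walkthrough mentions can be sketched as follows. This is a minimal hypothetical helper, not the repository's actual code; the `zero_fill` name and dict shape are assumptions for illustration.

```python
from datetime import date, timedelta


def zero_fill(daily_rows: dict, days: int, today: date) -> list:
    """Return one bucket per day in the window, spend 0.0 for missing days."""
    start = today - timedelta(days=days - 1)
    buckets = []
    for i in range(days):
        day = (start + timedelta(days=i)).isoformat()
        # Days with no token_usage rows still get a bucket, so the chart
        # can render without conditionals.
        buckets.append({"date": day, "spend_usd": daily_rows.get(day, 0.0)})
    return buckets


buckets = zero_fill({"2026-05-12": 1.25}, 3, date(2026, 5, 13))
# Three buckets: 2026-05-11 (0.0), 2026-05-12 (1.25), 2026-05-13 (0.0)
```

This is the shape a chart consumer wants: a fixed-length series it can map straight onto bars.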
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CostsPage
    participant SWR
    participant CostsAPI
    participant WorkspaceDB
    User->>CostsPage: Select workspace + time range
    CostsPage->>SWR: Fetch summary (workspace_path, days)
    SWR->>CostsAPI: GET /api/v2/costs/summary?workspace_path=...&days=30
    CostsAPI->>WorkspaceDB: Open DB, verify token_usage table
    alt DB opened and table present
        CostsAPI->>WorkspaceDB: TokenRepository.get_costs_summary(days) via sync_conn
        WorkspaceDB-->>CostsAPI: totals + daily rows
        CostsAPI->>CostsAPI: backfill missing days, compute avg
        CostsAPI-->>SWR: CostSummaryResponse
    else DB open fails or table missing
        CostsAPI-->>SWR: empty CostSummaryResponse (zero-filled)
    end
    SWR-->>CostsPage: data ready
    CostsPage->>CostsPage: Render KPI cards + SpendBarChart
    CostsPage-->>User: Display totals and daily bars
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
PR #590 Review — Cost Analytics Page

Overall: Good implementation, ready with one medium-severity fix. This is a clean, self-contained feature that follows existing project patterns — rate limiting, SWR, direct SQLite access for the schema-conflict workaround, and solid test coverage across all three layers. The empty-state handling and zero-filled daily buckets are exactly right for a chart consumer.

Medium: Timestamp format mismatch in SQL boundary

Both queries build their window start with:

```python
start_iso = datetime(
    start_date.year, start_date.month, start_date.day, tzinfo=timezone.utc
).isoformat()
# → "2026-05-13T00:00:00+00:00" (T-separator, +00:00 suffix)
```

SQLite's `DEFAULT CURRENT_TIMESTAMP` writes "YYYY-MM-DD HH:MM:SS", and because ASCII space (0x20) < "T" (0x54), the lexicographic comparison drops rows inserted via the schema default. Fix: use the format SQLite's own functions understand —

```python
start_iso = start_date.strftime("%Y-%m-%d %H:%M:%S")  # "2026-05-13 00:00:00"
```

or keep the T-form but strip the offset:

```python
start_iso = f"{start_date.isoformat()}T00:00:00"
```

The same fix is needed in both queries.

Low: Duplicated SQL logic
```python
from codeframe.persistence.repositories.token_repository import TokenRepository

conn = sqlite3.connect(db_path)
conn.row_factory = sqlite3.Row
repo = TokenRepository(conn)
summary = repo.get_costs_summary(days)
```

This eliminates the duplication without needing `Database(...).initialize()`.

Low: Missing index for time-range queries

No migration adds an index on `token_usage.timestamp`, so every summary request scans the full table as usage history grows.

Low:
Commit: …y in router

Address claude-review feedback on #590:

- get_costs_summary built its window-start with `.isoformat()`, producing "YYYY-MM-DDTHH:MM:SS+00:00". SQLite's `DEFAULT CURRENT_TIMESTAMP` writes "YYYY-MM-DD HH:MM:SS" (space, no offset). Lexicographic comparison then drops every row inserted via the schema default ("space" < "T"). Switch the boundary to `strftime("%Y-%m-%d %H:%M:%S")`, which sorts correctly against both formats, and add a regression test that inserts a space-separated timestamp directly.
- Refactor `costs_v2._query_costs` to instantiate TokenRepository with a raw sqlite3 connection instead of duplicating the two aggregate queries. Eliminates drift risk between the router and the repository.
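The ordering pitfall this commit describes can be demonstrated in plain Python; the timestamp values below are illustrative.

```python
# SQLite compares TEXT timestamps byte by byte, so the separator matters.
iso_boundary = "2026-05-13T00:00:00+00:00"   # what .isoformat() produced
fixed_boundary = "2026-05-13 00:00:00"       # strftime("%Y-%m-%d %H:%M:%S")
row_ts = "2026-05-13 09:30:00"               # what DEFAULT CURRENT_TIMESTAMP writes

# A same-day row fails "timestamp >= ?" against the ISO boundary,
# because the space (0x20) sorts before "T" (0x54):
assert not row_ts >= iso_boundary
# Against the space-separated boundary the comparison behaves as intended:
assert row_ts >= fixed_boundary
```

The same byte-by-byte comparison is what SQLite performs when both operands are TEXT, which is why the strftime form fixes the dropped rows.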
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@codeframe/persistence/repositories/token_repository.py`:
- Around line 307-314: The aggregate SQL in the cursor.execute block computing
total_spend and task_count currently only filters with "WHERE timestamp >= ?"
and therefore can include future-dated rows; update that query to constrain
timestamp to the same daily window as the `daily` list by adding an upper bound
(e.g., "AND timestamp < ?" or "AND timestamp <= ?") using the same
end_date/end_date+1 value you use for the daily window so the aggregates over
`token_usage` (fields estimated_cost_usd, task_id, timestamp) match the per-day
breakdown for total_spend and task_count.
In `@codeframe/ui/routers/costs_v2.py`:
- Around line 63-126: The _query_costs function contains DB aggregation/business
logic that belongs in core/repository; extract its SQL and aggregation into a
repository function (e.g., create get_costs_summary(db_path: str, days: int) in
the core/repository module) that returns the same dict structure
(total_spend_usd, total_tasks, avg_cost_per_task, daily) and keeps the
token_usage queries there, then replace _query_costs implementation in the
router with a thin adapter that calls get_costs_summary(db_path, days) and
returns its result (handle errors by returning _empty_summary(days) if the repo
call fails).
- Around line 72-77: The current code only catches sqlite3.Error during
sqlite3.connect() and returns _empty_summary(days), but query-time exceptions
(e.g., during cursor.execute/fetch) are not caught; update the function that
opens the DB (around sqlite3.connect and conn.row_factory) to also wrap the
query execution block in a try/except sqlite3.Error that returns
_empty_summary(days) on any DB error, and ensure the connection is closed in a
finally block (or use a context manager) so resources are released; reference
the sqlite3.connect call, conn.row_factory assignment, the query execution block
(cursor.execute / cursor.fetchall), and the _empty_summary(days) return to
locate where to add the additional except/finally handling.
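The bounded window the first comment asks for can be sketched like this. Variable names are hypothetical; the point is an inclusive lower bound, an exclusive upper bound at the start of tomorrow, and SQLite's text timestamp format for both.

```python
from datetime import datetime, timedelta, timezone

days = 30
end_date = datetime.now(timezone.utc).date()
start_date = end_date - timedelta(days=days - 1)

# Both bounds in SQLite's CURRENT_TIMESTAMP text format ("YYYY-MM-DD HH:MM:SS"),
# so lexicographic comparison against stored rows works.
start_iso = datetime(
    start_date.year, start_date.month, start_date.day
).strftime("%Y-%m-%d %H:%M:%S")
end_iso = (
    datetime(end_date.year, end_date.month, end_date.day) + timedelta(days=1)
).strftime("%Y-%m-%d %H:%M:%S")

# The aggregate query would then use both bounds:
#   WHERE timestamp >= ? AND timestamp < ?   -- params: (start_iso, end_iso)
```

With both the aggregate and the daily queries sharing this window, future-dated rows can no longer inflate the KPI cards without appearing in the chart.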
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3a9a8ee9-f70a-45ea-817a-f6b8a33e8408
📒 Files selected for processing (13)

- codeframe/persistence/repositories/token_repository.py
- codeframe/ui/routers/costs_v2.py
- codeframe/ui/server.py
- tests/persistence/test_token_repository_costs.py
- tests/ui/test_costs_v2.py
- web-ui/__mocks__/@hugeicons/react.js
- web-ui/src/__tests__/components/costs/CostsPage.test.tsx
- web-ui/src/__tests__/components/costs/SpendBarChart.test.tsx
- web-ui/src/app/costs/page.tsx
- web-ui/src/components/costs/SpendBarChart.tsx
- web-ui/src/components/layout/AppSidebar.tsx
- web-ui/src/lib/api.ts
- web-ui/src/types/index.ts
Follow-up to Prior Review

Good progress on the two issues I flagged previously.

- Timestamp format (Medium): Fixed. `start_iso` now uses the strftime format in token_repository.py.
- SQL duplication (Low): Fixed. `_query_costs()` now delegates to `repo.get_costs_summary(days)`.

One remaining item: `_query_costs` only catches `sqlite3.Error` during `connect()`. If the query itself fails, the exception propagates as a 500. Add an `except sqlite3.Error` clause around the query block before `finally: conn.close()`. Low severity but consistent with the endpoint goal of never erroring for missing workspaces.

Good to merge once this is addressed.
Follow-up to claude-review on #590: the existing try/except only covered the `sqlite3.connect()` call, so a query-time failure (locked DB, corrupt schema) would 500 the dashboard. Wrap the cursor/repo block in its own except so the endpoint stays as graceful as the empty-state path.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@codeframe/ui/routers/costs_v2.py`:
- Around line 92-104: The async handler get_costs_summary is calling the
blocking _query_costs (which uses TokenRepository.get_costs_summary) on the
event loop; offload that DB work to a threadpool to avoid blocking. Replace the
direct call to _query_costs with an awaited call using
asyncio.to_thread(_query_costs, str(workspace.db_path), days) (or make the route
synchronous) so sqlite3 operations run off the event loop; update any
callers/returns accordingly to keep the same response shape.
- Around line 70-74: The current code calls sqlite3.connect(db_path) which will
create the DB file on a GET; instead either short-circuit if the file is missing
or open the DB read-only to avoid mutation. Modify the block around
sqlite3.connect/db_path to first check for existence (e.g.,
os.path.exists(db_path) and return _empty_summary(days) if absent) or open with
sqlite3.connect(f"file:{db_path}?mode=ro", uri=True) and handle
sqlite3.OperationalError to return _empty_summary(days); keep the existing
conn.row_factory = sqlite3.Row and callers that use conn unchanged.
In `@tests/persistence/test_token_repository_costs.py`:
- Around line 69-71: The test test_default_days_is_30 currently passes an
explicit days=30 so it doesn't verify the default behavior; update it to call
db.token_usage.get_costs_summary() without the days argument and assert
len(summary["daily"]) == 30 to validate the default window (or alternatively
rename the test to test_fixed_window_of_30_and keep the explicit days=30 if you
intend to test fixed-size behavior rather than the default). Ensure you modify
the test function test_default_days_is_30 and its call to get_costs_summary to
reflect the chosen intent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0e11d7d0-83a6-4766-8294-0a298cba1309
📒 Files selected for processing (3)

- codeframe/persistence/repositories/token_repository.py
- codeframe/ui/routers/costs_v2.py
- tests/persistence/test_token_repository_costs.py
🚧 Files skipped from review as they are similar to previous changes (1)
- codeframe/persistence/repositories/token_repository.py
```python
try:
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
except sqlite3.Error:
    return _empty_summary(days)
```
Avoid creating a database file on a read-only GET.
sqlite3.connect(db_path) will create the SQLite file when it is missing, so this endpoint can mutate workspace state and silently mask a bad workspace.db_path. Open read-only (or at least short-circuit when the file is absent) before falling back to _empty_summary(days).
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@codeframe/ui/routers/costs_v2.py` around lines 70 - 74, The current code
calls sqlite3.connect(db_path) which will create the DB file on a GET; instead
either short-circuit if the file is missing or open the DB read-only to avoid
mutation. Modify the block around sqlite3.connect/db_path to first check for
existence (e.g., os.path.exists(db_path) and return _empty_summary(days) if
absent) or open with sqlite3.connect(f"file:{db_path}?mode=ro", uri=True) and
handle sqlite3.OperationalError to return _empty_summary(days); keep the
existing conn.row_factory = sqlite3.Row and callers that use conn unchanged.
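The file-creation side effect is easy to confirm; the temporary path below is purely for illustration.

```python
import os
import sqlite3
import tempfile

db_path = os.path.join(tempfile.mkdtemp(), "missing.db")

# Read-only URI mode refuses to open a nonexistent file...
try:
    sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
except sqlite3.OperationalError:
    pass
assert not os.path.exists(db_path)  # ...and leaves nothing behind.

# The default mode creates an empty database file as a side effect of connect().
conn = sqlite3.connect(db_path)
conn.execute("PRAGMA user_version")
conn.close()
assert os.path.exists(db_path)
```

This is why the read-only form is preferable on a GET path: a bad `workspace.db_path` fails loudly into the empty-state fallback instead of silently materializing a stray database file.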
```python
async def get_costs_summary(
    request: Request,
    workspace: Workspace = Depends(get_v2_workspace),
    days: int = Query(30, ge=7, le=90, description="Window size in days (7-90)"),
):
    """Return total spend, task count, average cost, and a daily series.

    Reads from the workspace's `token_usage` table. Returns zero-filled
    daily buckets so the client can render a chart without conditionals.
    If the table doesn't exist (no agent has run in this workspace yet),
    returns an empty-state response rather than an error.
    """
    summary = _query_costs(str(workspace.db_path), days)
```
🧩 Analysis chain
🏁 Script executed:

```shell
find . -type f -name "costs_v2.py" | head -20
```

Repository: frankbria/codeframe
Length of output: 98

🏁 Script executed:

```shell
cat -n codeframe/ui/routers/costs_v2.py | head -120
```

Repository: frankbria/codeframe
Length of output: 4611

🏁 Script executed:

```shell
# Check the _query_costs function definition
rg "_query_costs" -B 3 -A 15
```

Repository: frankbria/codeframe
Length of output: 2233

🏁 Script executed:

```shell
find . -type f -name "token_repository.py" -o -name "*repository.py" | grep -E "token_repository|repositories"
```

Repository: frankbria/codeframe
Length of output: 1208

🏁 Script executed:

```shell
# Check TokenRepository.get_costs_summary to see if it's blocking
rg "get_costs_summary" -B 2 -A 20
```

Repository: frankbria/codeframe
Length of output: 16330

🏁 Script executed:

```shell
# Check how other async routers handle blocking operations
rg "async def" codeframe/ui/routers/ -A 10 | head -100
```

Repository: frankbria/codeframe
Length of output: 6740

🏁 Script executed:

```shell
# Search for threadpool/to_thread usage in the codebase to see the pattern
rg "to_thread|run_in_threadpool|ThreadPoolExecutor" -B 2 -A 2
```

Repository: frankbria/codeframe
Length of output: 14276

🏁 Script executed:

```shell
# Check TokenRepository.get_costs_summary implementation
cat -n codeframe/persistence/repositories/token_repository.py | grep -A 60 "def get_costs_summary"
```

Repository: frankbria/codeframe
Length of output: 3096

🏁 Script executed:

```shell
# Check if there's any pattern preference - look at which is more common
rg "async def.*\(.*Depends|Query" codeframe/ui/routers/*.py | wc -l
rg "def .*\(.*Depends|Query" codeframe/ui/routers/*.py | wc -l
```

Repository: frankbria/codeframe
Length of output: 69
Do not run synchronous SQLite work on the event loop.
This async handler calls _query_costs(), which performs blocking sqlite3 operations (via TokenRepository.get_costs_summary()). These blocking calls will run on the event loop and can stall unrelated requests under lock contention or in larger workspaces. Make the route synchronous or wrap _query_costs() with asyncio.to_thread() to offload to a threadpool.
The codebase has established a consistent pattern for this — see settings_v2.py (uses run_in_threadpool), discovery.py, and chat.py (use asyncio.to_thread) for examples.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@codeframe/ui/routers/costs_v2.py` around lines 92 - 104, The async handler
get_costs_summary is calling the blocking _query_costs (which uses
TokenRepository.get_costs_summary) on the event loop; offload that DB work to a
threadpool to avoid blocking. Replace the direct call to _query_costs with an
awaited call using asyncio.to_thread(_query_costs, str(workspace.db_path), days)
(or make the route synchronous) so sqlite3 operations run off the event loop;
update any callers/returns accordingly to keep the same response shape.
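The offload pattern described above can be sketched in isolation. The `_query_costs` body here is a trivial stand-in (an in-memory query) rather than the router's real helper; only the `asyncio.to_thread` wrapping is the point.

```python
import asyncio
import sqlite3


def _query_costs(db_path: str, days: int) -> dict:
    # Stand-in for the blocking helper: synchronous sqlite3 work that
    # would otherwise run on the event loop. db_path is unused here.
    conn = sqlite3.connect(":memory:")
    try:
        conn.execute("SELECT 1").fetchone()
        return {"total_spend_usd": 0.0, "total_tasks": 0,
                "avg_cost_per_task": 0.0, "daily": [], "days": days}
    finally:
        conn.close()


async def get_costs_summary(db_path: str, days: int = 30) -> dict:
    # asyncio.to_thread runs the blocking call in the default threadpool,
    # keeping the event loop free for concurrent requests.
    return await asyncio.to_thread(_query_costs, db_path, days)


summary = asyncio.run(get_costs_summary("workspace.db"))
```

The response shape is unchanged; only where the sqlite3 work executes moves off the loop.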
```python
def test_default_days_is_30(self, db):
    summary = db.token_usage.get_costs_summary(days=30)
    assert len(summary["daily"]) == 30
```
This test does not verify the default window.
Passing days=30 explicitly means this still passes if the default changes or only exists at the API layer. Either rename it to reflect fixed-size behavior or move the default assertion to the router test that omits the query parameter.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@tests/persistence/test_token_repository_costs.py` around lines 69 - 71, The
test test_default_days_is_30 currently passes an explicit days=30 so it doesn't
verify the default behavior; update it to call
db.token_usage.get_costs_summary() without the days argument and assert
len(summary["daily"]) == 30 to validate the default window (or alternatively
rename the test to test_fixed_window_of_30_and keep the explicit days=30 if you
intend to test fixed-size behavior rather than the default). Ensure you modify
the test function test_default_days_is_30 and its call to get_costs_summary to
reflect the chosen intent.
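The distinction is easy to show with a toy default; this function is hypothetical, not the repository's.

```python
def get_costs_summary(days: int = 30) -> dict:
    # Toy stand-in: one bucket per day in the window.
    return {"daily": [0.0] * days}


# Passing days=30 explicitly would keep passing even if the default changed;
# omitting the argument is what actually exercises the default window.
assert len(get_costs_summary()["daily"]) == 30
assert len(get_costs_summary(days=7)["daily"]) == 7
```

A test named for the default should make the no-argument call; a test that pins `days=30` is really testing fixed-window behavior.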
Code Review — PR #590: Cost Analytics Page

This is a clean, well-scoped implementation of the `/costs` feature.

Bugs / Correctness

SQL queries lack an upper timestamp bound. Suggested fix in both queries:

```
WHERE timestamp >= ? AND timestamp < ?
# params: (start_iso, (end_date + timedelta(days=1)).strftime("%Y-%m-%d %H:%M:%S"))
```

Design / Tech Debt

Direct DB connection in the router is a workaround, not a fix.

Minor

Silent exception swallowing in `_query_costs`.

What's solid

The main ask is the upper-bound SQL fix — the rest is low-stakes cleanup.
♻️ Duplicate comments (2)

codeframe/ui/routers/costs_v2.py (2)

97-109: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Offload synchronous DB work from the async request handler.

Line 109 calls `_query_costs(...)` directly from an `async` route. This runs blocking SQLite I/O on the event loop and can stall concurrent requests under lock contention or slow storage.

Suggested patch:

```diff
+import asyncio
 import sqlite3
 from datetime import datetime, timedelta, timezone
 from typing import Dict, List
@@
-    summary = _query_costs(str(workspace.db_path), days)
+    summary = await asyncio.to_thread(_query_costs, str(workspace.db_path), days)
```

Verification script:

```shell
#!/bin/bash
# Verify this async route currently executes sync DB work directly.
rg -n -C3 'async def get_costs_summary|summary = _query_costs\(' codeframe/ui/routers/costs_v2.py
```

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@codeframe/ui/routers/costs_v2.py` around lines 97 - 109, The async route get_costs_summary is calling the synchronous _query_costs directly (blocking SQLite I/O) which can stall the event loop; change get_costs_summary to offload that call to a threadpool (e.g., using starlette.concurrency.run_in_threadpool or asyncio.get_event_loop().run_in_executor) so _query_costs executes off the loop and await the result, ensuring the rest of the handler remains async-compatible and returns the same summary object. Make sure you only wrap the _query_costs(...) invocation and preserve existing parameters and return handling.

70-74: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use read-only SQLite connection for this GET path.

Line 71 (`sqlite3.connect(db_path)`) can create a database file when missing, which mutates state on a read request and can mask an invalid workspace DB path. Prefer read-only URI mode and keep the empty-state fallback on open failure.

Suggested patch:

```diff
-    try:
-        conn = sqlite3.connect(db_path)
+    try:
+        conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
         conn.row_factory = sqlite3.Row
     except sqlite3.Error:
         return _empty_summary(days)
```

Verification query: In Python sqlite3, does sqlite3.connect("path.db") create the file if it does not exist, and is sqlite3.connect("file:path.db?mode=ro", uri=True) the recommended read-only pattern?

🤖 Prompt for AI Agents

Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@codeframe/ui/routers/costs_v2.py` around lines 70 - 74, The GET handler currently opens SQLite with sqlite3.connect(db_path) which can create the DB file; change it to open in read-only URI mode by using sqlite3.connect with a "file:...?...mode=ro" URI and uri=True (so the open fails instead of creating a file), keep setting conn.row_factory = sqlite3.Row and preserve the except sqlite3.Error fallback that returns _empty_summary(days); update the code around the conn creation (the sqlite3.connect call that sets conn) to use the read-only URI form referencing db_path, and ensure all subsequent logic using conn remains unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Duplicate comments:
In `@codeframe/ui/routers/costs_v2.py`:
- Around line 97-109: The async route get_costs_summary is calling the
synchronous _query_costs directly (blocking SQLite I/O) which can stall the
event loop; change get_costs_summary to offload that call to a threadpool (e.g.,
using starlette.concurrency.run_in_threadpool or
asyncio.get_event_loop().run_in_executor) so _query_costs executes off the loop
and await the result, ensuring the rest of the handler remains async-compatible
and returns the same summary object. Make sure you only wrap the
_query_costs(...) invocation and preserve existing parameters and return
handling.
- Around line 70-74: The GET handler currently opens SQLite with
sqlite3.connect(db_path) which can create the DB file; change it to open in
read-only URI mode by using sqlite3.connect with a "file:...?...mode=ro" URI and
uri=True (so the open fails instead of creating a file), keep setting
conn.row_factory = sqlite3.Row and preserve the except sqlite3.Error fallback
that returns _empty_summary(days); update the code around the conn creation (the
sqlite3.connect call that sets conn) to use the read-only URI form referencing
db_path, and ensure all subsequent logic using conn remains unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 39e78e83-e882-49ba-8c6d-3a1274afadf1
📒 Files selected for processing (1)
codeframe/ui/routers/costs_v2.py
Commit: …t TODO

Address claude-review on #590 (commit 594368a):

- get_costs_summary used `WHERE timestamp >= ?` with no upper bound, so rows dated past today would inflate the KPI cards but not appear in the daily chart (which is built from a fixed list of in-window dates). Add an exclusive `< end_of_today` bound to both queries so KPIs and chart always cover the same set of rows. Regression test included.
- Log a warning when _query_costs falls back to the empty summary (DB open or query failure) so corrupted/locked DBs surface in logs.
- Document the schema-conflict workaround inline with TODO(schema-conflict) so the next engineer doesn't propagate the raw-sqlite pattern blindly.
Follow-up Review — PR #590

Both issues I flagged in my first review are resolved in the latest commits.

- FIXED: Timestamp format. `start_iso` and `end_iso` now use `strftime("%Y-%m-%d %H:%M:%S")` throughout, so lexicographic comparisons work correctly against SQLite's `CURRENT_TIMESTAMP` default ("YYYY-MM-DD HH:MM:SS"). The mismatch I described is gone.
- FIXED: SQL duplication. `_query_costs()` already delegates to `TokenRepository(sync_conn=conn).get_costs_summary(days)`. No duplicated SQL in the router.

Remaining minor items (none blocking):

- Task01Icon mock: page.tsx imports `Task01Icon`, but only `MoneyBag02Icon`, `Analytics01Icon`, and `ChartLineData01Icon` were added to the mock file in this PR. Since 11 new tests pass, `Task01Icon` is presumably already mocked in the pre-existing set. Quick verification before merge: confirm it appears in `web-ui/__mocks__/@hugeicons/react.js`.
- timestamp index: Still no migration adds an index on `token_usage.timestamp`. Full table scan grows linearly with agent runs. Fine for now; worth a follow-up ticket before workspaces accumulate months of history.
- formatCurrency duplication: The same function is defined identically in both page.tsx and SpendBarChart.tsx. Could be extracted to a `lib/format.ts` utility. Minor polish; not a blocker.

Overall: the fixes land correctly. Ready to merge once the Task01Icon mock is confirmed.
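The suggested follow-up index can be sketched as below. The DDL is simplified to the three columns the review mentions and the index name is hypothetical; EXPLAIN QUERY PLAN confirms the range query uses the index instead of a full scan.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE token_usage ("
    "task_id TEXT, estimated_cost_usd REAL, "
    "timestamp TEXT DEFAULT CURRENT_TIMESTAMP)"
)
# The follow-up index suggested in the review (name is an assumption).
conn.execute(
    "CREATE INDEX IF NOT EXISTS idx_token_usage_timestamp "
    "ON token_usage(timestamp)"
)
# EXPLAIN QUERY PLAN shows how SQLite will satisfy the bounded window query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT SUM(estimated_cost_usd) FROM token_usage "
    "WHERE timestamp >= ? AND timestamp < ?",
    ("2026-05-13 00:00:00", "2026-06-13 00:00:00"),
).fetchall()
details = [row[-1] for row in plan]
```

With the index in place the plan reports an index search on `timestamp` rather than a table scan, which keeps the summary endpoint fast as history accumulates.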
Closes #557.
Summary
- `/costs` page showing total AI spend, tasks run, and avg cost / task from the workspace's `token_usage` table.
- `GET /api/v2/costs/summary?days=30` endpoint backed by `TokenRepository.get_costs_summary` with zero-filled daily buckets.

Implementation notes

- The endpoint opens the workspace SQLite file directly rather than going through `Database(...).initialize()`. The current global `SchemaManager` schema and the v2 workspace schema (`codeframe/core/workspace.py`) define `blockers` differently, so `Database.initialize()` fails on an existing workspace DB. The handler tolerates the `token_usage` table being absent and returns an empty-state payload in that case, matching what users will see in workspaces that haven't run an agent yet.
- `TokenRepository.get_costs_summary` is independent of that conflict and runs from any synchronous repo connection — covered by its own unit tests.

Test plan

- `uv run pytest tests/persistence tests/ui` — 426 passed
- `uv run ruff check` on touched files — clean
- `npm test` — 826 passed (11 new in `__tests__/components/costs`)
- `npm run build` — clean, `/costs` listed as a static route
- Manual: open `/costs`, confirm summary cards + chart render, switch time range to 7 and 90 days, confirm empty state on a workspace with no `token_usage` data.

Acceptance criteria from #557

- `/costs` renders with summary cards and chart
- `npm test` and `npm run build` pass

Summary by CodeRabbit

New Features

Tests