feat: complete v1 local replay workflow #1

Open

Xeron2000 wants to merge 11 commits into main from phase-01-import-reliability

Conversation

Xeron2000 (Owner) commented Apr 11, 2026

Summary

  • add a trustworthy local import workspace with Excel upload, manual row entry, structured import reports, CSV detail export, and SQLite backup/restore
  • complete the replay shell with replay controls, saved progress, dual compare modes, lightweight analytics, and durable local workspace state
  • refresh milestone planning artifacts and README screenshots to match the shipped v1 workflow

Verification

  • cd backend && uv run pytest
  • cd frontend && pnpm exec vitest run
  • cd frontend && pnpm exec tsc -b
  • cd frontend && pnpm build
  • live QA for /replay, /import, /api/backups/download, and /api/trades/import/rows

Summary by CodeRabbit

Release Notes

  • New Features
    • Added dedicated import workspace with Excel upload, manual trade entry, and detailed row-level error reporting.
    • Introduced local SQLite database backup and restore capability.
    • Added replay playback controls (play/pause, step backward/forward, speed presets: 1x/2x/4x).
    • Enabled multi-timeframe and multi-symbol comparison modes.
    • Added lightweight replay analytics panel with summary and distribution charts.
    • Implemented persistent replay workspace state (filters, timeframe preferences, analytics panel state).

coderabbitai Bot commented Apr 11, 2026

📝 Walkthrough

Implements Retraq v1.0 milestone delivering four sequential phases: import reliability with schema migrations and structured reporting, single-trade replay with session persistence and playback controls, multi-timeframe/multi-symbol compare modes, and local UX polish with workspace/filter bootstrap and SQLite backup/restore. Adds 150+ test files validating both backend (migrations, import contracts, API endpoints) and frontend (import workflow, replay flow, compare modes, analytics panel). Extends README and documentation to reflect feature completeness.

Changes

Cohort / File(s) Summary
Planning & Documentation
.planning/MILESTONE-AUDIT-v1.0.md, .planning/PROJECT.md, .planning/REQUIREMENTS.md, .planning/ROADMAP.md, .planning/STATE.md, .planning/config.json, .planning/codebase/*, .planning/phases/*, .planning/research/*
Comprehensive v1.0 planning documentation including milestone audit, project scope, requirements with traceability, phased roadmap, architecture decisions, concerns/pitfalls, stack recommendations, and research summaries. Phase-specific planning with concrete implementation plans, verification steps, and completion summaries for all four phases.
Backend Database & Migrations
backend/migrations/*, backend/models.py, backend/services/import_types.py, backend/services/trade_importer.py
Added SQLite migration runner with backup-before-migrate safety, phase-1 schema bootstrap (klines, trades, import_sessions, import_report_rows tables), and structured ORM models with proper nullable typing. Generalized trade importer supporting Excel and manual row imports with row-level outcome tracking, duplicate/conflict detection, timestamp normalization, and CSV download rendering.
Backend APIs & Services
backend/main.py, backend/import_data.py, backend/services/backup_service.py
New endpoints for manual trade import rows (POST /api/trades/import/rows), import session CSV download (GET /api/trades/import/reports/{session_id}/download), SQLite backup download (GET /api/backups/download), and backup restore (POST /api/backups/restore). Updated startup to run migrations before schema bootstrap. Added backup/restore service with SQLite header validation.
Backend Tests
backend/tests/test_*.py, backend/tests/conftest.py
Comprehensive test coverage including migration runner behavior, import contract validation, import API endpoints (file rejection, missing columns, download links, manual rows), backup/restore endpoints, trade importer logic with duplicates/conflicts, and startup migration ordering. Tests use isolated temporary databases via pytest fixtures.
Frontend Import Workflow
frontend/src/components/import/*, frontend/src/pages/ImportPage.tsx, frontend/src/test/import*.test.tsx
New import page with dual-path workflow (Excel file upload and manual row entry), real-time import results display with row-level outcome filtering, file validation error display, timestamp normalization evidence, downloadable CSV reports, and SQLite backup/restore safekeeping panel. Includes comprehensive test coverage for upload flow, file rejection, manual entry, and route smoke testing.
Frontend Replay & Playback
frontend/src/components/ReplayControls.tsx, frontend/src/utils/replayPlayback.ts, frontend/src/test/replayPlayback.test.ts, frontend/src/test/replayFlow.smoke.test.tsx
New replay control surface with play/pause, step backward/forward, reset, and speed presets (1x/2x/4x). Added replay progress state management with versioned localStorage persistence, reducer-driven cursor advancement with boundary clamping, and round-trip serialization. Tests verify playback state transitions, progress persistence across reloads, and end-to-end smoke coverage.
Frontend Session & Workspace
frontend/src/utils/replaySession.ts, frontend/src/utils/replayWorkspace.ts, frontend/src/test/replaySession.test.ts, frontend/src/test/replayWorkspace.test.ts
Added versioned replay session seed resolution with strict precedence (route > local restore > latest trade > empty), browser-local serialization with type validation, and replay workspace persistence for filter/symbol/timeframe/compare/analytics bootstrap across sessions. Includes safeguards against version mismatch and stale data.
Frontend Compare Mode
frontend/src/components/CompareModeControls.tsx, frontend/src/utils/comparePane.ts, frontend/src/test/comparePane.test.ts, frontend/src/test/symbolCompare.test.tsx, frontend/src/test/timeframeCompare.test.tsx
New compare mode abstraction supporting `off`, `timeframe`, and `symbol` modes, with shared pane-state helpers, toggle controls, and tests covering both timeframe and symbol comparison.
Frontend Analytics Panel
frontend/src/components/ReplayAnalyticsPanel.tsx, frontend/src/test/replayAnalyticsPanel.test.tsx
Collapsible replay analytics panel showing summary stats, equity curve, hourly heatmap, symbol distribution, and current streak. Lazy loads data on first expand, persists open/closed state via workspace, and includes memoized analyses to avoid unnecessary recalculation. Tests verify lazy loading and workspace persistence.
Frontend Core Updates
frontend/src/components/ChartManager.tsx, frontend/src/components/TradeList.tsx, frontend/src/pages/ReplayPage.tsx, frontend/src/components/Navbar.tsx
Major ChartManager refactoring with typed lightweight-charts APIs, replay cursor integration, compare mode handling with mode-specific annotation policies, and improved crosshair synchronization using nearest-candle snapping. TradeList updated to respect replay seed and workspace bootstrap. ReplayPage enhanced with session seed resolution, workspace-driven timeframe initialization, and localStorage persistence. Navbar adds /import route link.
Frontend Services & Types
frontend/src/services/api.ts, frontend/src/services/importTypes.ts, frontend/src/services/importReportAdapter.ts
New import-related types (ImportReport, ImportRowOutcome, ImportSummary, ManualImportRowInput). Updated API functions with ImportReport return types and new endpoints for manual rows/backup download/restore. Adapter layer for normalizing backend Phase-1 dataclass JSON to frontend types, handling legacy bucket keys and file rejection transformation.
Frontend Testing Infrastructure
frontend/vitest.config.ts, frontend/src/test/setup.ts, frontend/package.json
Added Vitest configuration with jsdom environment, test setup hooks for cleanup and mocks, and testing library dependencies (@testing-library/react/jest-dom/user-event, jsdom, vitest).
Frontend Comprehensive Tests
frontend/src/test/*.test.ts(x)
60+ test files covering import page workflows, import API contracts, replay session/playback/workspace, compare modes, replay/layout persistence, trade list selection, local safekeeping, phase-3 replay integration, and various smoke tests. Mix of unit, integration, and smoke coverage with localStorage/ResizeObserver mocks and deterministic API stubs.
Configuration & Documentation
README.md, AGENTS.md, backend/pyproject.toml
Updated README with v1 feature descriptions (import workflow, replay page, analysis page, safekeeping), removed outdated learning module references, added manual import caveats. Added pytest/httpx to backend dependencies. Embedded GSD documentation sections in AGENTS.md for project/stack/architecture context.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~90 minutes


🐰 Hops through the code with glee,
Four phases planned, now code runs free—
From trades imported to replay's replay,
With sessions saved and workspace's way,
Local and true, the v1 is born,
Where crypto charts dance, by session reborn! 🚀



coderabbitai Bot left a comment

Actionable comments posted: 12

Note

Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
backend/main.py (1)

84-91: ⚠️ Potential issue | 🟠 Major

Delete the staged Excel file in a finally block.

NamedTemporaryFile(delete=False) leaves every upload on disk right now. Repeated imports will steadily leak temp files.

♻️ Possible fix
 @app.post("/api/trades/import")
 async def import_trades(file: UploadFile = File(...), db: Session = Depends(get_db)):
     if not file.filename or not file.filename.endswith((".xlsx", ".xls")):
         raise HTTPException(400, "Only Excel files (.xlsx, .xls) are supported")
+    tmp_path: str | None = None
     try:
         with tempfile.NamedTemporaryFile(delete=False, suffix=".xlsx") as tmp:
             content = await file.read()
             tmp.write(content)
             tmp_path = tmp.name
-        result = trade_importer.parse_excel(db, tmp_path)
-        return result
+        return trade_importer.parse_excel(db, tmp_path)
     except Exception as e:
         raise HTTPException(500, str(e))
+    finally:
+        if tmp_path is not None:
+            Path(tmp_path).unlink(missing_ok=True)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/main.py` around lines 84 - 91, The temp Excel file created with
tempfile.NamedTemporaryFile(delete=False) is never removed; modify the upload
handling in backend/main.py so tmp_path is initialized (e.g., tmp_path = None)
and the try/except that calls trade_importer.parse_excel(db, tmp_path) is
followed by a finally block that checks if tmp_path is set and the file exists
and then deletes it (os.remove or Path.unlink), ensuring removal happens whether
parse_excel succeeds or an HTTPException is raised.
🟡 Minor comments (22)
.planning/codebase/CONVENTIONS.md-38-38 (1)

38-38: ⚠️ Potential issue | 🟡 Minor

Correct outdated test-harness claim.

Line 38 states there is no visible automated test harness, but this PR adds substantial test suites. Please update this item so conventions reflect current reality.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/codebase/CONVENTIONS.md at line 38, Update the outdated sentence
"There is no visible automated test harness in the inspected files." in
CONVENTIONS.md to acknowledge the new automated test suites added in this PR:
replace it with a brief statement that the repository now includes an automated
test harness (specify the testing framework used, e.g., Jest/Mocha/pytest) and
mention that CI runs these tests (e.g., GitHub Actions or the project's CI) so
the conventions reflect current reality.
.planning/codebase/TESTING.md-4-8 (1)

4-8: ⚠️ Potential issue | 🟡 Minor

Testing baseline is outdated and should be rewritten.

Line 4 and Lines 22-24 claim no automated test suite/setup, which conflicts with this PR’s added backend/frontend tests and documented test commands. Please revise this section to list the actual current harnesses and entry commands.

Also applies to: 22-24

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/codebase/TESTING.md around lines 4 - 8, Update the "No dedicated
automated test suite" section in TESTING.md to reflect the current test
harnesses and entry commands: state that frontend tests are available via the
package.json scripts (e.g., npm run test or the repo's defined test script in
frontend package.json such as "dev"/"build"/"lint"/"preview" plus the newly
added test script), and that backend tests run via pytest as configured in
backend/pyproject.toml (e.g., pipx/poetry/pytest invocation or explicit "pytest"
command), and list any coverage/lint commands added; edit the sentences that
currently cover lines 4 and 22-24 to enumerate these actual commands and where
to run them (frontend package.json and backend/pyproject.toml / pytest), so the
document matches the PR-added tests.
AGENTS.md-134-136 (1)

134-136: ⚠️ Potential issue | 🟡 Minor

Update stale testing/diagnostics statements in embedded conventions.

Line 134 (“no visible automated test harness”) and Line 135 (“diagnostics are not clean”) read as baseline facts, but this PR introduces broad backend/frontend tests and reports clean verification runs. Please refresh this block to avoid contradicting the current repo state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AGENTS.md` around lines 134 - 136, Update the stale bullets that claim "no
visible automated test harness" and "diagnostics are not clean" to reflect the
new tests and clean runs introduced by this PR: replace those two statements
with a brief note that broad backend and frontend automated tests have been
added and that verification runs (lint/type/test) are currently clean; also
adjust or remove the runtime DB creation remark referencing `main.py` /
`import_data.py` if those files were changed so the conventions block no longer
contradicts repository state.
frontend/src/utils/replaySession.ts-39-55 (1)

39-55: ⚠️ Potential issue | 🟡 Minor

Validate defaultTimeframe against allowed Timeframe values, not generic string.

isReplaySessionSeed accepts any string for defaultTimeframe, but the actual Timeframe type is a finite union: '5m' | '15m' | '1h' | '4h' | '1d'. Malformed persisted payloads with invalid timeframe strings (e.g., "2h") will pass deserialization and leak into runtime state via initialSeed?.defaultTimeframe on line 83 of ReplayPage.tsx.

Replace the check (value.defaultTimeframe === undefined || isString(value.defaultTimeframe)) with validation that enforces membership in the Timeframe union.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/utils/replaySession.ts` around lines 39 - 55, The type guard
isReplaySessionSeed currently allows any string for defaultTimeframe; replace
the last condition so it validates membership in the Timeframe union (e.g.,
check value.defaultTimeframe is undefined or one of '5m' | '15m' | '1h' | '4h' |
'1d') instead of using isString. Implement or reuse a helper (isTimeframe) and
update isReplaySessionSeed's condition to (value.defaultTimeframe === undefined
|| isTimeframe(value.defaultTimeframe)) so malformed timeframe strings cannot
pass and thus initialSeed?.defaultTimeframe won't leak invalid values into
runtime state.
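
A minimal sketch of such a guard, assuming the Timeframe union quoted above ('5m' | '15m' | '1h' | '4h' | '1d'); the TIMEFRAMES tuple and isTimeframe name are illustrative, not the file's current code:

// Hypothetical sketch: derive the guard from a single source-of-truth tuple so
// the runtime check and the Timeframe type cannot drift apart.
const TIMEFRAMES = ['5m', '15m', '1h', '4h', '1d'] as const;
type Timeframe = (typeof TIMEFRAMES)[number];

function isTimeframe(value: unknown): value is Timeframe {
  return typeof value === 'string' && (TIMEFRAMES as readonly string[]).includes(value);
}

// The last condition in isReplaySessionSeed would then read:
//   value.defaultTimeframe === undefined || isTimeframe(value.defaultTimeframe)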
backend/services/import_types.py-27-45 (1)

27-45: ⚠️ Potential issue | 🟡 Minor

Enforce outcome/kind domain at construction time.

outcome and kind are currently unconstrained strings. A typo can silently create invalid report categories and break downstream aggregation/filtering.

Suggested guardrails in dataclasses
 @dataclass(frozen=True, slots=True)
 class ImportRowOutcome:
@@
     normalized_value: object | None = None
     outcome: str = "failed"
+
+    def __post_init__(self) -> None:
+        if self.outcome not in IMPORT_OUTCOME_BUCKETS:
+            raise ValueError(f"Unsupported outcome bucket: {self.outcome}")
@@
 @dataclass(frozen=True, slots=True)
 class ImportNormalizationEvent:
@@
     reason: str
     kind: str = "timestamp_normalization"
+
+    def __post_init__(self) -> None:
+        if self.kind != "timestamp_normalization":
+            raise ValueError(f"Unsupported normalization kind: {self.kind}")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/services/import_types.py` around lines 27 - 45, ImportRowOutcome and
ImportNormalizationEvent accept arbitrary strings for outcome and kind; add
validation in their dataclass constructors to enforce allowed domains. Implement
__post_init__ on ImportRowOutcome to check that outcome is one of the allowed
values (e.g., "failed", "success", "skipped" — or your canonical set) and raise
ValueError on invalid values, and implement __post_init__ on
ImportNormalizationEvent to validate kind against the canonical normalization
kinds (e.g., "timestamp_normalization", etc.). Alternatively change the field
types to an Enum or typing.Literal and validate in __post_init__ (use
object.__setattr__ if you need to coerce on a frozen dataclass) so typos are
rejected at construction time.
.planning/codebase/CONCERNS.md-22-23 (1)

22-23: ⚠️ Potential issue | 🟡 Minor

Stale claim about .planning/codebase contents.

Line 22 says only STACK.md and INTEGRATIONS.md exist, but this PR already adds more planning docs. Please update this statement to avoid immediate drift.

Suggested wording update
-- **`.planning/codebase` 目前偏薄**:现有材料只有 `STACK.md` 和 `INTEGRATIONS.md`,没有看到更细的 roadmap / decision / milestone 文档。能支撑“现状说明”,但不够支撑后续里程碑管理。
+- **`.planning/codebase` 仍需持续完善**:当前已补充多份现状文档,但 roadmap / decision / milestone 的长期维护机制仍需明确,以支撑后续里程碑管理。
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/codebase/CONCERNS.md around lines 22 - 23, Update the stale
statement in .planning/codebase that claims only STACK.md and INTEGRATIONS.md
exist: change the sentence referencing "STACK.md" and "INTEGRATIONS.md" to
acknowledge the additional planning documents introduced in this PR (e.g.,
mention the new roadmap/decision/milestone docs or say "STACK.md,
INTEGRATIONS.md, and additional planning documents such as <new-doc-names>"), so
the .planning/codebase summary accurately reflects current contents; edit the
line that reads about only those two files to a concise, forward-compatible
phrase like "contains STACK.md, INTEGRATIONS.md and supplemental planning docs
(roadmap/decisions/milestones)" or list the actual new filenames.
backend/import_data.py-18-39 (1)

18-39: ⚠️ Potential issue | 🟡 Minor

Guarantee session closure when import parsing throws.

If trade_importer.parse_excel(...) raises, db.close() is skipped. Wrap DB usage in try/finally to avoid leaking sessions on failure paths.

🛠️ Proposed fix
 def main():
     MigrationRunner(engine=engine).run()
-    db = SessionLocal()
-
-    # 检查是否已有数据
-    existing = db.query(Trade).count()
-    if existing > 0:
-        print(f"📊 数据库已有 {existing} 条交易记录,跳过导入")
-        db.close()
-        return
-
-    # 导入示例数据
-    excel_path = os.path.join(os.path.dirname(__file__), "..", "1.xlsx")
-    if not os.path.exists(excel_path):
-        print("⚠️  未找到 1.xlsx,跳过数据导入")
-        db.close()
-        return
-
-    print("📥 导入示例交易数据...")
-    result = trade_importer.parse_excel(db, excel_path)
-    print(
-        f"✅ 导入完成: 成功 {result.summary.success_count} 条, 失败 {result.summary.failed_count} 条"
-    )
-    db.close()
+    db = SessionLocal()
+    try:
+        # 检查是否已有数据
+        existing = db.query(Trade).count()
+        if existing > 0:
+            print(f"📊 数据库已有 {existing} 条交易记录,跳过导入")
+            return
+
+        # 导入示例数据
+        excel_path = os.path.join(os.path.dirname(__file__), "..", "1.xlsx")
+        if not os.path.exists(excel_path):
+            print("⚠️  未找到 1.xlsx,跳过数据导入")
+            return
+
+        print("📥 导入示例交易数据...")
+        result = trade_importer.parse_excel(db, excel_path)
+        print(
+            f"✅ 导入完成: 成功 {result.summary.success_count} 条, 失败 {result.summary.failed_count} 条"
+        )
+    finally:
+        db.close()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/import_data.py` around lines 18 - 39, The DB session opened via
SessionLocal() may not be closed if trade_importer.parse_excel(db, excel_path)
raises; wrap the session usage in a try/finally so db.close() always runs.
Specifically, keep creating the session with SessionLocal(), perform the
existing checks and call to trade_importer.parse_excel(...) inside the try
block, and move db.close() into the finally block to guarantee closure for the
session (refer to SessionLocal, Trade, and trade_importer.parse_excel to locate
the logic).
.planning/REQUIREMENTS.md-67-87 (1)

67-87: ⚠️ Potential issue | 🟡 Minor

Traceability statuses look stale relative to this PR’s completion artifacts.

All requirements are still marked Pending, which can mislead roadmap/state readers now that phase summaries in this PR report completion. Please either update statuses or explicitly mark this table as a baseline-only snapshot.

📝 Minimal doc fix (if you want to keep statuses unchanged here)
 ## Traceability
 
 Which phases cover which requirements. Updated during roadmap creation.
+Status note: this table captures planning-time mapping only; execution/completion state is tracked in `.planning/STATE.md`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/REQUIREMENTS.md around lines 67 - 87, The requirements table
currently shows every item (e.g., IMPT-01, REPL-01, COMP-01, ANLY-01, STAT-01,
etc.) as "Pending" which conflicts with this PR's completed phase summaries;
either update the Status column entries for the specific requirement IDs that
are complete to reflect their true state (e.g., "Completed", "In Progress", or a
proper milestone tag) or add a single clarifying sentence immediately above the
table stating this is a baseline-only snapshot (e.g., "NOTE: This table is a
baseline snapshot and does not reflect current completion statuses") so readers
aren’t misled—make the change in .planning/REQUIREMENTS.md next to the existing
table.
.planning/codebase/ARCHITECTURE.md-18-22 (1)

18-22: ⚠️ Potential issue | 🟡 Minor

Architecture doc omits newly shipped /import route and backup/import-report API boundaries.

Line 52 still describes only three routes, and the API boundary sections (Line 18-22, Line 92-98) miss the newly added import rows/report download and backup endpoints from this PR. Please update this file so architecture boundaries match the implemented v1 workflow.

📌 Suggested doc patch
- - `frontend/src/App.tsx:7-18` 定义三个页面路由:`/replay`、`/analysis`、`/learn`
+ - `frontend/src/App.tsx:7-18` 定义页面路由:`/import`、`/replay`、`/analysis`、`/learn`

- - `POST /api/trades/import`:上传 Excel 交易单并入库
+ - `POST /api/trades/import`:上传 Excel 交易单并入库
+ - `POST /api/trades/import/rows`:手动逐行导入交易
+ - `GET /api/trades/import/reports/{session_id}/download`:下载导入明细 CSV
+ - `GET /api/backups/download`:下载本地 SQLite 备份
+ - `POST /api/backups/restore`:恢复本地 SQLite 备份

- - `importTrades()`:上传 Excel 到 `/api/trades/import`
+ - `importTrades()`:上传 Excel 到 `/api/trades/import`
+ - `importTradeRows()`:手动行导入到 `/api/trades/import/rows`
+ - `downloadImportReport()`:导入会话明细下载
+ - `restoreSQLiteBackup()`:上传并恢复 SQLite 备份

Also applies to: 52-53, 92-98

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/codebase/ARCHITECTURE.md around lines 18 - 22, Update the
ARCHITECTURE.md API routes and boundary sections to include the new import and
backup/report endpoints added in this PR: add POST /api/trades/import/rows
(manual row import), GET /api/trades/import/reports/{session_id}/download
(import report CSV download), GET /api/backups/download (SQLite backup
download), and POST /api/backups/restore (backup restore); make the same additions where
routes are summarized (the top routes list and the API boundary sections
previously at lines ~52-53 and ~92-98) so the documented v1 workflow matches the
implemented endpoints.
frontend/src/test/replayWorkspace.test.ts-37-41 (1)

37-41: ⚠️ Potential issue | 🟡 Minor

Add teardown for vi.stubGlobal to prevent cross-test global leakage.

The localStorage global is stubbed in the beforeEach hook but never restored. This can cause the stubbed value to persist across other test suites and contaminate their tests.

Suggested fix
describe('replayWorkspace transport', () => {
  beforeEach(() => {
    localStorageStore.clear();
    vi.stubGlobal('localStorage', localStorageMock as Storage);
  });
+
+  afterEach(() => {
+    vi.unstubAllGlobals();
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/test/replayWorkspace.test.ts` around lines 37 - 41, The test
stubs the global localStorage in the beforeEach (vi.stubGlobal('localStorage',
localStorageMock)) but never restores it; add an afterEach that calls
vi.unstubAllGlobals() or vi.unstubGlobal('localStorage') to restore the original
global and prevent leakage (also mirror clearing localStorageStore in afterEach
if desired); update the test file's beforeEach/afterEach block around the
describe('replayWorkspace transport') to call vi.unstubAllGlobals() (or
vi.unstubGlobal('localStorage')) to tear down the stubbed global.
backend/tests/test_backup_api.py-66-67 (1)

66-67: ⚠️ Potential issue | 🟡 Minor

Assert backup download succeeded before using payload bytes.

The restore test should validate the download precondition first, otherwise failures are harder to diagnose.

🔧 Suggested fix
     backup_response = client.get("/api/backups/download")
+    assert backup_response.status_code == 200
     backup_bytes = backup_response.content
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_backup_api.py` around lines 66 - 67, The test reads
backup_response.content into backup_bytes without checking the response
succeeded; update the test around the client.get("/api/backups/download") call
to assert the download succeeded first (e.g., assert backup_response.status_code
== 200 or backup_response.ok) before using backup_bytes, referencing
backup_response and backup_bytes so failures are diagnosed early.
frontend/src/test/tradeListSelection.test.tsx-114-119 (1)

114-119: ⚠️ Potential issue | 🟡 Minor

Avoid asserting against the last setItem call.

Using mock.calls.at(-1) is brittle; any additional write can break this test without behavior regressions.

🔧 Suggested fix
-    expect(deserializeReplayWorkspace(localStorageMock.setItem.mock.calls.at(-1)?.[1] ?? '')).toMatchObject({
+    const workspaceWrite = [...localStorageMock.setItem.mock.calls]
+      .reverse()
+      .find(([key]) => key === REPLAY_WORKSPACE_STORAGE_KEY);
+    expect(deserializeReplayWorkspace(workspaceWrite?.[1] ?? '')).toMatchObject({
       version: 1,
       symbol: 'ETH-USDT',
     });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/test/tradeListSelection.test.tsx` around lines 114 - 119, The
test is brittle because it inspects mock.calls.at(-1); instead, locate the
specific setItem call for REPLAY_WORKSPACE_STORAGE_KEY and deserialize that
argument for assertions. Change the test to find the call in
localStorageMock.setItem.mock.calls where call[0] ===
REPLAY_WORKSPACE_STORAGE_KEY (or use .find), extract call[1], pass it to
deserializeReplayWorkspace, and assert the resulting object contains version: 1
and symbol: 'ETH-USDT'; keep the existing
expect(localStorageMock.setItem).toHaveBeenCalledWith(REPLAY_WORKSPACE_STORAGE_KEY,
expect.any(String)) check as well.
.planning/codebase/STACK.md-17-20 (1)

17-20: ⚠️ Potential issue | 🟡 Minor

Storage table inventory is incomplete after Phase 1 import-report persistence.

This section still implies only klines and trades are persisted, which can mislead contributors working on import report/download flows.

Proposed doc update
- - 表:`klines`、`trades`;启动时自动 `Base.metadata.create_all`(`backend/models.py`, `backend/main.py`, `backend/import_data.py`)
+ - 表:`klines`、`trades`,以及导入报告相关表(导入会话/逐行结果);启动时自动 `Base.metadata.create_all` 并执行迁移(`backend/models.py`, `backend/main.py`, `backend/import_data.py`, `backend/migrations/*`)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/codebase/STACK.md around lines 17 - 20, The storage inventory in
STACK.md is outdated and omits tables created for Phase 1 import-report
persistence; update the "存储" section to list all additional tables and indexes
introduced by the import-report feature (e.g., the import-report persistence
tables and any related indexes), and cite the code locations that define them
(reference backend/models.py where the ORM models are declared,
backend/import_data.py where import-report persistence is implemented, and
backend/main.py/backend/database.py where schema creation/initialization
occurs); ensure the entry matches actual model names and unique/index
definitions used in the codebase so contributors see the complete
persisted-table inventory.
frontend/src/components/import/LocalSafekeepingPanel.tsx-24-27 (1)

24-27: ⚠️ Potential issue | 🟡 Minor

Reset the native file input after successful restore.

On Line 26 you clear React state, but the <input type="file"> value on Lines 91-98 is not reset. Re-selecting the same file can fail to fire onChange, blocking immediate retry with the same backup file.

Proposed patch
-import { useCallback, useState } from 'react';
+import { useCallback, useRef, useState } from 'react';
@@
 export default function LocalSafekeepingPanel({ importReport }: LocalSafekeepingPanelProps) {
+  const fileInputRef = useRef<HTMLInputElement | null>(null);
   const [backupFile, setBackupFile] = useState<File | null>(null);
@@
     try {
       await restoreSQLiteBackup(backupFile);
       setBackupFile(null);
+      if (fileInputRef.current) {
+        fileInputRef.current.value = '';
+      }
       setStatusMessage('SQLite 备份已恢复。');
@@
             <input
+              ref={fileInputRef}
               id="sqlite-backup-file"
               aria-label="选择 SQLite 备份文件"
               className="file-input file-input-bordered w-full"

Also applies to: 91-98

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/import/LocalSafekeepingPanel.tsx` around lines 24 -
27, After a successful restore (inside the try that calls restoreSQLiteBackup
and then setBackupFile(null)/setStatusMessage), also clear the native file input
so re-selecting the same file will fire onChange; add a ref to the <input
type="file"> used around the block referenced (lines ~91-98) (e.g.,
fileInputRef) and after setBackupFile(null) set fileInputRef.current.value = ''
(or null) to reset the native input; reference restoreSQLiteBackup,
setBackupFile, setStatusMessage and the file input element when making the
change.
frontend/src/components/import/ImportDropzone.tsx-20-24 (1)

20-24: ⚠️ Potential issue | 🟡 Minor

Apply the same file-type guard on drag/drop as file picker.

Line 54 restricts picker types, but Lines 20-24 accept any dropped file. Add a lightweight extension/type check before calling onFileChange for consistent behavior.

Proposed patch
         onDrop={(event) => {
           event.preventDefault();
           const droppedFile = event.dataTransfer.files.item(0);
-          onFileChange(droppedFile ?? null);
+          if (!droppedFile) {
+            onFileChange(null);
+            return;
+          }
+          const lower = droppedFile.name.toLowerCase();
+          const isExcel = lower.endsWith('.xlsx') || lower.endsWith('.xls');
+          onFileChange(isExcel ? droppedFile : null);
         }}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/import/ImportDropzone.tsx` around lines 20 - 24, The
drag-and-drop handler (onDrop) currently accepts any dropped file and calls
onFileChange without validating type; add the same lightweight
file-type/extension guard that the file picker uses before calling onFileChange
(e.g., check file.type and/or file.name extension against the same accepted list
used by the picker, or reuse an existing validator function if present), and
only call onFileChange(droppedFile) when the file passes that check (otherwise
call onFileChange(null) or ignore the drop and optionally show a validation
error).
.planning/research/STACK.md-7-123 (1)

7-123: ⚠️ Potential issue | 🟡 Minor

Minor docs quality pass needed (compound adjective hyphenation).

Several phrases should be hyphenated for clarity (e.g., “local-first”, “client-side”, “front-end”, “route-level”, “trade-centered”). This is a small but repeated readability issue across Line 7 onward.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/research/STACK.md around lines 7 - 123, The document uses several
compound adjectives that should be hyphenated for clarity; search the document
(from the "The standard 2026 shape..." section onward) for instances like local
first, client side, front end, route level, trade centered and replace them with
local-first, client-side, front-end, route-level, trade-centered respectively;
ensure consistency for similar compounds (e.g., "server heavy" ->
"server-heavy", "small amount" when used adjectivally -> "small-amount" only if
used before a noun) and run a quick pass to hyphenate other compound modifiers
to match these examples.
.planning/research/STACK.md-93-96 (1)

93-96: ⚠️ Potential issue | 🟡 Minor

Update STACK.md lines 93–96 to reflect the new migration infrastructure.

The backend now has a visible migration layer (backend/migrations/runner.py with versioned migrations), which runs before Base.metadata.create_all() on startup. While create_all() is still present, it is now secondary to the migration runner and follows a hybrid approach (migrations for schema versioning, ORM for idempotent table creation). Lines 93–96 incorrectly state that the backend has "no visible migration layer" and that create_all() "blocks safe schema evolution." Both need updating to reflect the current architecture.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/research/STACK.md around lines 93 - 96, Update STACK.md to
acknowledge the new migration infrastructure: replace the claim that the backend
has "no visible migration layer" and that Base.metadata.create_all() alone
"blocks safe schema evolution" with a note that a migration runner exists
(backend/migrations/runner.py) and is executed before Base.metadata.create_all()
in backend/main.py, explaining the hybrid approach where versioned migrations
handle schema evolution and Base.metadata.create_all() is kept as a secondary
idempotent table-creation safeguard.
frontend/src/components/ReplayAnalyticsPanel.tsx-228-246 (1)

228-246: ⚠️ Potential issue | 🟡 Minor

The active symbol can disappear from the distribution table.

Line 228 slices to six rows before the highlight check. If the current replay symbol is outside that top six, the panel drops it entirely, so the “focus symbol” context vanishes instead of being highlighted.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ReplayAnalyticsPanel.tsx` around lines 228 - 246, The
table currently renders symbolStats.slice(0, 6) so focusSymbol can be omitted if
it's not in the top six; adjust the rendering logic so the focused symbol is
always shown: compute a display list (e.g., topSix = symbolStats.slice(0,6)) and
if focusSymbol exists and is not in topSix, find its stat in symbolStats (by
stat.symbol) and either append it to the render list or replace the last
element, then map over that display list; reference symbolStats and focusSymbol
in ReplayAnalyticsPanel and keep the existing highlighting logic intact.
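
A small sketch of one way to keep the focused row visible, assuming symbolStats entries expose a symbol field as referenced above; the helper name is illustrative:

interface SymbolStat {
  symbol: string;
  // other aggregate fields elided
}

// Hypothetical helper: always surface the focused symbol, appending its row
// when it falls outside the top six.
function buildDisplayStats(symbolStats: SymbolStat[], focusSymbol?: string): SymbolStat[] {
  const topSix = symbolStats.slice(0, 6);
  if (!focusSymbol || topSix.some((stat) => stat.symbol === focusSymbol)) {
    return topSix;
  }
  const focused = symbolStats.find((stat) => stat.symbol === focusSymbol);
  return focused ? [...topSix, focused] : topSix;
}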
frontend/src/test/timeframeCompare.test.tsx-236-243 (1)

236-243: ⚠️ Potential issue | 🟡 Minor

This assertion can pass without a post-click 4h fetch.

Line 239 matches any earlier 4h request, and opening compare mode already records one. The test still passes if clicking the main timeframe stops issuing fetchKlines entirely. Clear the call log before Line 236, or assert against the newly appended call instead.

🧪 Tighten the regression check
     await waitFor(() => {
       expect(compareModeToggleSpy.fetchKlinesCalls).toEqual(
         expect.arrayContaining([
           expect.objectContaining({ symbol: 'SOL-USDT', timeframe: '1h' }),
         ]),
       );
     });

+    compareModeToggleSpy.fetchKlinesCalls.length = 0;
     await user.click(screen.getByRole('button', { name: '4h' }));

     expect(screen.getByRole('button', { name: '对比周期 1h' })).toHaveAttribute('aria-pressed', 'true');
-    expect(compareModeToggleSpy.fetchKlinesCalls).toEqual(
-      expect.arrayContaining([
-        expect.objectContaining({ symbol: 'SOL-USDT', timeframe: '4h' }),
-      ]),
-    );
+    expect(compareModeToggleSpy.fetchKlinesCalls.at(-1)).toEqual(
+      expect.objectContaining({ symbol: 'SOL-USDT', timeframe: '4h' }),
+    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/test/timeframeCompare.test.tsx` around lines 236 - 243, The test
currently asserts that a '4h' fetch occurred but may match a prior request;
before calling await user.click(screen.getByRole('button', { name: '4h' }))
clear or reset the spy's recorded calls (compareModeToggleSpy.fetchKlinesCalls)
or, alternatively, change the assertion to check the newest call (e.g. the last
element of compareModeToggleSpy.fetchKlinesCalls) contains { symbol: 'SOL-USDT',
timeframe: '4h' } so you guarantee the click triggered the fetch; update the
test around the user.click and the expect referencing
compareModeToggleSpy.fetchKlinesCalls to use one of these approaches.
frontend/src/components/import/ImportResultsTable.tsx-32-45 (1)

32-45: ⚠️ Potential issue | 🟡 Minor

Expose the active filter state semantically.

The filter is a toggle, but the selected state is only conveyed by color right now. Add aria-pressed={active} so assistive tech can announce which filter is active.

♿ Small fix
 function FilterButton({
   active,
   children,
   onClick,
 }: {
   active: boolean;
   children: ReactNode;
   onClick: () => void;
 }) {
   return (
-    <button className={`btn btn-sm ${active ? 'btn-primary' : 'btn-ghost'}`} onClick={onClick} type="button">
+    <button
+      aria-pressed={active}
+      className={`btn btn-sm ${active ? 'btn-primary' : 'btn-ghost'}`}
+      onClick={onClick}
+      type="button"
+    >
       {children}
     </button>
   );
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/import/ImportResultsTable.tsx` around lines 32 - 45,
The FilterButton component currently only conveys selection visually; update the
button element in function FilterButton to include the ARIA toggle state by
adding aria-pressed={active} to the <button> props so assistive tech can detect
the active state (use the existing active prop passed into FilterButton).
frontend/src/utils/replayPlayback.ts-19-24 (1)

19-24: ⚠️ Potential issue | 🟡 Minor

tick currently advertises backward playback but never stops at minTime.

Because direction is shared by both 'step' and 'tick', a backward tick will clamp at minTime and keep isPlaying: true forever. Either make 'tick' forward-only in the action type or stop playback when the active boundary is reached.

💡 Suggested change
       const signedStep = action.direction === 'backward' ? -action.stepMs : action.stepMs;
       const nextCursor = clampTime(state.cursorTime + signedStep, action.minTime, action.maxTime);
-      const reachedEnd = nextCursor >= action.maxTime;
+      const reachedBoundary =
+        action.direction === 'backward'
+          ? nextCursor <= action.minTime
+          : nextCursor >= action.maxTime;

       return {
         ...state,
         cursorTime: nextCursor,
-        isPlaying: action.type === 'tick' ? !reachedEnd : false,
+        isPlaying: action.type === 'tick' ? !reachedBoundary : false,
       };

Also applies to: 81-93

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/utils/replayPlayback.ts` around lines 19 - 24, The 'tick' action
in ReplayPlaybackAction incorrectly allows backward direction and never stops at
minTime, causing isPlaying to stay true; either restrict the action type to make
'tick' forward-only (remove direction from the 'tick' variant) or update the
playback reducer/handler that processes 'tick' (and 'step') to detect when the
computed time is clamped to minTime or maxTime and set isPlaying = false when a
boundary is reached; update the ReplayPlaybackAction definition (remove
direction from the 'tick' union member) or add boundary-check logic in the
reducer that handles 'tick'/'step' to stop playback when clamp(Time, minTime,
maxTime) equals the boundary, and mirror this change for the similar code region
referenced (lines ~81-93).
frontend/src/utils/replayPlayback.ts-26-30 (1)

26-30: ⚠️ Potential issue | 🟡 Minor

Clamp restored progress before hydrating state.

A saved record can still be stale even when it is well-formed. If the trade window changes after a local re-import or restore, this restores an out-of-range cursor until the next step/tick clamps it. Clamp or discard saved progress against the current bounds before returning state.

💡 Suggested change
 interface CreateInitialReplayPlaybackStateInput {
   tradeId: number;
   entryTime: number;
+  minTime: number;
+  maxTime: number;
   savedProgress?: ReplayProgressRecord | null;
 }

 export function createInitialReplayPlaybackState({
   tradeId,
   entryTime,
+  minTime,
+  maxTime,
   savedProgress,
 }: CreateInitialReplayPlaybackStateInput): ReplayPlaybackState {
   if (savedProgress && savedProgress.tradeId === tradeId) {
     return {
-      cursorTime: savedProgress.cursorTime,
+      cursorTime: clampTime(savedProgress.cursorTime, minTime, maxTime),
       isPlaying: false,
       speed: savedProgress.speed,
     };
   }

Based on learnings, the app is local-first and import-driven, so persisted replay state needs to tolerate locally changed trade data.

Also applies to: 44-55

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/utils/replayPlayback.ts` around lines 26 - 30, The saved
ReplayProgressRecord (savedProgress in CreateInitialReplayPlaybackStateInput)
must be validated and clamped against the current trade bounds before hydrating
state; update the state-initialization logic that consumes savedProgress (the
function that builds the initial playback state) to check savedProgress fields
(e.g., cursor/index/time/percent) and: if out of range, clamp them into [min,
max] based on the current trade's step/count/time bounds (or discard/nullify
savedProgress if completely invalid), then use the clamped record when returning
the initial state; apply the same validation/clamping logic to the other
restoration path referenced around lines 44-55.
🧹 Nitpick comments (8)
backend/migrations/versions/001_phase1_import_reliability.sql (1)

36-76: Harden import-report invariants with DB-level checks.

Consider adding CHECK constraints for non-negative counters (total_rows, success_count, etc.) and positive row_number to prevent invalid persisted state if application-side validation is bypassed.

Suggested migration hardening
 CREATE TABLE IF NOT EXISTS import_sessions (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     source_filename TEXT NOT NULL,
     status TEXT NOT NULL,
-    total_rows INTEGER NOT NULL DEFAULT 0,
-    success_count INTEGER NOT NULL DEFAULT 0,
-    failed_count INTEGER NOT NULL DEFAULT 0,
-    duplicate_count INTEGER NOT NULL DEFAULT 0,
-    conflict_count INTEGER NOT NULL DEFAULT 0,
-    timestamp_normalization_count INTEGER NOT NULL DEFAULT 0,
+    total_rows INTEGER NOT NULL DEFAULT 0 CHECK (total_rows >= 0),
+    success_count INTEGER NOT NULL DEFAULT 0 CHECK (success_count >= 0),
+    failed_count INTEGER NOT NULL DEFAULT 0 CHECK (failed_count >= 0),
+    duplicate_count INTEGER NOT NULL DEFAULT 0 CHECK (duplicate_count >= 0),
+    conflict_count INTEGER NOT NULL DEFAULT 0 CHECK (conflict_count >= 0),
+    timestamp_normalization_count INTEGER NOT NULL DEFAULT 0 CHECK (timestamp_normalization_count >= 0),
     file_rejection_reason TEXT,
     file_rejection_message TEXT,
     download_filename TEXT,
     download_mime_type TEXT NOT NULL DEFAULT 'text/csv',
     created_at INTEGER NOT NULL
 );
@@
 CREATE TABLE IF NOT EXISTS import_report_rows (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     session_id INTEGER NOT NULL,
-    row_number INTEGER NOT NULL,
+    row_number INTEGER NOT NULL CHECK (row_number > 0),
     outcome TEXT NOT NULL,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/migrations/versions/001_phase1_import_reliability.sql` around lines
36 - 76, Update the CREATE TABLE definitions to enforce DB-level invariants: add
CHECK constraints on import_sessions (columns total_rows, success_count,
failed_count, duplicate_count, conflict_count, timestamp_normalization_count) to
require each >= 0, and on import_report_rows to require row_number > 0; if this
migration must apply to existing DBs, add corresponding ALTER TABLE ... ADD
CHECK statements to guard existing data migration. Target the import_sessions
and import_report_rows table definitions and the specific column names
(total_rows, success_count, failed_count, duplicate_count, conflict_count,
timestamp_normalization_count, row_number) when making the changes.
.planning/codebase/STRUCTURE.md (1)

43-47: Consider removing exact line-number references from architecture docs.

Exact ranges are likely to drift after normal edits; file/symbol-level references are usually easier to keep accurate.

Also applies to: 87-109

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.planning/codebase/STRUCTURE.md around lines 43 - 47, The doc lists specific
line ranges that will drift; remove the exact line-number references and replace
them with file- or symbol-level references instead—e.g., change entries like
`backend/main.py:28-129` to `backend/main.py` (or `backend.main`),
`backend/database.py:4-16` to `backend/database.py` (or reference
`create_engine`, `Session`, `Base`), `backend/models.py:5-38` to
`backend/models.py` (or mention `Kline`, `Trade` models),
`backend/import_data.py:12-36` to `backend/import_data.py`, and keep
`backend/services/` as-is; apply the same removal of numeric ranges for the
other occurrence noted (lines 87-109) so the architecture doc references
files/symbols only.
frontend/src/test/replayAnalyticsPanel.test.tsx (1)

47-51: Remove as never cast in test helper and use Parameters to maintain type safety.

The Record<string, unknown> parameter type does not match the ReplayWorkspaceRecord expected by serializeReplayWorkspace, requiring an unsafe as never cast. This masks potential schema/type regressions. Use Parameters<typeof serializeReplayWorkspace>[0] to automatically extract and enforce the correct parameter type:

Suggested refactor
-function seedWorkspaceRecord(record: Record<string, unknown>) {
+function seedWorkspaceRecord(
+	record: Parameters<typeof serializeReplayWorkspace>[0],
+) {
 	window.localStorage.setItem(
 		REPLAY_WORKSPACE_STORAGE_KEY,
-		serializeReplayWorkspace(record as never),
+		serializeReplayWorkspace(record),
 	);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/test/replayAnalyticsPanel.test.tsx` around lines 47 - 51,
seedWorkspaceRecord currently accepts Record<string, unknown> and uses an unsafe
cast to call serializeReplayWorkspace; change the parameter type to the exact
type expected by serializeReplayWorkspace by using Parameters<typeof
serializeReplayWorkspace>[0], remove the "as never" cast, and keep the call to
window.localStorage.setItem(REPLAY_WORKSPACE_STORAGE_KEY,
serializeReplayWorkspace(record)) so the helper enforces the correct
ReplayWorkspaceRecord shape at compile time.
.planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-04-PLAN.md (1)

35-44: Include explicit component-boundary refactor criteria in this phase task.

Given this phase integrates analytics into replay shell surfaces, add acceptance criteria that ChartManager.tsx orchestration remains modular (logic extracted into focused hooks/subcomponents) to prevent further coupling.

As per coding guidelines: frontend/**/*.tsx: "Refactor larger, imperative components like ChartManager.tsx and AnalysisPage.tsx into smaller, modular components".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
@.planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-04-PLAN.md
around lines 35 - 44, Update the Phase 3 task to include explicit
component-boundary refactor acceptance criteria: state that ChartManager.tsx
orchestration must be extracted into focused hooks and subcomponents (e.g.,
useChartData, useSeriesSync, ChartControls) and that ReplayAnalyticsPanel.tsx
should consume those hooks rather than import imperative logic; require
tests/smoke coverage to assert modularity by verifying ChartManager no longer
contains large imperative blocks and that small focused components/hooks (named
above or similar) exist and are used. Ensure the updated task text and
verify/action sections mention these modularization checkpoints and reference
ChartManager.tsx and ReplayAnalyticsPanel.tsx so reviewers can validate the
refactor.
.planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-03-PLAN.md (1)

29-38: Add an explicit modularization checkpoint for ChartManager.tsx.

Since this plan edits ChartManager.tsx, include a concrete extraction step (e.g., compare-mode state/controller hook + pane presenter component) so the phase doesn’t further grow an already imperative surface.

As per coding guidelines: frontend/**/*.tsx: "Refactor larger, imperative components like ChartManager.tsx and AnalysisPage.tsx into smaller, modular components".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
@.planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-03-PLAN.md
around lines 29 - 38, Extract compare-mode responsibilities out of
ChartManager.tsx into a dedicated state/controller hook (e.g.,
useCompareModeController) and a presentational component (e.g.,
ComparePanePresenter) so compare-mode logic is modular: move all compare-mode
state, mode-switch handlers, and symbol-specific sync logic into the hook, have
ChartManager consume the hook and delegate UI to ComparePanePresenter, and
ensure CompareModeControls.tsx uses the same hook or props to preserve existing
multi-symbol/timeframe synchronization and the expected defaults; update
frontend/src/test/symbolCompare.test.tsx to target the new hook/component
boundaries to drive the failing tests before refactor.
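
A shape-only sketch of that extraction under the review's suggested naming (useCompareModeController, ComparePanePresenter); the state fields here are assumptions, not the component's current API:

import { useState } from 'react';

type CompareMode = 'off' | 'timeframe' | 'symbol';

// Hypothetical controller hook: owns compare-mode state so ChartManager only
// wires it through to a presentational ComparePanePresenter.
export function useCompareModeController(initialMode: CompareMode = 'off') {
  const [mode, setMode] = useState<CompareMode>(initialMode);
  const [compareSymbol, setCompareSymbol] = useState<string | null>(null);
  const [compareTimeframe, setCompareTimeframe] = useState<string | null>(null);
  return { mode, setMode, compareSymbol, setCompareSymbol, compareTimeframe, setCompareTimeframe };
}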
frontend/src/test/replayLayoutPersistence.test.tsx (1)

160-164: Replace as never with proper type assertion for strict TypeScript compliance.

Line 163 uses as never to bypass type checking, which defeats strict typing and can hide invalid workspace seed shapes. Instead, assert to the expected ReplayWorkspaceRecord type: record as ReplayWorkspaceRecord. This maintains strict TypeScript settings required per coding guidelines.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/test/replayLayoutPersistence.test.tsx` around lines 160 - 164,
The test helper seedWorkspaceRecord is casting the input with `as never` before
calling serializeReplayWorkspace; replace that unsafe cast with the correct
domain type by asserting the input as ReplayWorkspaceRecord (i.e., use `record
as ReplayWorkspaceRecord`) so serializeReplayWorkspace receives the proper type;
keep references to seedWorkspaceRecord, REPLAY_WORKSPACE_STORAGE_KEY, and
serializeReplayWorkspace to locate the change.
frontend/src/pages/ReplayPage.tsx (1)

23-72: Move the seed parsing helpers out of the page.

isRecord, isReplaySeed, resolveInitialSeed, and seedToTrade are pure replay/session logic now embedded in the route component. Extracting them to src/utils/replaySession.ts would keep this page focused on wiring and make the bootstrap logic easier to test directly.

Based on learnings Maintain strong separation between UI, API, and pure logic helpers across the codebase.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/ReplayPage.tsx` around lines 23 - 72, Extract the pure
helpers isRecord, isReplaySeed, resolveInitialSeed, and seedToTrade into a new
module (e.g., src/utils/replaySession.ts): move their implementations there,
export them, and update ReplayPage.tsx to import these functions instead of
defining them inline. Ensure the new module imports any referenced
types/constants (ReplaySessionSeed, REPLAY_SESSION_STORAGE_KEY,
deserializeReplaySession, Trade) and preserves existing behavior (including
handling window/localStorage). Run the project typescript build to fix any
missing imports and update any other consumers to use the exported symbols.
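
A shape-only sketch of what the extracted src/utils/replaySession.ts could export; the seed fields shown are assumptions drawn from the surrounding comments:

export interface ReplaySessionSeed {
  tradeId?: number;
  symbol?: string;
  defaultTimeframe?: string;
}

export function isRecord(value: unknown): value is Record<string, unknown> {
  return typeof value === 'object' && value !== null;
}

export function isReplaySeed(value: unknown): value is ReplaySessionSeed {
  if (!isRecord(value)) return false;
  return (
    (value.tradeId === undefined || typeof value.tradeId === 'number') &&
    (value.symbol === undefined || typeof value.symbol === 'string') &&
    (value.defaultTimeframe === undefined || typeof value.defaultTimeframe === 'string')
  );
}

// resolveInitialSeed and seedToTrade would move here unchanged, keeping the
// route > local restore > latest trade > empty precedence testable in isolation.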
frontend/src/components/ChartManager.tsx (1)

125-1793: Split ChartManager before it grows further.

This component now couples chart bootstrap/teardown, replay state, localStorage hydration, kline loading, compare sync, modal state, and render markup. Pulling those concerns into hooks/components will make bugs like the async teardown race much easier to contain.

As per coding guidelines, frontend/**/*.tsx: Refactor larger, imperative components like ChartManager.tsx and AnalysisPage.tsx into smaller, modular components.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ChartManager.tsx` around lines 125 - 1793,
ChartManager has grown too large and mixes bootstrapping/teardown, kline
loading, playback/localStorage hydration, compare sync, and UI; split
responsibilities into focused hooks/components. Extract chart lifecycle and DOM
refs into a useChart (wrap createChart, chartRef, seriesRef, volumeSeriesRef,
mainAnchorSeriesRef/compareAnchorSeriesRef, applyTimeFiller,
subscribe/unsubscribe crosshair and resize, and chart removal), move data
fetching logic into a useKlineLoader hook (containing loadMainData,
loadCompareData, latestMainRequestIdRef/latestCompareRequestIdRef,
mainCandlesByTimeRef/compareCandlesByTimeRef and ensureTimeFiller interaction),
encapsulate replay state and persistence into useReplayPlayback (playbackState,
dispatchPlayback, hydration via
buildReplayProgressStorageKey/serializeReplayProgress/deserializeReplayProgress),
and pull UI controls into a smaller ChartControls/CompareModal component
(handling modal open/close, symbolOptions, search state); keep shared
crosshair/compare sync helpers (scheduleTargetCrosshairSync,
updateSharedCrosshair, syncingRangeRef) but call them from the new hooks; wire
the hooks back into ChartManager to preserve behavior and ensure each extracted
hook/component cleans up its own async/rAF/observer to avoid teardown races.
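
A boundary sketch of the hooks named in that prompt; the signatures are illustrative assumptions, not ChartManager's actual API:

import type { RefObject } from 'react';

// Hypothetical hook boundaries for the proposed split; each hook owns its own
// cleanup so teardown races stay contained inside one module.
interface UseChartResult {
  containerRef: RefObject<HTMLDivElement | null>;
}
declare function useChart(): UseChartResult;

interface UseKlineLoaderResult {
  loadMainData: (symbol: string, timeframe: string) => Promise<void>;
  loadCompareData: (symbol: string, timeframe: string) => Promise<void>;
}
declare function useKlineLoader(): UseKlineLoaderResult;

interface UseReplayPlaybackResult {
  cursorTime: number;
  isPlaying: boolean;
  play: () => void;
  pause: () => void;
  step: (direction: 'forward' | 'backward') => void;
}
declare function useReplayPlayback(tradeId: number): UseReplayPlaybackResult;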

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d385cb71-e4bf-41eb-a758-f56ebb74f333

📥 Commits

Reviewing files that changed from the base of the PR and between 57156d2 and c87b4ca.

⛔ Files ignored due to path filters (5)
  • backend/uv.lock is excluded by !**/*.lock
  • docs/images/analysis.png is excluded by !**/*.png
  • docs/images/import.png is excluded by !**/*.png
  • docs/images/replay.png is excluded by !**/*.png
  • frontend/pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (113)
  • .planning/MILESTONE-AUDIT-v1.0.md
  • .planning/PROJECT.md
  • .planning/REQUIREMENTS.md
  • .planning/ROADMAP.md
  • .planning/STATE.md
  • .planning/codebase/ARCHITECTURE.md
  • .planning/codebase/CONCERNS.md
  • .planning/codebase/CONVENTIONS.md
  • .planning/codebase/INTEGRATIONS.md
  • .planning/codebase/STACK.md
  • .planning/codebase/STRUCTURE.md
  • .planning/codebase/TESTING.md
  • .planning/config.json
  • .planning/phases/01-import-reliability-schema-evolution/01-01-SUMMARY.md
  • .planning/phases/01-import-reliability-schema-evolution/01-02-SUMMARY.md
  • .planning/phases/01-import-reliability-schema-evolution/01-03-SUMMARY.md
  • .planning/phases/01-import-reliability-schema-evolution/01-04-SUMMARY.md
  • .planning/phases/01-import-reliability-schema-evolution/01-05-SUMMARY.md
  • .planning/phases/02-single-trade-replay-core/02-01-PLAN.md
  • .planning/phases/02-single-trade-replay-core/02-01-SUMMARY.md
  • .planning/phases/02-single-trade-replay-core/02-02-PLAN.md
  • .planning/phases/02-single-trade-replay-core/02-02-SUMMARY.md
  • .planning/phases/02-single-trade-replay-core/02-03-PLAN.md
  • .planning/phases/02-single-trade-replay-core/02-03-SUMMARY.md
  • .planning/phases/02-single-trade-replay-core/02-04-PLAN.md
  • .planning/phases/02-single-trade-replay-core/02-04-SUMMARY.md
  • .planning/phases/02-single-trade-replay-core/02-CONTEXT.md
  • .planning/phases/02-single-trade-replay-core/02-DISCUSSION-LOG.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-01-PLAN.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-01-SUMMARY.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-02-PLAN.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-02-SUMMARY.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-03-PLAN.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-03-SUMMARY.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-04-PLAN.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-04-SUMMARY.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-CONTEXT.md
  • .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-DISCUSSION-LOG.md
  • .planning/phases/04-local-ux-polish-performance-hardening/04-01-PLAN.md
  • .planning/phases/04-local-ux-polish-performance-hardening/04-01-SUMMARY.md
  • .planning/phases/04-local-ux-polish-performance-hardening/04-02-PLAN.md
  • .planning/phases/04-local-ux-polish-performance-hardening/04-02-SUMMARY.md
  • .planning/phases/04-local-ux-polish-performance-hardening/04-03-PLAN.md
  • .planning/phases/04-local-ux-polish-performance-hardening/04-03-SUMMARY.md
  • .planning/phases/04-local-ux-polish-performance-hardening/04-CONTEXT.md
  • .planning/phases/04-local-ux-polish-performance-hardening/04-DISCUSSION-LOG.md
  • .planning/research/ARCHITECTURE.md
  • .planning/research/FEATURES.md
  • .planning/research/PITFALLS.md
  • .planning/research/STACK.md
  • .planning/research/SUMMARY.md
  • AGENTS.md
  • README.md
  • backend/import_data.py
  • backend/main.py
  • backend/migrations/__init__.py
  • backend/migrations/runner.py
  • backend/migrations/versions/001_phase1_import_reliability.sql
  • backend/models.py
  • backend/pyproject.toml
  • backend/services/backup_service.py
  • backend/services/import_types.py
  • backend/services/trade_importer.py
  • backend/tests/conftest.py
  • backend/tests/test_backup_api.py
  • backend/tests/test_import_api.py
  • backend/tests/test_import_contracts.py
  • backend/tests/test_migration_runner.py
  • backend/tests/test_startup_migrations.py
  • backend/tests/test_trade_importer.py
  • frontend/package.json
  • frontend/src/App.tsx
  • frontend/src/components/ChartManager.tsx
  • frontend/src/components/CompareModeControls.tsx
  • frontend/src/components/Navbar.tsx
  • frontend/src/components/ReplayAnalyticsPanel.tsx
  • frontend/src/components/ReplayControls.tsx
  • frontend/src/components/TradeList.tsx
  • frontend/src/components/import/ImportContractNote.tsx
  • frontend/src/components/import/ImportDropzone.tsx
  • frontend/src/components/import/ImportResultsTable.tsx
  • frontend/src/components/import/ImportSummary.tsx
  • frontend/src/components/import/LocalSafekeepingPanel.tsx
  • frontend/src/components/import/ManualImportPanel.tsx
  • frontend/src/pages/ImportPage.tsx
  • frontend/src/pages/ReplayPage.tsx
  • frontend/src/services/api.ts
  • frontend/src/services/importReportAdapter.ts
  • frontend/src/services/importTypes.ts
  • frontend/src/test/ImportPage.test.tsx
  • frontend/src/test/comparePane.test.ts
  • frontend/src/test/import-api.contract.test.ts
  • frontend/src/test/import-page-manual-entry.test.tsx
  • frontend/src/test/import-route-smoke.test.tsx
  • frontend/src/test/localSafekeepingPanel.test.tsx
  • frontend/src/test/phase3ReplayIntegration.smoke.test.tsx
  • frontend/src/test/replayAnalyticsPanel.test.tsx
  • frontend/src/test/replayControls.test.tsx
  • frontend/src/test/replayFlow.smoke.test.tsx
  • frontend/src/test/replayLayoutPersistence.test.tsx
  • frontend/src/test/replayPage.test.tsx
  • frontend/src/test/replayPlayback.test.ts
  • frontend/src/test/replaySession.test.ts
  • frontend/src/test/replayWorkspace.test.ts
  • frontend/src/test/setup.ts
  • frontend/src/test/symbolCompare.test.tsx
  • frontend/src/test/timeframeCompare.test.tsx
  • frontend/src/test/tradeListSelection.test.tsx
  • frontend/src/utils/comparePane.ts
  • frontend/src/utils/replayPlayback.ts
  • frontend/src/utils/replaySession.ts
  • frontend/src/utils/replayWorkspace.ts
  • frontend/vitest.config.ts

Comment thread backend/import_data.py
Comment on lines +16 to 17
MigrationRunner(engine=engine).run()
Base.metadata.create_all(bind=engine)

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify where migration-managed flows still call create_all
rg -n --type py -C2 'MigrationRunner\(|Base\.metadata\.create_all\(' backend

Repository: Xeron2000/retraq

Length of output: 2996


🏁 Script executed:

#!/bin/bash
# Find and inspect MigrationRunner implementation
fd -t py 'migration' backend --exec grep -l 'class MigrationRunner' {} \;

Repository: Xeron2000/retraq

Length of output: 229


🏁 Script executed:

#!/bin/bash
# Get MigrationRunner class definition
rg -n 'class MigrationRunner' backend -A 20

Repository: Xeron2000/retraq

Length of output: 1480


🏁 Script executed:

#!/bin/bash
# Check main.py context to understand the pattern
head -n 30 backend/main.py | cat -n

Repository: Xeron2000/retraq

Length of output: 1294


🏁 Script executed:

#!/bin/bash
# Find DEFAULT_MIGRATIONS definition
rg -n 'DEFAULT_MIGRATIONS' backend

Repository: Xeron2000/retraq

Length of output: 235


🏁 Script executed:

#!/bin/bash
# Get migration step implementation details
rg -n 'class MigrationStep' backend -A 10

Repository: Xeron2000/retraq

Length of output: 665


🏁 Script executed:

#!/bin/bash
# Check what migrations are defined
rg -n 'MigrationStep\(' backend/migrations/runner.py -A 2 | head -40

Repository: Xeron2000/retraq

Length of output: 154


🏁 Script executed:

#!/bin/bash
# Get full bootstrap_schema implementation
rg -n 'def bootstrap_schema' backend/migrations/runner.py -A 100 | head -120

Repository: Xeron2000/retraq

Length of output: 3506


🏁 Script executed:

#!/bin/bash
# List all models to understand full schema requirements
cat -n backend/models.py

Repository: Xeron2000/retraq

Length of output: 4900


Mixing migration-versioned and ORM-metadata schema definitions enables silent schema drift.

The bootstrap migration (001) creates only klines and trades, but Base.metadata.create_all() auto-creates import_sessions and import_report_rows without migration versioning. This hybrid approach means new ORM models bypass the migration system entirely—adding a model to models.py silently creates its table without a recorded migration step, breaking schema governance.

Move all table definitions to explicit migration steps, or remove create_all() and define all tables through migrations exclusively.

♻️ Proposed fix

Add migrations for import_sessions and import_report_rows, then remove create_all():

 def main():
     MigrationRunner(engine=engine).run()
-    Base.metadata.create_all(bind=engine)
     db = SessionLocal()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/import_data.py` around lines 16 - 17, The code calls
MigrationRunner(engine).run() but then calls
Base.metadata.create_all(bind=engine), which silently creates tables (notably
import_sessions and import_report_rows) outside the migration system; remove the
Base.metadata.create_all(bind=engine) call and instead add explicit Alembic/DB
migration scripts that create the import_sessions and import_report_rows tables
(and any other models added in models.py) so all schema changes are recorded
under the migration runner (referencing MigrationRunner,
Base.metadata.create_all, import_sessions, import_report_rows, and models.py).

Comment thread backend/main.py
Comment on lines +126 to +130
with tempfile.NamedTemporaryFile(
    delete=False, suffix=Path(file.filename).suffix or ".sqlite3"
) as tmp:
    tmp_path = Path(tmp.name)
    _ = tmp.write(await file.read())

⚠️ Potential issue | 🟠 Major

Stream the backup upload to disk instead of buffering it.

await file.read() materializes the full SQLite backup in memory before restore begins. Large backups can spike RAM or fail this path outright.

📦 Chunked write example
     try:
         with tempfile.NamedTemporaryFile(
             delete=False, suffix=Path(file.filename).suffix or ".sqlite3"
         ) as tmp:
             tmp_path = Path(tmp.name)
-            _ = tmp.write(await file.read())
+            while chunk := await file.read(1024 * 1024):
+                tmp.write(chunk)
 
         restore_sqlite_backup(engine, tmp_path)
         return {"status": "restored"}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/main.py` around lines 126 - 130, The current restore path buffers the
entire upload with await file.read(), causing high memory use; change the logic
in the block that creates the temporary file (the with
tempfile.NamedTemporaryFile(...) as tmp: / tmp_path / tmp.write usage) to stream
the upload to disk by reading the UploadFile in chunks (e.g., use
file.read(chunk_size) in a loop or an async chunk iterator) and writing each
chunk to tmp.write until EOF, keeping delete=False and suffix handling the same;
ensure tmp.flush() / tmp.close() semantics are preserved so subsequent code
using tmp_path sees the fully written file.

Comment thread backend/models.py
Comment on lines +53 to +99
class ImportSession(Base):
    __tablename__ = "import_sessions"

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    source_filename: Mapped[str] = mapped_column(String(255), nullable=False)
    status: Mapped[str] = mapped_column(String(32), nullable=False)
    total_rows: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    success_count: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    failed_count: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    duplicate_count: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    conflict_count: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    timestamp_normalization_count: Mapped[int] = mapped_column(
        Integer, nullable=False, default=0
    )
    file_rejection_reason: Mapped[str | None] = mapped_column(String(255))
    file_rejection_message: Mapped[str | None] = mapped_column(Text)
    download_filename: Mapped[str | None] = mapped_column(String(255))
    download_mime_type: Mapped[str] = mapped_column(
        String(64), nullable=False, default="text/csv"
    )
    created_at: Mapped[int] = mapped_column(
        BigInteger, nullable=False, default=lambda: int(time.time() * 1000)
    )

    __table_args__ = (Index("ix_import_sessions_status", "status"),)


class ImportReportRow(Base):
    __tablename__ = "import_report_rows"

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    session_id: Mapped[int] = mapped_column(
        ForeignKey("import_sessions.id"), nullable=False
    )
    row_number: Mapped[int] = mapped_column(Integer, nullable=False)
    outcome: Mapped[str] = mapped_column(String(32), nullable=False)
    field: Mapped[str] = mapped_column(String(64), nullable=False)
    raw_value: Mapped[str | None] = mapped_column(Text)
    normalized_value: Mapped[str | None] = mapped_column(Text)
    reason: Mapped[str] = mapped_column(Text, nullable=False)
    business_key: Mapped[str] = mapped_column(Text, nullable=False)

    __table_args__ = (
        Index("ix_import_report_rows_session_id", "session_id"),
        Index("ix_import_report_rows_session_outcome", "session_id", "outcome"),
        Index("ix_import_report_rows_business_key", "business_key"),
    )

⚠️ Potential issue | 🟠 Major

Add a versioned migration for these tables in the same change.

ImportSession and ImportReportRow only exist in ORM metadata here. The migration runner still bootstraps the legacy tables, so startup has to rely on Base.metadata.create_all() to materialize the import schema outside schema_version, which leaves the recorded migration history out of sync with the real database shape.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/models.py` around lines 53 - 99, The ImportSession and
ImportReportRow models are only added to ORM metadata and aren't captured by the
versioned migrations, causing Base.metadata.create_all() to be used at runtime
and leaving schema_version out of sync; create a new versioned migration that
explicitly creates the import_sessions and import_report_rows tables (including
columns, indexes ix_import_sessions_status, ix_import_report_rows_session_id,
ix_import_report_rows_session_outcome, ix_import_report_rows_business_key, and
defaults like download_mime_type and created_at) so the migration runner records
these tables in schema_version and remove any runtime fallback that calls
Base.metadata.create_all() for these models.

Comment thread backend/pyproject.toml
Comment on lines +15 to +16
"pytest>=8.3.0",
"httpx>=0.28.1",

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

# 1) Verify whether pytest/httpx are used outside tests
rg -n --type=py '^\s*(from|import)\s+(pytest|httpx)\b' backend -g '!backend/tests/**' -g '!**/.venv/**'

# 2) Inspect dependency sections in pyproject
rg -n '\[project\]|\[project\.optional-dependencies\]|pytest|httpx' backend/pyproject.toml

Repository: Xeron2000/retraq

Length of output: 42


🏁 Script executed:

cat -n backend/pyproject.toml

Repository: Xeron2000/retraq

Length of output: 719


🏁 Script executed:

# Search for all pytest/httpx usage in the entire backend directory
rg -n 'pytest|httpx' backend --type=py -A 2

Repository: Xeron2000/retraq

Length of output: 363


🏁 Script executed:

# Check specifically for imports in non-test files
find backend -name "*.py" -not -path "*/tests/*" -not -path "*/.venv/*" | head -20

Repository: Xeron2000/retraq

Length of output: 430


🏁 Script executed:

# Search for all httpx usage in backend, showing context
rg -n 'httpx' backend --type=py -B 1 -A 1

Repository: Xeron2000/retraq

Length of output: 42


🏁 Script executed:

# Check if there are any test files besides tests directory
find backend -name "test_*.py" -o -name "*_test.py" | head -20

Repository: Xeron2000/retraq

Length of output: 281


🏁 Script executed:

# Search for httpx specifically in test files
rg -n 'httpx' backend/tests --type=py -B 1 -A 1

Repository: Xeron2000/retraq

Length of output: 42


🏁 Script executed:

# Check if httpx might be used indirectly (e.g., through fixtures or imports from other modules)
grep -r 'httpx' backend/ --include="*.py" 2>/dev/null || true

Repository: Xeron2000/retraq

Length of output: 42


Move test-only packages out of runtime dependencies and remove unused imports.

pytest (line 15) is test-only and should be moved to an optional-dependencies section. Additionally, httpx (line 16) is not used anywhere in the codebase and should either be removed entirely or moved to optional-dependencies if it's intended for future use.

Proposed dependency layout
 dependencies = [
     "fastapi>=0.115.0",
     "uvicorn>=0.30.0",
     "sqlalchemy>=2.0.35",
     "ccxt>=4.4.26",
     "pandas>=2.2.3",
     "openpyxl>=3.1.5",
     "python-multipart>=0.0.9",
     "pytz>=2024.1",
-    "pytest>=8.3.0",
-    "httpx>=0.28.1",
 ]
+
+[project.optional-dependencies]
+dev = [
+    "pytest>=8.3.0",
+    "httpx>=0.28.1",
+]
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-    "pytest>=8.3.0",
-    "httpx>=0.28.1",
+dependencies = [
+    "fastapi>=0.115.0",
+    "uvicorn>=0.30.0",
+    "sqlalchemy>=2.0.35",
+    "ccxt>=4.4.26",
+    "pandas>=2.2.3",
+    "openpyxl>=3.1.5",
+    "python-multipart>=0.0.9",
+    "pytz>=2024.1",
+]
+
+[project.optional-dependencies]
+dev = [
+    "pytest>=8.3.0",
+    "httpx>=0.28.1",
+]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/pyproject.toml` around lines 15 - 16, Move the test-only package
"pytest" out of runtime dependencies into your development or optional
dependencies (e.g., dev-dependencies or project.optional-dependencies) so it is
not installed in production; remove "httpx" from the runtime dependency list if
it is unused, or similarly move "httpx" to an optional/dev dependency if you
intend to keep it for testing/experimentation, and also delete any unused
imports of httpx from the codebase (search for "httpx" imports) to keep
dependencies in sync with actual usage.

Comment thread backend/services/backup_service.py
Comment on lines +37 to +40
engine.dispose()
try:
    _ = shutil.copy2(backup_path, sqlite_path)
finally:

⚠️ Potential issue | 🟠 Major

Make restore write atomic to avoid DB corruption on interruption.

Lines 37-40 copy directly over the live DB file. If the process crashes mid-copy, the SQLite file can be left corrupted. Use temp-file + atomic replace.

Proposed patch
 from pathlib import Path
+import os
 import shutil
@@
     engine.dispose()
     try:
-        _ = shutil.copy2(backup_path, sqlite_path)
+        temp_restore_path = sqlite_path.with_suffix(f"{sqlite_path.suffix}.restore_tmp")
+        _ = shutil.copy2(backup_path, temp_restore_path)
+        os.replace(temp_restore_path, sqlite_path)
     finally:
         engine.dispose()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/services/backup_service.py` around lines 37 - 40, The current restore
copies directly over the live DB (shutil.copy2(backup_path, sqlite_path)) after
engine.dispose(), which can corrupt SQLite if interrupted; instead, copy the
backup into a temporary file placed in the same directory as sqlite_path (use
tempfile.mkstemp or NamedTemporaryFile with dir=os.path.dirname(sqlite_path)),
use shutil.copy2(backup_path, tmp_path) to preserve metadata, then atomically
replace the target with os.replace(tmp_path, sqlite_path) or
Path(tmp_path).replace(Path(sqlite_path)); ensure the temp file is removed on
errors and do this after engine.dispose() so the DB is not in use.

Comment thread backend/tests/test_backup_api.py
Comment on lines +19 to +25
database.engine = engine
database.SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
Base.metadata.create_all(bind=engine)

sys.modules.pop("main", None)
main = importlib.import_module("main")
return TestClient(main.app)

⚠️ Potential issue | 🟠 Major

Restore database globals after test client setup.

_make_client overwrites database.engine / database.SessionLocal and reloads main, but never restores prior globals. This can make later tests run against the wrong DB depending on execution order.

🔧 Suggested fix
 def _make_client(db_path: Path) -> TestClient:
+    original_engine = database.engine
+    original_session_local = database.SessionLocal
     engine = create_engine(
         f"sqlite:///{db_path}", connect_args={"check_same_thread": False}
     )
     database.engine = engine
     database.SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
     Base.metadata.create_all(bind=engine)
 
     sys.modules.pop("main", None)
     main = importlib.import_module("main")
-    return TestClient(main.app)
+    client = TestClient(main.app)
+    client._original_engine = original_engine  # type: ignore[attr-defined]
+    client._original_session_local = original_session_local  # type: ignore[attr-defined]
+    return client
 def test_backup_download_returns_current_sqlite_database(tmp_path: Path):
@@
-    client = _make_client(db_path)
+    client = _make_client(db_path)
+    try:
+        _seed_trade("BTC-USDT", 65000)
+        response = client.get("/api/backups/download")
+        assert response.status_code == 200
+        assert response.headers["content-type"].startswith("application/octet-stream")
+        assert response.headers["content-disposition"].endswith('.sqlite3"')
+        assert response.content == db_path.read_bytes()
+    finally:
+        database.engine.dispose()
+        database.engine = client._original_engine  # type: ignore[attr-defined]
+        database.SessionLocal = client._original_session_local  # type: ignore[attr-defined]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@backend/tests/test_backup_api.py` around lines 19 - 25, The helper
_make_client currently replaces database.engine and database.SessionLocal and
reloads main without restoring them; modify _make_client to capture the original
values of database.engine and database.SessionLocal before assigning the test
engine/sessionmaker, then import/reload main and create the TestClient, and
finally restore the original database.engine and database.SessionLocal (using a
try/finally or context manager) so subsequent tests see the original globals;
reference the symbols _make_client, database.engine, database.SessionLocal, and
main.app when making the change.

Comment thread frontend/src/components/ChartManager.tsx
Comment on lines +505 to +603
const loadCompareData = useCallback(
  async (
    tf: Timeframe,
    sym: string,
    range?: { start: number; end: number },
  ) => {
    if (!compareSeriesRef.current || !compareVolumeSeriesRef.current) return;
    latestCompareRequestIdRef.current += 1;
    const requestId = latestCompareRequestIdRef.current;
    setCompareKlineLoading(true);
    setCompareKlineError(null);
    setCompareHasData(false);
    try {
      compareSeriesRef.current.setData([]);
      compareVolumeSeriesRef.current.setData([]);
      compareCandlesByTimeRef.current = new Map();
      compareHasDataRef.current = false;
      const klines: Kline[] = await fetchKlines(
        sym,
        tf,
        range ? { start: range.start, end: range.end } : undefined,
      );
      if (requestId !== latestCompareRequestIdRef.current) return;
      const data: CandlestickData[] = klines.map((k) => ({
        time: k.time as Time,
        open: k.open,
        high: k.high,
        low: k.low,
        close: k.close,
      }));
      const volume: HistogramData[] = klines.map((k) => ({
        time: k.time as Time,
        value: k.volume,
        color:
          k.close >= k.open
            ? "rgba(34, 197, 94, 0.6)"
            : "rgba(239, 68, 68, 0.6)",
      }));
      compareSeriesRef.current.setData(data);
      compareVolumeSeriesRef.current.setData(volume);
      compareCandlesByTimeRef.current = new Map(
        data.map((candle) => [candle.time, candle]),
      );
      compareHasDataRef.current = data.length > 0;
      setCompareHasData(data.length > 0);
      setCompareKlineError(null);

      const fillerStart = range
        ? Math.floor(range.start / 1000)
        : (data[0]?.time as number | undefined);
      const fillerEnd = range
        ? Math.floor(range.end / 1000)
        : (data[data.length - 1]?.time as number | undefined);
      if (typeof fillerStart === "number" && typeof fillerEnd === "number") {
        ensureTimeFiller(tf, fillerStart, fillerEnd);
      }

      const pending = pendingCompareVisibleRangeRef.current;
      if (pending && compareChartRef.current) {
        try {
          if (
            typeof pending.from === "number" &&
            typeof pending.to === "number"
          ) {
            compareChartRef.current.timeScale().setVisibleRange(pending);
          } else if (
            typeof compareChartRef.current.timeScale()
              .setVisibleLogicalRange === "function"
          ) {
            compareChartRef.current
              .timeScale()
              .setVisibleLogicalRange(
                pending as { from: number; to: number },
              );
          }
          pendingCompareVisibleRangeRef.current = null;
        } catch (err) {
          console.error(err);
        }
      }
    } catch (error) {
      console.error(error);
      if (requestId !== latestCompareRequestIdRef.current) return;
      compareSeriesRef.current.setData([]);
      compareVolumeSeriesRef.current.setData([]);
      compareCandlesByTimeRef.current = new Map();
      compareHasDataRef.current = false;
      setCompareHasData(false);
      setCompareKlineError(
        error instanceof Error ? error.message : String(error),
      );
    } finally {
      if (requestId === latestCompareRequestIdRef.current) {
        setCompareKlineLoading(false);
      }
    }
  },
  [ensureTimeFiller],
);

⚠️ Potential issue | 🟠 Major

Guard compare-series writes after the async fetch.

Line 527 only filters out superseded requests. If the compare pane is closed while fetchKlines(...) is still in flight, the cleanup later nulls these refs and the post-await setData(...) path throws.

🛡️ One way to harden the loader
 const loadCompareData = useCallback(
   async (
     tf: Timeframe,
     sym: string,
     range?: { start: number; end: number },
   ) => {
-    if (!compareSeriesRef.current || !compareVolumeSeriesRef.current) return;
+    const compareSeries = compareSeriesRef.current;
+    const compareVolumeSeries = compareVolumeSeriesRef.current;
+    if (!compareSeries || !compareVolumeSeries) return;
     latestCompareRequestIdRef.current += 1;
     const requestId = latestCompareRequestIdRef.current;
     setCompareKlineLoading(true);
     setCompareKlineError(null);
     setCompareHasData(false);
     try {
-      compareSeriesRef.current.setData([]);
-      compareVolumeSeriesRef.current.setData([]);
+      compareSeries.setData([]);
+      compareVolumeSeries.setData([]);
       compareCandlesByTimeRef.current = new Map();
       compareHasDataRef.current = false;
       const klines: Kline[] = await fetchKlines(
         sym,
         tf,
         range ? { start: range.start, end: range.end } : undefined,
       );
-      if (requestId !== latestCompareRequestIdRef.current) return;
+      if (
+        requestId !== latestCompareRequestIdRef.current ||
+        compareSeriesRef.current !== compareSeries ||
+        compareVolumeSeriesRef.current !== compareVolumeSeries
+      ) {
+        return;
+      }
       const data: CandlestickData[] = klines.map((k) => ({
         time: k.time as Time,
         open: k.open,
         high: k.high,
         low: k.low,
         close: k.close,
       }));
       const volume: HistogramData[] = klines.map((k) => ({
         time: k.time as Time,
         value: k.volume,
         color:
           k.close >= k.open
             ? "rgba(34, 197, 94, 0.6)"
             : "rgba(239, 68, 68, 0.6)",
       }));
-      compareSeriesRef.current.setData(data);
-      compareVolumeSeriesRef.current.setData(volume);
+      compareSeries.setData(data);
+      compareVolumeSeries.setData(volume);
       // ...
     } catch (error) {
       console.error(error);
       if (requestId !== latestCompareRequestIdRef.current) return;
-      compareSeriesRef.current.setData([]);
-      compareVolumeSeriesRef.current.setData([]);
+      if (
+        compareSeriesRef.current === compareSeries &&
+        compareVolumeSeriesRef.current === compareVolumeSeries
+      ) {
+        compareSeries.setData([]);
+        compareVolumeSeries.setData([]);
+      }
       // ...
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ChartManager.tsx` around lines 505 - 603, The
post-await code in loadCompareData (function loadCompareData) only checks
requestId but does not guard against refs being nulled when the compare pane
closes; wrap/precede any use of compareSeriesRef.current,
compareVolumeSeriesRef.current, compareCandlesByTimeRef.current,
compareChartRef.current and compareHasDataRef.current with a runtime existence
check (e.g., if (!compareSeriesRef.current || !compareVolumeSeriesRef.current ||
!compareCandlesByTimeRef.current) return) immediately after the await and also
inside the catch before setting data/refs, so you bail out if the pane was
closed; apply the same guard before manipulating
pendingCompareVisibleRangeRef.current/timeScale() to avoid calling methods on a
destroyed chart.

Comment thread frontend/src/components/ReplayAnalyticsPanel.tsx
Comment on lines +275 to +280
Promise.all([
  fetchStats(),
  fetchTrades(undefined, {
    limit: TRADES_FETCH_LIMIT,
    maxPages: TRADES_FETCH_MAX_PAGES,
  }),

⚠️ Potential issue | 🟠 Major

Don't mix full-dataset stats with a capped trade sample.

fetchStats() gives panel-wide totals, but Line 277 limits the trade-derived views to at most 200 * 20 rows. Once the local DB grows past that, the equity curve, heatmap, and symbol table can silently disagree with the totals shown above. Either fetch to exhaustion here, or disclose that these sections are truncated.
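
📦 Possible truncation disclosure

If the capped fetch stays, a small helper can derive the disclosure from the same constants; buildSampleNotice and its wording are illustrative, not existing code:

// Values mirror the constants referenced in the snippet above.
const TRADES_FETCH_LIMIT = 200;
const TRADES_FETCH_MAX_PAGES = 20;

// Returns a notice string when the sample likely hit the cap, else null.
export function buildSampleNotice(tradeCount: number): string | null {
  const cap = TRADES_FETCH_LIMIT * TRADES_FETCH_MAX_PAGES;
  // A result sitting exactly at the cap suggests more rows exist upstream.
  if (tradeCount < cap) return null;
  return `Trade-derived charts use the most recent ${cap} trades; totals above cover the full dataset.`;
}

The panel could render the returned string next to the equity curve and heatmap whenever it is non-null.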

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/components/ReplayAnalyticsPanel.tsx` around lines 275 - 280, The
panel mixes full totals from fetchStats() with a capped trade sample from
fetchTrades(...) (using TRADES_FETCH_LIMIT and TRADES_FETCH_MAX_PAGES), causing
derived views (equity curve, heatmap, symbol table) to disagree when the local
DB exceeds the sample; either remove the sampling here and call fetchTrades to
exhaustion (i.e., call fetchTrades without
TRADES_FETCH_LIMIT/TRADES_FETCH_MAX_PAGES or set them to unlimited) so derived
views reflect full data, or keep the capped fetch but add a clear UI disclosure
(in ReplayAnalyticsPanel) that the trade-derived sections are truncated and show
the current sample size so users understand the discrepancy.

Comment thread frontend/src/pages/ReplayPage.tsx
Comment on lines +81 to +83
const initialWorkspace = loadReplayWorkspace();
const initialTimeframe =
  initialWorkspace?.activeTimeframe ?? initialSeed?.defaultTimeframe ?? "15m";

⚠️ Potential issue | 🟠 Major

workspaceBootstrap is broader than “restored replay selection.”

initialWorkspace is already being used for non-selection state (activeTimeframe), so Boolean(initialWorkspace) disables TradeList’s latest-trade fallback even when the only restored data is unrelated UI state. That can reopen /replay into an empty shell with no symbol/trade to restore.

Example narrowing of the bootstrap flag
 const initialWorkspace = loadReplayWorkspace();
+const hasSelectionBootstrap = initialWorkspace?.symbol !== undefined;
 ...
-					workspaceBootstrap={Boolean(initialWorkspace)}
+					workspaceBootstrap={hasSelectionBootstrap}

Use another explicit replay-selection signal here if the workspace stores more than just symbol.

Also applies to: 139-139

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/pages/ReplayPage.tsx` around lines 81 - 83, The bootstrap flag
is too broad because Boolean(initialWorkspace) is true for UI-only state like
activeTimeframe; change to an explicit replay-selection check (e.g., derive
workspaceBootstrap from a specific symbol/selection field stored in
loadReplayWorkspace, not the whole workspace) so TradeList’s latest-trade
fallback only disables when a symbol is actually restored; update references
where workspaceBootstrap is used (including the other occurrence around the code
that reads initialWorkspace/initialSeed at the later block) and ensure
functions/variables like loadReplayWorkspace, initialWorkspace, initialSeed,
workspaceBootstrap, and TradeList consume the new explicit selection signal
instead of Boolean(initialWorkspace).

Comment thread frontend/src/test/tradeListSelection.test.tsx
Comment on lines +93 to +97
beforeEach(() => {
  vi.clearAllMocks();
  localStorageStore.clear();
  vi.stubGlobal('localStorage', localStorageMock as Storage);
});

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify test files that stub globals also restore them.
rg -n --type=ts --type=tsx 'vi\.stubGlobal\(|vi\.unstubAllGlobals\(' frontend/src/test

Repository: Xeron2000/retraq

Length of output: 87


🏁 Script executed:

cat -n frontend/src/test/tradeListSelection.test.tsx | head -150

Repository: Xeron2000/retraq

Length of output: 5446


🏁 Script executed:

#!/bin/bash
# Search for vi.stubGlobal and vi.unstubAllGlobals using file extension patterns
rg -n 'vi\.stubGlobal\(|vi\.unstubAllGlobals\(' frontend/src/test/ --type-list

Repository: Xeron2000/retraq

Length of output: 449


🏁 Script executed:

#!/bin/bash
# Find all test files and search them
fd -e test.ts -e test.tsx frontend/src/test/ -x grep -l 'vi\.stubGlobal\|vi\.unstubAllGlobals\|afterEach' {}

Repository: Xeron2000/retraq

Length of output: 437


🏁 Script executed:

#!/bin/bash
# Get the full file to see if there's an afterEach hook
wc -l frontend/src/test/tradeListSelection.test.tsx

Repository: Xeron2000/retraq

Length of output: 109


🏁 Script executed:

#!/bin/bash
# Check if afterEach exists in this file
grep -n "afterEach" frontend/src/test/tradeListSelection.test.tsx

Repository: Xeron2000/retraq

Length of output: 42


🏁 Script executed:

#!/bin/bash
# Search all test files for stubGlobal and unstubAllGlobals patterns
find frontend/src/test -name "*.test.ts*" -exec grep -l "stubGlobal\|unstubAllGlobals" {} \;

Repository: Xeron2000/retraq

Length of output: 516


🏁 Script executed:

#!/bin/bash
# Check each test file for stubGlobal usage and cleanup patterns
for file in frontend/src/test/import-route-smoke.test.tsx \
            frontend/src/test/phase3ReplayIntegration.smoke.test.tsx \
            frontend/src/test/replayAnalyticsPanel.test.tsx \
            frontend/src/test/replayFlow.smoke.test.tsx \
            frontend/src/test/replayLayoutPersistence.test.tsx \
            frontend/src/test/replayPage.test.tsx \
            frontend/src/test/replayWorkspace.test.ts \
            frontend/src/test/symbolCompare.test.tsx \
            frontend/src/test/timeframeCompare.test.tsx; do
  echo "=== $file ==="
  grep -n "vi\.stubGlobal\|afterEach\|unstubAllGlobals" "$file" || echo "No matches found"
done

Repository: Xeron2000/retraq

Length of output: 2609


Add afterEach hook to restore stubbed globals.

vi.stubGlobal('localStorage', ...) is called in beforeEach but never cleaned up, which can leak this mock into other test suites and cause test pollution. Add an afterEach hook to call vi.unstubAllGlobals() to match the pattern used in other test files.

🔧 Suggested fix
-import { beforeEach, describe, expect, it, vi } from 'vitest';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
@@
 describe('TradeList seed-aware selection', () => {
   beforeEach(() => {
     vi.clearAllMocks();
     localStorageStore.clear();
     vi.stubGlobal('localStorage', localStorageMock as Storage);
   });
+
+  afterEach(() => {
+    vi.unstubAllGlobals();
+  });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@frontend/src/test/tradeListSelection.test.tsx` around lines 93 - 97, The test
sets a global stub in beforeEach via vi.stubGlobal('localStorage',
localStorageMock) but never restores it; add an afterEach that calls
vi.unstubAllGlobals() (and optionally vi.clearAllMocks() again) to undo the stub
and prevent leakage into other suites — locate the beforeEach block in
tradeListSelection.test.tsx and add an afterEach ensuring vi.unstubAllGlobals()
is called after tests that use localStorageMock.
