Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent) Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
📝 Walkthrough

Implements the Retraq v1.0 milestone in four sequential phases: import reliability with schema migrations and structured reporting; single-trade replay with session persistence and playback controls; multi-timeframe/multi-symbol compare modes; and local UX polish with workspace/filter bootstrap and SQLite backup/restore. Adds 150+ test files validating both the backend (migrations, import contracts, API endpoints) and the frontend (import workflow, replay flow, compare modes, analytics panel). Extends the README and documentation to reflect feature completeness.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~90 minutes
Actionable comments posted: 12
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
backend/main.py (1)
84-91: ⚠️ Potential issue | 🟠 Major

Delete the staged Excel file in a `finally` block.

`NamedTemporaryFile(delete=False)` leaves every upload on disk right now. Repeated imports will steadily leak temp files.

♻️ Possible fix
```diff
 @app.post("/api/trades/import")
 async def import_trades(file: UploadFile = File(...), db: Session = Depends(get_db)):
     if not file.filename or not file.filename.endswith((".xlsx", ".xls")):
         raise HTTPException(400, "Only Excel files (.xlsx, .xls) are supported")
+    tmp_path: str | None = None
     try:
         with tempfile.NamedTemporaryFile(delete=False, suffix=".xlsx") as tmp:
             content = await file.read()
             tmp.write(content)
             tmp_path = tmp.name
-        result = trade_importer.parse_excel(db, tmp_path)
-        return result
+        return trade_importer.parse_excel(db, tmp_path)
     except Exception as e:
         raise HTTPException(500, str(e))
+    finally:
+        if tmp_path is not None:
+            Path(tmp_path).unlink(missing_ok=True)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/main.py` around lines 84 - 91, The temp Excel file created with tempfile.NamedTemporaryFile(delete=False) is never removed; modify the upload handling in backend/main.py so tmp_path is initialized (e.g., tmp_path = None) and the try/except that calls trade_importer.parse_excel(db, tmp_path) is followed by a finally block that checks if tmp_path is set and the file exists and then deletes it (os.remove or Path.unlink), ensuring removal happens whether parse_excel succeeds or an HTTPException is raised.
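The stage-process-cleanup pattern behind this fix can be exercised in isolation, outside FastAPI. A minimal standalone sketch; the `process` callback and the payload bytes are illustrative placeholders, not the project's actual importer:

```python
import tempfile
from pathlib import Path


def stage_and_process(payload: bytes, process):
    """Stage bytes into a temp file, process it, and always clean up."""
    tmp_path: Path | None = None
    try:
        with tempfile.NamedTemporaryFile(delete=False, suffix=".xlsx") as tmp:
            tmp.write(payload)
            tmp_path = Path(tmp.name)
        return process(tmp_path)
    finally:
        # Runs on both the success and the exception path; the None check
        # covers the case where staging failed before tmp_path was assigned,
        # and missing_ok=True tolerates a file that is already gone.
        if tmp_path is not None:
            tmp_path.unlink(missing_ok=True)


# The staged file is removed afterwards, even if `process` raises.
result = stage_and_process(b"fake-excel-bytes", lambda p: p.stat().st_size)
```

The key point mirrored from the diff above: `tmp_path` is initialized before the `try`, so the `finally` block can safely test it no matter where the failure happened.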
🟡 Minor comments (22)
.planning/codebase/CONVENTIONS.md-38-38 (1)
38-38: ⚠️ Potential issue | 🟡 Minor

Correct outdated test-harness claim.
Line 38 states there is no visible automated test harness, but this PR adds substantial test suites. Please update this item so conventions reflect current reality.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/CONVENTIONS.md at line 38, Update the outdated sentence "There is no visible automated test harness in the inspected files." in CONVENTIONS.md to acknowledge the new automated test suites added in this PR: replace it with a brief statement that the repository now includes an automated test harness (specify the testing framework used, e.g., Jest/Mocha/pytest) and mention that CI runs these tests (e.g., GitHub Actions or the project's CI) so the conventions reflect current reality.

.planning/codebase/TESTING.md-4-8 (1)
4-8: ⚠️ Potential issue | 🟡 Minor

Testing baseline is outdated and should be rewritten.
Line 4 and Lines 22-24 claim no automated test suite/setup, which conflicts with this PR’s added backend/frontend tests and documented test commands. Please revise this section to list the actual current harnesses and entry commands.
Also applies to: 22-24
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/TESTING.md around lines 4 - 8, Update the "No dedicated automated test suite" section in TESTING.md to reflect the current test harnesses and entry commands: state that frontend tests are available via the package.json scripts (e.g., npm run test or the repo's defined test script in frontend package.json such as "dev"/"build"/"lint"/"preview" plus the newly added test script), and that backend tests run via pytest as configured in backend/pyproject.toml (e.g., pipx/poetry/pytest invocation or explicit "pytest" command), and list any coverage/lint commands added; edit the sentences that currently cover lines 4 and 22-24 to enumerate these actual commands and where to run them (frontend package.json and backend/pyproject.toml / pytest), so the document matches the PR-added tests.

AGENTS.md-134-136 (1)
134-136: ⚠️ Potential issue | 🟡 Minor

Update stale testing/diagnostics statements in embedded conventions.
Line 134 (“no visible automated test harness”) and Line 135 (“diagnostics are not clean”) read as baseline facts, but this PR introduces broad backend/frontend tests and reports clean verification runs. Please refresh this block to avoid contradicting the current repo state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AGENTS.md` around lines 134 - 136, Update the stale bullets that claim "no visible automated test harness" and "diagnostics are not clean" to reflect the new tests and clean runs introduced by this PR: replace those two statements with a brief note that broad backend and frontend automated tests have been added and that verification runs (lint/type/test) are currently clean; also adjust or remove the runtime DB creation remark referencing `main.py` / `import_data.py` if those files were changed so the conventions block no longer contradicts repository state.

frontend/src/utils/replaySession.ts-39-55 (1)
39-55: ⚠️ Potential issue | 🟡 Minor

Validate `defaultTimeframe` against allowed `Timeframe` values, not generic string.

`isReplaySessionSeed` accepts any string for `defaultTimeframe`, but the actual `Timeframe` type is a finite union: `'5m' | '15m' | '1h' | '4h' | '1d'`. Malformed persisted payloads with invalid timeframe strings (e.g., `"2h"`) will pass deserialization and leak into runtime state via `initialSeed?.defaultTimeframe` on line 83 of ReplayPage.tsx.

Replace the check `(value.defaultTimeframe === undefined || isString(value.defaultTimeframe))` with validation that enforces membership in the `Timeframe` union.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/utils/replaySession.ts` around lines 39 - 55, The type guard isReplaySessionSeed currently allows any string for defaultTimeframe; replace the last condition so it validates membership in the Timeframe union (e.g., check value.defaultTimeframe is undefined or one of '5m' | '15m' | '1h' | '4h' | '1d') instead of using isString. Implement or reuse a helper (isTimeframe) and update isReplaySessionSeed's condition to (value.defaultTimeframe === undefined || isTimeframe(value.defaultTimeframe)) so malformed timeframe strings cannot pass and thus initialSeed?.defaultTimeframe won't leak invalid values into runtime state.

backend/services/import_types.py-27-45 (1)
27-45: ⚠️ Potential issue | 🟡 Minor

Enforce outcome/kind domain at construction time.

`outcome` and `kind` are currently unconstrained strings. A typo can silently create invalid report categories and break downstream aggregation/filtering.

Suggested guardrails in dataclasses
```diff
 @dataclass(frozen=True, slots=True)
 class ImportRowOutcome:
@@
     normalized_value: object | None = None
     outcome: str = "failed"
+
+    def __post_init__(self) -> None:
+        if self.outcome not in IMPORT_OUTCOME_BUCKETS:
+            raise ValueError(f"Unsupported outcome bucket: {self.outcome}")
@@
 @dataclass(frozen=True, slots=True)
 class ImportNormalizationEvent:
@@
     reason: str
     kind: str = "timestamp_normalization"
+
+    def __post_init__(self) -> None:
+        if self.kind != "timestamp_normalization":
+            raise ValueError(f"Unsupported normalization kind: {self.kind}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/services/import_types.py` around lines 27 - 45, ImportRowOutcome and ImportNormalizationEvent accept arbitrary strings for outcome and kind; add validation in their dataclass constructors to enforce allowed domains. Implement __post_init__ on ImportRowOutcome to check that outcome is one of the allowed values (e.g., "failed", "success", "skipped" — or your canonical set) and raise ValueError on invalid values, and implement __post_init__ on ImportNormalizationEvent to validate kind against the canonical normalization kinds (e.g., "timestamp_normalization", etc.). Alternatively change the field types to an Enum or typing.Literal and validate in __post_init__ (use object.__setattr__ if you need to coerce on a frozen dataclass) so typos are rejected at construction time.

.planning/codebase/CONCERNS.md-22-23 (1)
22-23: ⚠️ Potential issue | 🟡 Minor

Stale claim about `.planning/codebase` contents.

Line 22 says only `STACK.md` and `INTEGRATIONS.md` exist, but this PR already adds more planning docs. Please update this statement to avoid immediate drift.

Suggested wording update
```diff
-- **`.planning/codebase` 目前偏薄**:现有材料只有 `STACK.md` 和 `INTEGRATIONS.md`,没有看到更细的 roadmap / decision / milestone 文档。能支撑“现状说明”,但不够支撑后续里程碑管理。
+- **`.planning/codebase` 仍需持续完善**:当前已补充多份现状文档,但 roadmap / decision / milestone 的长期维护机制仍需明确,以支撑后续里程碑管理。
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/CONCERNS.md around lines 22 - 23, Update the stale statement in .planning/codebase that claims only STACK.md and INTEGRATIONS.md exist: change the sentence referencing "STACK.md" and "INTEGRATIONS.md" to acknowledge the additional planning documents introduced in this PR (e.g., mention the new roadmap/decision/milestone docs or say "STACK.md, INTEGRATIONS.md, and additional planning documents such as <new-doc-names>"), so the .planning/codebase summary accurately reflects current contents; edit the line that reads about only those two files to a concise, forward-compatible phrase like "contains STACK.md, INTEGRATIONS.md and supplemental planning docs (roadmap/decisions/milestones)" or list the actual new filenames.

backend/import_data.py-18-39 (1)
18-39: ⚠️ Potential issue | 🟡 Minor

Guarantee session closure when import parsing throws.

If `trade_importer.parse_excel(...)` raises, `db.close()` is skipped. Wrap DB usage in `try/finally` to avoid leaking sessions on failure paths.

🛠️ Proposed fix
```diff
 def main():
     MigrationRunner(engine=engine).run()
-    db = SessionLocal()
-
-    # 检查是否已有数据
-    existing = db.query(Trade).count()
-    if existing > 0:
-        print(f"📊 数据库已有 {existing} 条交易记录,跳过导入")
-        db.close()
-        return
-
-    # 导入示例数据
-    excel_path = os.path.join(os.path.dirname(__file__), "..", "1.xlsx")
-    if not os.path.exists(excel_path):
-        print("⚠️ 未找到 1.xlsx,跳过数据导入")
-        db.close()
-        return
-
-    print("📥 导入示例交易数据...")
-    result = trade_importer.parse_excel(db, excel_path)
-    print(
-        f"✅ 导入完成: 成功 {result.summary.success_count} 条, 失败 {result.summary.failed_count} 条"
-    )
-    db.close()
+    db = SessionLocal()
+    try:
+        # 检查是否已有数据
+        existing = db.query(Trade).count()
+        if existing > 0:
+            print(f"📊 数据库已有 {existing} 条交易记录,跳过导入")
+            return
+
+        # 导入示例数据
+        excel_path = os.path.join(os.path.dirname(__file__), "..", "1.xlsx")
+        if not os.path.exists(excel_path):
+            print("⚠️ 未找到 1.xlsx,跳过数据导入")
+            return
+
+        print("📥 导入示例交易数据...")
+        result = trade_importer.parse_excel(db, excel_path)
+        print(
+            f"✅ 导入完成: 成功 {result.summary.success_count} 条, 失败 {result.summary.failed_count} 条"
+        )
+    finally:
+        db.close()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/import_data.py` around lines 18 - 39, The DB session opened via SessionLocal() may not be closed if trade_importer.parse_excel(db, excel_path) raises; wrap the session usage in a try/finally so db.close() always runs. Specifically, keep creating the session with SessionLocal(), perform the existing checks and call to trade_importer.parse_excel(...) inside the try block, and move db.close() into the finally block to guarantee closure for the session (refer to SessionLocal, Trade, and trade_importer.parse_excel to locate the logic).

.planning/REQUIREMENTS.md-67-87 (1)
67-87: ⚠️ Potential issue | 🟡 Minor

Traceability statuses look stale relative to this PR’s completion artifacts.

All requirements are still marked `Pending`, which can mislead roadmap/state readers now that phase summaries in this PR report completion. Please either update statuses or explicitly mark this table as a baseline-only snapshot.

📝 Minimal doc fix (if you want to keep statuses unchanged here)
```diff
 ## Traceability
 
 Which phases cover which requirements. Updated during roadmap creation.
 
+Status note: this table captures planning-time mapping only; execution/completion state is tracked in `.planning/STATE.md`.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/REQUIREMENTS.md around lines 67 - 87, The requirements table currently shows every item (e.g., IMPT-01, REPL-01, COMP-01, ANLY-01, STAT-01, etc.) as "Pending" which conflicts with this PR's completed phase summaries; either update the Status column entries for the specific requirement IDs that are complete to reflect their true state (e.g., "Completed", "In Progress", or a proper milestone tag) or add a single clarifying sentence immediately above the table stating this is a baseline-only snapshot (e.g., "NOTE: This table is a baseline snapshot and does not reflect current completion statuses") so readers aren’t misled—make the change in .planning/REQUIREMENTS.md next to the existing table.

.planning/codebase/ARCHITECTURE.md-18-22 (1)
18-22: ⚠️ Potential issue | 🟡 Minor

Architecture doc omits the newly shipped `/import` route and backup/import-report API boundaries.

Line 52 still describes only three routes, and the API boundary sections (Lines 18-22, 92-98) miss the newly added import rows/report download and backup endpoints from this PR. Please update this file so architecture boundaries match the implemented v1 workflow.

📌 Suggested doc patch
```diff
-  - `frontend/src/App.tsx:7-18` 定义三个页面路由:`/replay`、`/analysis`、`/learn`
+  - `frontend/src/App.tsx:7-18` 定义页面路由:`/import`、`/replay`、`/analysis`、`/learn`
-  - `POST /api/trades/import`:上传 Excel 交易单并入库
+  - `POST /api/trades/import`:上传 Excel 交易单并入库
+  - `POST /api/trades/import/rows`:手动逐行导入交易
+  - `GET /api/trades/import/reports/{session_id}/download`:下载导入明细 CSV
+  - `GET /api/backups/download`:下载本地 SQLite 备份
+  - `POST /api/backups/restore`:恢复本地 SQLite 备份
-  - `importTrades()`:上传 Excel 到 `/api/trades/import`
+  - `importTrades()`:上传 Excel 到 `/api/trades/import`
+  - `importTradeRows()`:手动行导入到 `/api/trades/import/rows`
+  - `downloadImportReport()`:导入会话明细下载
+  - `restoreSQLiteBackup()`:上传并恢复 SQLite 备份
```

Also applies to: 52-53, 92-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/ARCHITECTURE.md around lines 18 - 22, Update the ARCHITECTURE.md API routes and boundary sections to include the new import and backup/report endpoints added in this PR: add POST /api/trades/import (upload Excel import), GET /api/trades/import-report/{reportId} (download import report), POST /api/backup (create backup) and GET /api/backup/{backupId}/download (download backup); make the same additions where routes are summarized (the top routes list and the API boundary sections previously at lines ~52-53 and ~92-98) so the documented v1 workflow matches the implemented endpoints.

frontend/src/test/replayWorkspace.test.ts-37-41 (1)
37-41: ⚠️ Potential issue | 🟡 Minor

Add teardown for `vi.stubGlobal` to prevent cross-test global leakage.

The `localStorage` global is stubbed in the `beforeEach` hook but never restored. This can cause the stubbed value to persist across other test suites and contaminate their tests.

Suggested fix
```diff
 describe('replayWorkspace transport', () => {
   beforeEach(() => {
     localStorageStore.clear();
     vi.stubGlobal('localStorage', localStorageMock as Storage);
   });
+
+  afterEach(() => {
+    vi.unstubAllGlobals();
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/test/replayWorkspace.test.ts` around lines 37 - 41, The test stubs the global localStorage in the beforeEach (vi.stubGlobal('localStorage', localStorageMock)) but never restores it; add an afterEach that calls vi.unstubAllGlobals() or vi.unstubGlobal('localStorage') to restore the original global and prevent leakage (also mirror clearing localStorageStore in afterEach if desired); update the test file's beforeEach/afterEach block around the describe('replayWorkspace transport') to call vi.unstubAllGlobals() (or vi.unstubGlobal('localStorage')) to tear down the stubbed global.

backend/tests/test_backup_api.py-66-67 (1)
66-67: ⚠️ Potential issue | 🟡 Minor

Assert backup download succeeded before using payload bytes.
The restore test should validate the download precondition first, otherwise failures are harder to diagnose.
🔧 Suggested fix
```diff
 backup_response = client.get("/api/backups/download")
+assert backup_response.status_code == 200
 backup_bytes = backup_response.content
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/tests/test_backup_api.py` around lines 66 - 67, The test reads backup_response.content into backup_bytes without checking the response succeeded; update the test around the client.get("/api/backups/download") call to assert the download succeeded first (e.g., assert backup_response.status_code == 200 or backup_response.ok) before using backup_bytes, referencing backup_response and backup_bytes so failures are diagnosed early.

frontend/src/test/tradeListSelection.test.tsx-114-119 (1)
114-119: ⚠️ Potential issue | 🟡 Minor

Avoid asserting against the last `setItem` call.

Using `mock.calls.at(-1)` is brittle; any additional write can break this test without behavior regressions.

🔧 Suggested fix
```diff
-expect(deserializeReplayWorkspace(localStorageMock.setItem.mock.calls.at(-1)?.[1] ?? '')).toMatchObject({
+const workspaceWrite = [...localStorageMock.setItem.mock.calls]
+  .reverse()
+  .find(([key]) => key === REPLAY_WORKSPACE_STORAGE_KEY);
+expect(deserializeReplayWorkspace(workspaceWrite?.[1] ?? '')).toMatchObject({
   version: 1,
   symbol: 'ETH-USDT',
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/test/tradeListSelection.test.tsx` around lines 114 - 119, The test is brittle because it inspects mock.calls.at(-1); instead, locate the specific setItem call for REPLAY_WORKSPACE_STORAGE_KEY and deserialize that argument for assertions. Change the test to find the call in localStorageMock.setItem.mock.calls where call[0] === REPLAY_WORKSPACE_STORAGE_KEY (or use .find), extract call[1], pass it to deserializeReplayWorkspace, and assert the resulting object contains version: 1 and symbol: 'ETH-USDT'; keep the existing expect(localStorageMock.setItem).toHaveBeenCalledWith(REPLAY_WORKSPACE_STORAGE_KEY, expect.any(String)) check as well.

.planning/codebase/STACK.md-17-20 (1)
17-20: ⚠️ Potential issue | 🟡 Minor

Storage table inventory is incomplete after Phase 1 import-report persistence.

This section still implies only `klines` and `trades` are persisted, which can mislead contributors working on import report/download flows.

Proposed doc update
```diff
-  - 表:`klines`、`trades`;启动时自动 `Base.metadata.create_all`(`backend/models.py`, `backend/main.py`, `backend/import_data.py`)
+  - 表:`klines`、`trades`,以及导入报告相关表(导入会话/逐行结果);启动时自动 `Base.metadata.create_all` 并执行迁移(`backend/models.py`, `backend/main.py`, `backend/import_data.py`, `backend/migrations/*`)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/STACK.md around lines 17 - 20, The storage inventory in STACK.md is outdated and omits tables created for Phase 1 import-report persistence; update the "存储" section to list all additional tables and indexes introduced by the import-report feature (e.g., the import-report persistence tables and any related indexes), and cite the code locations that define them (reference backend/models.py where the ORM models are declared, backend/import_data.py where import-report persistence is implemented, and backend/main.py/backend/database.py where schema creation/initialization occurs); ensure the entry matches actual model names and unique/index definitions used in the codebase so contributors see the complete persisted-table inventory.

frontend/src/components/import/LocalSafekeepingPanel.tsx-24-27 (1)
24-27: ⚠️ Potential issue | 🟡 Minor

Reset the native file input after successful restore.

On Line 26 you clear React state, but the `<input type="file">` value on Lines 91-98 is not reset. Re-selecting the same file can fail to fire `onChange`, blocking immediate retry with the same backup file.

Proposed patch
```diff
 import { useCallback, useState } from 'react';
+import { useRef } from 'react';
@@
 export default function LocalSafekeepingPanel({ importReport }: LocalSafekeepingPanelProps) {
+  const fileInputRef = useRef<HTMLInputElement | null>(null);
   const [backupFile, setBackupFile] = useState<File | null>(null);
@@
     try {
       await restoreSQLiteBackup(backupFile);
       setBackupFile(null);
+      if (fileInputRef.current) {
+        fileInputRef.current.value = '';
+      }
       setStatusMessage('SQLite 备份已恢复。');
@@
       <input
+        ref={fileInputRef}
         id="sqlite-backup-file"
         aria-label="选择 SQLite 备份文件"
         className="file-input file-input-bordered w-full"
```

Also applies to: 91-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/import/LocalSafekeepingPanel.tsx` around lines 24 - 27, After a successful restore (inside the try that calls restoreSQLiteBackup and then setBackupFile(null)/setStatusMessage), also clear the native file input so re-selecting the same file will fire onChange; add a ref to the <input type="file"> used around the block referenced (lines ~91-98) (e.g., fileInputRef) and after setBackupFile(null) set fileInputRef.current.value = '' (or null) to reset the native input; reference restoreSQLiteBackup, setBackupFile, setStatusMessage and the file input element when making the change.

frontend/src/components/import/ImportDropzone.tsx-20-24 (1)
20-24: ⚠️ Potential issue | 🟡 Minor

Apply the same file-type guard on drag/drop as the file picker.

Line 54 restricts picker types, but Lines 20-24 accept any dropped file. Add a lightweight extension/type check before calling `onFileChange` for consistent behavior.

Proposed patch
```diff
 onDrop={(event) => {
   event.preventDefault();
   const droppedFile = event.dataTransfer.files.item(0);
-  onFileChange(droppedFile ?? null);
+  if (!droppedFile) {
+    onFileChange(null);
+    return;
+  }
+  const lower = droppedFile.name.toLowerCase();
+  const isExcel = lower.endsWith('.xlsx') || lower.endsWith('.xls');
+  onFileChange(isExcel ? droppedFile : null);
 }}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/import/ImportDropzone.tsx` around lines 20 - 24, The drag-and-drop handler (onDrop) currently accepts any dropped file and calls onFileChange without validating type; add the same lightweight file-type/extension guard that the file picker uses before calling onFileChange (e.g., check file.type and/or file.name extension against the same accepted list used by the picker, or reuse an existing validator function if present), and only call onFileChange(droppedFile) when the file passes that check (otherwise call onFileChange(null) or ignore the drop and optionally show a validation error).

.planning/research/STACK.md-7-123 (1)
7-123: ⚠️ Potential issue | 🟡 Minor

Minor docs quality pass needed (compound-adjective hyphenation).

Several phrases should be hyphenated for clarity (e.g., “local-first”, “client-side”, “front-end”, “route-level”, “trade-centered”). This is a small but repeated readability issue from Line 7 onward.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/research/STACK.md around lines 7 - 123, The document uses several compound adjectives that should be hyphenated for clarity; search the document (from the "The standard 2026 shape..." section onward) for instances like local first, client side, front end, route level, trade centered and replace them with local-first, client-side, front-end, route-level, trade-centered respectively; ensure consistency for similar compounds (e.g., "server heavy" -> "server-heavy", "small amount" when used adjectivally -> "small-amount" only if used before a noun) and run a quick pass to hyphenate other compound modifiers to match these examples.

.planning/research/STACK.md-93-96 (1)
93-96: ⚠️ Potential issue | 🟡 Minor

Update STACK.md lines 93–96 to reflect the new migration infrastructure.

The backend now has a visible migration layer (`backend/migrations/runner.py` with versioned migrations), which runs before `Base.metadata.create_all()` on startup. While `create_all()` is still present, it is now secondary to the migration runner and follows a hybrid approach (migrations for schema versioning, ORM for idempotent table creation). Lines 93–96 incorrectly state that the backend has "no visible migration layer" and that `create_all()` "blocks safe schema evolution." Both need updating to reflect the current architecture.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/research/STACK.md around lines 93 - 96, Update STACK.md to acknowledge the new migration infrastructure: replace the claim that the backend has "no visible migration layer" and that Base.metadata.create_all() alone "blocks safe schema evolution" with a note that a migration runner exists (backend/migrations/runner.py) and is executed before Base.metadata.create_all() in backend/main.py, explaining the hybrid approach where versioned migrations handle schema evolution and Base.metadata.create_all() is kept as a secondary idempotent table-creation safeguard.

frontend/src/components/ReplayAnalyticsPanel.tsx-228-246 (1)
228-246: ⚠️ Potential issue | 🟡 Minor

The active symbol can disappear from the distribution table.
Line 228 slices to six rows before the highlight check. If the current replay symbol is outside that top six, the panel drops it entirely, so the “focus symbol” context vanishes instead of being highlighted.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ReplayAnalyticsPanel.tsx` around lines 228 - 246, The table currently renders symbolStats.slice(0, 6) so focusSymbol can be omitted if it's not in the top six; adjust the rendering logic so the focused symbol is always shown: compute a display list (e.g., topSix = symbolStats.slice(0,6)) and if focusSymbol exists and is not in topSix, find its stat in symbolStats (by stat.symbol) and either append it to the render list or replace the last element, then map over that display list; reference symbolStats and focusSymbol in ReplayAnalyticsPanel and keep the existing highlighting logic intact.

frontend/src/test/timeframeCompare.test.tsx-236-243 (1)
236-243: ⚠️ Potential issue | 🟡 Minor

This assertion can pass without a post-click `4h` fetch.

Line 239 matches any earlier `4h` request, and opening compare mode already records one. The test still passes if clicking the main timeframe stops issuing `fetchKlines` entirely. Clear the call log before Line 236, or assert against the newly appended call instead.

🧪 Tighten the regression check
```diff
 await waitFor(() => {
   expect(compareModeToggleSpy.fetchKlinesCalls).toEqual(
     expect.arrayContaining([
       expect.objectContaining({ symbol: 'SOL-USDT', timeframe: '1h' }),
     ]),
   );
 });
+compareModeToggleSpy.fetchKlinesCalls.length = 0;
 await user.click(screen.getByRole('button', { name: '4h' }));
 expect(screen.getByRole('button', { name: '对比周期 1h' })).toHaveAttribute('aria-pressed', 'true');
-expect(compareModeToggleSpy.fetchKlinesCalls).toEqual(
-  expect.arrayContaining([
-    expect.objectContaining({ symbol: 'SOL-USDT', timeframe: '4h' }),
-  ]),
-);
+expect(compareModeToggleSpy.fetchKlinesCalls.at(-1)).toEqual(
+  expect.objectContaining({ symbol: 'SOL-USDT', timeframe: '4h' }),
+);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/test/timeframeCompare.test.tsx` around lines 236 - 243, The test currently asserts that a '4h' fetch occurred but may match a prior request; before calling await user.click(screen.getByRole('button', { name: '4h' })) clear or reset the spy's recorded calls (compareModeToggleSpy.fetchKlinesCalls) or, alternatively, change the assertion to check the newest call (e.g. the last element of compareModeToggleSpy.fetchKlinesCalls) contains { symbol: 'SOL-USDT', timeframe: '4h' } so you guarantee the click triggered the fetch; update the test around the user.click and the expect referencing compareModeToggleSpy.fetchKlinesCalls to use one of these approaches.

frontend/src/components/import/ImportResultsTable.tsx-32-45 (1)
32-45: ⚠️ Potential issue | 🟡 Minor

Expose the active filter state semantically.

The filter is a toggle, but the selected state is only conveyed by color right now. Add `aria-pressed={active}` so assistive tech can announce which filter is active.

♿ Small fix
```diff
 function FilterButton({
   active,
   children,
   onClick,
 }: {
   active: boolean;
   children: ReactNode;
   onClick: () => void;
 }) {
   return (
-    <button className={`btn btn-sm ${active ? 'btn-primary' : 'btn-ghost'}`} onClick={onClick} type="button">
+    <button
+      aria-pressed={active}
+      className={`btn btn-sm ${active ? 'btn-primary' : 'btn-ghost'}`}
+      onClick={onClick}
+      type="button"
+    >
       {children}
     </button>
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/import/ImportResultsTable.tsx` around lines 32 - 45, The FilterButton component currently only conveys selection visually; update the button element in function FilterButton to include the ARIA toggle state by adding aria-pressed={active} to the <button> props so assistive tech can detect the active state (use the existing active prop passed into FilterButton).

frontend/src/utils/replayPlayback.ts-19-24 (1)
19-24: ⚠️ Potential issue | 🟡 Minor

`tick` currently advertises backward playback but never stops at `minTime`.

Because `direction` is shared by both `'step'` and `'tick'`, a backward tick will clamp at `minTime` and keep `isPlaying: true` forever. Either make `'tick'` forward-only in the action type or stop playback when the active boundary is reached.

💡 Suggested change
```diff
 const signedStep = action.direction === 'backward' ? -action.stepMs : action.stepMs;
 const nextCursor = clampTime(state.cursorTime + signedStep, action.minTime, action.maxTime);
-const reachedEnd = nextCursor >= action.maxTime;
+const reachedBoundary =
+  action.direction === 'backward'
+    ? nextCursor <= action.minTime
+    : nextCursor >= action.maxTime;
 return {
   ...state,
   cursorTime: nextCursor,
-  isPlaying: action.type === 'tick' ? !reachedEnd : false,
+  isPlaying: action.type === 'tick' ? !reachedBoundary : false,
 };
```

Also applies to: 81-93
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/utils/replayPlayback.ts` around lines 19 - 24, The 'tick' action in ReplayPlaybackAction incorrectly allows backward direction and never stops at minTime, causing isPlaying to stay true; either restrict the action type to make 'tick' forward-only (remove direction from the 'tick' variant) or update the playback reducer/handler that processes 'tick' (and 'step') to detect when the computed time is clamped to minTime or maxTime and set isPlaying = false when a boundary is reached; update the ReplayPlaybackAction definition (remove direction from the 'tick' union member) or add boundary-check logic in the reducer that handles 'tick'/'step' to stop playback when clamp(Time, minTime, maxTime) equals the boundary, and mirror this change for the similar code region referenced (lines ~81-93).

frontend/src/utils/replayPlayback.ts (1)
26-30: ⚠️ Potential issue | 🟡 Minor

Clamp restored progress before hydrating state.
A saved record can still be stale even when it is well-formed. If the trade window changes after a local re-import or restore, this restores an out-of-range cursor until the next step/tick clamps it. Clamp or discard saved progress against the current bounds before returning state.
💡 Suggested change
```diff
 interface CreateInitialReplayPlaybackStateInput {
   tradeId: number;
   entryTime: number;
+  minTime: number;
+  maxTime: number;
   savedProgress?: ReplayProgressRecord | null;
 }

 export function createInitialReplayPlaybackState({
   tradeId,
   entryTime,
+  minTime,
+  maxTime,
   savedProgress,
 }: CreateInitialReplayPlaybackStateInput): ReplayPlaybackState {
   if (savedProgress && savedProgress.tradeId === tradeId) {
     return {
-      cursorTime: savedProgress.cursorTime,
+      cursorTime: clampTime(savedProgress.cursorTime, minTime, maxTime),
       isPlaying: false,
       speed: savedProgress.speed,
     };
   }
```

Based on learnings, the app is local-first and import-driven, so persisted replay state needs to tolerate locally changed trade data.
Also applies to: 44-55
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/utils/replayPlayback.ts` around lines 26 - 30, The saved ReplayProgressRecord (savedProgress in CreateInitialReplayPlaybackStateInput) must be validated and clamped against the current trade bounds before hydrating state; update the state-initialization logic that consumes savedProgress (the function that builds the initial playback state) to check savedProgress fields (e.g., cursor/index/time/percent) and: if out of range, clamp them into [min, max] based on the current trade's step/count/time bounds (or discard/nullify savedProgress if completely invalid), then use the clamped record when returning the initial state; apply the same validation/clamping logic to the other restoration path referenced around lines 44-55.
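The clamp-and-stop invariant both replay playback findings circle around can be sketched outside the frontend. A minimal Python analogue (hypothetical names, not the actual `replayPlayback.ts` API) returning the clamped cursor plus whether playback should continue:

```python
def advance_cursor(
    cursor: int,
    step_ms: int,
    direction: str,
    min_time: int,
    max_time: int,
) -> tuple[int, bool]:
    """Advance a replay cursor one tick and decide whether playback continues.

    Returns (next_cursor, keep_playing). Playback stops when the cursor is
    clamped at the boundary matching the playback direction.
    """
    signed = -step_ms if direction == "backward" else step_ms
    next_cursor = max(min_time, min(cursor + signed, max_time))
    reached_boundary = (
        next_cursor <= min_time if direction == "backward" else next_cursor >= max_time
    )
    return next_cursor, not reached_boundary
```

The key detail is that the boundary check follows the direction, so a backward tick halts at `min_time` instead of spinning forever.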
🧹 Nitpick comments (8)
backend/migrations/versions/001_phase1_import_reliability.sql (1)
36-76: Harden import-report invariants with DB-level checks.

Consider adding `CHECK` constraints for non-negative counters (`total_rows`, `success_count`, etc.) and a positive `row_number` to prevent invalid persisted state if application-side validation is bypassed.

Suggested migration hardening
```diff
 CREATE TABLE IF NOT EXISTS import_sessions (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     source_filename TEXT NOT NULL,
     status TEXT NOT NULL,
-    total_rows INTEGER NOT NULL DEFAULT 0,
-    success_count INTEGER NOT NULL DEFAULT 0,
-    failed_count INTEGER NOT NULL DEFAULT 0,
-    duplicate_count INTEGER NOT NULL DEFAULT 0,
-    conflict_count INTEGER NOT NULL DEFAULT 0,
-    timestamp_normalization_count INTEGER NOT NULL DEFAULT 0,
+    total_rows INTEGER NOT NULL DEFAULT 0 CHECK (total_rows >= 0),
+    success_count INTEGER NOT NULL DEFAULT 0 CHECK (success_count >= 0),
+    failed_count INTEGER NOT NULL DEFAULT 0 CHECK (failed_count >= 0),
+    duplicate_count INTEGER NOT NULL DEFAULT 0 CHECK (duplicate_count >= 0),
+    conflict_count INTEGER NOT NULL DEFAULT 0 CHECK (conflict_count >= 0),
+    timestamp_normalization_count INTEGER NOT NULL DEFAULT 0 CHECK (timestamp_normalization_count >= 0),
     file_rejection_reason TEXT,
     file_rejection_message TEXT,
     download_filename TEXT,
     download_mime_type TEXT NOT NULL DEFAULT 'text/csv',
     created_at INTEGER NOT NULL
 );
@@
 CREATE TABLE IF NOT EXISTS import_report_rows (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     session_id INTEGER NOT NULL,
-    row_number INTEGER NOT NULL,
+    row_number INTEGER NOT NULL CHECK (row_number > 0),
     outcome TEXT NOT NULL,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/migrations/versions/001_phase1_import_reliability.sql` around lines 36 - 76, Update the CREATE TABLE definitions to enforce DB-level invariants: add CHECK constraints on import_sessions (columns total_rows, success_count, failed_count, duplicate_count, conflict_count, timestamp_normalization_count) to require each >= 0, and on import_report_rows to require row_number > 0; if this migration must apply to existing DBs, add corresponding ALTER TABLE ... ADD CHECK statements to guard existing data migration. Target the import_sessions and import_report_rows table definitions and the specific column names (total_rows, success_count, failed_count, duplicate_count, conflict_count, timestamp_normalization_count, row_number) when making the changes.

.planning/codebase/STRUCTURE.md (1)
43-47: Consider removing exact line-number references from architecture docs.

Exact ranges are likely to drift after normal edits; file/symbol-level references are usually easier to keep accurate.
Also applies to: 87-109
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/codebase/STRUCTURE.md around lines 43 - 47, The doc lists specific line ranges that will drift; remove the exact line-number references and replace them with file- or symbol-level references instead, e.g., change entries like `backend/main.py:28-129` to `backend/main.py` (or `backend.main`), `backend/database.py:4-16` to `backend/database.py` (or reference `create_engine`, `Session`, `Base`), `backend/models.py:5-38` to `backend/models.py` (or mention `Kline`, `Trade` models), `backend/import_data.py:12-36` to `backend/import_data.py`, and keep `backend/services/` as-is; apply the same removal of numeric ranges for the other occurrence noted (lines 87-109) so the architecture doc references files/symbols only.

frontend/src/test/replayAnalyticsPanel.test.tsx (1)
47-51: Remove the `as never` cast in the test helper and use `Parameters` to maintain type safety.

The `Record<string, unknown>` parameter type does not match the `ReplayWorkspaceRecord` expected by `serializeReplayWorkspace`, requiring an unsafe `as never` cast. This masks potential schema/type regressions. Use `Parameters<typeof serializeReplayWorkspace>[0]` to automatically extract and enforce the correct parameter type:

Suggested refactor
```diff
-function seedWorkspaceRecord(record: Record<string, unknown>) {
+function seedWorkspaceRecord(
+  record: Parameters<typeof serializeReplayWorkspace>[0],
+) {
   window.localStorage.setItem(
     REPLAY_WORKSPACE_STORAGE_KEY,
-    serializeReplayWorkspace(record as never),
+    serializeReplayWorkspace(record),
   );
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/test/replayAnalyticsPanel.test.tsx` around lines 47 - 51, seedWorkspaceRecord currently accepts Record<string, unknown> and uses an unsafe cast to call serializeReplayWorkspace; change the parameter type to the exact type expected by serializeReplayWorkspace by using Parameters<typeof serializeReplayWorkspace>[0], remove the "as never" cast, and keep the call to window.localStorage.setItem(REPLAY_WORKSPACE_STORAGE_KEY, serializeReplayWorkspace(record)) so the helper enforces the correct ReplayWorkspaceRecord shape at compile time.

.planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-04-PLAN.md (1)
35-44: Include explicit component-boundary refactor criteria in this phase task.

Given this phase integrates analytics into replay shell surfaces, add acceptance criteria that `ChartManager.tsx` orchestration remains modular (logic extracted into focused hooks/subcomponents) to prevent further coupling.

As per coding guidelines: `frontend/**/*.tsx`: "Refactor larger, imperative components like `ChartManager.tsx` and `AnalysisPage.tsx` into smaller, modular components".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-04-PLAN.md around lines 35 - 44, Update the Phase 3 task to include explicit component-boundary refactor acceptance criteria: state that ChartManager.tsx orchestration must be extracted into focused hooks and subcomponents (e.g., useChartData, useSeriesSync, ChartControls) and that ReplayAnalyticsPanel.tsx should consume those hooks rather than import imperative logic; require tests/smoke coverage to assert modularity by verifying ChartManager no longer contains large imperative blocks and that small focused components/hooks (named above or similar) exist and are used. Ensure the updated task text and verify/action sections mention these modularization checkpoints and reference ChartManager.tsx and ReplayAnalyticsPanel.tsx so reviewers can validate the refactor.

.planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-03-PLAN.md (1)
29-38: Add an explicit modularization checkpoint for `ChartManager.tsx`.

Since this plan edits `ChartManager.tsx`, include a concrete extraction step (e.g., a compare-mode state/controller hook plus a pane presenter component) so the phase doesn't further grow an already imperative surface.

As per coding guidelines: `frontend/**/*.tsx`: "Refactor larger, imperative components like `ChartManager.tsx` and `AnalysisPage.tsx` into smaller, modular components".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-03-PLAN.md around lines 29 - 38, Extract compare-mode responsibilities out of ChartManager.tsx into a dedicated state/controller hook (e.g., useCompareModeController) and a presentational component (e.g., ComparePanePresenter) so compare-mode logic is modular: move all compare-mode state, mode-switch handlers, and symbol-specific sync logic into the hook, have ChartManager consume the hook and delegate UI to ComparePanePresenter, and ensure CompareModeControls.tsx uses the same hook or props to preserve existing multi-symbol/timeframe synchronization and the expected defaults; update frontend/src/test/symbolCompare.test.tsx to target the new hook/component boundaries to drive the failing tests before refactor.

frontend/src/test/replayLayoutPersistence.test.tsx (1)
160-164: Replace `as never` with a proper type assertion for strict TypeScript compliance.

Line 163 uses `as never` to bypass type checking, which defeats strict typing and can hide invalid workspace seed shapes. Instead, assert to the expected `ReplayWorkspaceRecord` type: `record as ReplayWorkspaceRecord`. This maintains the strict TypeScript settings required per coding guidelines.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/test/replayLayoutPersistence.test.tsx` around lines 160 - 164, The test helper seedWorkspaceRecord is casting the input with `as never` before calling serializeReplayWorkspace; replace that unsafe cast with the correct domain type by asserting the input as ReplayWorkspaceRecord (i.e., use `record as ReplayWorkspaceRecord`) so serializeReplayWorkspace receives the proper type; keep references to seedWorkspaceRecord, REPLAY_WORKSPACE_STORAGE_KEY, and serializeReplayWorkspace to locate the change.

frontend/src/pages/ReplayPage.tsx (1)
23-72: Move the seed parsing helpers out of the page.

`isRecord`, `isReplaySeed`, `resolveInitialSeed`, and `seedToTrade` are pure replay/session logic now embedded in the route component. Extracting them to `src/utils/replaySession.ts` would keep this page focused on wiring and make the bootstrap logic easier to test directly.

Based on learnings: Maintain strong separation between UI, API, and pure logic helpers across the codebase.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/pages/ReplayPage.tsx` around lines 23 - 72, Extract the pure helpers isRecord, isReplaySeed, resolveInitialSeed, and seedToTrade into a new module (e.g., src/utils/replaySession.ts): move their implementations there, export them, and update ReplayPage.tsx to import these functions instead of defining them inline. Ensure the new module imports any referenced types/constants (ReplaySessionSeed, REPLAY_SESSION_STORAGE_KEY, deserializeReplaySession, Trade) and preserves existing behavior (including handling window/localStorage). Run the project typescript build to fix any missing imports and update any other consumers to use the exported symbols.

frontend/src/components/ChartManager.tsx (1)
125-1793: Split `ChartManager` before it grows further.

This component now couples chart bootstrap/teardown, replay state, localStorage hydration, kline loading, compare sync, modal state, and render markup. Pulling those concerns into hooks/components will make bugs like the async teardown race much easier to contain.

As per coding guidelines, `frontend/**/*.tsx`: Refactor larger, imperative components like `ChartManager.tsx` and `AnalysisPage.tsx` into smaller, modular components.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ChartManager.tsx` around lines 125 - 1793, ChartManager has grown too large and mixes bootstrapping/teardown, kline loading, playback/localStorage hydration, compare sync, and UI; split responsibilities into focused hooks/components. Extract chart lifecycle and DOM refs into a useChart (wrap createChart, chartRef, seriesRef, volumeSeriesRef, mainAnchorSeriesRef/compareAnchorSeriesRef, applyTimeFiller, subscribe/unsubscribe crosshair and resize, and chart removal), move data fetching logic into a useKlineLoader hook (containing loadMainData, loadCompareData, latestMainRequestIdRef/latestCompareRequestIdRef, mainCandlesByTimeRef/compareCandlesByTimeRef and ensureTimeFiller interaction), encapsulate replay state and persistence into useReplayPlayback (playbackState, dispatchPlayback, hydration via buildReplayProgressStorageKey/serializeReplayProgress/deserializeReplayProgress), and pull UI controls into a smaller ChartControls/CompareModal component (handling modal open/close, symbolOptions, search state); keep shared crosshair/compare sync helpers (scheduleTargetCrosshairSync, updateSharedCrosshair, syncingRangeRef) but call them from the new hooks; wire the hooks back into ChartManager to preserve behavior and ensure each extracted hook/component cleans up its own async/rAF/observer to avoid teardown races.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d385cb71-e4bf-41eb-a758-f56ebb74f333
⛔ Files ignored due to path filters (5)
- `backend/uv.lock` is excluded by `!**/*.lock`
- `docs/images/analysis.png` is excluded by `!**/*.png`
- `docs/images/import.png` is excluded by `!**/*.png`
- `docs/images/replay.png` is excluded by `!**/*.png`
- `frontend/pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (113)
- .planning/MILESTONE-AUDIT-v1.0.md
- .planning/PROJECT.md
- .planning/REQUIREMENTS.md
- .planning/ROADMAP.md
- .planning/STATE.md
- .planning/codebase/ARCHITECTURE.md
- .planning/codebase/CONCERNS.md
- .planning/codebase/CONVENTIONS.md
- .planning/codebase/INTEGRATIONS.md
- .planning/codebase/STACK.md
- .planning/codebase/STRUCTURE.md
- .planning/codebase/TESTING.md
- .planning/config.json
- .planning/phases/01-import-reliability-schema-evolution/01-01-SUMMARY.md
- .planning/phases/01-import-reliability-schema-evolution/01-02-SUMMARY.md
- .planning/phases/01-import-reliability-schema-evolution/01-03-SUMMARY.md
- .planning/phases/01-import-reliability-schema-evolution/01-04-SUMMARY.md
- .planning/phases/01-import-reliability-schema-evolution/01-05-SUMMARY.md
- .planning/phases/02-single-trade-replay-core/02-01-PLAN.md
- .planning/phases/02-single-trade-replay-core/02-01-SUMMARY.md
- .planning/phases/02-single-trade-replay-core/02-02-PLAN.md
- .planning/phases/02-single-trade-replay-core/02-02-SUMMARY.md
- .planning/phases/02-single-trade-replay-core/02-03-PLAN.md
- .planning/phases/02-single-trade-replay-core/02-03-SUMMARY.md
- .planning/phases/02-single-trade-replay-core/02-04-PLAN.md
- .planning/phases/02-single-trade-replay-core/02-04-SUMMARY.md
- .planning/phases/02-single-trade-replay-core/02-CONTEXT.md
- .planning/phases/02-single-trade-replay-core/02-DISCUSSION-LOG.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-01-PLAN.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-01-SUMMARY.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-02-PLAN.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-02-SUMMARY.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-03-PLAN.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-03-SUMMARY.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-04-PLAN.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-04-SUMMARY.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-CONTEXT.md
- .planning/phases/03-multi-timeframe-compare-lightweight-analytics/03-DISCUSSION-LOG.md
- .planning/phases/04-local-ux-polish-performance-hardening/04-01-PLAN.md
- .planning/phases/04-local-ux-polish-performance-hardening/04-01-SUMMARY.md
- .planning/phases/04-local-ux-polish-performance-hardening/04-02-PLAN.md
- .planning/phases/04-local-ux-polish-performance-hardening/04-02-SUMMARY.md
- .planning/phases/04-local-ux-polish-performance-hardening/04-03-PLAN.md
- .planning/phases/04-local-ux-polish-performance-hardening/04-03-SUMMARY.md
- .planning/phases/04-local-ux-polish-performance-hardening/04-CONTEXT.md
- .planning/phases/04-local-ux-polish-performance-hardening/04-DISCUSSION-LOG.md
- .planning/research/ARCHITECTURE.md
- .planning/research/FEATURES.md
- .planning/research/PITFALLS.md
- .planning/research/STACK.md
- .planning/research/SUMMARY.md
- AGENTS.md
- README.md
- backend/import_data.py
- backend/main.py
- backend/migrations/__init__.py
- backend/migrations/runner.py
- backend/migrations/versions/001_phase1_import_reliability.sql
- backend/models.py
- backend/pyproject.toml
- backend/services/backup_service.py
- backend/services/import_types.py
- backend/services/trade_importer.py
- backend/tests/conftest.py
- backend/tests/test_backup_api.py
- backend/tests/test_import_api.py
- backend/tests/test_import_contracts.py
- backend/tests/test_migration_runner.py
- backend/tests/test_startup_migrations.py
- backend/tests/test_trade_importer.py
- frontend/package.json
- frontend/src/App.tsx
- frontend/src/components/ChartManager.tsx
- frontend/src/components/CompareModeControls.tsx
- frontend/src/components/Navbar.tsx
- frontend/src/components/ReplayAnalyticsPanel.tsx
- frontend/src/components/ReplayControls.tsx
- frontend/src/components/TradeList.tsx
- frontend/src/components/import/ImportContractNote.tsx
- frontend/src/components/import/ImportDropzone.tsx
- frontend/src/components/import/ImportResultsTable.tsx
- frontend/src/components/import/ImportSummary.tsx
- frontend/src/components/import/LocalSafekeepingPanel.tsx
- frontend/src/components/import/ManualImportPanel.tsx
- frontend/src/pages/ImportPage.tsx
- frontend/src/pages/ReplayPage.tsx
- frontend/src/services/api.ts
- frontend/src/services/importReportAdapter.ts
- frontend/src/services/importTypes.ts
- frontend/src/test/ImportPage.test.tsx
- frontend/src/test/comparePane.test.ts
- frontend/src/test/import-api.contract.test.ts
- frontend/src/test/import-page-manual-entry.test.tsx
- frontend/src/test/import-route-smoke.test.tsx
- frontend/src/test/localSafekeepingPanel.test.tsx
- frontend/src/test/phase3ReplayIntegration.smoke.test.tsx
- frontend/src/test/replayAnalyticsPanel.test.tsx
- frontend/src/test/replayControls.test.tsx
- frontend/src/test/replayFlow.smoke.test.tsx
- frontend/src/test/replayLayoutPersistence.test.tsx
- frontend/src/test/replayPage.test.tsx
- frontend/src/test/replayPlayback.test.ts
- frontend/src/test/replaySession.test.ts
- frontend/src/test/replayWorkspace.test.ts
- frontend/src/test/setup.ts
- frontend/src/test/symbolCompare.test.tsx
- frontend/src/test/timeframeCompare.test.tsx
- frontend/src/test/tradeListSelection.test.tsx
- frontend/src/utils/comparePane.ts
- frontend/src/utils/replayPlayback.ts
- frontend/src/utils/replaySession.ts
- frontend/src/utils/replayWorkspace.ts
- frontend/vitest.config.ts
```python
MigrationRunner(engine=engine).run()
Base.metadata.create_all(bind=engine)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify where migration-managed flows still call create_all
rg -n --type py -C2 'MigrationRunner\(|Base\.metadata\.create_all\(' backend

Repository: Xeron2000/retraq
Length of output: 2996
🏁 Script executed:
#!/bin/bash
# Find and inspect MigrationRunner implementation
fd -t py 'migration' backend --exec grep -l 'class MigrationRunner' {} \;

Repository: Xeron2000/retraq
Length of output: 229
🏁 Script executed:
#!/bin/bash
# Get MigrationRunner class definition
rg -n 'class MigrationRunner' backend -A 20

Repository: Xeron2000/retraq
Length of output: 1480
🏁 Script executed:
#!/bin/bash
# Check main.py context to understand the pattern
head -n 30 backend/main.py | cat -n

Repository: Xeron2000/retraq
Length of output: 1294
🏁 Script executed:
#!/bin/bash
# Find DEFAULT_MIGRATIONS definition
rg -n 'DEFAULT_MIGRATIONS' backend

Repository: Xeron2000/retraq
Length of output: 235
🏁 Script executed:
#!/bin/bash
# Get migration step implementation details
rg -n 'class MigrationStep' backend -A 10

Repository: Xeron2000/retraq
Length of output: 665
🏁 Script executed:
#!/bin/bash
# Check what migrations are defined
rg -n 'MigrationStep\(' backend/migrations/runner.py -A 2 | head -40

Repository: Xeron2000/retraq
Length of output: 154
🏁 Script executed:
#!/bin/bash
# Get full bootstrap_schema implementation
rg -n 'def bootstrap_schema' backend/migrations/runner.py -A 100 | head -120

Repository: Xeron2000/retraq
Length of output: 3506
🏁 Script executed:
#!/bin/bash
# List all models to understand full schema requirements
cat -n backend/models.py

Repository: Xeron2000/retraq
Length of output: 4900
Mixing migration-versioned and ORM-metadata schema definitions enables silent schema drift.
The bootstrap migration (001) creates only klines and trades, but Base.metadata.create_all() auto-creates import_sessions and import_report_rows without migration versioning. This hybrid approach means new ORM models bypass the migration system entirely—adding a model to models.py silently creates its table without a recorded migration step, breaking schema governance.
Move all table definitions to explicit migration steps, or remove create_all() and define all tables through migrations exclusively.
♻️ Proposed fix
Add migrations for import_sessions and import_report_rows, then remove create_all():
```diff
 def main():
     MigrationRunner(engine=engine).run()
-    Base.metadata.create_all(bind=engine)
     db = SessionLocal()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/import_data.py` around lines 16 - 17, The code calls
MigrationRunner(engine).run() but then calls
Base.metadata.create_all(bind=engine), which silently creates tables (notably
import_sessions and import_report_rows) outside the migration system; remove the
Base.metadata.create_all(bind=engine) call and instead add explicit Alembic/DB
migration scripts that create the import_sessions and import_report_rows tables
(and any other models added in models.py) so all schema changes are recorded
under the migration runner (referencing MigrationRunner,
Base.metadata.create_all, import_sessions, import_report_rows, and models.py).
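As a sketch of the "all tables through migrations" direction, a minimal versioned runner for SQLite might look like this (hypothetical step list and `schema_version` shape; Retraq's actual `MigrationRunner` may differ):

```python
import sqlite3

# Hypothetical ordered step list: (version id, DDL). The real project keeps
# these in backend/migrations/versions/ rather than inline.
MIGRATIONS: list[tuple[str, str]] = [
    ("001_bootstrap", "CREATE TABLE IF NOT EXISTS trades (id INTEGER PRIMARY KEY)"),
    (
        "002_import_reporting",
        "CREATE TABLE IF NOT EXISTS import_sessions "
        "(id INTEGER PRIMARY KEY, status TEXT NOT NULL)",
    ),
]


def run_migrations(conn: sqlite3.Connection) -> list[str]:
    """Apply pending steps in order, recording each in schema_version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}
    newly_applied: list[str] = []
    for version, ddl in MIGRATIONS:
        if version in applied:
            continue  # already recorded; skip so the runner stays idempotent
        conn.execute(ddl)
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
        newly_applied.append(version)
    conn.commit()
    return newly_applied
```

Because every table creation flows through the step list, adding a model without a matching step leaves the database visibly missing its table instead of silently materializing it via `create_all()`.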
```python
with tempfile.NamedTemporaryFile(
    delete=False, suffix=Path(file.filename).suffix or ".sqlite3"
) as tmp:
    tmp_path = Path(tmp.name)
    _ = tmp.write(await file.read())
```
Stream the backup upload to disk instead of buffering it.
await file.read() materializes the full SQLite backup in memory before restore begins. Large backups can spike RAM or fail this path outright.
📦 Chunked write example
```diff
 try:
     with tempfile.NamedTemporaryFile(
         delete=False, suffix=Path(file.filename).suffix or ".sqlite3"
     ) as tmp:
         tmp_path = Path(tmp.name)
-        _ = tmp.write(await file.read())
+        while chunk := await file.read(1024 * 1024):
+            tmp.write(chunk)
     restore_sqlite_backup(engine, tmp_path)
     return {"status": "restored"}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/main.py` around lines 126 - 130, The current restore path buffers the
entire upload with await file.read(), causing high memory use; change the logic
in the block that creates the temporary file (the with
tempfile.NamedTemporaryFile(...) as tmp: / tmp_path / tmp.write usage) to stream
the upload to disk by reading the UploadFile in chunks (e.g., use
file.read(chunk_size) in a loop or an async chunk iterator) and writing each
chunk to tmp.write until EOF, keeping delete=False and suffix handling the same;
ensure tmp.flush() / tmp.close() semantics are preserved so subsequent code
using tmp_path sees the fully written file.
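A synchronous sketch of the chunked-copy idea (the FastAPI path would use `await file.read(chunk_size)` in the same loop shape; names here are illustrative):

```python
from typing import BinaryIO


def copy_in_chunks(src: BinaryIO, dst: BinaryIO, chunk_size: int = 1024 * 1024) -> int:
    """Copy src to dst one chunk at a time; returns total bytes written.

    Peak memory stays at chunk_size regardless of upload size, unlike a
    single read() that buffers the whole file.
    """
    total = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # empty bytes => EOF
            break
        dst.write(chunk)
        total += len(chunk)
    return total
```

The same loop works for any file-like pair, which makes it easy to unit-test with in-memory buffers before wiring it into the upload endpoint.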
```python
class ImportSession(Base):
    __tablename__ = "import_sessions"

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    source_filename: Mapped[str] = mapped_column(String(255), nullable=False)
    status: Mapped[str] = mapped_column(String(32), nullable=False)
    total_rows: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    success_count: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    failed_count: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    duplicate_count: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    conflict_count: Mapped[int] = mapped_column(Integer, nullable=False, default=0)
    timestamp_normalization_count: Mapped[int] = mapped_column(
        Integer, nullable=False, default=0
    )
    file_rejection_reason: Mapped[str | None] = mapped_column(String(255))
    file_rejection_message: Mapped[str | None] = mapped_column(Text)
    download_filename: Mapped[str | None] = mapped_column(String(255))
    download_mime_type: Mapped[str] = mapped_column(
        String(64), nullable=False, default="text/csv"
    )
    created_at: Mapped[int] = mapped_column(
        BigInteger, nullable=False, default=lambda: int(time.time() * 1000)
    )

    __table_args__ = (Index("ix_import_sessions_status", "status"),)


class ImportReportRow(Base):
    __tablename__ = "import_report_rows"

    id: Mapped[int] = mapped_column(Integer, primary_key=True, autoincrement=True)
    session_id: Mapped[int] = mapped_column(
        ForeignKey("import_sessions.id"), nullable=False
    )
    row_number: Mapped[int] = mapped_column(Integer, nullable=False)
    outcome: Mapped[str] = mapped_column(String(32), nullable=False)
    field: Mapped[str] = mapped_column(String(64), nullable=False)
    raw_value: Mapped[str | None] = mapped_column(Text)
    normalized_value: Mapped[str | None] = mapped_column(Text)
    reason: Mapped[str] = mapped_column(Text, nullable=False)
    business_key: Mapped[str] = mapped_column(Text, nullable=False)

    __table_args__ = (
        Index("ix_import_report_rows_session_id", "session_id"),
        Index("ix_import_report_rows_session_outcome", "session_id", "outcome"),
        Index("ix_import_report_rows_business_key", "business_key"),
    )
```
Add a versioned migration for these tables in the same change.
ImportSession and ImportReportRow only exist in ORM metadata here. The migration runner still bootstraps the legacy tables, so startup has to rely on Base.metadata.create_all() to materialize the import schema outside schema_version, which leaves the recorded migration history out of sync with the real database shape.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/models.py` around lines 53 - 99, The ImportSession and
ImportReportRow models are only added to ORM metadata and aren't captured by the
versioned migrations, causing Base.metadata.create_all() to be used at runtime
and leaving schema_version out of sync; create a new versioned migration that
explicitly creates the import_sessions and import_report_rows tables (including
columns, indexes ix_import_sessions_status, ix_import_report_rows_session_id,
ix_import_report_rows_session_outcome, ix_import_report_rows_business_key, and
defaults like download_mime_type and created_at) so the migration runner records
these tables in schema_version and remove any runtime fallback that calls
Base.metadata.create_all() for these models.
```toml
    "pytest>=8.3.0",
    "httpx>=0.28.1",
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# 1) Verify whether pytest/httpx are used outside tests
rg -n --type=py '^\s*(from|import)\s+(pytest|httpx)\b' backend -g '!backend/tests/**' -g '!**/.venv/**'
# 2) Inspect dependency sections in pyproject
rg -n '\[project\]|\[project\.optional-dependencies\]|pytest|httpx' backend/pyproject.toml

Repository: Xeron2000/retraq
Length of output: 42
🏁 Script executed:
cat -n backend/pyproject.toml

Repository: Xeron2000/retraq
Length of output: 719
🏁 Script executed:
# Search for all pytest/httpx usage in the entire backend directory
rg -n 'pytest|httpx' backend --type=py -A 2

Repository: Xeron2000/retraq
Length of output: 363
🏁 Script executed:
# Check specifically for imports in non-test files
find backend -name "*.py" -not -path "*/tests/*" -not -path "*/.venv/*" | head -20

Repository: Xeron2000/retraq
Length of output: 430
🏁 Script executed:
# Search for all httpx usage in backend, showing context
rg -n 'httpx' backend --type=py -B 1 -A 1

Repository: Xeron2000/retraq
Length of output: 42
🏁 Script executed:
# Check if there are any test files besides tests directory
find backend -name "test_*.py" -o -name "*_test.py" | head -20

Repository: Xeron2000/retraq
Length of output: 281
🏁 Script executed:
# Search for httpx specifically in test files
rg -n 'httpx' backend/tests --type=py -B 1 -A 1

Repository: Xeron2000/retraq
Length of output: 42
🏁 Script executed:
# Check if httpx might be used indirectly (e.g., through fixtures or imports from other modules)
grep -r 'httpx' backend/ --include="*.py" 2>/dev/null || true

Repository: Xeron2000/retraq
Length of output: 42
Move test-only packages out of runtime dependencies and remove unused imports.
pytest (line 15) is test-only and should be moved to an optional-dependencies section. Additionally, httpx (line 16) is not used anywhere in the codebase and should either be removed entirely or moved to optional-dependencies if it's intended for future use.
Proposed dependency layout
dependencies = [
"fastapi>=0.115.0",
"uvicorn>=0.30.0",
"sqlalchemy>=2.0.35",
"ccxt>=4.4.26",
"pandas>=2.2.3",
"openpyxl>=3.1.5",
"python-multipart>=0.0.9",
"pytz>=2024.1",
- "pytest>=8.3.0",
- "httpx>=0.28.1",
]
+
+[project.optional-dependencies]
+dev = [
+ "pytest>=8.3.0",
+ "httpx>=0.28.1",
+]📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```toml
dependencies = [
    "fastapi>=0.115.0",
    "uvicorn>=0.30.0",
    "sqlalchemy>=2.0.35",
    "ccxt>=4.4.26",
    "pandas>=2.2.3",
    "openpyxl>=3.1.5",
    "python-multipart>=0.0.9",
    "pytz>=2024.1",
]

[project.optional-dependencies]
dev = [
    "pytest>=8.3.0",
    "httpx>=0.28.1",
]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/pyproject.toml` around lines 15 - 16, Move the test-only package
"pytest" out of runtime dependencies into your development or optional
dependencies (e.g., dev-dependencies or project.optional-dependencies) so it is
not installed in production; remove "httpx" from the runtime dependency list if
it is unused, or similarly move "httpx" to an optional/dev dependency if you
intend to keep it for testing/experimentation, and also delete any unused
imports of httpx from the codebase (search for "httpx" imports) to keep
dependencies in sync with actual usage.
```python
engine.dispose()
try:
    _ = shutil.copy2(backup_path, sqlite_path)
finally:
```
Make restore write atomic to avoid DB corruption on interruption.
Lines 37-40 copy directly over the live DB file. If the process crashes mid-copy, the SQLite file can be left corrupted. Use temp-file + atomic replace.
Proposed patch

```diff
 from pathlib import Path
+import os
 import shutil
@@
     engine.dispose()
     try:
-        _ = shutil.copy2(backup_path, sqlite_path)
+        temp_restore_path = sqlite_path.with_suffix(f"{sqlite_path.suffix}.restore_tmp")
+        _ = shutil.copy2(backup_path, temp_restore_path)
+        os.replace(temp_restore_path, sqlite_path)
     finally:
         engine.dispose()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/services/backup_service.py` around lines 37 - 40, The current restore
copies directly over the live DB (shutil.copy2(backup_path, sqlite_path)) after
engine.dispose(), which can corrupt SQLite if interrupted; instead, copy the
backup into a temporary file placed in the same directory as sqlite_path (use
tempfile.mkstemp or NamedTemporaryFile with dir=os.path.dirname(sqlite_path)),
use shutil.copy2(backup_path, tmp_path) to preserve metadata, then atomically
replace the target with os.replace(tmp_path, sqlite_path) or
Path(tmp_path).replace(Path(sqlite_path)); ensure the temp file is removed on
errors and do this after engine.dispose() so the DB is not in use.
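The temp-file-plus-`os.replace` pattern recommended above can be shown as a standalone snippet. This is a sketch only: `restore_atomically` and the demo paths are illustrative names, not the project's real API. `os.replace` is atomic only when source and destination sit on the same filesystem, which is why the temp file is staged next to the target.

```python
# Sketch: atomic restore via a temp file in the target's directory.
import os
import shutil
import tempfile
from pathlib import Path

def restore_atomically(backup_path: Path, target_path: Path) -> None:
    # Stage the copy beside the target so os.replace stays on one filesystem.
    fd, tmp_name = tempfile.mkstemp(dir=target_path.parent, suffix=".restore_tmp")
    os.close(fd)  # we only need the path; shutil.copy2 reopens it
    tmp_path = Path(tmp_name)
    try:
        shutil.copy2(backup_path, tmp_path)   # the slow full copy hits the temp file
        os.replace(tmp_path, target_path)     # atomic swap: readers see old or new, never half
    finally:
        tmp_path.unlink(missing_ok=True)      # clean up if copy or replace failed

# Demo with throwaway files
base = Path(tempfile.mkdtemp())
(base / "backup.sqlite3").write_bytes(b"backup-bytes")
(base / "live.sqlite3").write_bytes(b"live-bytes")
restore_atomically(base / "backup.sqlite3", base / "live.sqlite3")
restored = (base / "live.sqlite3").read_bytes()
print(restored)
```

If the process dies between the copy and the replace, the live database file is untouched and only an orphaned `.restore_tmp` file remains.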
```python
database.engine = engine
database.SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
Base.metadata.create_all(bind=engine)

sys.modules.pop("main", None)
main = importlib.import_module("main")
return TestClient(main.app)
```
Restore database globals after test client setup.
_make_client overwrites database.engine / database.SessionLocal and reloads main, but never restores prior globals. This can make later tests run against the wrong DB depending on execution order.
🔧 Suggested fix

```diff
 def _make_client(db_path: Path) -> TestClient:
+    original_engine = database.engine
+    original_session_local = database.SessionLocal
     engine = create_engine(
         f"sqlite:///{db_path}", connect_args={"check_same_thread": False}
     )
     database.engine = engine
     database.SessionLocal = sessionmaker(bind=engine, autocommit=False, autoflush=False)
     Base.metadata.create_all(bind=engine)
     sys.modules.pop("main", None)
     main = importlib.import_module("main")
-    return TestClient(main.app)
+    client = TestClient(main.app)
+    client._original_engine = original_engine  # type: ignore[attr-defined]
+    client._original_session_local = original_session_local  # type: ignore[attr-defined]
+    return client

 def test_backup_download_returns_current_sqlite_database(tmp_path: Path):
@@
-    client = _make_client(db_path)
+    client = _make_client(db_path)
+    try:
+        _seed_trade("BTC-USDT", 65000)
+        response = client.get("/api/backups/download")
+        assert response.status_code == 200
+        assert response.headers["content-type"].startswith("application/octet-stream")
+        assert response.headers["content-disposition"].endswith('.sqlite3"')
+        assert response.content == db_path.read_bytes()
+    finally:
+        database.engine.dispose()
+        database.engine = client._original_engine  # type: ignore[attr-defined]
+        database.SessionLocal = client._original_session_local  # type: ignore[attr-defined]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/tests/test_backup_api.py` around lines 19 - 25, The helper
_make_client currently replaces database.engine and database.SessionLocal and
reloads main without restoring them; modify _make_client to capture the original
values of database.engine and database.SessionLocal before assigning the test
engine/sessionmaker, then import/reload main and create the TestClient, and
finally restore the original database.engine and database.SessionLocal (using a
try/finally or context manager) so subsequent tests see the original globals;
reference the symbols _make_client, database.engine, database.SessionLocal, and
main.app when making the change.
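The save-and-restore of module globals suggested above generalizes to a context manager, which guarantees cleanup even when the test body raises. This is a generic sketch, not the project's fixtures; the `database` namespace below is a stand-in for the real module.

```python
# Sketch: swap module globals inside a context manager so originals always come back.
import types
from contextlib import contextmanager

# Stand-in for the application's database module.
database = types.SimpleNamespace(engine="prod-engine", SessionLocal="prod-sessionmaker")

@contextmanager
def swapped_globals(module, **overrides):
    saved = {name: getattr(module, name) for name in overrides}
    try:
        for name, value in overrides.items():
            setattr(module, name, value)
        yield module
    finally:
        # Restore runs even if the body raised, so later tests see the originals.
        for name, value in saved.items():
            setattr(module, name, value)

with swapped_globals(database, engine="test-engine", SessionLocal="test-sessionmaker"):
    inside_engine = database.engine   # the test-local engine while the test runs
after_engine = database.engine        # restored afterwards
print(inside_engine, after_engine)
```

pytest's built-in `monkeypatch` fixture provides the same setattr-with-undo behavior; the manual version here just makes the mechanism explicit.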
```tsx
const loadCompareData = useCallback(
  async (
    tf: Timeframe,
    sym: string,
    range?: { start: number; end: number },
  ) => {
    if (!compareSeriesRef.current || !compareVolumeSeriesRef.current) return;
    latestCompareRequestIdRef.current += 1;
    const requestId = latestCompareRequestIdRef.current;
    setCompareKlineLoading(true);
    setCompareKlineError(null);
    setCompareHasData(false);
    try {
      compareSeriesRef.current.setData([]);
      compareVolumeSeriesRef.current.setData([]);
      compareCandlesByTimeRef.current = new Map();
      compareHasDataRef.current = false;
      const klines: Kline[] = await fetchKlines(
        sym,
        tf,
        range ? { start: range.start, end: range.end } : undefined,
      );
      if (requestId !== latestCompareRequestIdRef.current) return;
      const data: CandlestickData[] = klines.map((k) => ({
        time: k.time as Time,
        open: k.open,
        high: k.high,
        low: k.low,
        close: k.close,
      }));
      const volume: HistogramData[] = klines.map((k) => ({
        time: k.time as Time,
        value: k.volume,
        color:
          k.close >= k.open
            ? "rgba(34, 197, 94, 0.6)"
            : "rgba(239, 68, 68, 0.6)",
      }));
      compareSeriesRef.current.setData(data);
      compareVolumeSeriesRef.current.setData(volume);
      compareCandlesByTimeRef.current = new Map(
        data.map((candle) => [candle.time, candle]),
      );
      compareHasDataRef.current = data.length > 0;
      setCompareHasData(data.length > 0);
      setCompareKlineError(null);

      const fillerStart = range
        ? Math.floor(range.start / 1000)
        : (data[0]?.time as number | undefined);
      const fillerEnd = range
        ? Math.floor(range.end / 1000)
        : (data[data.length - 1]?.time as number | undefined);
      if (typeof fillerStart === "number" && typeof fillerEnd === "number") {
        ensureTimeFiller(tf, fillerStart, fillerEnd);
      }

      const pending = pendingCompareVisibleRangeRef.current;
      if (pending && compareChartRef.current) {
        try {
          if (
            typeof pending.from === "number" &&
            typeof pending.to === "number"
          ) {
            compareChartRef.current.timeScale().setVisibleRange(pending);
          } else if (
            typeof compareChartRef.current.timeScale()
              .setVisibleLogicalRange === "function"
          ) {
            compareChartRef.current
              .timeScale()
              .setVisibleLogicalRange(
                pending as { from: number; to: number },
              );
          }
          pendingCompareVisibleRangeRef.current = null;
        } catch (err) {
          console.error(err);
        }
      }
    } catch (error) {
      console.error(error);
      if (requestId !== latestCompareRequestIdRef.current) return;
      compareSeriesRef.current.setData([]);
      compareVolumeSeriesRef.current.setData([]);
      compareCandlesByTimeRef.current = new Map();
      compareHasDataRef.current = false;
      setCompareHasData(false);
      setCompareKlineError(
        error instanceof Error ? error.message : String(error),
      );
    } finally {
      if (requestId === latestCompareRequestIdRef.current) {
        setCompareKlineLoading(false);
      }
    }
  },
  [ensureTimeFiller],
);
```
Guard compare-series writes after the async fetch.
Line 527 only filters out superseded requests. If the compare pane is closed while fetchKlines(...) is still in flight, the cleanup later nulls these refs and the post-await setData(...) path throws.
🛡️ One way to harden the loader

```diff
 const loadCompareData = useCallback(
   async (
     tf: Timeframe,
     sym: string,
     range?: { start: number; end: number },
   ) => {
-    if (!compareSeriesRef.current || !compareVolumeSeriesRef.current) return;
+    const compareSeries = compareSeriesRef.current;
+    const compareVolumeSeries = compareVolumeSeriesRef.current;
+    if (!compareSeries || !compareVolumeSeries) return;
     latestCompareRequestIdRef.current += 1;
     const requestId = latestCompareRequestIdRef.current;
     setCompareKlineLoading(true);
     setCompareKlineError(null);
     setCompareHasData(false);
     try {
-      compareSeriesRef.current.setData([]);
-      compareVolumeSeriesRef.current.setData([]);
+      compareSeries.setData([]);
+      compareVolumeSeries.setData([]);
       compareCandlesByTimeRef.current = new Map();
       compareHasDataRef.current = false;
       const klines: Kline[] = await fetchKlines(
         sym,
         tf,
         range ? { start: range.start, end: range.end } : undefined,
       );
-      if (requestId !== latestCompareRequestIdRef.current) return;
+      if (
+        requestId !== latestCompareRequestIdRef.current ||
+        compareSeriesRef.current !== compareSeries ||
+        compareVolumeSeriesRef.current !== compareVolumeSeries
+      ) {
+        return;
+      }
       const data: CandlestickData[] = klines.map((k) => ({
         time: k.time as Time,
         open: k.open,
         high: k.high,
         low: k.low,
         close: k.close,
       }));
       const volume: HistogramData[] = klines.map((k) => ({
         time: k.time as Time,
         value: k.volume,
         color:
           k.close >= k.open
             ? "rgba(34, 197, 94, 0.6)"
             : "rgba(239, 68, 68, 0.6)",
       }));
-      compareSeriesRef.current.setData(data);
-      compareVolumeSeriesRef.current.setData(volume);
+      compareSeries.setData(data);
+      compareVolumeSeries.setData(volume);
       // ...
     } catch (error) {
       console.error(error);
       if (requestId !== latestCompareRequestIdRef.current) return;
-      compareSeriesRef.current.setData([]);
-      compareVolumeSeriesRef.current.setData([]);
+      if (
+        compareSeriesRef.current === compareSeries &&
+        compareVolumeSeriesRef.current === compareVolumeSeries
+      ) {
+        compareSeries.setData([]);
+        compareVolumeSeries.setData([]);
+      }
       // ...
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ChartManager.tsx` around lines 505 - 603, The
post-await code in loadCompareData (function loadCompareData) only checks
requestId but does not guard against refs being nulled when the compare pane
closes; wrap/precede any use of compareSeriesRef.current,
compareVolumeSeriesRef.current, compareCandlesByTimeRef.current,
compareChartRef.current and compareHasDataRef.current with a runtime existence
check (e.g., if (!compareSeriesRef.current || !compareVolumeSeriesRef.current ||
!compareCandlesByTimeRef.current) return) immediately after the await and also
inside the catch before setting data/refs, so you bail out if the pane was
closed; apply the same guard before manipulating
pendingCompareVisibleRangeRef.current/timeScale() to avoid calling methods on a
destroyed chart.
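The guard pattern in the suggestion above is language-agnostic: capture the target reference and a request id before awaiting, then re-check both afterwards. Here is a minimal Python/asyncio sketch of the same idea; all names are illustrative stand-ins, not the component's real API.

```python
# Sketch: stale-response guard for an async loader whose target can disappear.
import asyncio

latest_request_id = 0
series_ref = []        # stands in for the chart series; None means the pane closed
results = []

async def load(fetcher):
    global latest_request_id
    series = series_ref              # capture the target before awaiting
    if series is None:
        return False
    latest_request_id += 1
    request_id = latest_request_id
    data = await fetcher()
    # Bail out if a newer request started or the pane was torn down mid-flight.
    if request_id != latest_request_id or series_ref is not series:
        return False
    series.extend(data)              # safe: still the live, latest target
    return True

async def main():
    global series_ref
    results.append(await load(lambda: asyncio.sleep(0, result=[1, 2, 3])))
    task = asyncio.ensure_future(load(lambda: asyncio.sleep(0.01, result=[4, 5])))
    await asyncio.sleep(0)           # let the second fetch start
    series_ref = None                # pane closes while it is in flight
    results.append(await task)

asyncio.run(main())
print(results)  # [True, False]
```

The second load passes the request-id check (no newer request arrived) but fails the identity check, which is exactly the hole the review comment points at.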
```tsx
Promise.all([
  fetchStats(),
  fetchTrades(undefined, {
    limit: TRADES_FETCH_LIMIT,
    maxPages: TRADES_FETCH_MAX_PAGES,
  }),
```
Don't mix full-dataset stats with a capped trade sample.
fetchStats() gives panel-wide totals, but Line 277 limits the trade-derived views to at most 200 * 20 rows. Once the local DB grows past that, the equity curve, heatmap, and symbol table can silently disagree with the totals shown above. Either fetch to exhaustion here, or disclose that these sections are truncated.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ReplayAnalyticsPanel.tsx` around lines 275 - 280, The
panel mixes full totals from fetchStats() with a capped trade sample from
fetchTrades(...) (using TRADES_FETCH_LIMIT and TRADES_FETCH_MAX_PAGES), causing
derived views (equity curve, heatmap, symbol table) to disagree when the local
DB exceeds the sample; either remove the sampling here and call fetchTrades to
exhaustion (i.e., call fetchTrades without
TRADES_FETCH_LIMIT/TRADES_FETCH_MAX_PAGES or set them to unlimited) so derived
views reflect full data, or keep the capped fetch but add a clear UI disclosure
(in ReplayAnalyticsPanel) that the trade-derived sections are truncated and show
the current sample size so users understand the discrepancy.
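For reference, fetching a paginated endpoint to exhaustion is a small loop: keep requesting pages until one comes back shorter than the page size. `fetch_page` below is a hypothetical stand-in for the real API client, not Retraq's actual function.

```python
# Sketch: drain a paginated source instead of capping at limit * max_pages.
def fetch_all(fetch_page, limit=200):
    rows, offset = [], 0
    while True:
        page = fetch_page(offset=offset, limit=limit)
        rows.extend(page)
        if len(page) < limit:   # a short page means the table is exhausted
            return rows
        offset += limit

table = list(range(450))        # in-memory stand-in for the trades table
def fetch_page(offset, limit):
    return table[offset:offset + limit]

total = len(fetch_all(fetch_page))
print(total)  # 450
```

The trade-off is unbounded request count on large databases, which is why the alternative of disclosing the truncation in the UI is equally valid.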
```tsx
const initialWorkspace = loadReplayWorkspace();
const initialTimeframe =
  initialWorkspace?.activeTimeframe ?? initialSeed?.defaultTimeframe ?? "15m";
```
workspaceBootstrap is broader than “restored replay selection.”
initialWorkspace is already being used for non-selection state (activeTimeframe), so Boolean(initialWorkspace) disables TradeList’s latest-trade fallback even when the only restored data is unrelated UI state. That can reopen /replay into an empty shell with no symbol/trade to restore.
Example narrowing of the bootstrap flag

```diff
 const initialWorkspace = loadReplayWorkspace();
+const hasSelectionBootstrap = initialWorkspace?.symbol !== undefined;
 ...
-  workspaceBootstrap={Boolean(initialWorkspace)}
+  workspaceBootstrap={hasSelectionBootstrap}
```

Use another explicit replay-selection signal here if the workspace stores more than just symbol.
Also applies to: 139-139
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/pages/ReplayPage.tsx` around lines 81 - 83, The bootstrap flag
is too broad because Boolean(initialWorkspace) is true for UI-only state like
activeTimeframe; change to an explicit replay-selection check (e.g., derive
workspaceBootstrap from a specific symbol/selection field stored in
loadReplayWorkspace, not the whole workspace) so TradeList’s latest-trade
fallback only disables when a symbol is actually restored; update references
where workspaceBootstrap is used (including the other occurrence around the code
that reads initialWorkspace/initialSeed at the later block) and ensure
functions/variables like loadReplayWorkspace, initialWorkspace, initialSeed,
workspaceBootstrap, and TradeList consume the new explicit selection signal
instead of Boolean(initialWorkspace).
```tsx
beforeEach(() => {
  vi.clearAllMocks();
  localStorageStore.clear();
  vi.stubGlobal('localStorage', localStorageMock as Storage);
});
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify test files that stub globals also restore them.
rg -n --type=ts --type=tsx 'vi\.stubGlobal\(|vi\.unstubAllGlobals\(' frontend/src/test
```

Length of output: 87

🏁 Script executed:

```shell
cat -n frontend/src/test/tradeListSelection.test.tsx | head -150
```

Length of output: 5446

🏁 Script executed:

```shell
#!/bin/bash
# Search for vi.stubGlobal and vi.unstubAllGlobals using file extension patterns
rg -n 'vi\.stubGlobal\(|vi\.unstubAllGlobals\(' frontend/src/test/ --type-list
```

Length of output: 449

🏁 Script executed:

```shell
#!/bin/bash
# Find all test files and search them
fd -e test.ts -e test.tsx frontend/src/test/ -x grep -l 'vi\.stubGlobal\|vi\.unstubAllGlobals\|afterEach' {}
```

Length of output: 437

🏁 Script executed:

```shell
#!/bin/bash
# Get the full file to see if there's an afterEach hook
wc -l frontend/src/test/tradeListSelection.test.tsx
```

Length of output: 109

🏁 Script executed:

```shell
#!/bin/bash
# Check if afterEach exists in this file
grep -n "afterEach" frontend/src/test/tradeListSelection.test.tsx
```

Length of output: 42

🏁 Script executed:

```shell
#!/bin/bash
# Search all test files for stubGlobal and unstubAllGlobals patterns
find frontend/src/test -name "*.test.ts*" -exec grep -l "stubGlobal\|unstubAllGlobals" {} \;
```

Length of output: 516

🏁 Script executed:

```shell
#!/bin/bash
# Check each test file for stubGlobal usage and cleanup patterns
for file in frontend/src/test/import-route-smoke.test.tsx \
  frontend/src/test/phase3ReplayIntegration.smoke.test.tsx \
  frontend/src/test/replayAnalyticsPanel.test.tsx \
  frontend/src/test/replayFlow.smoke.test.tsx \
  frontend/src/test/replayLayoutPersistence.test.tsx \
  frontend/src/test/replayPage.test.tsx \
  frontend/src/test/replayWorkspace.test.ts \
  frontend/src/test/symbolCompare.test.tsx \
  frontend/src/test/timeframeCompare.test.tsx; do
  echo "=== $file ==="
  grep -n "vi\.stubGlobal\|afterEach\|unstubAllGlobals" "$file" || echo "No matches found"
done
```

Length of output: 2609
Add afterEach hook to restore stubbed globals.
vi.stubGlobal('localStorage', ...) is called in beforeEach but never cleaned up, which can leak this mock into other test suites and cause test pollution. Add an afterEach hook to call vi.unstubAllGlobals() to match the pattern used in other test files.
🔧 Suggested fix

```diff
-import { beforeEach, describe, expect, it, vi } from 'vitest';
+import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest';
@@
 describe('TradeList seed-aware selection', () => {
   beforeEach(() => {
     vi.clearAllMocks();
     localStorageStore.clear();
     vi.stubGlobal('localStorage', localStorageMock as Storage);
   });
+
+  afterEach(() => {
+    vi.unstubAllGlobals();
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/test/tradeListSelection.test.tsx` around lines 93 - 97, The test
sets a global stub in beforeEach via vi.stubGlobal('localStorage',
localStorageMock) but never restores it; add an afterEach that calls
vi.unstubAllGlobals() (and optionally vi.clearAllMocks() again) to undo the stub
and prevent leakage into other suites — locate the beforeEach block in
tradeListSelection.test.tsx and add an afterEach ensuring vi.unstubAllGlobals()
is called after tests that use localStorageMock.
Summary

Verification

- `cd backend && uv run pytest`
- `cd frontend && pnpm exec vitest run`
- `cd frontend && pnpm exec tsc -b`
- `cd frontend && pnpm build`
- Manually exercised `/replay`, `/import`, `/api/backups/download`, and `/api/trades/import/rows`

Summary by CodeRabbit

Release Notes