
feat: Deploy & Run History follow-ups — polling, metrics, deep-links, DB columns, server-side filters#64

Open
abhizipstack wants to merge 14 commits into main from feat/deploy-runhistory-followups

Conversation

@abhizipstack
Contributor

What

Implements 6 of 7 items from the deferred follow-ups ticket, plus bug fixes found during testing.

Deep-link toast → Run History (#5)

  • Quick Deploy success toast now includes a clickable "View in Run History →" link (with renderMarkdown: false so JSX renders correctly).
  • Run History page auto-expands the most recent run when arriving via deep-link.

Pre-fill create-job from 0-candidates (#6)

  • "Go to Scheduler" CTA navigates to /project/job/list?create=1&project=<pid>&model=<name>.
  • Jobs List reads params: auto-opens the create drawer with the model pre-enabled in Model Configuration.

First-class trigger + scope DB columns (#4)

  • Added trigger (scheduled/manual) and scope (job/model) as real CharField columns on TaskRunHistory with DB indexes.
  • Migration 0002_taskrunhistory_trigger_scope.
  • trigger_scheduled_run writes both columns and kwargs (backward compat).
  • Frontend getRunTriggerScope prefers top-level fields, falls back to kwargs for pre-migration rows.

Server-side Run History filtering (#7)

  • task_run_history endpoint accepts ?trigger=, ?scope=, ?status= query params.
  • Frontend filter changes now trigger a server-side refetch instead of client-side filtering, so pagination is accurate across all pages.
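The additive filtering described above can be sketched as a small param-to-ORM-kwargs mapping. This is a sketch, not the PR's actual view code; `build_run_history_filters` is a hypothetical helper name:

```python
# Sketch of the additive ?trigger=&scope=&status= handling described above.
# Inside the Django view, the returned dict would feed straight into the ORM,
# e.g. TaskRunHistory.objects.filter(task_id=task_id, **filters).
ALLOWED_FILTERS = ("trigger", "scope", "status")

def build_run_history_filters(query_params):
    """Map optional query params to ORM filter kwargs. Absent or empty
    params are skipped, so no params means no filter (the same behavior
    as before this PR)."""
    return {
        key: query_params[key]
        for key in ALLOWED_FILTERS
        if query_params.get(key)
    }

print(build_run_history_filters({"trigger": "manual", "page": "2"}))  # {'trigger': 'manual'}
```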

Live deploy progress polling (#3)

  • After dispatch, Quick Deploy button flips to "Deploying…" with spinner.
  • Polls latest run status every 5s via getLatestRunStatus.
  • On terminal state: completion toast (success/failure) with Run History link, explorer refresh, recent-runs cache clear.
  • Auto-cleans on unmount.

Runtime metrics in Run History (#1)

  • After DAG execution (success or failure), trigger_scheduled_run serializes BASE_RESULT into TaskRunHistory.result as JSON.
  • Per-model name, status, end_status; aggregate total/passed/failed counts.
  • Source classes (e.g. SourceMdoela) filtered out — only user-created models appear.
  • Frontend insights panel renders a metrics bar when result.total > 0.
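The serialization step can be sketched in isolation. `RunResult` is a hypothetical stand-in for one `BASE_RESULT` entry; only the fields this PR names are modeled:

```python
from dataclasses import dataclass

@dataclass
class RunResult:
    node_name: str   # class name of the executed DAG node
    status: str
    end_status: str  # "OK" or "FAIL"

def serialize_metrics(results):
    """Build the TaskRunHistory.result payload: per-model entries plus
    aggregate counts, dropping Source* helper classes so only
    user-created models are counted."""
    user = [r for r in results if not r.node_name.startswith("Source")]
    return {
        "models": [
            {"name": r.node_name, "status": r.status, "end_status": r.end_status}
            for r in user
        ],
        "total": len(user),
        "passed": sum(1 for r in user if r.end_status == "OK"),
        "failed": sum(1 for r in user if r.end_status == "FAIL"),
    }
```

Since older runs persist an empty `result`, the frontend guard on `result.total > 0` keeps them rendering exactly as before.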

Deferred to separate tickets:

  • User-facing activity logs → OR-1462
  • Row counts per model → OR-1461

Why

These follow-ups close gaps identified during the Quick Deploy and Run History work: no way to navigate from a toast to the specific run, no way to create a job from the 0-candidates flow, filters were client-side only (broke pagination), no live feedback during deploy, and no execution metrics on completed runs.

How

  • Backend: migration for trigger/scope columns, query-param filtering in task_run_history, BASE_RESULT serialization in trigger_scheduled_run + _mark_failure.
  • Frontend: renderMarkdown: false for JSX toasts, useSearchParams for deep-link + pre-fill, polling via setInterval + getLatestRunStatus, metrics bar in insights panel.

Can this PR break any existing features?

Low risk:

  • New DB columns have defaults (scheduled / job) so existing rows are valid post-migration. Serializer uses fields = "__all__" so new columns auto-expose.
  • Server-side filtering is additive — no params = no filter = same behavior as before.
  • Polling is self-contained in component state; auto-cleans on unmount.
  • result field was already on the model (never written); now populated. Frontend guards with result?.total > 0.
  • Source class filtering uses startswith("Source") convention from the no-code model generator.

Database Migrations

  • backend/backend/core/scheduler/migrations/0002_taskrunhistory_trigger_scope.py — adds trigger, scope columns + indexes.

Env Config

None.

Relevant Docs

None.

Related Issues or PRs

Dependencies Versions

No changes.

Notes on Testing

Tested locally (gunicorn + Celery worker + React dev server):

  1. Quick Deploy → success toast shows clickable "View in Run History →" link → navigates to Run History with job preselected + first run auto-expanded.
  2. Quick Deploy on model with no job → "Go to Scheduler" → Jobs List opens with create drawer, model pre-checked.
  3. Run History filters (Trigger/Scope/Status) now refetch from server — verified pagination accuracy.
  4. Quick Deploy → button shows "Deploying…" spinner → polls → completion toast appears on SUCCESS/FAILURE.
  5. Run History expanded row shows metrics bar: "1 model attempted · 1 passed · 0 failed · Mdoela (OK)". Source classes filtered out.
  6. Migration applied cleanly; old runs render with kwargs fallback.

Checklist

I have read and understood the Contribution Guidelines.

🤖 Generated with Claude Code

abhizipstack and others added 12 commits April 16, 2026 17:53
The Quick Deploy success toast now includes a clickable "View in Run
History →" link that navigates to /project/job/history?task=<id>,
preselecting the job. On arrival, the Run History page auto-expands
the most recent run (first row) in addition to any FAILURE rows, so
the user immediately sees the deploy they just triggered.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When no job covers the current model, clicking "Go to Scheduler" now
navigates to /project/job/list?create=1&project=<pid>&model=<name>.
The Jobs List reads these params: auto-opens the create drawer, and
JobDeploy pre-enables the specified model in Model Configuration with
the config panel auto-expanded.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previously stored only in kwargs JSON, making server-side filtering
impossible. Now first-class nullable CharField columns with DB
indexes, written by trigger_scheduled_run alongside kwargs.

- Migration 0002 adds trigger (scheduled/manual) and scope (job/model)
  columns with defaults matching existing behavior.
- celery_tasks.py writes both the columns and kwargs (backward compat).
- Frontend getRunTriggerScope prefers top-level row.trigger / row.scope
  (from serializer) and falls back to kwargs for pre-migration rows.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replaces client-side filtering with server-side query params on the
task_run_history endpoint. Filter changes now trigger a fresh API call
with ?trigger=manual&scope=model&status=FAILURE, so results are
accurate across all pages (previously client-side filtering only
worked on the visible page).

Backend accepts optional trigger, scope, status query params and
applies them as Django ORM filters against the new DB columns from
the previous migration.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After dispatching a deploy, the Quick Deploy button flips to
"Deploying…" with a spinner and polls the latest run status every 5s.
On terminal state (SUCCESS/FAILURE/REVOKED):
- Clears the polling interval
- Shows a completion toast with status + deep-link to Run History
- Refreshes the explorer (status badges) and recent-runs cache

Polling auto-cleans on component unmount. The button returns to its
normal state when the run finishes or the component unmounts.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
After DAG execution (success or failure), trigger_scheduled_run now
serializes BASE_RESULT into run.result as JSON with per-model
status/end_status and aggregate passed/failed counts.

Frontend insights panel renders a metrics bar when result is present:
"N models attempted · X passed · Y failed" plus per-model breakdown.
Falls back gracefully to scope/models display for older runs without
result data.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The notify service defaults to renderMarkdown: true, which wraps
description in ReactMarkdown. When description is JSX (our <a> link),
ReactMarkdown stringifies it via JSON.stringify, rendering as raw
text instead of a clickable link. Added renderMarkdown: false to
both the dispatch toast and the polling-completion toast.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Old runs have result as {} or with total=0. Guard with
record.result?.total > 0 so the metrics bar only renders when
there's actual execution data.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
BASE_RESULT.node_name stores str(cls) which renders as
<class 'project.models.mdoela.Mdoela'>. Extract the module name
(second-to-last dotted segment) so metrics show "mdoela" instead
of the full class repr.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
A model file can define multiple classes (e.g. SourceMdoela + Mdoela)
in the same module. Using [-2] (module name) made them
indistinguishable. Switch to [-1] (class name) so the metrics
display shows "SourceMdoela (OK), Mdoela (OK)" instead of
"mdoela (OK), mdoela (OK)".

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
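The two commits above iterate on the same parsing. A standalone sketch of the final `[-1]` form, mirroring the PR's `_clean` helper (the function name here is illustrative):

```python
def clean_node_name(raw: str) -> str:
    """Extract the class name from a repr like "<class 'pkg.mod.Cls'>".

    split("'")[1] takes the dotted path between the quotes; [-1] takes the
    class name. The earlier [-2] variant took the module name, which
    collapsed SourceMdoela and Mdoela into the same "mdoela" label.
    """
    if "'" not in raw:
        return raw  # already a plain name
    return raw.split("'")[1].split(".")[-1]

print(clean_node_name("<class 'project.models.mdoela.Mdoela'>"))        # Mdoela
print(clean_node_name("<class 'project.models.mdoela.SourceMdoela'>"))  # SourceMdoela
```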
No-code models generate a *Source class (e.g. MdoelaSource) for DAG
dependency resolution alongside the user's actual model class. Both
execute as DAG nodes and appear in BASE_RESULT, but users only care
about their own models. Filter out classes ending with "Source" from
the metrics serialization so the count and per-model list reflect
user-created models only.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
SourceMdoela, DevPaymentsSource — the sample projects use both
conventions. The generated no-code models use the prefix pattern
(SourceX). Changed endswith to startswith to match.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@abhizipstack abhizipstack requested review from a team as code owners April 16, 2026 13:40
Comment thread frontend/src/ide/editor/no-code-model/no-code-model.jsx Fixed
Comment thread frontend/src/ide/editor/no-code-model/no-code-model.jsx Fixed
@greptile-apps

greptile-apps bot commented Apr 16, 2026

Greptile Summary

This PR delivers the deferred follow-ups from the Quick Deploy / Run History work: deep-link toasts, pre-fill create-job CTA, first-class trigger/scope DB columns with a safe migration, server-side task_run_history filtering, live deploy progress polling, and per-model execution metrics from BASE_RESULT.

  • Pagination broken by new filter useEffect: getRunHistoryList is listed as a dep of the server-side filter effect; because it's a useCallback that captures currentPage/pageSize, every pagination action creates a new function reference, re-triggers the effect, and fires a hardcoded page-1 fetch that races and overwrites the intended page. Users cannot navigate beyond page 1 in Run History.
  • Polling may resolve on wrong run: startDeployPolling calls getLatestRunStatus, which returns the most-recent row for the task; if the Celery worker hasn't yet created the new TaskRunHistory record when the first poll fires (5 s after dispatch), a previous terminal run is returned, the polling stops, and a misleading completion toast is shown.

Confidence Score: 4/5

Safe to merge after fixing the pagination reset — all other changes are low-risk and additive.

One clear P1 regression: the new server-side filter useEffect causes Run History pagination to reset to page 1 on every page change, making the feature unusable beyond page 1. Backend changes (migration, filters, metrics) are solid. The P2 polling edge case is unlikely in a healthy Celery setup but worth hardening.

frontend/src/ide/run-history/Runhistory.jsx — filter useEffect dep array must be fixed before merge.

Important Files Changed

| Filename | Overview |
| --- | --- |
| frontend/src/ide/run-history/Runhistory.jsx | New server-side filter useEffect includes getRunHistoryList in its dep array — triggers a page-1 refetch on every pagination change, making it impossible to navigate beyond page 1. |
| frontend/src/ide/editor/no-code-model/no-code-model.jsx | Adds deploy polling, deep-link toast, and goToScheduler CTA. Polling edge case: getLatestRunStatus may match a previous terminal run before the worker creates the new record. |
| backend/backend/core/scheduler/celery_tasks.py | Adds BASE_RESULT metrics capture, _clear_base_result helper, and trigger/scope columns on run creation. Success path correctly snapshots before clearing. |
| backend/backend/core/scheduler/migrations/0002_taskrunhistory_trigger_scope.py | Clean migration adding trigger/scope CharField columns with safe defaults and DB indexes; backward compatible for existing rows. |
| backend/backend/core/scheduler/models.py | Adds trigger and scope CharField fields with choices, defaults, and indexes to TaskRunHistory; schema matches the migration. |
| backend/backend/core/scheduler/views.py | Adds optional trigger/scope/status query-param filtering to task_run_history; additive, uses ORM parameterized queries, no auth regressions. |
| frontend/src/ide/scheduler/JobList.jsx | Reads create/model/project URL params and opens the create drawer with pre-fill values; clears params after use to avoid sticky state. |
| frontend/src/ide/scheduler/JobDeploy.jsx | Adds prefillProject/prefillModel props with dedicated useEffects that correctly set form values and selectedProjectId before the async project list resolves. |
| frontend/src/ide/scheduler/service.js | Adds getLatestRunStatus helper fetching page 1 / limit 1 from run-history endpoint; clean addition with no regressions. |

Sequence Diagram

```mermaid
sequenceDiagram
    actor User
    participant Frontend
    participant Backend
    participant Celery Worker

    Note over Frontend,Backend: Server-side Run History Filtering
    User->>Frontend: Changes Trigger / Scope / Status filter
    Frontend->>Backend: GET /run-history/{id}?trigger=&scope=&status=&page=1
    Backend->>Frontend: Filtered + paginated runs

    User->>Frontend: Clicks page 2
    Frontend->>Backend: GET /run-history/{id}?page=2 (handlePagination)
    Note over Frontend: setCurrentPage(2) → new getRunHistoryList ref
    Frontend->>Backend: GET /run-history/{id}?page=1 (filter useEffect re-fires!)
    Backend->>Frontend: Page 2 data (may lose race)
    Backend->>Frontend: Page 1 data resets display

    Note over Frontend,Celery Worker: Live Deploy Progress Polling
    User->>Frontend: Confirms Quick Deploy
    Frontend->>Backend: POST /trigger-periodic-task/{id}/model/{name}
    Backend->>Celery Worker: send_task async
    Backend->>Frontend: success true
    Frontend->>Frontend: startDeployPolling(taskId) t+0s

    alt Worker slow to start
        Frontend->>Backend: GET /run-history/{id}?limit=1 t+5s
        Backend->>Frontend: Previous run status SUCCESS
        Frontend->>Frontend: Polling stops, wrong Deploy Completed toast
    else Normal path
        Celery Worker->>Backend: Creates TaskRunHistory STARTED
        Frontend->>Backend: GET /run-history/{id}?limit=1 t+5s
        Backend->>Frontend: New run status STARTED
        Celery Worker->>Backend: Writes BASE_RESULT metrics to TaskRunHistory.result
        Frontend->>Backend: GET /run-history/{id}?limit=1 t+10s
        Backend->>Frontend: New run status SUCCESS or FAILURE
        Frontend->>Frontend: Completion toast and stop polling
    end
```

Review comment on frontend/src/ide/editor/no-code-model/no-code-model.jsx, lines 1869 to 1917:
**Polling may terminate immediately on a previous run's terminal state**

`getLatestRunStatus` returns the most recent `TaskRunHistory` row for `taskId`, ordered by `-start_time` — it is not scoped to the run that was just dispatched. The new `TaskRunHistory` record is created inside the Celery worker (in `trigger_scheduled_run`), not in the dispatch API response. If the first poll fires (5 s after dispatch) before the worker has created that record, the endpoint returns the **previous** run. If the previous run was `SUCCESS` or `FAILURE`, the terminal check triggers immediately, shows "Deploy Completed / Failed" toast for the wrong run, and stops polling — leaving the actual new run untracked.

Consider snapshotting a `since` timestamp and skipping terminal detection until a run with `start_time >= since` is observed:

```js
const startDeployPolling = (taskId) => {
  if (pollingRef.current) clearInterval(pollingRef.current);
  const dispatchedAt = Date.now();
  setDeployPolling({ taskId, status: "STARTED" });
  pollingRef.current = setInterval(async () => {
    try {
      const run = await getLatestRunStatus(projectId, taskId);
      if (!run) return;
      const runStart = run.start_time ? new Date(run.start_time).getTime() : 0;
      if (runStart < dispatchedAt - 10_000) return; // 10s grace
      const terminal = ["SUCCESS", "FAILURE", "REVOKED"].includes(run.status);
      // ... rest unchanged
    } catch { /* silently retry */ }
  }, 5000);
};
```


Reviews (3): last reviewed commit "fix: prettier formatting for toast deep-..."

Comment thread backend/backend/core/scheduler/celery_tasks.py
Comment thread frontend/src/ide/scheduler/JobList.jsx
…e bugs

- Pass active filters through handlePagination and handleRefresh (P1)
- Snapshot-then-clear BASE_RESULT to prevent stale metrics across worker reuse (P1)
- Fix handleRefresh stale closure deps (P2)
- Forward project URL param from goToScheduler to JobDeploy (P2)
- Prefer DB columns over kwargs for trigger/scope in list_recent_runs_for_model (P2)
- Sanitize taskId with encodeURIComponent in toast deep-links (CodeQL)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@abhizipstack
Contributor Author

All 6 issues addressed in 3c44484:

  1. Filters dropped on pagination (P1) — handlePagination now passes active filterQueries to getRunHistoryList.
  2. BASE_RESULT stale global (P1) — Added snapshot-then-clear pattern + _clear_base_result() helper called in all exit paths (success, timeout, exception).
  3. handleRefresh stale closure (P2) — Updated deps to include filterQueries, currentPage, pageSize, getRunHistoryList.
  4. project URL param ignored (P2) — JobList now reads project from URL params and forwards prefillProject to JobDeploy, which pre-selects it.
  5. list_recent_runs_for_model kwargs (P2) — Now prefers run.trigger/run.scope DB columns, falling back to kwargs for pre-migration rows.
  6. CodeQL XSS (DOM text as HTML) — Applied encodeURIComponent() to taskId in toast deep-link URLs.

Comment on lines +436 to +464 of backend/backend/core/scheduler/celery_tasks.py:

```diff
 try:
     from visitran.events.printer import BASE_RESULT

     def _clean(raw):
         return raw.split("'")[1].split(".")[-1] if "'" in raw else raw

     user_results = [
         r for r in BASE_RESULT if not _clean(r.node_name).startswith("Source")
     ]
     run.result = {
         "models": [
             {
                 "name": _clean(r.node_name),
                 "status": r.status,
                 "end_status": r.end_status,
                 "sequence": r.sequence_num,
             }
             for r in user_results
         ],
         "total": len(user_results),
         "passed": sum(1 for r in user_results if r.end_status == "OK"),
         "failed": sum(1 for r in user_results if r.end_status == "FAIL"),
     }
 except Exception:
     pass
 run.status = "FAILURE"
 run.end_time = timezone.now()
 run.error_message = error_msg
-run.save(update_fields=["status", "end_time", "error_message"])
+run.save(update_fields=["status", "end_time", "error_message", "result"])
```

P1 — _mark_failure always reads an already-cleared BASE_RESULT

Both exception handlers call _clear_base_result() immediately before _mark_failure(), so by the time _mark_failure reads BASE_RESULT (line 443), the list is empty. Every failed run will persist total: 0, passed: 0, failed: 0 — the partial-execution metrics feature silently produces nothing on failure.

The success path avoids this by snapshotting before clearing (results_snapshot = list(BASE_RESULT); BASE_RESULT.clear()). The failure path needs the same treatment: snapshot the results first, clear the global, then pass the snapshot to _mark_failure.
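Generically, the snapshot-then-clear pattern this comment calls for looks like the following. Function and field names are illustrative, not the PR's actual helpers:

```python
BASE_RESULT = []  # module-level global the DAG executor appends to

def snapshot_and_clear(results):
    """Copy the accumulated results, then empty the shared list in place
    so a reused worker process starts the next run clean."""
    snapshot = list(results)
    results.clear()
    return snapshot

def mark_failure(run, results_snapshot):
    # Persist metrics from the snapshot; reading the global here would see
    # an already-cleared list and record total: 0 on every failed run.
    run["status"] = "FAILURE"
    run["result"] = {"total": len(results_snapshot)}
    return run
```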

Comment on lines 195 to 209 of frontend/src/ide/run-history/Runhistory.jsx:

```js
useEffect(() => {
  let filtered = backUpData;
  if (filterQueries.status) {
    filtered = filtered.filter((el) => el.status === filterQueries.status);
  }
  if (filterQueries.trigger) {
    filtered = filtered.filter(
      (el) => getRunTriggerScope(el).trigger === filterQueries.trigger
    );
  }
  if (filterQueries.scope) {
    filtered = filtered.filter(
      (el) => getRunTriggerScope(el).scope === filterQueries.scope
    );
  }
  setJobHistoryData(filtered);
  if (!filterQueries.job) return;
  getRunHistoryList(filterQueries.job, 1, pageSize, {
    status: filterQueries.status,
    trigger: filterQueries.trigger,
    scope: filterQueries.scope,
  });
}, [
  filterQueries.status,
  filterQueries.trigger,
  filterQueries.scope,
  backUpData,
  filterQueries.job,
  getRunHistoryList,
  pageSize,
]);
```

P1 — Pagination always resets to page 1 after a page change

getRunHistoryList is defined with useCallback and includes currentPage and pageSize in its dependency array (line 147). Every time handlePagination calls setCurrentPage(newPage) / setPageSize(newPageSize), React re-renders and produces a new getRunHistoryList reference. That new reference is in this effect's dep array, so the effect immediately re-fires and calls getRunHistoryList(filterQueries.job, 1, pageSize, …) — hardcoded to page 1. This races against the page-N call dispatched inside handlePagination, with the page-1 call typically winning and resetting the display.

Remove getRunHistoryList (and pageSize, which also changes on resize events handled by handlePagination) from this effect's deps — pagination changes are already handled inside handlePagination itself. The filter effect should only react to actual filter-value changes:

Suggested change (the client-side filtering block above is dropped; only the server-side refetch remains, reacting only to filter values):

```js
useEffect(() => {
  if (!filterQueries.job) return;
  getRunHistoryList(filterQueries.job, 1, pageSize, {
    status: filterQueries.status,
    trigger: filterQueries.trigger,
    scope: filterQueries.scope,
  });
  // eslint-disable-next-line react-hooks/exhaustive-deps
}, [
  filterQueries.status,
  filterQueries.trigger,
  filterQueries.scope,
  filterQueries.job,
]);
```

@wicky-zipstack
Contributor

wicky-zipstack commented Apr 16, 2026

A few items to consider:

1. Same repr(cls) parsing anti-pattern as #54 / #59 — celery_tasks.py:47

def _clean_name(raw):
    if "'" in raw:
        return raw.split("'")[1].split(".")[-1]

PR #59 already fixed this with cls.__name__ elsewhere. Better to write a clean name into BASE_RESULT.node_name upstream and parse zero times.

2. _clean_name duplicated — defined in both trigger_scheduled_run and _mark_failure. Extract to module level.

3. Polling has no max duration / backoff — no-code-model.jsx:370. Hardcoded 5s interval = 360 backend hits per 30-min deploy. Add a max-poll-count or exponential backoff. Also no upper-bound timeout — if backend never returns terminal status, polling runs until unmount.

4. RETRY status not handled in terminal list — no-code-model.jsx:329 — only ["SUCCESS", "FAILURE", "REVOKED"]. Button stays "Deploying…" during retries — fine if intentional, worth confirming.

5. BASE_RESULT is a module-level global — works for single-threaded Celery worker. If you ever run prefork pool with concurrency > 1 in same worker, the global will be shared/contaminated across concurrent jobs. Worth verifying worker config.

6. startswith("Source") filter is brittle — celery_tasks.py:54. A user-named SourceData model would be hidden. Use a flag on the node instead. (Tech debt — fine for now.)
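Item 1 can be demonstrated in a few lines: writing `cls.__name__` upstream makes the downstream repr parsing unnecessary (class names here are illustrative):

```python
class SourceMdoela: pass
class Mdoela: pass

for cls in (SourceMdoela, Mdoela):
    parsed = str(cls).split("'")[1].split(".")[-1]  # fragile: parses repr(cls)
    direct = cls.__name__                           # robust: no parsing at all
    assert parsed == direct
print("repr parsing and cls.__name__ agree:", Mdoela.__name__)
```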

