feat: complete agent self-upgrade closure #97
Closes the three gaps in the existing agent self-upgrade flow:
- Result/progress reporting from agent to server (new AgentMessage variants)
- Frontend UI in CapabilitiesDialog + server list badge
- Pre-flight validation + .bak.<timestamp> retention for rollback safety
Addresses 6 findings and 2 open questions:
- Add job_id (UUID v4) as the primary matching key to prevent stale messages from polluting same-version retries; keep target_version as a compatibility fallback.
- Move start_job after all fallible pre-checks so agent-offline / platform-unsupported / checksum-fetch failures no longer create phantom Running jobs.
- Extend AgentInfoUpdated to broadcast agent_version; the front-end react-query cache now sees the live Current version after an upgrade completes.
- Extend the FullSync payload with an upgrades array so WS reconnects / fresh page loads hydrate list badges correctly.
- The latest-version endpoint now auto-detects GitHub Releases and supports a latest_version_url override for self-hosted installs.
- CapabilityDenied(upgrade) immediately fails the active job (no 120 s wait).
- Relocate AgentVersionSection from the admin-only CapabilitiesDialog to the Server detail page; Members get read-only visibility, admins keep the Upgrade/Retry buttons.
- Correct i18n stance: apps/web uses react-i18next; add an en/zh key table.
Nine tasks covering protocol extension, server tracker + timeout worker, latest-version lookup service, REST/WS wiring with start_job ordering, agent hardening with preflight and timestamped .bak rollback, server integration coverage, frontend Zustand store + hooks + WS hydration, server detail UI + list badge, and a docs/QA sweep. Includes four review-driven amendments folded in up front:
- Preserve the CAP_UPGRADE pre-check in trigger_upgrade
- Fold CapabilityDenied upgrade handling into the existing match arm
- Convert handleWsMessage from private to exported in place
- Add Verifying-failure and timeout-sweep integration tests
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-openagent)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/agent/src/reporter.rs (1)
1935-1945: ⚠️ Potential issue | 🟡 Minor

Missing PreFlight stage emission before Installing.

The protocol defines five upgrade stages (Downloading, Verifying, PreFlight, Installing, Restarting), but this implementation skips PreFlight. According to the PR documentation and i18n strings, PreFlight represents pre-installation checks (e.g., backup creation). Consider emitting it after writing the temp file and before the backup/replace operation.

Proposed fix to add PreFlight stage
```diff
 #[cfg(unix)]
 {
     use std::os::unix::fs::PermissionsExt;
     std::fs::set_permissions(&tmp_path, std::fs::Permissions::from_mode(0o755))?;
 }
+emit_upgrade_progress(&tx, job_id.clone(), version, UpgradeStage::PreFlight).await;
+emit_upgrade_progress(&tx, job_id.clone(), version, UpgradeStage::Installing).await;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/agent/src/reporter.rs` around lines 1935 - 1945, The code skips emitting the PreFlight stage; add an emit_upgrade_progress call with UpgradeStage::PreFlight after the temp file has been written and before the backup/replace block so pre-installation checks are reported. Specifically, call emit_upgrade_progress(&tx, job_id.clone(), version, UpgradeStage::PreFlight).await; placed between the temp file write/flush step and the backup/replace operations that use std::fs::remove_file/rename, using job_id.clone() to avoid moving job_id prematurely.
🧹 Nitpick comments (2)
apps/web/src/components/server/server-card.tsx (1)
172-172: Keep the store subscription scoped to the badge.

Line 172 subscribes the entire ServerCardInner to upgrade-job updates, so every progress tick will re-render the whole memoized card, including charts and traffic/network work. Please extract this into a tiny badge-only child, like the UpgradeBadgeCell pattern already used in apps/web/src/routes/_authed/servers/index.tsx.

♻️ Suggested extraction

```diff
+function ServerCardUpgradeBadge({ serverId }: { serverId: string }) {
+  const upgradeJob = useUpgradeJobsStore((state) => state.jobs.get(serverId))
+  return <UpgradeJobBadge job={upgradeJob} />
+}
+
 const ServerCardInner = ({ server }: ServerCardProps) => {
   const { t } = useTranslation(['servers'])
   const { data: networkOverview = [] } = useNetworkOverview()
   const { data: realtimeData } = useNetworkRealtime(server.id)
   const { data: trafficOverview } = useTrafficOverview()
-  const upgradeJob = useUpgradeJobsStore((state) => state.jobs.get(server.id))
   // ...
         <div className="flex items-center gap-1.5">
-          <UpgradeJobBadge job={upgradeJob} />
+          <ServerCardUpgradeBadge serverId={server.id} />
           <StatusBadge online={server.online} />
         </div>
```

Please sanity-check this with React Profiler while an upgrade is streaming progress in grid view.
Also applies to: 219-222
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/src/components/server/server-card.tsx` at line 172, ServerCardInner currently calls useUpgradeJobsStore((state) => state.jobs.get(server.id)) which subscribes the whole memoized card to upgrade-job updates; extract that subscription into a tiny child component (e.g., UpgradeBadge or reuse the existing UpgradeBadgeCell pattern) that receives server.id as a prop and calls useUpgradeJobsStore there to read only the job for that id, then render the badge from that child inside ServerCardInner and remove the useUpgradeJobsStore call from ServerCardInner (also fix similar subscriptions around the code referenced at the original lines 219-222); after change, sanity-check with React Profiler to ensure only the badge re-renders on progress ticks.

docs/superpowers/plans/2026-04-14-agent-self-upgrade-closure.md (1)
2027-2028: Note: Actual implementation differs from plan's store interface.

The plan specifies an upsertJob method but the actual implementation in upgrade-jobs-store.ts uses setJob instead. The plan's jobs type is Record<string, UpgradeJobDto> but the implementation uses Map<string, UpgradeJob>. This is acceptable as implementation details evolved, but worth noting for future plan accuracy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/superpowers/plans/2026-04-14-agent-self-upgrade-closure.md` around lines 2027 - 2028, The plan's API and types are out-of-sync with the implementation: replace references to upsertJob and Record<string, UpgradeJobDto> in the plan with the actual implementation symbols (setJob and Map<string, UpgradeJob>) and update any examples like state.jobs[job.server_id] to the Map access pattern (e.g., jobs.get(job.server_id)) or otherwise document that the store uses a Map; alternatively, if you prefer the plan to drive code, change the implementation in upgrade-jobs-store.ts to expose upsertJob and use a plain Record<string, UpgradeJobDto> so names and types match—pick one approach and make the plan and implementation consistent.
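Given the Map-vs-Record divergence flagged above, a dependency-free sketch of the interface the implementation settled on (setJob/getJob over Map<string, UpgradeJob>) may help when reconciling the plan; note this is an approximation — the real store is a zustand store, and the UpgradeJob fields and union members below are inferred from this PR's WS payloads, not copied from upgrade-jobs-store.ts:

```typescript
// Hypothetical, zustand-free sketch of the Map-based store interface.
// Field names follow the upgrade_result payload in this PR; the stage and
// status unions are assumptions inferred from the review comments.
type UpgradeStage = 'downloading' | 'verifying' | 'preflight' | 'installing' | 'restarting'
type UpgradeStatus = 'running' | 'succeeded' | 'failed' | 'timeout'

interface UpgradeJob {
  server_id: string
  job_id: string
  target_version: string
  stage: UpgradeStage
  status: UpgradeStatus
  error: string | null
  backup_path: string | null
  started_at: string
  finished_at: string | null
}

function createUpgradeJobsStore() {
  // Keyed by server_id, mirroring the jobs.get(server_id) access pattern in the UI code.
  const jobs = new Map<string, UpgradeJob>()
  return {
    getJob: (serverId: string): UpgradeJob | undefined => jobs.get(serverId),
    setJob: (serverId: string, job: UpgradeJob): void => {
      jobs.set(serverId, job)
    }
  }
}
```

A Map keeps the lookup O(1) per server while avoiding the prototype-key pitfalls of a plain Record.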
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@apps/web/src/components/dashboard/widget-config-dialog.test.tsx`:
- Around line 74-89: The Checkbox mock's onChange currently toggles using
!checked which can drift from real DOM behavior; update the mock component
(Checkbox in the test file) to read the new checked state from the event (use
event.currentTarget.checked) inside the onChange handler and pass that value to
onCheckedChange so the test simulates actual input state transitions
consistently.
In `@apps/web/src/components/server/upgrade-job-badge.tsx`:
- Around line 32-35: Update the Tooltip usage to match `@base-ui/react/tooltip`'s
prop API: in the TooltipProvider component replace the invalid prop
delayDuration with delay (i.e., TooltipProvider delay={100}), and remove the
unsupported asChild prop from TooltipTrigger (use TooltipTrigger directly
wrapping the Badge instead). Keep Tooltip, TooltipTrigger, Badge and
config.variant as they are; only change the prop name on TooltipProvider and
remove asChild from TooltipTrigger.
In `@apps/web/src/components/uptime/uptime-timeline.test.tsx`:
- Around line 7-29: The test uses vi.mock(...) but never imports vi, causing
TS2304; update the test module's imports to include vi from 'vitest' (add vi to
the existing import statement or add a new import) so that the vi.mock call in
uptime-timeline.test.tsx resolves and the TypeScript typecheck passes.
In `@apps/web/src/hooks/use-servers-ws.ts`:
- Around line 351-374: TypeScript isn't narrowing `raw` in the
'upgrade_progress' branch, so replace the ad-hoc typeof checks + cast with a
user-defined type guard; add a function like isUpgradeProgressPayload(x): x is {
server_id: string; job_id: string; target_version: string; stage: string } and
call if (!isUpgradeProgressPayload(raw)) break; then safely destructure const {
server_id, job_id, target_version, stage } = raw and continue to call
useUpgradeJobsStore.getState().getJob(server_id) and setJob(...) using
UpgradeJob['stage'] as needed.
- Around line 376-407: The TypeScript error comes from casting raw to a complex
type inside the 'upgrade_result' case; after your runtime typeof checks, stop
using the inline "as { ... }" cast and instead build a properly typed payload
object from the validated raw fields (e.g. create const payload = { server_id:
raw.server_id, job_id: raw.job_id, target_version: raw.target_version, status:
raw.status, stage: raw.stage, error: raw.error ?? null, backup_path:
raw.backup_path ?? null } and then narrow stage/status into UpgradeJob['stage']
and UpgradeJob['status'] as needed), then call
useUpgradeJobsStore.getState().setJob(server_id, { ...payload, stage:
payload.stage ?? existingJob?.stage ?? 'downloading', started_at:
existingJob?.started_at ?? now, finished_at: now }); this avoids the failing
cast while preserving your runtime checks and uses the existing functions getJob
and setJob and the UpgradeJob types.
In `@apps/web/src/routes/_authed/servers/$id.tsx`:
- Around line 527-528: The useUpgradeJobsStore hook (used to derive upgradeJob)
is called after early returns (serverLoading and !server), violating Rules of
Hooks; move the useUpgradeJobsStore((state) => state.jobs.get(id)) call up to
the top of the component with the other hooks (e.g., immediately after existing
useState/useEffect or near line ~365), so it always runs on every render, then
remove the late call at the bottom (the current upgradeJob assignment) and use
the moved upgradeJob variable in the JSX/logic that follows the early returns.
In `@apps/web/src/routes/_authed/servers/index.tsx`:
- Around line 140-143: The column with id 'upgrade' currently sets header to
null which hides the label from screen readers; update the column definition
(the header property on the column with id 'upgrade' that renders
UpgradeBadgeCell in cell) to return a non-visual/visually-hidden label (e.g., a
span with an "sr-only" class or your app's visuallyHidden helper) containing the
text "Upgrade" so the column is announced to assistive tech while keeping the
visual compactness.
In `@apps/web/src/stores/upgrade-jobs-store.test.ts`:
- Around line 15-40: The helper makeJob is widening literal types to plain
string causing incompatible calls; change its type signature to use the real
model (e.g., makeJob(overrides: Partial<UpgradeJob> = {}): UpgradeJob) so the
returned object and the overrides preserve UpgradeStage and UpgradeStatus unions
instead of string; update the function return type to UpgradeJob and ensure
overrides is Partial<UpgradeJob> (or at minimum type stage/status as
UpgradeStage/UpgradeStatus) so calls to setJob and setJobs accept the result
without type errors.
In `@apps/web/src/stores/upgrade-jobs-store.ts`:
- Around line 29-31: The isFinished function currently only treats 'succeeded'
and 'failed' as terminal states; include 'timeout' so timeout jobs auto-clear.
Update the isFinished(status: UpgradeStatus) implementation (function
isFinished) to return true for status === 'succeeded' || status === 'failed' ||
status === 'timeout', ensuring UpgradeStatus 'timeout' is recognized as a
finished state and will be auto-cleared like other terminal states.
In `@crates/agent/src/reporter.rs`:
- Around line 1952-1963: The failure emission currently passes job_id: None when
cmd.spawn() fails because job_id was previously moved; locate the code around
emit_upgrade_progress and the cmd.spawn() block and preserve the job id by
cloning it before it is consumed (or change ownership to borrow) so you can pass
Some(job_id.clone()) into emit_upgrade_failure(&tx, Some(job_id.clone()),
version.to_string(), UpgradeStage::Restarting, error.to_string(),
Some(backup_path.display().to_string())). Ensure you clone the original job_id
variable right before the call that consumes it (the emit_upgrade_progress call
at/near where job_id is currently moved) so the spawn failure path has the real
job id to report.
In `@crates/server/src/router/api/server.rs`:
- Around line 571-589: Before creating a job, perform preflight checks: after
getting the sender via state.agent_manager.get_sender(&id) but before calling
state.upgrade_release_service.resolve_asset(...) and
state.upgrade_tracker.start_job(...), read the agent's current runtime/system
info (e.g., from the returned sender or agent record) to (1) detect a no-op
upgrade (agent already at the requested version) and return an immediate
success/Idempotent response, and (2) verify the agent's effective capabilities
include CAP_UPGRADE and return a suitable error (e.g., AppError::Forbidden or
AppError::Conflict) if missing; implement these checks using the existing
symbols (state.agent_manager.get_sender, CAP_UPGRADE,
state.upgrade_release_service.resolve_asset, state.upgrade_tracker.start_job,
StartUpgradeJobError::Conflict) so the code fails fast before fetching assets or
creating jobs.
In `@crates/server/src/service/upgrade_release.rs`:
- Around line 84-88: The code in upgrade_release.rs uses let-chain syntax (the
if let Some(cached) = cache.as_ref() && !cached.is_expired() { return
cached.response(); }) which requires Rust 1.88+/edition 2024; update the project
MSRV by adding rust-version = "1.88" to Cargo.toml (or document the
requirement), or alternatively rewrite the branch to avoid let-chains (e.g.,
match or nested ifs) if you cannot raise the MSRV; reference the let-chain line
around cache.as_ref(), cached.is_expired(), and cached.response() when making
the change.
In `@tests/agent-upgrade.md`:
- Around line 124-173: The checklist references WS messages and a GET endpoint
that don't exist; update the test steps and API surface to match the actual
upgrade flow driven by full_sync.upgrades, upgrade_progress, and upgrade_result
and the router that only exposes POST /api/servers/{id}/upgrade; remove or
replace references to CapabilitiesChanged/ServerUpdate WebSocket messages and
/api/servers/:id/upgrade-status, and instead instruct QA to observe the
full_sync.upgrades event stream and the upgrade_progress/upgrade_result payloads
(or the single POST call response and subsequent state changes) when validating
behavior.
- Around line 91-101: The test scenario should be changed to verify same-server
concurrent upgrade rejection because UpgradeJobTracker conflicts on server_id;
update the steps and expectations in the "Verify Concurrent Upgrade Rejection"
section to start an upgrade on a single server (e.g., Server A), then attempt to
start another upgrade on that same server (or rapidly click the upgrade button
on that same server) and assert UpgradeJobTracker rejects the second attempt
with the "Upgrade already in progress" message and the UI prevents initiating a
second upgrade while the first continues; ensure all mentions of "server B" or
different-server attempts are removed and reference UpgradeJobTracker and
server_id as the concurrency key in the scenario description.
---
Outside diff comments:
In `@crates/agent/src/reporter.rs`:
- Around line 1935-1945: The code skips emitting the PreFlight stage; add an
emit_upgrade_progress call with UpgradeStage::PreFlight after the temp file has
been written and before the backup/replace block so pre-installation checks are
reported. Specifically, call emit_upgrade_progress(&tx, job_id.clone(), version,
UpgradeStage::PreFlight).await; placed between the temp file write/flush step
and the backup/replace operations that use std::fs::remove_file/rename, using
job_id.clone() to avoid moving job_id prematurely.
---
Nitpick comments:
In `@apps/web/src/components/server/server-card.tsx`:
- Line 172: ServerCardInner currently calls useUpgradeJobsStore((state) =>
state.jobs.get(server.id)) which subscribes the whole memoized card to
upgrade-job updates; extract that subscription into a tiny child component
(e.g., UpgradeBadge or reuse the existing UpgradeBadgeCell pattern) that
receives server.id as a prop and calls useUpgradeJobsStore there to read only
the job for that id, then render the badge from that child inside
ServerCardInner and remove the useUpgradeJobsStore call from ServerCardInner
(also fix similar subscriptions around the code referenced at the original lines
219-222); after change, sanity-check with React Profiler to ensure only the
badge re-renders on progress ticks.
In `@docs/superpowers/plans/2026-04-14-agent-self-upgrade-closure.md`:
- Around line 2027-2028: The plan's API and types are out-of-sync with the
implementation: replace references to upsertJob and Record<string,
UpgradeJobDto> in the plan with the actual implementation symbols (setJob and
Map<string, UpgradeJob>) and update any examples like state.jobs[job.server_id]
to the Map access pattern (e.g., jobs.get(job.server_id)) or otherwise document
that the store uses a Map; alternatively, if you prefer the plan to drive code,
change the implementation in upgrade-jobs-store.ts to expose upsertJob and use a
plain Record<string, UpgradeJobDto> so names and types match—pick one approach
and make the plan and implementation consistent.
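Several of the prompts above reduce to very small changes. As a concrete illustration, the terminal-state fix for the store could look like this sketch ('running' is an assumed name for the non-terminal status; the real unions live in upgrade-jobs-store.ts):

```typescript
type UpgradeStatus = 'running' | 'succeeded' | 'failed' | 'timeout'

// Treat 'timeout' as terminal so timed-out jobs are auto-cleared by the
// store just like 'succeeded' and 'failed' (per the review finding above).
function isFinished(status: UpgradeStatus): boolean {
  return status === 'succeeded' || status === 'failed' || status === 'timeout'
}
```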
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 852159b3-a4ba-4c59-8477-115c746274b0
📒 Files selected for processing (41)
ENV.md
apps/docs/content/docs/cn/configuration.mdx
apps/docs/content/docs/en/configuration.mdx
apps/web/src/components/dashboard/widget-config-dialog.test.tsx
apps/web/src/components/server/agent-version-section.test.tsx
apps/web/src/components/server/agent-version-section.tsx
apps/web/src/components/server/server-card.tsx
apps/web/src/components/server/traffic-card.test.tsx
apps/web/src/components/server/upgrade-job-badge.tsx
apps/web/src/components/uptime/uptime-timeline.test.tsx
apps/web/src/hooks/use-servers-ws.test.ts
apps/web/src/hooks/use-servers-ws.ts
apps/web/src/hooks/use-upgrade-job.test.ts
apps/web/src/hooks/use-upgrade-job.ts
apps/web/src/locales/en/servers.json
apps/web/src/locales/zh/servers.json
apps/web/src/routes/_authed/servers/$id.test.tsx
apps/web/src/routes/_authed/servers/$id.tsx
apps/web/src/routes/_authed/servers/index.tsx
apps/web/src/stores/upgrade-jobs-store.test.ts
apps/web/src/stores/upgrade-jobs-store.ts
crates/agent/src/reporter.rs
crates/common/src/protocol.rs
crates/server/src/config.rs
crates/server/src/main.rs
crates/server/src/openapi.rs
crates/server/src/router/api/agent.rs
crates/server/src/router/api/mod.rs
crates/server/src/router/api/server.rs
crates/server/src/router/ws/agent.rs
crates/server/src/router/ws/browser.rs
crates/server/src/service/mod.rs
crates/server/src/service/upgrade_release.rs
crates/server/src/service/upgrade_tracker.rs
crates/server/src/state.rs
crates/server/src/task/mod.rs
crates/server/src/task/upgrade_timeout.rs
docs/superpowers/plans/2026-04-14-agent-self-upgrade-closure.md
docs/superpowers/specs/2026-04-14-agent-self-upgrade-closure-design.md
tests/README.md
tests/agent-upgrade.md
```tsx
Checkbox: ({
  checked,
  onCheckedChange,
  ...props
}: {
  checked?: boolean
  onCheckedChange?: (checked: boolean) => void
} & Record<string, unknown>) => (
  <input
    checked={checked}
    data-testid="checkbox"
    onChange={() => onCheckedChange?.(!checked)}
    type="checkbox"
    {...props}
  />
)
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify whether tri-state checkbox semantics are used in the repo.
rg -n -C3 "onCheckedChange|indeterminate|checked\s*:\s*'indeterminate'" --type=ts --type=tsx
```

Repository: ZingerLittleBee/ServerBee
Length of output: 96
🌐 Web query:

For the @base-ui/react/checkbox version used by this repository, what are the exact types and semantics for Checkbox.Root checked and onCheckedChange?
💡 Result:

For @base-ui/react/checkbox Checkbox.Root:
- checked: boolean | undefined (controlled checked state; use defaultChecked for uncontrolled)
- onCheckedChange: ((checked: boolean, eventDetails: CheckboxRootChangeEventDetails) => void) | undefined

Where CheckboxRootChangeEventDetails = BaseUIChangeEventDetails with reason: 'none'. Note: the query mentions "version used by this repository" but no specific repo was provided; types from @base-ui/react@1.1.0 and current docs match. Base UI (MUI) follows similar patterns to Radix but uses boolean (not 'indeterminate') for the checked prop.
Citations:
- 1: https://cdn.jsdelivr.net/npm/@base-ui/react@1.1.0/checkbox/root/CheckboxRoot.d.ts
- 2: https://github.com/mui/base-ui/blob/6583c18f/packages/react/src/checkbox/root/CheckboxRoot.tsx
- 3: https://base-ui.com/react/components/checkbox
🏁 Script executed:

```bash
#!/bin/bash
# Retry rg search with correct file types (use ts instead of tsx, or use generic search)
rg -n "onCheckedChange|indeterminate" --type ts apps/web/src
```

Repository: ZingerLittleBee/ServerBee
Length of output: 3601
🏁 Script executed:

```bash
#!/bin/bash
# Check package.json for `@base-ui/react` version
cat apps/web/package.json | grep -A2 -B2 "@base-ui/react"
```

Repository: ZingerLittleBee/ServerBee
Length of output: 207
🏁 Script executed:

```bash
#!/bin/bash
# Find and inspect the production checkbox component
fd -e ts -e tsx "checkbox" apps/web/src/components/ui/ 2>/dev/null | head -5
```

Repository: ZingerLittleBee/ServerBee
Length of output: 108
🏁 Script executed:

```bash
#!/bin/bash
# Look at the actual checkbox implementation
find apps/web/src/components/ui -name "*checkbox*" -type f
```

Repository: ZingerLittleBee/ServerBee
Length of output: 108
🏁 Script executed:

```bash
#!/bin/bash
# Inspect the production checkbox component
cat -n apps/web/src/components/ui/checkbox.tsx
```

Repository: ZingerLittleBee/ServerBee
Length of output: 1577
Fix the checkbox mock to align with actual input state transitions.
On Line 85, using !checked to toggle can diverge from the real HTML input's state. Use event.currentTarget.checked to ensure the mock's behavior matches the actual Checkbox component semantics.
Proposed fix

```diff
 <input
   checked={checked}
   data-testid="checkbox"
-  onChange={() => onCheckedChange?.(!checked)}
+  onChange={(event) => onCheckedChange?.(event.currentTarget.checked)}
   type="checkbox"
   {...props}
 />
```
```ts
case 'upgrade_progress': {
  if (
    typeof raw.server_id !== 'string' ||
    typeof raw.job_id !== 'string' ||
    typeof raw.target_version !== 'string' ||
    typeof raw.stage !== 'string'
  ) {
    break
  }
  const { server_id, target_version, stage } = raw as {
    server_id: string
    job_id: string
    target_version: string
    stage: string
  }
  const existingJob = useUpgradeJobsStore.getState().getJob(server_id)
  if (existingJob) {
    useUpgradeJobsStore.getState().setJob(server_id, {
      ...existingJob,
      stage: stage as UpgradeJob['stage'],
      target_version
    })
  }
  break
```
TypeScript narrowing issue causes pipeline failure at line 360.
The pipeline shows TS2352 error: TypeScript doesn't recognize that the validation checks (lines 352-357) narrow the type. The cast at line 360 fails because TypeScript sees raw as { type: string } & Record<string, unknown> which doesn't satisfy the target type.
🐛 Proposed fix to satisfy TypeScript

```diff
 case 'upgrade_progress': {
   if (
     typeof raw.server_id !== 'string' ||
     typeof raw.job_id !== 'string' ||
     typeof raw.target_version !== 'string' ||
     typeof raw.stage !== 'string'
   ) {
     break
   }
-  const { server_id, target_version, stage } = raw as {
-    server_id: string
-    job_id: string
-    target_version: string
-    stage: string
-  }
+  const server_id = raw.server_id as string
+  const target_version = raw.target_version as string
+  const stage = raw.stage as string
   const existingJob = useUpgradeJobsStore.getState().getJob(server_id)
   if (existingJob) {
     useUpgradeJobsStore.getState().setJob(server_id, {
       ...existingJob,
       stage: stage as UpgradeJob['stage'],
       target_version
     })
   }
   break
 }
```
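The AI-agent prompt for this file suggests a user-defined type guard instead of per-field casts; a sketch of that alternative (payload shape copied from the runtime checks above, guard name as proposed in the prompt — not code from the repo) could be:

```typescript
interface UpgradeProgressPayload {
  server_id: string
  job_id: string
  target_version: string
  stage: string
}

// Narrows an unknown WS payload so the switch branch needs no `as` cast:
// after `if (!isUpgradeProgressPayload(raw)) break`, `raw` is fully typed.
function isUpgradeProgressPayload(x: unknown): x is UpgradeProgressPayload {
  if (typeof x !== 'object' || x === null) return false
  const r = x as Record<string, unknown>
  return (
    typeof r.server_id === 'string' &&
    typeof r.job_id === 'string' &&
    typeof r.target_version === 'string' &&
    typeof r.stage === 'string'
  )
}
```

Unlike the cast-based fix, a guard centralizes the validation and lets the compiler prove the destructure is safe.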
🧰 Tools
🪛 GitHub Actions: CI
[error] 360-360: TS2352: Conversion may be a mistake when casting '{ type: string; } & Record<string, unknown>' to '{ server_id: string; job_id: string; target_version: string; stage: string; }'. Missing properties: server_id, job_id, target_version, stage.
```ts
case 'upgrade_result': {
  if (
    typeof raw.server_id !== 'string' ||
    typeof raw.job_id !== 'string' ||
    typeof raw.target_version !== 'string' ||
    typeof raw.status !== 'string'
  ) {
    break
  }
  const { server_id, job_id, target_version, status, stage, error, backup_path } = raw as {
    server_id: string
    job_id: string
    target_version: string
    status: string
    stage?: string
    error?: string | null
    backup_path?: string | null
  }
  const existingJob = useUpgradeJobsStore.getState().getJob(server_id)
  const now = new Date().toISOString()
  useUpgradeJobsStore.getState().setJob(server_id, {
    server_id,
    job_id,
    target_version,
    stage: (stage as UpgradeJob['stage']) ?? existingJob?.stage ?? 'downloading',
    status: status as UpgradeJob['status'],
    error: error ?? null,
    backup_path: backup_path ?? null,
    started_at: existingJob?.started_at ?? now,
    finished_at: now
  })
  break
```
TypeScript narrowing issue causes pipeline failure at line 385.
Same issue as upgrade_progress - the cast fails TypeScript validation despite the runtime checks.
🐛 Proposed fix to satisfy TypeScript

```diff
 case 'upgrade_result': {
   if (
     typeof raw.server_id !== 'string' ||
     typeof raw.job_id !== 'string' ||
     typeof raw.target_version !== 'string' ||
     typeof raw.status !== 'string'
   ) {
     break
   }
-  const { server_id, job_id, target_version, status, stage, error, backup_path } = raw as {
-    server_id: string
-    job_id: string
-    target_version: string
-    status: string
-    stage?: string
-    error?: string | null
-    backup_path?: string | null
-  }
+  const server_id = raw.server_id as string
+  const job_id = raw.job_id as string
+  const target_version = raw.target_version as string
+  const status = raw.status as string
+  const stage = raw.stage as string | undefined
+  const error = raw.error as string | null | undefined
+  const backup_path = raw.backup_path as string | null | undefined
   const existingJob = useUpgradeJobsStore.getState().getJob(server_id)
   const now = new Date().toISOString()
   useUpgradeJobsStore.getState().setJob(server_id, {
     server_id,
     job_id,
     target_version,
     stage: (stage as UpgradeJob['stage']) ?? existingJob?.stage ?? 'downloading',
     status: status as UpgradeJob['status'],
     error: error ?? null,
     backup_path: backup_path ?? null,
     started_at: existingJob?.started_at ?? now,
     finished_at: now
   })
   break
 }
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
case 'upgrade_result': {
  if (
    typeof raw.server_id !== 'string' ||
    typeof raw.job_id !== 'string' ||
    typeof raw.target_version !== 'string' ||
    typeof raw.status !== 'string'
  ) {
    break
  }
  const server_id = raw.server_id as string
  const job_id = raw.job_id as string
  const target_version = raw.target_version as string
  const status = raw.status as string
  const stage = raw.stage as string | undefined
  const error = raw.error as string | null | undefined
  const backup_path = raw.backup_path as string | null | undefined
  const existingJob = useUpgradeJobsStore.getState().getJob(server_id)
  const now = new Date().toISOString()
  useUpgradeJobsStore.getState().setJob(server_id, {
    server_id,
    job_id,
    target_version,
    stage: (stage as UpgradeJob['stage']) ?? existingJob?.stage ?? 'downloading',
    status: status as UpgradeJob['status'],
    error: error ?? null,
    backup_path: backup_path ?? null,
    started_at: existingJob?.started_at ?? now,
    finished_at: now
  })
  break
}
```
🧰 Tools
🪛 GitHub Actions: CI
[error] 385-385: TS2352: Conversion may be a mistake when casting '{ type: string; } & Record<string, unknown>' to '{ server_id: string; job_id: string; target_version: string; status: string; stage?: string | undefined; error?: string | null | undefined; backup_path?: string | null | undefined; }'. Missing properties: server_id, job_id, target_version, status.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/src/hooks/use-servers-ws.ts` around lines 376 - 407, The TypeScript
error comes from casting raw to a complex type inside the 'upgrade_result' case;
after your runtime typeof checks, stop using the inline "as { ... }" cast and
instead build a properly typed payload object from the validated raw fields
(e.g. create const payload = { server_id: raw.server_id, job_id: raw.job_id,
target_version: raw.target_version, status: raw.status, stage: raw.stage, error:
raw.error ?? null, backup_path: raw.backup_path ?? null } and then narrow
stage/status into UpgradeJob['stage'] and UpgradeJob['status'] as needed), then
call useUpgradeJobsStore.getState().setJob(server_id, { ...payload, stage:
payload.stage ?? existingJob?.stage ?? 'downloading', started_at:
existingJob?.started_at ?? now, finished_at: now }); this avoids the failing
cast while preserving your runtime checks and uses the existing functions getJob
and setJob and the UpgradeJob types.
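The cast-free narrowing pattern the prompt describes can be sketched as follows. This is a simplified stand-in, not the real `use-servers-ws.ts` code: `UpgradeJob`, the stage list, and `parseUpgradeResult` are illustrative types and names.

```typescript
// Sketch of the cast-free narrowing pattern: destructure to locals, let
// `typeof` checks narrow each one, then build a typed payload object.
type UpgradeStage = 'downloading' | 'verifying' | 'installing' | 'restarting'
type UpgradeStatus = 'running' | 'succeeded' | 'failed'

interface UpgradeJob {
  server_id: string
  job_id: string
  target_version: string
  stage: UpgradeStage
  status: UpgradeStatus
  error: string | null
}

const STAGES: readonly UpgradeStage[] = ['downloading', 'verifying', 'installing', 'restarting']

function parseUpgradeResult(raw: Record<string, unknown>): UpgradeJob | null {
  const { server_id, job_id, target_version, status, stage, error } = raw
  if (
    typeof server_id !== 'string' ||
    typeof job_id !== 'string' ||
    typeof target_version !== 'string' ||
    typeof status !== 'string'
  ) {
    return null // malformed message: drop it, as the `break` does upstream
  }
  return {
    server_id,
    job_id,
    target_version,
    // Fall back to 'downloading' for unknown stages, mirroring the handler.
    stage: STAGES.includes(stage as UpgradeStage) ? (stage as UpgradeStage) : 'downloading',
    status: status === 'succeeded' || status === 'failed' ? status : 'running',
    error: typeof error === 'string' ? error : null
  }
}
```

Because the `typeof` checks narrow destructured locals rather than properties behind an index signature, no aggregate `as { ... }` cast is needed and TS2352 cannot fire.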
```rust
if let Err(error) = cmd.spawn() {
    emit_upgrade_failure(
        &tx,
        None,
        version.to_string(),
        UpgradeStage::Restarting,
        error.to_string(),
        Some(backup_path.display().to_string()),
    )
    .await;
    return Err(error.into());
}
```
`job_id` is `None` on spawn failure instead of the actual job ID.
When the new process fails to spawn, the failure is emitted with `job_id: None`, which prevents the server from correlating the failure with the tracked job. The `job_id` should be passed through.
Proposed fix

```diff
-if let Err(error) = cmd.spawn() {
+let job_id_for_failure = job_id.clone();
+if let Err(error) = cmd.spawn() {
     emit_upgrade_failure(
         &tx,
-        None,
+        job_id_for_failure,
         version.to_string(),
         UpgradeStage::Restarting,
         error.to_string(),
         Some(backup_path.display().to_string()),
     )
     .await;
     return Err(error.into());
 }
```

Note: Since `job_id` is consumed by `emit_upgrade_progress` on line 1945, you'll need to clone it earlier or restructure the ownership.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/agent/src/reporter.rs` around lines 1952 - 1963, The failure emission
currently passes job_id: None when cmd.spawn() fails because job_id was
previously moved; locate the code around emit_upgrade_progress and the
cmd.spawn() block and preserve the job id by cloning it before it is consumed
(or change ownership to borrow) so you can pass Some(job_id.clone()) into
emit_upgrade_failure(&tx, Some(job_id.clone()), version.to_string(),
UpgradeStage::Restarting, error.to_string(),
Some(backup_path.display().to_string())). Ensure you clone the original job_id
variable right before the call that consumes it (the emit_upgrade_progress call
at/near where job_id is currently moved) so the spawn failure path has the real
job id to report.
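The ownership fix the prompt asks for reduces to cloning the id before the call that consumes it. A minimal, self-contained sketch of that pattern, where `report_progress` and `report_failure` are hypothetical stand-ins for the real emitters:

```rust
// Stand-in for emit_upgrade_progress: takes the Option by value, consuming it.
fn report_progress(job_id: Option<String>) -> String {
    format!("progress job_id={:?}", job_id)
}

// Stand-in for emit_upgrade_failure on the error path.
fn report_failure(job_id: Option<String>) -> String {
    format!("failure job_id={:?}", job_id)
}

fn run(job_id: Option<String>, spawn_ok: bool) -> String {
    // Clone *before* the consuming call so the error path still has the id.
    let job_id_for_failure = job_id.clone();
    let _ = report_progress(job_id); // `job_id` is moved here
    if !spawn_ok {
        // Without the earlier clone, only `None` would be available here.
        return report_failure(job_id_for_failure);
    }
    "spawned".to_string()
}
```

An alternative is changing the emitters to borrow `Option<&str>`, which removes the clone entirely; the clone is simply the smaller diff.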
```rust
let asset = state
    .upgrade_release_service
    .resolve_asset(version, &asset_name)
    .await?;

let sender = state
    .agent_manager
    .get_sender(&id)
    .ok_or_else(|| AppError::NotFound("Agent not connected".into()))?;

let job = state
    .upgrade_tracker
    .start_job(&id, version.to_string())
    .map_err(|error| match error {
        StartUpgradeJobError::Conflict(existing) => AppError::Conflict(format!(
            "Upgrade already running for server {} (job_id={}, target_version={})",
            existing.server_id, existing.job_id, existing.target_version
        )),
    })?;
```
Fail fast on no-op or locally-denied upgrades before job creation.
This block can still fetch release metadata and open a job even when the connected agent is already on the requested version or when its effective runtime capabilities do not include CAP_UPGRADE. The first case can self-resolve as "succeeded" on the next SystemInfo; the second just creates a doomed job that the agent will bounce with CapabilityDenied.
🧭 Suggested preflight checks

```diff
+if server
+    .agent_version
+    .as_deref()
+    .is_some_and(|current| normalize_version(current) == version)
+{
+    return Err(AppError::Conflict(
+        "Agent is already running the requested version".into(),
+    ));
+}
+
+if let Some(effective_caps) = state.agent_manager.get_effective_capabilities(&id)
+    && !has_capability(effective_caps, CAP_UPGRADE)
+{
+    return Err(AppError::Forbidden(
+        "Upgrade capability is disabled by the connected agent".into(),
+    ));
+}
+
 let asset = state
     .upgrade_release_service
     .resolve_asset(version, &asset_name)
```

As per coding guidelines: validate capabilities on both server and agent sides.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/server/src/router/api/server.rs` around lines 571 - 589, Before
creating a job, perform preflight checks: after getting the sender via
state.agent_manager.get_sender(&id) but before calling
state.upgrade_release_service.resolve_asset(...) and
state.upgrade_tracker.start_job(...), read the agent's current runtime/system
info (e.g., from the returned sender or agent record) to (1) detect a no-op
upgrade (agent already at the requested version) and return an immediate
success/Idempotent response, and (2) verify the agent's effective capabilities
include CAP_UPGRADE and return a suitable error (e.g., AppError::Forbidden or
AppError::Conflict) if missing; implement these checks using the existing
symbols (state.agent_manager.get_sender, CAP_UPGRADE,
state.upgrade_release_service.resolve_asset, state.upgrade_tracker.start_job,
StartUpgradeJobError::Conflict) so the code fails fast before fetching assets or
creating jobs.
```rust
if let Some(cached) = cache.as_ref()
    && !cached.is_expired()
{
    return cached.response();
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
# Check Rust version requirements in the project
cat Cargo.toml | grep -i "rust-version\|edition"
fd "Cargo.toml" crates --exec grep -l "rust-version" {} \; 2>/dev/null | head -5 | xargs cat 2>/dev/null | grep -i "rust-version"
rustc --version 2>/dev/null || echo "rustc not found in sandbox"
```

Repository: ZingerLittleBee/ServerBee
Length of output: 112
🌐 Web query:
Rust let-chains stabilized version when released
💡 Result:
Rust let-chains (the let_chains feature, allowing &&-chaining of let statements in if and while conditions) was stabilized in Rust 1.88.0, released on June 26, 2025. It is available without a feature flag only in the Rust 2024 edition and later. The stabilization PR (#132833) was merged on April 22, 2025, and included in the 1.88.0 release notes. Earlier partial stabilization attempts (e.g., PR #94927 for 1.64) were limited or reverted due to issues with drop order semantics, finalized properly for edition 2024.
Citations:
- 1: Stabilize let chains in the 2024 edition rust-lang/rust#132833
- 2: Tracking issue for release notes of #132833: Stabilize let chains in the 2024 edition rust-lang/rust#139951
- 3: https://doc.rust-lang.org/beta/edition-guide/rust-2024/let-chains.html
- 4: https://www.reddit.com/r/rust/comments/1k54qqo/let_chains_are_stabilized/
- 5: https://doc.rust-lang.org/stable/edition-guide/rust-2024/let-chains.html
- 6: https://blog.rust-lang.org/releases
- 7: https://releases.rs/docs/1.88.0/
- 8: https://blog.rust-lang.org/2025/06/26/Rust-1.88.0
- 9: https://doc.rust-lang.org/stable/releases.html
- 10: https://chromium.googlesource.com/external/github.com/rust-lang/rust/+/refs/tags/1.88.0/RELEASES.md
- 11: Stabilize let chains in the 2024 edition rust-lang/rust#132833
🌐 Web query:
Rust 2024 edition MSRV minimum supported version requirement
💡 Result:
There is no minimum supported Rust version (MSRV) requirement for using the Rust 2024 edition. The Rust 2024 edition was stabilized and released in Rust 1.85.0 (February 20, 2025). Any Rust version >=1.85.0 fully supports compiling code with edition="2024" in Cargo.toml. The rust-version field in Cargo.toml is optional and declares the MSRV that crate authors choose for their package; it is not tied to the edition. Edition 2024 enables Cargo's rust-version-aware resolver by default (resolver="3"), which respects dependencies' rust-version during resolution, but this does not impose an MSRV on the project itself. Projects can set edition="2024" without specifying rust-version (defaulting to no declared MSRV), and it will compile on Rust 1.85+. Older Rust versions (<1.85) cannot compile edition 2024 code due to lacking edition support, but there is no further restriction.
Citations:
- 1: https://blog.rust-lang.org/2025/02/20/Rust-1.85.0/
- 2: https://doc.rust-lang.org/edition-guide/rust-2024/index.html
- 3: https://doc.rust-lang.org/stable/edition-guide/rust-2024/index.html
- 4: https://doc.rust-lang.org/beta/edition-guide/rust-2024/cargo-resolver.html
- 5: https://doc.rust-lang.org/stable/edition-guide/editions/creating-a-new-project.html
- 6: https://doc.rust-lang.org/cargo/reference/rust-version.html
- 7: https://blog.rust-lang.org/2025/01/09/Rust-1.84.0/
- 8: Stabilize the 2024 edition rust-lang/rust#133349
- 9: https://doc.rust-lang.org/stable/edition-guide/rust-2024/
Update MSRV or Rust version requirement to match let-chains stabilization in edition 2024.
Let-chains syntax was stabilized in Rust 1.88.0 (released June 26, 2025) and is available without a feature flag only in the 2024 edition. The project currently uses edition = "2024" without an explicit rust-version field in Cargo.toml. To ensure the code compiles, the project must require Rust 1.88.0 or later. Set rust-version = "1.88" in Cargo.toml or document the MSRV requirement.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/server/src/service/upgrade_release.rs` around lines 84 - 88, The code
in upgrade_release.rs uses let-chain syntax (the if let Some(cached) =
cache.as_ref() && !cached.is_expired() { return cached.response(); }) which
requires Rust 1.88+/edition 2024; update the project MSRV by adding rust-version
= "1.88" to Cargo.toml (or document the requirement), or alternatively rewrite
the branch to avoid let-chains (e.g., match or nested ifs) if you cannot raise
the MSRV; reference the let-chain line around cache.as_ref(),
cached.is_expired(), and cached.response() when making the change.
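If raising the MSRV is not desired, the let-chain can be rewritten as nested `if`s with identical behavior, which compiles on pre-1.88 toolchains. A minimal sketch, with `Cached` as a hypothetical stand-in for the real cache entry type:

```rust
// Simplified stand-in for the cache entry in upgrade_release.rs.
struct Cached {
    expired: bool,
    body: &'static str,
}

impl Cached {
    fn is_expired(&self) -> bool {
        self.expired
    }
    fn response(&self) -> &'static str {
        self.body
    }
}

// Equivalent of:
//   if let Some(cached) = cache.as_ref() && !cached.is_expired() { ... }
// expressed as nested ifs, needing no let-chain support.
fn lookup(cache: &Option<Cached>) -> Option<&'static str> {
    if let Some(cached) = cache.as_ref() {
        if !cached.is_expired() {
            return Some(cached.response());
        }
    }
    None
}
```

Drop order differs subtly between the two forms in general, but for a shared reference like `cache.as_ref()` that difference is immaterial here.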
### 6. Verify Concurrent Upgrade Rejection

**Steps:**
1. Start an upgrade on server A
2. While upgrade is in progress, attempt to start upgrade on server B
3. Or rapidly click upgrade button multiple times on same server

**Expected:**
- Second upgrade attempt rejected with "Upgrade already in progress" message
- UI prevents concurrent upgrade initiation
- First upgrade continues unaffected
Make this scenario same-server.
UpgradeJobTracker conflicts on server_id, so an upgrade on server B should be allowed. The rejection case here is only "start another upgrade on the same server while one is already running."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/agent-upgrade.md` around lines 91 - 101, The test scenario should be
changed to verify same-server concurrent upgrade rejection because
UpgradeJobTracker conflicts on server_id; update the steps and expectations in
the "Verify Concurrent Upgrade Rejection" section to start an upgrade on a
single server (e.g., Server A), then attempt to start another upgrade on that
same server (or rapidly click the upgrade button on that same server) and assert
UpgradeJobTracker rejects the second attempt with the "Upgrade already in
progress" message and the UI prevents initiating a second upgrade while the
first continues; ensure all mentions of "server B" or different-server attempts
are removed and reference UpgradeJobTracker and server_id as the concurrency key
in the scenario description.
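The per-server conflict semantics can be illustrated with a toy tracker keyed by `server_id`. `Tracker` below is a hypothetical simplification, not the real `UpgradeJobTracker`:

```rust
use std::collections::HashMap;

// Toy tracker: one active job per server_id; other servers are unaffected.
struct Tracker {
    active: HashMap<String, String>, // server_id -> job_id
}

impl Tracker {
    fn new() -> Self {
        Tracker { active: HashMap::new() }
    }

    // Conflicts only when the *same* server already has an active job.
    fn start_job(&mut self, server_id: &str, job_id: &str) -> Result<(), String> {
        if let Some(existing) = self.active.get(server_id) {
            return Err(format!("Upgrade already in progress (job_id={existing})"));
        }
        self.active.insert(server_id.to_string(), job_id.to_string());
        Ok(())
    }
}
```

Under this model, starting an upgrade on server B while server A upgrades succeeds, which is why the QA scenario's rejection expectation only holds for same-server attempts.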
### 8. Test WebSocket Real-Time Updates

**Steps:**
1. Open browser DevTools Network tab
2. Connect to WebSocket `/api/ws/servers`
3. Trigger upgrade from another browser/session
4. Monitor WebSocket messages

**Expected:**
- `CapabilitiesChanged` message received when upgrade starts (capability temporarily disabled)
- `ServerUpdate` messages with upgrade progress in payload
- `CapabilitiesChanged` message received when upgrade completes (capability re-enabled)
- UI updates in real-time without page refresh

---

### 9. Verify Capability Check

**Steps:**
1. Disable `CAP_UPGRADE` on a server (via database or API)
2. Navigate to that server's detail page

**Expected:**
- "Upgrade Agent" button is hidden
- Upgrade option not available in UI
- Attempting upgrade via API returns capability error

---

### 10. Rollback Verification

**Steps:**
1. Trigger upgrade
2. During `installing` stage, force agent disconnect (kill process)
3. Restart agent manually

**Expected:**
- Agent starts with previous version (backup restored)
- Server detects version mismatch on reconnect
- Upgrade can be retried

---

## API Endpoints

| Endpoint | Method | Auth | Description |
|----------|--------|------|-------------|
| `/api/servers/:id/upgrade` | POST | Admin | Trigger agent upgrade |
| `/api/servers/:id/upgrade-status` | GET | Admin | Get current upgrade status |
| `/api/ws/servers` | WS | Session | Real-time upgrade progress |
Update the checklist to the actual WS/API surface.
The upgrade flow in this PR is driven by `full_sync.upgrades`, `upgrade_progress`, and `upgrade_result`, and the provided router only exposes `POST /api/servers/{id}/upgrade`. Looking for `CapabilitiesChanged`, `ServerUpdate` progress payloads, or `GET /api/servers/:id/upgrade-status` will send QA chasing behavior that is not implemented here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/agent-upgrade.md` around lines 124 - 173, The checklist references WS
messages and a GET endpoint that don't exist; update the test steps and API
surface to match the actual upgrade flow driven by full_sync.upgrades,
upgrade_progress, and upgrade_result and the router that only exposes POST
/api/servers/{id}/upgrade; remove or replace references to
CapabilitiesChanged/ServerUpdate WebSocket messages and
/api/servers/:id/upgrade-status, and instead instruct QA to observe the
full_sync.upgrades event stream and the upgrade_progress/upgrade_result payloads
(or the single POST call response and subsequent state changes) when validating
behavior.
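What QA should observe can be sketched against the message types this PR actually emits (`full_sync`, `upgrade_progress`, `upgrade_result`). The field sets below are illustrative, not the exact wire format:

```typescript
// Assumed message shapes; only the type discriminants come from the PR.
type WsMessage =
  | { type: 'full_sync'; upgrades: Array<{ server_id: string; job_id: string }> }
  | { type: 'upgrade_progress'; server_id: string; stage: string }
  | { type: 'upgrade_result'; server_id: string; status: string }

// Collapse a socket transcript into the human-checkable events QA expects:
// one hydration line on (re)connect, then per-server stage and final status.
function summarize(messages: WsMessage[]): string[] {
  const seen: string[] = []
  for (const msg of messages) {
    switch (msg.type) {
      case 'full_sync':
        seen.push(`hydrated ${msg.upgrades.length} job(s)`)
        break
      case 'upgrade_progress':
        seen.push(`${msg.server_id}: ${msg.stage}`)
        break
      case 'upgrade_result':
        seen.push(`${msg.server_id}: ${msg.status}`)
        break
    }
  }
  return seen
}
```

A passing manual run would show the `full_sync` hydration first (on page load or reconnect), followed by `upgrade_progress` stages and a terminal `upgrade_result`, with no `CapabilitiesChanged` or `ServerUpdate` traffic involved.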
Summary

Test Plan

- `cargo test --workspace`
- `bun run test`
- `bun run typecheck`

Summary by CodeRabbit
New Features
Documentation
Localization