feat: lazy pool init and /drain endpoint for session cleanup #2
ImranJM1425 wants to merge 6 commits into BrowserCash:main
Conversation
- Pool no longer pre-warms sessions on startup (0 sessions until first scrape)
- Added POST /drain endpoint to close all Browser.cash sessions via API
- Graceful shutdown now closes remote sessions, not just local pool
- getPool() is now async with lazy initialization
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
📝 Walkthrough

Changes

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant App as Application
    participant Pool as Pool Manager
    participant SessionPool as SessionPool
    participant BrowserAPI as Browser API
    rect rgba(0, 150, 200, 0.5)
        Note over App,Pool: Old Flow (eager init)
        App->>Pool: await initPool(POOL_SIZE)
        Pool->>SessionPool: new SessionPool()
        Pool->>SessionPool: await pool.init()
        SessionPool-->>Pool: ready
        Pool-->>App: pool instance
    end
    rect rgba(200, 100, 0, 0.5)
        Note over App,Pool: New Flow (lazy init)
        App->>Pool: configurePool(POOL_SIZE)
        Pool-->>App: returns
        Client->>App: GET /crawl
        App->>Pool: await getPool()
        Pool->>SessionPool: new SessionPool() / await pool.init()
        SessionPool-->>Pool: ready
        Pool-->>App: pool instance
        App->>SessionPool: pool.acquire()
        SessionPool-->>App: session
    end
    rect rgba(200, 0, 100, 0.5)
        Note over App,BrowserAPI: Drain Endpoint
        Client->>App: POST /drain
        App->>Pool: await closeAllSessions()
        Pool->>BrowserAPI: GET /v1/browser/sessions (paged)
        BrowserAPI-->>Pool: session lists
        Pool->>BrowserAPI: DELETE /v1/browser/sessions/{id} (with timeout)
        BrowserAPI-->>Pool: delete response
        Pool-->>App: { closed: N }
        App-->>Client: { ok: true, closed: N }
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (warning)
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/crawl.ts`:
- Around line 185-186: Move the pool initialization and acquisition into the
route's try block: call await getPool() and await pool.acquire() inside the
existing try that handles the scrape so any errors hit the scrape_failed
response and perf summary, and remove the preregistered const pool/session
before the try. Also update the finally/release logic to only call
session.release() (or pool.release(session)) when the local session variable is
defined/non-null to avoid releasing a non-acquired session; keep the rest of the
perf summary and error handling intact.
In `@src/index.ts`:
- Around line 17-21: The /drain route is an unauthenticated remote kill switch;
require an admin authentication check before calling closeAllSessions(). Add a
guard in the app.post('/drain', ...) handler that validates a configured admin
secret (e.g., process.env.ADMIN_API_KEY) against a supplied header
(Authorization: Bearer <token> or X-Admin-Token) using a constant-time
comparison, return 401/403 for missing/invalid tokens, and only call
closeAllSessions() when the token matches; alternatively restrict access to
localhost if intended for local use. Ensure you reference the existing
app.post('/drain') handler and the closeAllSessions function when implementing
the check.
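A sketch of the constant-time token check described above, using Node's `crypto.timingSafeEqual`; the header and env-var names (X-Admin-Token, ADMIN_API_KEY) are the review's suggestions rather than names that necessarily exist in src/index.ts:

```typescript
import { timingSafeEqual } from 'node:crypto';

// Returns true only when a non-empty admin key is configured and the
// supplied token matches it; the comparison is constant-time for equal lengths.
function isAuthorizedDrain(supplied: string | undefined, adminKey: string | undefined): boolean {
  if (!adminKey || !supplied) return false;
  const a = Buffer.from(supplied);
  const b = Buffer.from(adminKey);
  // timingSafeEqual throws on length mismatch, so reject unequal lengths first.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```

In the /drain handler this would gate `closeAllSessions()`: read the token from the `X-Admin-Token` header, return 401 when `isAuthorizedDrain(...)` is false, and only drain on a match.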
In `@src/pool.ts`:
- Around line 76-99: closeAllSessions() currently can hang (no per-request
timeout), over-count closures (increments closed without checking DELETE
response), and stops early due to a hard page < 5 cap; fix by adding a
per-request timeout/abort (e.g., AbortController) for both GET and DELETE
fetches, only increment the closed counter when the DELETE response is
successful (response.ok / status 2xx), and remove the fixed page cap by
iterating until page >= (data.totalPages ?? 1) - 1 or using a do/while driven by
totalPages; also consider an overall timeout for closeAllSessions() to prevent
indefinite hangs. Use the existing function name closeAllSessions, the variables
BROWSER_API_KEY, sessions/active, and the DELETE fetch that uses sessionId to
locate and update the exact spots to change.
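Taken together, the three fixes might look like the sketch below; the endpoint paths, the `totalPages` response field, and the injected fetch function are assumptions made to keep the example self-contained, not the actual src/pool.ts code:

```typescript
// Minimal fetch-like shape so the drain loop can be exercised without a network.
type FetchLike = (url: string, init?: { method?: string; signal?: AbortSignal }) => Promise<{
  ok: boolean;
  json(): Promise<any>;
}>;

const CLOSE_TIMEOUT_MS = 10_000;

async function closeAllSessionsSketch(fetchFn: FetchLike): Promise<{ closed: number }> {
  let closed = 0;
  let page = 1;           // assuming 1-based pagination
  let totalPages = 1;
  do {
    const res = await fetchFn(`/v1/browser/sessions?page=${page}`, {
      signal: AbortSignal.timeout(CLOSE_TIMEOUT_MS),  // per-request timeout
    });
    if (!res.ok) break;   // fail fast instead of parsing an error body
    const data = await res.json();
    totalPages = data.totalPages ?? 1;
    for (const s of data.sessions ?? []) {
      const delRes = await fetchFn(`/v1/browser/session?sessionId=${s.sessionId}`, {
        method: 'DELETE',
        signal: AbortSignal.timeout(CLOSE_TIMEOUT_MS),
      });
      if (delRes.ok) closed++;  // only count confirmed closures
    }
    page++;
  } while (page <= totalPages); // do/while driven by totalPages, no hard page cap
  return { closed };
}
```

`AbortSignal.timeout()` (Node 17.3+) covers the per-request timeout; an overall deadline for the whole drain would be a separate `AbortController` shared across iterations.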
- Around line 23-45: The lazy-initialization in getPool currently races because
multiple callers can await pool.init() concurrently and a failed init leaves a
half-created pool; introduce a single-flight init promise (e.g.,
poolInitPromise) used by getPool so the first caller creates and assigns
poolInitPromise before awaiting, subsequent callers await that same promise, and
on rejection clear pool and poolInitPromise so retries start fresh; update
getPool to set poolInitPromise when starting init, await it instead of calling
pool.init() directly in multiple places, and ensure poolInitialized is only set
on resolved init.
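The single-flight pattern described above might look like this; `SessionPool` here is a stub standing in for the real class in src/pool.ts:

```typescript
// Stub standing in for the real SessionPool.
class SessionPool {
  static initCount = 0;
  async init(): Promise<void> {
    SessionPool.initCount++; // pretend to warm sessions here
  }
}

let pool: SessionPool | null = null;
let poolInitPromise: Promise<SessionPool> | null = null;

async function getPool(): Promise<SessionPool> {
  if (pool) return pool;
  if (!poolInitPromise) {
    // The first caller assigns the promise synchronously, so concurrent
    // callers all await the same in-flight init instead of starting their own.
    poolInitPromise = (async () => {
      const p = new SessionPool();
      await p.init();
      pool = p; // only marked initialized on a resolved init
      return p;
    })().catch((err) => {
      // A failed init clears both singletons so the next call retries fresh.
      pool = null;
      poolInitPromise = null;
      throw err;
    });
  }
  return poolInitPromise;
}
```

Because `poolInitPromise` is set before the first `await`, two callers racing through `getPool()` share one `init()` rather than double-initializing.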
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ca9adc5a-8e9a-4ce8-98bd-ed66583d732b
📒 Files selected for processing (3)
- src/crawl.ts
- src/index.ts
- src/pool.ts
…eanup
- crawl.ts: move getPool()/acquire() inside try block so init failures hit the scrape_failed error handler; guard release with null check
- index.ts: add DRAIN_TOKEN auth to /drain endpoint via X-Drain-Token header (optional; backward compatible when env var not set)
- pool.ts getPool(): single-flight init promise to prevent concurrent callers from double-initializing; failed init resets state cleanly
- pool.ts closeAllSessions(): add 10s AbortSignal.timeout on all fetches, only count closed when response.ok, use do/while driven by totalPages instead of hard page<5 cap
Browser.cash API uses 1-based pagination. page=0 returns a ZodError validation error which was silently caught, causing drain to always report closed:0 even with orphaned sessions.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/pool.ts (1)
65-70: ⚠️ Potential issue | 🟠 Major

Don't let a local shutdown failure abort the remote drain.

If `pool.shutdown()` throws, `closeAllSessions()` exits before it reaches the REST cleanup, and both singleton vars stay populated. That leaves remote sessions open and can make later `getPool()` calls reuse a stale pool.

🛠️ Suggested fix

```diff
 export async function shutdownPool(): Promise<void> {
   if (pool) {
-    await pool.shutdown();
-    pool = null;
-    poolInitPromise = null;
+    try {
+      await pool.shutdown();
+    } finally {
+      pool = null;
+      poolInitPromise = null;
+    }
   }
 }

 export async function closeAllSessions(): Promise<{ closed: number }> {
-  await shutdownPool();
-
   let closed = 0;
   try {
+    try {
+      await shutdownPool();
+    } catch (err) {
+      if (DEBUG_LOG) {
+        console.error('[pool] Error shutting down local pool:', err);
+      }
+    }
+
     let page = 1;
```

Also applies to: 78-79
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/pool.ts` around lines 65 - 70, shutdownPool currently lets an exception from pool.shutdown() abort the rest of the function so closeAllSessions() never runs and the singletons (pool, poolInitPromise) remain populated; wrap the call to pool.shutdown() in a try/catch (or use try/finally) so any error is caught/logged but execution continues to call closeAllSessions() and then always clear pool and poolInitPromise; also ensure closeAllSessions() itself is awaited inside its own try/catch so both the local shutdown and the remote drain are attempted and errors from each are handled without preventing the cleanup of the singleton vars.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/pool.ts`:
- Around line 83-84: The loop stopping condition skips the final page because
page is incremented before checking against totalPages; change the loop
condition(s) from while (page < totalPages) to while (page <= totalPages) (or
otherwise ensure increment happens after processing) so the last page is
processed; apply this change for the occurrences using the page and totalPages
variables (the two while loops around these variables).
- Around line 88-95: The session-list fetch does not check response.ok before
calling res.json(), so update the code around the fetch in src/pool.ts (the call
that assigns to res and data using page, BROWSER_API_KEY and listCtrl) to verify
res.ok (or res.status) and handle non-2xx responses by logging or throwing a
clear error instead of proceeding to parse; apply the same fix to the other list
fetch block later (the one around lines 115–121) so both session enumeration
requests mirror the DELETE check (delRes.ok) and fail fast rather than silently
returning { closed: 0 } on API errors.
---
Outside diff comments:
In `@src/pool.ts`:
- Around line 65-70: shutdownPool currently lets an exception from
pool.shutdown() abort the rest of the function so closeAllSessions() never runs
and the singletons (pool, poolInitPromise) remain populated; wrap the call to
pool.shutdown() in a try/catch (or use try/finally) so any error is
caught/logged but execution continues to call closeAllSessions() and then always
clear pool and poolInitPromise; also ensure closeAllSessions() itself is awaited
inside its own try/catch so both the local shutdown and the remote drain are
attempted and errors from each are handled without preventing the cleanup of the
singleton vars.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9f9b84c4-596e-4a33-bae5-7ba9389098e1
📒 Files selected for processing (3)
- src/crawl.ts
- src/index.ts
- src/pool.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/crawl.ts
- src/index.ts
… resilience
- Change while (page < totalPages) to while (page <= totalPages) so the last page of sessions is processed during drain
- Add res.ok check before res.json() on session-list fetch to fail fast on non-2xx responses instead of silently returning { closed: 0 }
- Wrap pool.shutdown() in try/catch/finally so closeAllSessions() always runs and singletons are always cleared even if shutdown throws
🧹 Nitpick comments (2)
src/pool.ts (2)
110-121: Consider parallelizing DELETE requests for faster drain.

Sequential deletes can be slow when many sessions exist. Batching with `Promise.allSettled` could improve drain performance.

♻️ Example parallel approach

```diff
-    for (const s of active) {
-      try {
-        const delCtrl = AbortSignal.timeout(CLOSE_TIMEOUT_MS);
-        const delRes = await fetch(
-          `https://api.driver.dev/v1/browser/session?sessionId=${s.sessionId}`,
-          { method: 'DELETE', headers: { Authorization: `Bearer ${BROWSER_API_KEY}` }, signal: delCtrl }
-        );
-        if (delRes.ok) {
-          closed++;
-        }
-      } catch { /* ignore individual failures */ }
-    }
+    const deletePromises = active.map(async (s) => {
+      try {
+        const delCtrl = AbortSignal.timeout(CLOSE_TIMEOUT_MS);
+        const delRes = await fetch(
+          `https://api.driver.dev/v1/browser/session?sessionId=${s.sessionId}`,
+          { method: 'DELETE', headers: { Authorization: `Bearer ${BROWSER_API_KEY}` }, signal: delCtrl }
+        );
+        return delRes.ok ? 1 : 0;
+      } catch {
+        return 0;
+      }
+    });
+    const results = await Promise.allSettled(deletePromises);
+    closed += results.reduce((sum, r) => sum + (r.status === 'fulfilled' ? r.value : 0), 0);
```
Verify each finding against the current code and only fix it if needed. In `@src/pool.ts` around lines 110 - 121, The loop over active sessions performs DELETE requests sequentially (see variable active, CLOSE_TIMEOUT_MS, BROWSER_API_KEY and the fetch to https://api.driver.dev/v1/browser/session) — change it to issue requests in parallel using Promise.allSettled: map active to promises that create an AbortSignal.timeout(CLOSE_TIMEOUT_MS), perform the fetch with the same headers and signal, and resolve a boolean/marker when delRes.ok; then await Promise.allSettled on that array and compute closed by counting successful ok results, catching/ignoring per-request failures as currently done.
58-60: Consider warning if pool is already initialized.

Calling `configurePool()` after `getPool()` has run silently ignores the new size. Adding a defensive log or throw would prevent misconfiguration.

```diff
 export function configurePool(size: number): void {
+  if (pool || poolInitPromise) {
+    console.warn('[pool] configurePool called after pool initialization; size change ignored');
+  }
   poolSize = size;
 }
```
Verify each finding against the current code and only fix it if needed. In `@src/pool.ts` around lines 58 - 60, The configurePool function currently overwrites poolSize silently even if the pool has already been created via getPool; update configurePool to check whether the pool is already initialized (e.g., by calling getPool() or checking the internal pool instance/flag) and if so emit a defensive alert (console.warn or your module logger) or throw an Error to prevent silent misconfiguration; reference the configurePool and getPool symbols and use the existing poolSize variable to decide behavior and message content.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/pool.ts`:
- Around line 110-121: The loop over active sessions performs DELETE requests
sequentially (see variable active, CLOSE_TIMEOUT_MS, BROWSER_API_KEY and the
fetch to https://api.driver.dev/v1/browser/session) — change it to issue
requests in parallel using Promise.allSettled: map active to promises that
create an AbortSignal.timeout(CLOSE_TIMEOUT_MS), perform the fetch with the same
headers and signal, and resolve a boolean/marker when delRes.ok; then await
Promise.allSettled on that array and compute closed by counting successful ok
results, catching/ignoring per-request failures as currently done.
- Around line 58-60: The configurePool function currently overwrites poolSize
silently even if the pool has already been created via getPool; update
configurePool to check whether the pool is already initialized (e.g., by calling
getPool() or checking the internal pool instance/flag) and if so emit a
defensive alert (console.warn or your module logger) or throw an Error to
prevent silent misconfiguration; reference the configurePool and getPool symbols
and use the existing poolSize variable to decide behavior and message content.
Ensures the REST API session drain always runs even if pool.shutdown() fails, so Browser.cash sessions are cleaned up regardless of local pool state.
- Replace sequential DELETE loop with Promise.allSettled for faster drain when many sessions exist - Add warning and early return in configurePool if pool is already initialized to prevent silent misconfiguration
Summary by CodeRabbit
New Features
Improvements
Bug Fixes