848 changes: 0 additions & 848 deletions app/actions.tsx

This file was deleted.

388 changes: 388 additions & 0 deletions app/api/chat/stream/route.ts
@@ -0,0 +1,388 @@
import { CoreMessage, ToolResultPart, streamText, LanguageModel } from 'ai'
import { nanoid } from '@/lib/utils'
import { getCurrentUserIdOnServer } from '@/lib/auth/get-current-user'
import { taskManager, inquire, querySuggestor } from '@/lib/agents'
import { researcher } from '@/lib/agents/researcher'
import { writer } from '@/lib/agents/writer'
import { resolutionSearch, type DrawnFeature } from '@/lib/agents/resolution-search'
import { getModel } from '@/lib/utils'
import { getSystemPrompt, saveChat } from '@/lib/actions/chat'
import type { Chat, AIMessage } from '@/lib/types'
import type { MapProvider } from '@/lib/store/settings'

export const maxDuration = 60

const streamHeaders = {
  'Content-Type': 'text/plain; charset=utf-8',
  'x-vercel-ai-data-stream': 'v1',
}

export async function POST(request: Request) {
  const userId = await getCurrentUserIdOnServer()
  if (!userId) {
    return new Response(JSON.stringify({ error: 'Unauthorized' }), {
      status: 401,
      headers: { 'Content-Type': 'application/json' }
    })
  }
coderabbitai[bot] marked this conversation as resolved.

  const body = await request.json()
  const {
    messages: clientMessages,
    chatId = nanoid(),
    action,
    mapProvider = 'mapbox',
    drawnFeatures: drawnFeaturesRaw,
    timezone,
    latitude,
    longitude,
    fileData,
    mapboxImageData,
    googleImageData,
  } = body

  const drawnFeatures: DrawnFeature[] = drawnFeaturesRaw || []
  const location = (latitude !== undefined && longitude !== undefined)
    ? { lat: parseFloat(latitude), lng: parseFloat(longitude) }
    : undefined

  const currentSystemPrompt = (await getSystemPrompt(userId)) || ''
⚠️ Potential issue | 🟠 Major

RAG context is still not being passed to the researcher.

currentSystemPrompt is assigned directly from getSystemPrompt(userId) and passed to researcher on line 135, with no call to the RAG augmentation (generate_embedding / hybrid_search from lib/actions/rag.ts). The retrieved-context infrastructure remains dormant in this endpoint, replicating the exact same bug previously identified in app/actions.tsx (where currentSystemPrompt was passed instead of augmentedSystemPrompt). If RAG was intended to be active post-migration, augment the prompt here before invoking researcher.

Based on learnings: "there's a bug on line 433 of app/actions.tsx where currentSystemPrompt is passed to the researcher instead of augmentedSystemPrompt, preventing the retrieved context from being used" — the same pattern is repeated here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` at line 41, currentSystemPrompt is passed
directly to researcher and never augmented with RAG context; call the RAG
augmentation flow (use the functions from lib/actions/rag.ts such as
generate_embedding and hybrid_search or your existing wrapper that returns
augmentedSystemPrompt) after fetching getSystemPrompt(userId) and before
invoking researcher, then pass augmentedSystemPrompt to researcher instead of
currentSystemPrompt (reference symbols: currentSystemPrompt, getSystemPrompt,
researcher, generate_embedding, hybrid_search, augmentedSystemPrompt).
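The composition step this prompt calls for can be sketched in miniature. This is an illustrative helper only: the real generate_embedding / hybrid_search signatures in lib/actions/rag.ts are not shown here, so the sketch assumes they have already produced the retrieved chunks and only models how the augmented prompt is assembled.

```typescript
// Hypothetical helper: fold retrieved RAG chunks into the base system prompt
// before it is handed to researcher(). Chunk retrieval itself (embedding +
// hybrid search) is assumed to have happened upstream.
function augmentSystemPrompt(basePrompt: string, retrievedChunks: string[]): string {
  if (retrievedChunks.length === 0) return basePrompt
  const context = retrievedChunks.map(chunk => `- ${chunk}`).join('\n')
  return `${basePrompt}\n\nRetrieved context:\n${context}`
}
```

The route would then pass the result of this helper to researcher instead of the raw currentSystemPrompt.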

  const useSpecificAPI = process.env.USE_SPECIFIC_API_FOR_WRITER === 'true'
  const maxMsgs = useSpecificAPI ? 5 : 10

  // Build core messages from client messages
  const messages: CoreMessage[] = (clientMessages || [])
    .filter((m: any) => m.role === 'user' || m.role === 'assistant')
    .map((m: any) => ({
      role: m.role,
      content: m.content
    }))

  // Trim to max messages
  if (messages.length > maxMsgs) {
    messages.splice(0, messages.length - maxMsgs)
  }
Comment on lines +53 to +64
⚠️ Potential issue | 🟠 Major

Trimming via splice breaks the title heuristic and drops tool turns from the replay context.

Two issues co-located here:

  1. messages.splice(0, messages.length - maxMsgs) mutates the array in place, then saveChatAsync later derives the chat title from messages[0] (lines 310-318). After trimming, messages[0] is whichever message survived the cut — often an old assistant turn, not the user's original prompt — so titles drift to assistant text like "Here is the analysis…" instead of the user's question.
  2. The .filter((m) => m.role === 'user' || m.role === 'assistant') then .map((m) => ({ role, content })) drops m.toolInvocations and any role === 'tool' turns. Replayed assistant messages that previously made tool calls now look like plain text to the model, making it likely to re-issue the same calls. If this is intentional, add a comment.
-  // Build core messages from client messages
-  const messages: CoreMessage[] = (clientMessages || [])
-    .filter((m: any) => m.role === 'user' || m.role === 'assistant')
-    .map((m: any) => ({
-      role: m.role,
-      content: m.content
-    }))
-
-  // Trim to max messages
-  if (messages.length > maxMsgs) {
-    messages.splice(0, messages.length - maxMsgs)
-  }
+  // Capture the original first user message before trimming so chat title stays stable.
+  const originalFirstUserMessage = (clientMessages || []).find((m: any) => m.role === 'user')
+
+  const messages: CoreMessage[] = (clientMessages || [])
+    .filter((m: any) => m.role === 'user' || m.role === 'assistant' || m.role === 'tool')
+    .map((m: any) => ({
+      role: m.role,
+      content: m.content,
+      // preserve assistant tool calls so the model doesn't re-issue them
+      ...(m.toolInvocations ? { toolInvocations: m.toolInvocations } : {}),
+    }))
+
+  if (messages.length > maxMsgs) {
+    messages.splice(0, messages.length - maxMsgs)
+  }

…and pass originalFirstUserMessage into saveChatAsync for title computation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 53 - 64, The current in-place
splice and aggressive filter/map remove tool turns and mutate messages used
later for title heuristics: instead of mutating messages with
messages.splice(...) create a trimmed copy (e.g., trimmedMessages =
messages.slice(-maxMsgs)) so the original array (and original first user
message) remains intact; when building messages preserve tool turns and
toolInvocations by including role === 'tool' and copying m.toolInvocations in
the map that produces CoreMessage (rather than dropping those fields); finally
pass the preserved originalFirstUserMessage (capture messages[0] before
trimming) into saveChatAsync for title computation so the title heuristic uses
the original user prompt.
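The non-mutating trim described above can be sketched as follows. Msg and buildContext are simplified stand-ins for illustration, not the route's actual types or code; the point is that slice returns a copy, tool turns survive the mapping, and the first user message is captured before trimming.

```typescript
// Simplified stand-in for the client message shape.
type Msg = { role: 'user' | 'assistant' | 'tool'; content: string; toolInvocations?: unknown[] }

function buildContext(clientMessages: Msg[], maxMsgs: number) {
  // Capture the original first user message before any trimming,
  // so the chat-title heuristic stays stable.
  const originalFirstUserMessage = clientMessages.find(m => m.role === 'user')

  // Keep tool turns and toolInvocations so replayed assistant tool calls
  // still look like tool calls to the model.
  const mapped = clientMessages
    .filter(m => m.role === 'user' || m.role === 'assistant' || m.role === 'tool')
    .map(m => ({
      role: m.role,
      content: m.content,
      ...(m.toolInvocations ? { toolInvocations: m.toolInvocations } : {})
    }))

  // slice(-maxMsgs) returns a trimmed copy; the input array is untouched.
  const trimmed = mapped.slice(-maxMsgs)
  return { originalFirstUserMessage, trimmed }
}
```

originalFirstUserMessage would then be handed to saveChatAsync for title computation.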


  // Resolution search action
  if (action === 'resolution_search' && fileData) {
    return handleResolutionSearch({
      messages, chatId, userId, fileData, mapboxImageData, googleImageData,
      timezone, drawnFeatures, location
    })
  }

  // Hardcoded responses
  const lastMsg = messages[messages.length - 1]
  const lastText = typeof lastMsg?.content === 'string' ? lastMsg.content.trim().toLowerCase() : ''
  if (lastText === 'what is a planet computer?' || lastText === 'what is qcx-terra?') {
    const definition = lastText === 'what is a planet computer?'
      ? `A planet computer is a proprietary environment aware system that interoperates Climate forecasting, mapping and scheduling using cutting edge multi-agents to streamline automation and exploration on a planet. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`
      : `QCX-Terra is a model garden of pixel level precision geospatial foundational models for efficient land feature predictions from satellite imagery. Available for our Pro and Enterprise customers. [QCX Pricing](https://www.queue.cx/#pricing)`;

    const encoder = new TextEncoder()
    const stream = new ReadableStream({
      start(controller) {
        controller.enqueue(encoder.encode(`0:${JSON.stringify(definition)}\n`))
        controller.enqueue(encoder.encode(`2:[{"relatedQueries":{"items":[]},"type":"related"}]\n`))
        controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
        controller.enqueue(encoder.encode(`d:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
        controller.close()
      }
    })

    saveChatAsync(chatId, userId, messages, definition)

    return new Response(stream, {
      headers: streamHeaders
    })
coderabbitai[bot] marked this conversation as resolved.
  }
Comment on lines +77 to +98
⚠️ Potential issue | 🟡 Minor

Hardcoded replies: content drift vs. system prompt, and fragile string matching.

  • The Planet Computer definition returned here says "weather forecasting", while the system prompt source of truth in lib/agents/researcher.tsx (line 73) says "Climate forecasting". Pick one.
  • Matching on exact lowercased strings (including a trailing ?) means "What is a planet computer" (no question mark) or an extra trailing space falls through to the task manager and gets a different, model-generated answer — a visible inconsistency. Consider normalizing more aggressively (strip punctuation/whitespace) or move these responses into the system prompt only.
  • On empty messages, messages[messages.length - 1] is undefined — safe due to the optional chaining on line 68, but worth asserting at least one message before branching into the task-manager path.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 69 - 92, The hardcoded reply
branch uses fragile exact-match logic and inconsistent wording; update it to
normalize the last user message (trim, collapse whitespace, remove trailing
punctuation and lowercase) before matching instead of comparing raw lastText,
and choose wording consistent with the system prompt (use "Climate forecasting"
everywhere) when building the definition variable; also guard the branch by
asserting messages.length > 0 (or early-return) before accessing
messages[messages.length - 1], and keep the call to saveChatAsync(chatId,
userId, messages, definition) after these checks; alternatively consider moving
these canned replies into the system prompt in lib/agents/researcher.tsx and
only use this branch as a fallback.
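The normalization described above can be sketched as a small helper. normalizeQuestion is an illustrative name, not code from the repository; the idea is that both the incoming message and the canned question keys would be normalized the same way before comparison.

```typescript
// Hypothetical normalizer: lowercase, strip punctuation, collapse whitespace,
// so "What is a Planet Computer?!" and "what is a planet computer" match.
function normalizeQuestion(input: string): string {
  return input
    .toLowerCase()
    .replace(/[^a-z0-9\s]/g, ' ') // strip punctuation such as '?' or '-'
    .replace(/\s+/g, ' ')         // collapse runs of whitespace
    .trim()
}
```

Note that hyphens become spaces under this scheme, so the canned key for "what is qcx-terra?" would itself need to be stored in normalized form.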

Comment on lines +75 to +98
⚠️ Potential issue | 🟡 Minor

Hardcoded reply branch is fragile and disagrees with the system prompt source-of-truth.

  • The Planet Computer string here says "Climate forecasting" — fine, but the original concern about exact lastText === 'what is a planet computer?' matching still applies: a missing trailing ?, an extra space, or any capitalization quirk in punctuation will fall through to the model and produce a different answer, creating visible inconsistency. Normalize aggressively (strip non-alphanumerics) or, better, move these definitions into the system prompt only.
  • This branch also bypasses taskManager, RAG augmentation, and tool persistence, but still calls saveChatAsync(chatId, userId, messages, definition) with the raw definition — meaning these turns don't capture related queries or any tool outputs in DB. If the user later asks a follow-up, the rehydrated history won't include the canned answer's related queries.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 75 - 98, The hardcoded reply
branch using lastText exact matches (variable lastText) is fragile and bypasses
taskManager/RAG/tool persistence while saving only the raw definition via
saveChatAsync; fix by normalizing the input (e.g., strip punctuation and
whitespace and lowercase lastText) or better yet move these canned definitions
into the system prompt/source-of-truth, and ensure the branch funnels responses
through the same pipeline as other chats: invoke taskManager or the normal
response composer so RAG augmentation and tool outputs are applied, and when
persisting use the same save path (e.g., saveChatAsync) but include the full
response metadata (related queries/tool outputs) rather than only the raw
definition so rehydrated history contains relatedQueries and usage fields;
locate changes around lastMsg/lastText handling, the conditional block that
constructs definition and the call to saveChatAsync to implement this.


  // Task manager: decide inquire vs proceed
  let nextAction = 'proceed'
  try {
    const taskResult = await taskManager(messages)
    if (taskResult?.object?.next === 'inquire') {
      nextAction = 'inquire'
    }
  } catch (e) {
    console.error('Task manager error:', e)
  }

  // Inquiry path
  if (nextAction === 'inquire') {
    const inquiryResult = await inquire(messages)
    const encoder = new TextEncoder()
    const stream = new ReadableStream({
      start(controller) {
        // Send inquiry data as a data annotation
        const annotation = { type: 'inquiry', data: inquiryResult }
        controller.enqueue(encoder.encode(`2:[${JSON.stringify(annotation)}]\n`))
        controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
        controller.enqueue(encoder.encode(`d:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
        controller.close()
      }
    })
    return new Response(stream, {
      headers: streamHeaders
    })
  }

  // Proceed path: researcher -> optionally writer -> query suggestor
  let answer = ''
  let toolOutputs: ToolResultPart[] = []
  let errorOccurred = false
  const allToolOutputs: ToolResultPart[] = []
  const maxAttempts = 3
  let attempts = 0

  while (
    attempts < maxAttempts &&
    (useSpecificAPI
      ? answer.length === 0
      : answer.length === 0 && !errorOccurred)
  ) {
    attempts++
    const { fullResponse, hasError, toolResponses, newSegments } = await researcher(
      currentSystemPrompt,
      messages,
      mapProvider as MapProvider,
      useSpecificAPI,
      drawnFeatures
    )
    answer = fullResponse
    toolOutputs = toolResponses
    errorOccurred = hasError
    allToolOutputs.push(...toolResponses)
    // Only append segments to messages on success or final attempt
    if (answer.length > 0 || errorOccurred || attempts >= maxAttempts) {
      messages.push(...newSegments)
    }
  }
coderabbitai[bot] marked this conversation as resolved.

  if (useSpecificAPI && answer.length === 0) {
    const latestMessages = messages.slice(maxMsgs * -1)
    answer = await writer(currentSystemPrompt, latestMessages)
  }

  // Get related queries (sanitize to remove image parts)
  let relatedQueries = {}
  if (!errorOccurred) {
    const sanitizedMessages: CoreMessage[] = messages.map((m: any) => {
      if (Array.isArray(m.content)) {
        return { ...m, content: m.content.filter((part: any) => part.type !== 'image') } as CoreMessage
      }
      return m
    })
    relatedQueries = await querySuggestor(sanitizedMessages)
Comment on lines +168 to +176
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

The following scripts were run against the QueueLab/QCX repository to verify the finding:

#!/bin/bash
# Confirm querySuggestor's return shape vs the consumer's expectation.
rg -nP -C5 "export\s+async\s+function\s+querySuggestor" lib/agents/query-suggestor.tsx
rg -nP -C5 "relatedQueries|PartialRelated" lib/schema/related.ts
rg -nP -C2 "related\.relatedQueries|relatedQueries\.items" components/

# Find where PartialRelated is defined (broader search)
rg -n "PartialRelated" -t ts -t js

# Get the complete querySuggestor implementation
rg -n -A30 "export async function querySuggestor" -t ts -t js

# Get the relatedSchema definition to understand the structure
rg -n -A15 "export.*relatedSchema" lib/schema/related.tsx

# Find the chat stream route and inspect how the annotation is emitted
fd "stream/route" -e ts -e js
cat -n app/api/chat/stream/route.ts | sed -n '160,205p'

Remove the shape mismatch concern; the structure is correct.

querySuggestor returns PartialRelated with top-level shape { items: [...] } (per relatedSchema in lib/schema/related.tsx). The annotation emits { type: 'related', relatedQueries } where relatedQueries is that object, and the consumer correctly accesses related.relatedQueries.items.

However, the bandwidth concern is valid: relatedQueries remains {} when errorOccurred is true (line 168), but the annotation is still emitted unconditionally (lines 199–200). The consumer guards against empty results, but streaming {type:'related', relatedQueries:{}} on every error wastes bandwidth. Consider only emitting the annotation when !errorOccurred && relatedQueries.items?.length > 0.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 168 - 176, relatedQueries is left
as {} when errorOccurred and then emitted unconditionally, wasting bandwidth;
change logic so relatedQueries is only set and streamed when querySuggestor
returned non-empty results. Specifically: keep using the sanitizedMessages ->
querySuggestor(...) call but set relatedQueries to undefined/null by default,
and when emitting the annotation that currently sends { type: 'related',
relatedQueries } only do so if !errorOccurred && relatedQueries?.items?.length >
0; update the emission code that writes the { type: 'related', relatedQueries }
annotation to be conditional on that check.
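The conditional emission described above can be sketched as a small frame builder. buildRelatedFrame and the Related type are illustrative stand-ins, but the `2:[…]\n` prefix mirrors the data-stream convention the route already uses.

```typescript
// Simplified stand-in for the query-suggestor result shape.
type Related = { items?: { query: string }[] }

// Return the annotation frame only when there is something worth sending;
// null means "emit nothing" instead of streaming an empty object.
function buildRelatedFrame(errorOccurred: boolean, relatedQueries?: Related): string | null {
  if (errorOccurred || !relatedQueries?.items?.length) return null
  const annotation = { type: 'related', relatedQueries }
  return `2:[${JSON.stringify(annotation)}]\n`
}
```

The stream body would then enqueue the frame only when the builder returns a string.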

  }

  // Build streaming response
  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    start(controller) {
      // Send tool results as annotations
      for (const toolResult of allToolOutputs) {
        const annotation = {
          type: 'tool_result',
          toolName: toolResult.toolName,
          result: toolResult.result
        }
        controller.enqueue(encoder.encode(`2:[${JSON.stringify(annotation)}]\n`))
      }

      // Stream the text response
      if (answer) {
        controller.enqueue(encoder.encode(`0:${JSON.stringify(answer)}\n`))
      }

      // Send related queries as annotation
      const relatedAnnotation = { type: 'related', relatedQueries }
      controller.enqueue(encoder.encode(`2:[${JSON.stringify(relatedAnnotation)}]\n`))

      // Finish
      controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
      controller.enqueue(encoder.encode(`d:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
      controller.close()
    }
  })

  // Save chat asynchronously
  saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries)

  return new Response(stream, {
    headers: streamHeaders
  })
Comment on lines +209 to +214
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

The following scripts were run against the QueueLab/QCX repository to verify the finding:

#!/bin/bash
# Examine the relevant sections of the route and its imports
wc -l app/api/chat/stream/route.ts
sed -n '185,195p' app/api/chat/stream/route.ts
sed -n '260,270p' app/api/chat/stream/route.ts
head -30 app/api/chat/stream/route.ts

# Check how saveChatAsync is defined and used throughout the file
rg "saveChatAsync" --context 3
rg "async function saveChatAsync" -A 20
rg -n "saveChatAsync\(" app/api/chat/stream/route.ts | grep -v "async function"

# Confirm the Next.js version and whether `after` is already imported
rg '"next"' package.json
rg "from 'next/server'" app/api/chat/stream/route.ts

# Check what saveChat does in lib/actions/chat.ts
rg "export.*function saveChat|export const saveChat" lib/actions/chat.ts -A 15

# Confirm saveChatAsync performs DB operations near the end of the file
sed -n '320,359p' app/api/chat/stream/route.ts


Fix fire-and-forget saveChatAsync calls that may be cut off by the serverless runtime.

saveChatAsync(...) is dispatched without await at three locations (lines 87, 188, 263) just before returning the response. On Vercel/Node serverless runtimes, the function may be frozen/terminated as soon as the streamed response completes, dropping the pending database write and causing intermittent missing chat persistence. Use Next.js after() to guarantee the save completes after the response.

🔧 Proposed fix
import { CoreMessage, ToolResultPart, streamText, LanguageModel } from 'ai'
+import { after } from 'next/server'

Then wrap each saveChatAsync call:

-saveChatAsync(chatId, userId, messages, definition)
+after(() => saveChatAsync(chatId, userId, messages, definition))
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-// Save chat asynchronously
-saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries)
-return new Response(stream, {
-  headers: { 'Content-Type': 'text/plain; charset=utf-8' }
-})
+// Save chat asynchronously
+after(() => saveChatAsync(chatId, userId, messages, answer, allToolOutputs, relatedQueries))
+return new Response(stream, {
+  headers: { 'Content-Type': 'text/plain; charset=utf-8' }
+})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 187 - 192, The three
fire-and-forget calls to saveChatAsync (occurring just before returning streamed
Responses) must be wrapped with Next.js after() to ensure the DB write completes
after the response is sent; replace direct calls like saveChatAsync(chatId,
userId, ...) with after(() => saveChatAsync(chatId, userId, messages, answer,
allToolOutputs, relatedQueries)) for each occurrence (the saveChatAsync
invocations in the streaming route handlers) so the runtime will wait for the
save to finish even in serverless environments.

}
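The ordering guarantee behind the after() recommendation can be modeled in miniature. The shim below is a local stand-in for next/server's real after() (which is `import { after } from 'next/server'`); it only illustrates why deferring beats fire-and-forget: the work is queued and runs once the response is done, instead of racing a dangling promise against runtime freeze.

```typescript
// Illustrative shim, not the real Next.js API: queue work to run after the
// response has been sent.
const pendingAfterResponse: Array<() => void | Promise<void>> = []

function after(task: () => void | Promise<void>): void {
  pendingAfterResponse.push(task)
}

// The real runtime invokes queued tasks for you once the response completes;
// this helper models that moment.
function flushAfterResponse(): void {
  for (const task of pendingAfterResponse.splice(0)) task()
}

// Usage sketch: wrap the persistence call instead of firing and forgetting it.
let saved = false
async function saveChatStub(): Promise<void> { saved = true }
after(() => saveChatStub())
```

With the real API, each `saveChatAsync(...)` call site becomes `after(() => saveChatAsync(...))` and no flush call is needed.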

async function handleResolutionSearch({
  messages, chatId, userId, fileData, mapboxImageData, googleImageData,
  timezone, drawnFeatures, location
}: {
  messages: CoreMessage[]
  chatId: string
  userId: string
  fileData: string
  mapboxImageData?: string
  googleImageData?: string
  timezone?: string
  drawnFeatures: DrawnFeature[]
  location?: { lat: number; lng: number }
}) {
  const content: CoreMessage['content'] = [
    { type: 'text', text: 'Analyze this map view.' },
    { type: 'image', image: fileData, mimeType: 'image/png' }
  ]
  messages.push({ role: 'user', content })

Comment on lines +231 to +236
Action required

4. Base64 images saved to db 🐞 Bug ➹ Performance

Resolution-search pushes fileData (a base64 data URL) into CoreMessage.content and
saveChatAsync persists it verbatim; saveChat then JSON-stringifies it into the
messages.content text column, risking extreme DB bloat and slow history loads.
Agent Prompt
## Issue description
Resolution-search base64 images are being persisted to the `messages` table, which can massively increase DB size and make chat history loads slow or fail.

## Issue Context
- The route inserts `{ type: 'image', image: fileData }` into `CoreMessage.content`.
- `saveChatAsync` persists that `content` verbatim.
- `saveChat` JSON-stringifies object content into `messages.content` (text).

## Fix Focus Areas
- app/api/chat/stream/route.ts[209-214]
- app/api/chat/stream/route.ts[274-337]
- lib/actions/chat.ts[106-112]

## What to implement
- Before calling `saveChatAsync` (or inside it), sanitize message contents:
  - Remove image parts entirely from messages being saved, OR
  - Replace the `image` field with a constant placeholder (e.g., `"IMAGE_PROCESSED"`).
- If you still need to persist images, store them outside the `messages` table (blob storage) and only persist a reference URL/id.
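The sanitization described above can be sketched with simplified stand-in types. sanitizeForSave and IMAGE_PROCESSED mirror the suggestion rather than existing repository code; the real route would apply this to messages just before (or inside) saveChatAsync.

```typescript
// Simplified stand-ins for the persisted message shapes.
type Part = { type: 'text'; text: string } | { type: 'image'; image: string }
type Message = { role: string; content: string | Part[] }

const IMAGE_PLACEHOLDER = 'IMAGE_PROCESSED'

// Replace base64 image payloads with a constant placeholder so data URLs
// never reach the messages table; text parts pass through untouched.
function sanitizeForSave(messages: Message[]): Message[] {
  return messages.map(m => {
    if (!Array.isArray(m.content)) return m
    return {
      ...m,
      content: m.content.map(part =>
        part.type === 'image' ? { ...part, image: IMAGE_PLACEHOLDER } : part
      )
    }
  })
}
```

If the images must survive, the placeholder would instead be a blob-storage reference id.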


  try {
    const streamResult = await resolutionSearch(
      messages,
      timezone || 'UTC',
      drawnFeatures,
      location
    )

    const analysisResult = await streamResult.object

    // Get related queries
    const sanitizedMessages: CoreMessage[] = messages.map((m: any) => {
      if (Array.isArray(m.content)) {
        return { ...m, content: m.content.filter((part: any) => part.type !== 'image') } as CoreMessage
      }
      return m
    })
    const relatedQueries = await querySuggestor(sanitizedMessages)

    const encoder = new TextEncoder()
    const stream = new ReadableStream({
      start(controller) {
        // Send resolution result as annotation
        const resAnnotation = {
          type: 'resolution_search_result',
          data: {
            ...analysisResult,
            image: fileData,
            mapboxImage: mapboxImageData,
            googleImage: googleImageData
          }
        }
        controller.enqueue(encoder.encode(`2:[${JSON.stringify(resAnnotation)}]\n`))

        // Stream summary text
        if (analysisResult.summary) {
          controller.enqueue(encoder.encode(`0:${JSON.stringify(analysisResult.summary)}\n`))
        }

        // Related queries
        const relatedAnnotation = { type: 'related', relatedQueries }
        controller.enqueue(encoder.encode(`2:[${JSON.stringify(relatedAnnotation)}]\n`))

        controller.enqueue(encoder.encode(`e:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
        controller.enqueue(encoder.encode(`d:{"finishReason":"stop","usage":{"promptTokens":0,"completionTokens":0}}\n`))
        controller.close()
      }
    })

    saveChatAsync(chatId, userId, messages, analysisResult.summary || '')

    return new Response(stream, {
      headers: streamHeaders
    })
  } catch (error) {
    console.error('Resolution search error:', error)
    return new Response(JSON.stringify({ error: 'Resolution search failed' }), {
      status: 500,
      headers: { 'Content-Type': 'application/json' }
    })
  }
}
Comment on lines +217 to +298
⚠️ Potential issue | 🟠 Major

handleResolutionSearch swallows post-streaming errors and persists with fire-and-forget.

Two issues remaining after the recent fixes:

  1. The outer try { … new Response(stream) … } catch covers only the synchronous setup. Once the start(controller) callback executes (lines 258-283), any throw inside it cannot reach the outer catch and the stream just terminates without the e:/d: finish frames, leaving useChat stuck in isLoading=true. Wrap the start body in try/catch and call controller.error(err) (or write a synthetic e:/d: with finishReason:'error') on failure.
  2. saveChatAsync(...) on Line 286 is fire-and-forget on the serverless runtime — same risk flagged on the main path. Use after().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 217 - 298, The stream's start
callback in handleResolutionSearch can throw without reaching the outer catch
and leaves useChat stuck; wrap the entire start(controller) body in a try/catch
and on error call controller.error(err) or enqueue synthetic finish frames (e:
and d: with finishReason:"error") before closing so the client receives a
terminal event; also stop doing fire-and-forget saveChatAsync(...) — schedule it
via after(() => saveChatAsync(chatId, userId, messages, analysisResult.summary
|| '')) (or the equivalent after wrapper used elsewhere) so the save runs safely
in the serverless background.
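The terminal-error handling described above can be sketched as follows. startSafely and buildErrorFrames are illustrative helpers, not repository code, but the e:/d: frame shapes mirror the ones the route already writes, so the client's isLoading state always resolves.

```typescript
// Build synthetic finish frames with finishReason "error", matching the
// route's existing data-stream frame format.
function buildErrorFrames(): string[] {
  const finish = JSON.stringify({
    finishReason: 'error',
    usage: { promptTokens: 0, completionTokens: 0 }
  })
  return [`e:${finish}\n`, `d:${finish}\n`]
}

// Wrap a stream start() body so any throw still produces a terminal event.
function startSafely(
  controller: { enqueue: (s: string) => void; close: () => void; error: (e: unknown) => void },
  body: () => void
) {
  try {
    body()
  } catch (err) {
    // Alternative: controller.error(err) for a hard stream failure.
    // Here we emit synthetic finish frames so useChat exits its loading state.
    for (const frame of buildErrorFrames()) controller.enqueue(frame)
    controller.close()
  }
}
```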


async function saveChatAsync(
  chatId: string,
  userId: string,
  messages: CoreMessage[],
  answer: string,
  toolOutputs?: ToolResultPart[],
  relatedQueries?: any
) {
  try {
    let title = 'Untitled Chat'
    const firstMsg = messages[0]
    if (firstMsg) {
      if (typeof firstMsg.content === 'string') {
        title = firstMsg.content.substring(0, 100)
      } else if (Array.isArray(firstMsg.content)) {
        const textPart = (firstMsg.content as any[]).find(p => p.type === 'text')
        title = textPart?.text?.substring(0, 100) || 'Image Message'
      }
    }

    const aiMessages: AIMessage[] = []

    for (const msg of messages) {
      let content: CoreMessage['content'] = msg.content
      if (Array.isArray(content)) {
        content = (content as any[]).filter((part: any) => part.type !== 'image') as CoreMessage['content']
      }
      aiMessages.push({
        id: (msg as any).id || nanoid(),
        role: msg.role as AIMessage['role'],
        content,
        type: msg.role === 'user' ? 'input' : undefined
      })
    }

    // Add tool outputs
    if (toolOutputs) {
      for (const tool of toolOutputs) {
        aiMessages.push({
          id: nanoid(),
          role: 'tool',
          content: JSON.stringify(tool.result),
          name: tool.toolName,
          type: 'tool'
        })
      }
    }
coderabbitai[bot] marked this conversation as resolved.

    // Add response
    if (answer) {
      aiMessages.push({
        id: nanoid(),
        role: 'assistant',
        content: answer,
        type: 'response'
      })
    }

    // Add related queries
    if (relatedQueries) {
      aiMessages.push({
        id: nanoid(),
        role: 'assistant',
        content: JSON.stringify(relatedQueries),
        type: 'related'
      })
    }

    // Add end marker
    aiMessages.push({
      id: nanoid(),
      role: 'assistant',
      content: 'end',
      type: 'end'
    })
Comment on lines +320 to +374
⚠️ Potential issue | 🟠 Major

Synthetic tool/response/related/end rows get a fresh nanoid() every save → duplicates pile up on every turn.

The user/assistant messages now correctly preserve msg.id (line 328), but tool outputs (line 339), the answer (line 351), the related-queries blob (line 361), and the 'end' marker (line 370) are all built with a brand-new nanoid() on every saveChatAsync call. Since useChat replays the full conversation on each request and saveChatAsync is invoked per turn, prior turns' synthetic rows already in the DB cannot be matched by dbSaveChat's onConflictDoUpdate(id) clause — they get inserted again with new ids. Within a few turns the chat contains many duplicate tool/response/related/end rows, which (a) bloats the table, (b) breaks any deterministic "latest end / latest related" queries you may layer on top, and (c) means the getChatMessages rehydration path in app/search/[id]/page.tsx will replay duplicate assistant text bubbles.

Use a deterministic id derived from chatId + turn index + role/type (e.g., ${chatId}:${turnIndex}:response) so reruns land on the same row, or only persist the delta for the current turn rather than re-writing the full history.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/api/chat/stream/route.ts` around lines 320 - 374, The synthetic rows
pushed into aiMessages (see the toolOutputs loop, the block adding answer,
relatedQueries, and the final 'end' marker) use fresh nanoid() each save,
causing duplicate rows; change those id generations to deterministic IDs derived
from the chat context (e.g., use chatId + turnIndex + role/type like
`${chatId}:${turnIndex}:response`) or only persist the current turn's delta
instead of re-writing all messages so onConflictDoUpdate(id) will match existing
rows; update the id assignment for the tool output block (tool.toolName), the
assistant answer block, the related-queries block, and the end marker to use
that deterministic scheme.
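The deterministic id scheme suggested above is small enough to sketch directly. syntheticRowId is an illustrative helper, not the repository's actual code; any stable composition of chatId, turn index, and row kind would do.

```typescript
// Derive a stable row id from chat context so re-saving the same turn
// upserts onto the same row instead of inserting a duplicate.
function syntheticRowId(chatId: string, turnIndex: number, kind: string): string {
  return `${chatId}:${turnIndex}:${kind}`
}
```

For example, the response row of turn 3 in chat "abc" would always land on id "abc:3:response", letting onConflictDoUpdate(id) match the existing row.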


    const chat: Chat = {
      id: chatId,
      createdAt: new Date(),
      userId,
      path: `/search/${chatId}`,
      title,
      messages: aiMessages
    }
    await saveChat(chat, userId)
  } catch (error) {
    console.error('Error saving chat:', error)
  }
}