feat(apollo-vertex): ai-chat foundation — types, hooks, utils, icons [1/6] #544
petervachon wants to merge 1 commit into main from
Conversation
```tsx
export const AutopilotIcon = React.forwardRef<
  SVGSVGElement,
  AutopilotIconProps
>(({ size = 24, ...props }, ref) => (
```
why use forwardRef? With React 19, you can finally treat ref like any other prop. No forwardRef
```tsx
export const AutopilotGradientIcon = React.forwardRef<
  SVGSVGElement,
  AutopilotGradientIconProps
>(({ size = 24, ...props }, ref) => {
```
why use forwardRef? With React 19, you can finally treat ref like any other prop. No forwardRef
```tsx
const THINKING_LABELS = [
  "Thinking…",
  "Se gândește…",
  "Thinking…",
  "सोच रहा है…",
  "Thinking…",
  "Denkt nach…",
  "Thinking…",
  "Réflexion…",
];
```
I don't think this should be here, I think the user should provide these as props since these should be translated in the app ( via the useTranslate() hook )
do we really need this thinking-label cycling? I don't expect the LLM to take more than 2 seconds to give a response; it feels like we're just copying Claude Code.
I think this is purely cosmetic (no actual information), it can feel twee or try-hard ("Réflexion…" 🙄), it adds state and timers to manage, and on slow devices the cycling can feel janky.
We have to keep in mind that not all users have MacBook Pros; some are running on shitty ThinkPads with 8 GB of RAM.
Could we replace this with a CSS pulse or shimmer on a single "Thinking…"? Same reassurance that the UI isn't frozen, zero state to manage, and it actually looks better on slow devices
Same question as the thinking-labels hook: do we actually need this?
The comment says ChatGPT/Claude throttle to 30-50 cps, but the reason they do it is to smooth out bursty token streams - not because throttling is inherently good. If our backend streams tokens at a reasonable speed, the browser handles pacing for free
My main concerns are
- Fast readers end up waiting on the animation instead of the model (a real ChatGPT complaint)
- We're re-rendering up to 60fps via rAF instead of per-token (maybe 10-30/sec). If the message contains markdown, that's re-parsing on every frame - not free on the 8GB ThinkPads
- The "drain 2× when streaming ends" logic is a smell - it exists because the throttle falls behind reality, which means it was adding latency, not matching reading speed
Using `useRef` + `useReducer` to force re-renders is fighting React. If we keep any version of this, it should be `useState` for `displayedLength`; in short, we need to refactor this
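As a sketch of that refactor, the throttle math could live in a pure function (the name and signature here are hypothetical) that a `useState`-based hook calls once per tick to compute the next `displayedLength`:

```typescript
// Pure core of a useState-based typewriter: given the currently displayed
// length, the full text length, the target characters-per-second, and the
// time since the last tick, return the next displayed length.
export function nextDisplayedLength(
  current: number,
  targetLength: number,
  cps: number,
  elapsedMs: number,
): number {
  // Advance by cps * elapsed seconds, rounded up, never past the target —
  // so the reveal catches up with reality instead of falling behind it.
  const step = Math.ceil((cps * elapsedMs) / 1000);
  return Math.min(targetLength, current + step);
}
```

Because the function is pure, there is no drain-2× special case: when streaming ends, the next tick simply advances to `targetLength`.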
Also, why opt for an in-house hook implementation when we could just use a package that does this? Plenty of people have already solved this problem
```tsx
export function findLatestFlow(messages: UIMessage[]): ToolResultFlow | null {
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (!msg || msg.role !== "assistant") continue;
    const hasUserAfter = messages.slice(i + 1).some((m) => m.role === "user");
    if (hasUserAfter) continue;

    for (const part of msg.parts) {
      if (part.type === "tool-result" && "content" in part) {
        const result = tryParseFlow(part.content);
        if (result) return result;
      }
      if (part.type === "tool-call" && "output" in part) {
        const result = toolResultFlowSchema.safeParse(
          (part as { output?: unknown }).output,
        );
        if (result.success) return result.data;
      }
    }
  }
  return null;
}
```
findLatestFlow scans backwards with a hasUserAfter check.
This is O(n²): messages.slice(i + 1).some(...) re-scans the rest of the array on every iteration.
For a long conversation this really matters.
A simpler approach: find the last user message index once, then only look at assistant messages after it, the same structure findActiveChoicesMessageIds already uses. These two functions are doing the same "find trailing assistant messages" work with different implementations.
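A sketch of that single-pass shape, with the message types simplified and `tryParseFlow` reduced to a placeholder for the real schema-validated parser (both are assumptions, not the actual implementation):

```typescript
// Simplified stand-ins for UIMessage / the real flow parser.
type Part = { type: string; content?: string };
type Msg = { role: "user" | "assistant"; parts: Part[] };

function tryParseFlow(content: string | undefined): object | null {
  // Placeholder for the real zod-validated parse.
  try {
    const parsed: unknown = JSON.parse(content ?? "");
    return typeof parsed === "object" && parsed !== null ? parsed : null;
  } catch {
    return null;
  }
}

export function findLatestFlow(messages: Msg[]): object | null {
  // One backwards walk: stop at the most recent user message, so only the
  // trailing assistant messages are ever inspected (O(n) total).
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (!msg) continue;
    if (msg.role === "user") return null; // nothing trails the last user turn
    for (const part of msg.parts) {
      if (part.type === "tool-result") {
        const result = tryParseFlow(part.content);
        if (result) return result;
      }
    }
  }
  return null;
}
```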
```tsx
if (part.type === "tool-result" && "content" in part) {
  try {
    const parsed: unknown = JSON.parse(
      (part as { content: string }).content,
    );
    if (
      parsed !== null &&
      typeof parsed === "object" &&
      "type" in parsed &&
      (parsed as { type: unknown }).type === "choices"
    )
      return true;
  } catch {
    // invalid JSON, skip
  }
```
this should already be typed
The casts (part as { content: string }) and (part as { output?: unknown }) suggest the UIMessagePart type isn't discriminated properly. If type === "tool-result" doesn't narrow part to have content, that's a type definition problem upstream worth fixing
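A sketch of the upstream fix: model `UIMessagePart` as a discriminated union so checking `type` narrows the part without casts. The member shapes below are assumptions for illustration, not the library's actual types.

```typescript
// Hypothetical discriminated union: `type` is the discriminant, so a
// comparison against one literal narrows to that member's full shape.
type ToolResultPart = { type: "tool-result"; content: string };
type ToolCallPart = { type: "tool-call"; output?: unknown };
type TextPart = { type: "text"; text: string };
export type UIMessagePart = ToolResultPart | ToolCallPart | TextPart;

export function getToolResultContent(part: UIMessagePart): string | null {
  if (part.type === "tool-result") {
    // Narrowed to ToolResultPart: `content` is known to exist, no `as` cast.
    return part.content;
  }
  return null;
}
```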
```tsx
if (part.type === "tool-call" && "output" in part) {
  const output = (part as { output?: unknown }).output;
  if (
    output != null &&
    typeof output === "object" &&
    "type" in output &&
    (output as { type: unknown }).type === "choices"
  ) {
    return true;
  }
}
return false;
```
same thing as above, this is a type issue
```tsx
export function findActiveChoicesMessageIds(
  messages: UIMessage[],
): Set<string> {
  // Find the index of the most recent user message
  let lastUserIdx = -1;
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (msg && msg.role === "user") {
      lastUserIdx = i;
      break;
    }
  }

  // Trailing assistant messages — everything after the latest user message
  const trailingAssistants = messages
    .slice(lastUserIdx + 1)
    .filter((m) => m.role === "assistant");

  // Only suppress actions if at least one trailing assistant has choices
  const hasActiveChoices = trailingAssistants.some((m) => messageHasChoices(m));
  if (!hasActiveChoices) return new Set();

  return new Set(trailingAssistants.map((m) => m.id));
}
```
This suppresses actions on all trailing assistant messages when any of them has choices. Is that what we want if an unrelated assistant message (a status update, another tool result) happens to land after a choices message? It would lose its copy/regenerate too.
If our turn structure guarantees that can't happen, it's worth a comment stating the invariant. Otherwise we probably want to scope the suppression more tightly.
Also: this can be one backwards pass instead of three (scan -> slice/filter -> some -> map), and the name implies it finds the choices messages when it actually finds their whole turn - getActiveChoiceTurnMessageIds would be clearer.
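As a sketch of that single-pass shape under the suggested name (types simplified, and `messageHasChoices` stubbed out; both are assumptions for illustration):

```typescript
// Simplified stand-in for UIMessage; the real predicate inspects parts.
type Msg = { id: string; role: "user" | "assistant"; hasChoices?: boolean };

const messageHasChoices = (m: Msg): boolean => m.hasChoices === true;

export function getActiveChoiceTurnMessageIds(messages: Msg[]): Set<string> {
  const ids: string[] = [];
  let anyChoices = false;
  // Single backwards walk: collect the trailing assistant turn and check for
  // choices in the same pass, stopping at the latest user message.
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (!msg) continue;
    if (msg.role === "user") break;
    if (msg.role === "assistant") {
      ids.push(msg.id);
      if (messageHasChoices(msg)) anyChoices = true;
    }
  }
  return anyChoices ? new Set(ids) : new Set();
}
```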
0xr3ngar
left a comment

Also, another thing: CI checks are all failing.
```tsx
// ─── Flow (client-side multi-step) ───────────────────────────────────────────

const flowOptionSchema = z.object({
```
This entire file is gone on my PR where we refactor the choices tool to be more generic (#584).
I suggest we wait until mine is merged and restart this approach, because having just types checked in, with the implementation in subsequent PRs, is hard to follow.
We should strive to have end-to-end concerns/implementations in one PR.
Hi @0xr3ngar and @pieman1313 — thanks for the detailed feedback. Here's a summary of what's been addressed since your reviews:
PR #584 has merged, so that conflict is resolved. Would appreciate a re-review when you get a chance!
pieman1313
left a comment
I had a look at the initial PR, and treated it as the end state / north star towards which your current PRs are headed.
I do not agree with the splitting of PRs, and would prefer we try to have small, end-to-end features building one upon the other, instead of what we have now, where we took the end-state PR and just broke it down into smaller PRs that on their own don't give the full picture.
I poked Claude on the end-state PR to break it down into complete, end-to-end example PRs, although I doubt some of them because of the features themselves.
Here is an example breakdown of the big PR. I do not agree with some of the features, and would like to postpone them a bit until we get the bulk of the work in, and then we can focus on doing those features better:
- We should not add any mock responses or the like; we have hooked up actual LLMs and should showcase everything using them
- We should postpone the flow step + choices feature until after we get the bulk of the features in
What do you think?
PR 1 — feat(apollo-vertex): ai-chat welcome screen + Autopilot rebrand

What the user gets: Open /patterns/ai-chat, see a centered "What are we tackling today?" empty state with Autopilot branding and 3 starter suggestion chips. Click a chip, it sends as a message.

Includes:
- New ai-chat-empty-state component
- emptyState and suggestions props on AiChat
- --ai-chat-muted-foreground token (only what the empty state paints) added to registry.json
- AgentHub wiring: title set to "Autopilot", emptyState passed in, suggestions array passed in
- Preview page: scaffolds the ai-chat preview page with an "Empty state" section and a nav entry
PR 2 — feat(apollo-vertex): syntax-highlighted code blocks with copy

What the user gets: Ask Autopilot for a code snippet — the response renders in a styled block with language label, syntax highlighting (light and dark via github-dark-dimmed), and a copy button.

Includes:
- New ai-chat-code-block component, with ref-based innerHTML safety folded in from the CI-fix commit
- ai-chat-markdown rewires pre/code blocks to render AiChatCodeBlock
- Preview page: adds a "Code blocks" section showing three languages and the dark variant
PR 3 — feat(apollo-vertex): multi-step interactive flows

What the user gets: Ask Autopilot something open-ended ("help me draft a brief") and it presents a 2–8 step decision flow inline. User clicks through, can go back or skip, then a single message with all answers is sent. Suggestion chips also gain the same step counter / back / skip / dismiss UI for the existing presentChoices tool.

Includes:
- New ai-chat-flow component and flow-tool example
- ai-chat-suggestions extended with step, totalSteps, canGoBack, canSkip and framer-motion entrance animations
- choices-tool prompt updated to describe multi-step usage
- findLatestFlow, tryParseFlow, and findActiveChoicesMessageIds added to ai-chat-utils, plus the nesting-depth refactor from the CI-fix commit
- AgentHub wiring: register flowTool, append FLOW_TOOL_PROMPT to the system prompt
- Preview page: adds an interactive "Flow" demo using the existing MOCK_FLOW
PR 4 — feat(apollo-vertex): animated thinking + assistant typewriter reveal

What the user gets: Send a message — see an animated sparkle morph into a pulsing circle, with the "Thinking…" label cycling through translations. When the response streams, text is throttled to ~75 characters/second for a readable reveal. Suggestion chips wait until the typewriter finishes before appearing.

Includes:
- New ai-chat-thinking component, use-thinking-label hook, use-typewriter hook
- ai-chat-loading rewritten (shimmer plus label cycler)
- Introduces ai-chat-provider and the AiChatConfig context, with only the fields this PR needs: typewriterCps, latestAssistantMessageId, isLatestResponseAnimating, and its setter. Future PRs extend it
- AgentHub wiring: wrap the chat with AiChatProvider configured at 75 cps
- Preview page: adds the ThinkingDemo as an isolated playground
PR 5 — feat(apollo-vertex): message actions + inline user edit

What the user gets: Hover any assistant message — get copy, thumbs up, thumbs down, and regenerate buttons. Hover a user message — get an edit pencil. Click edit, inline textarea with save and cancel; on save the conversation rewinds and re-sends. Actions are suppressed during an active choice turn, using the helper introduced in PR 3.

Includes:
- New ai-chat-message-actions component
- Edit mode added to ai-chat-message
- AiChatConfig extended with showMessageActions, showCopyButton, activeChoicesMessageIds, onFeedback, onRegenerate, onEditMessage, plus the MessageFeedbackType type
- AgentHub wiring: onRegenerate calls reload, onEditMessage calls sendMessage
- Preview page: adds "Message actions" and "Edit mode" sections
PR 6 — feat(apollo-vertex): file attachments, paste image, source citations

What the user gets: Paste an image into the input — thumbnail appears as a chip, click to expand in a lightbox. Click the paperclip — file picker for any file type. Send — message bubble shows the attachment chips. Assistant responses can render source citation chips below.

Includes:
- ai-chat-input rewrite: PendingFile model, file picker, paste handler, thumbnail and dialog-based lightbox. Folds in the uid, instanceof narrowing, and void onClick fixes from the CI-fix commit
- MessageSource and MessageAttachment types, with rendering in ai-chat-message
- AgentHub wiring: onSubmit accepts (text, files?) and forwards files
- Preview page: adds "Attachments" and "Sources" sections using MOCK_ATTACHMENTS_BASIC and MOCK_SOURCES_BASIC
PR 7 — feat(apollo-vertex): "Ask Autopilot" text selection menu

What the user gets: Highlight any text in an assistant message — a small "Ask Autopilot" pill appears above the selection. Click it — the input gets a quote chip with the highlighted text. Type a follow-up and send. Quote chip is removable.

Includes:
- New ai-chat-selection-menu component, autopilot icon, and autopilot gradient icon
- Quote chip slot added to ai-chat-input
- Selection capture wired into ai-chat-message
- AiChatConfig extended with onQuoteSelect, plus a new enableTextSelection prop on AiChat
- AgentHub wiring: enableTextSelection turned on, onQuoteSelect populates the input quote chip
- Preview page: adds a "Selection menu" section with mock assistant text the user can highlight
```tsx
import type { UIMessage } from "@tanstack/ai-client";
```
Since we moved away from this approach, I'm doubting whether it's a good idea to re-build on top of it. I would prefer we have a complete end-to-end working example to judge, rather than just some types.
Okay. I originally had that in there to use with the recommended prompts (as part of the assistant response), as well as a multi-step interaction that presents as more of a card carousel of questions and answers.
I've removed those to clean things up; we can revisit later on.
…[1/5] Adds shared types, utility functions, hooks (use-sticky-scroll, use-thinking-label, use-typewriter), autopilot icons, registry.json entries, and supporting config (package.json, lib/auth.ts, next-env.d.ts).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This PR is part of the original ai-chat stack which has been superseded by a restructured version. Closing in favour of the new stack:
The new stack is reorganised by user-facing feature rather than component type, includes all the same functionality plus attachments and text selection, and has a clean single-commit-per-PR history.
Pull request was closed
Summary
- Shared types (types.ts) and utility functions (ai-chat-utils.ts)
- Hooks: use-sticky-scroll, use-typewriter
- Autopilot icons (autopilot.tsx, autopilot-gradient.tsx)
- Updates registry.json with ai-chat registry entries
- Adds @ai-sdk/react to package.json, lib/auth.ts, and next-env.d.ts

Stack

Test plan
- git checkout pr/1a-aichat-foundation && pnpm dev — dev server starts without errors
- registry/ai-chat/types.ts, utils/, hooks/, and icons/ all present
- registry.json contains ai-chat entries

🤖 Generated with Claude Code