Conversation
Co-authored-by: Copilot <copilot@github.com>
Caution: Review failed. The pull request is closed.
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID:
⛔ Files ignored due to path filters (4)
📒 Files selected for processing (45)
Summary by CodeRabbit
Release Notes
Walkthrough
This PR introduces comprehensive Storybook integration with configuration and example components, adds a complete device authorization workflow with new pages and components, establishes OAuth discovery metadata endpoints, refactors authentication to export
Changes
Sequence Diagram(s)
sequenceDiagram
participant User as User
participant Browser as Browser
participant DeviceAuth as Device Auth<br/>Pages
participant AuthClient as Auth Client<br/>(Client-Side)
participant AuthAPI as Auth API<br/>(Server)
User->>Browser: Visit /device
Browser->>DeviceAuth: Load DeviceAuthorizationForm
DeviceAuth->>AuthClient: useAuthQuery() - check session
AuthClient->>AuthAPI: Get session
AuthAPI-->>AuthClient: Session status
AuthClient-->>DeviceAuth: Session data
alt User authenticated
DeviceAuth-->>User: Show code entry form
User->>DeviceAuth: Enter device code
DeviceAuth->>AuthClient: checkDeviceAuthorizationCode(code)
AuthClient->>AuthAPI: Lookup device code
AuthAPI-->>AuthClient: Device found
AuthClient-->>DeviceAuth: Success response
DeviceAuth-->>Browser: Redirect to /device/approve?user_code=...
else User not authenticated
DeviceAuth-->>Browser: Redirect to /login
end
Browser->>DeviceAuth: Load DeviceApprovalForm
DeviceAuth->>AuthClient: useAuthQuery() - check session
AuthClient->>AuthAPI: Get session
AuthAPI-->>AuthClient: Session + user info
AuthClient-->>DeviceAuth: Session data
DeviceAuth-->>User: Show approval dialog
User->>DeviceAuth: Click approve/deny button
alt User approves
DeviceAuth->>AuthClient: approveDeviceAuthorization(userCode)
AuthClient->>AuthAPI: Approve device
AuthAPI-->>AuthClient: Success
AuthClient-->>DeviceAuth: Approval confirmed
DeviceAuth-->>Browser: Redirect to /chat
else User denies
DeviceAuth->>AuthClient: denyDeviceAuthorization(userCode)
AuthClient->>AuthAPI: Deny device
AuthAPI-->>AuthClient: Denial confirmed
AuthClient-->>DeviceAuth: Show denial message
end
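The code-entry step in the diagram above typically normalizes what the user types before the lookup call. A minimal sketch of what a normalizer like `normalizeDeviceUserCode` might do (the PR's actual implementation is not shown in this conversation, so the canonical `XXXX-XXXX` format is an assumption):

```typescript
// Hypothetical normalizer for device user codes (e.g. "ABCD-EFGH").
// The real normalizeDeviceUserCode in this PR may differ.
export function normalizeDeviceUserCode(input: string): string {
  // Drop whitespace and separators, then uppercase.
  const raw = input.replace(/[\s-]/g, "").toUpperCase();
  // Re-insert a hyphen for the assumed canonical XXXX-XXXX form.
  if (raw.length === 8) {
    return `${raw.slice(0, 4)}-${raw.slice(4)}`;
  }
  return raw;
}
```

Normalizing before `checkDeviceAuthorizationCode` keeps the lookup tolerant of how users copy the code from their device screen.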
sequenceDiagram
participant Client as OAuth Client
participant Well-Known as Well-Known<br/>Endpoints
participant AuthServer as Auth Server
participant BetterAuth as Better Auth<br/>(Plugin)
Client->>Well-Known: GET /.well-known/oauth-authorization-server
Well-Known->>Well-Known: force-dynamic routing
Well-Known->>BetterAuth: oAuthDiscoveryMetadata(auth)
BetterAuth->>AuthServer: Generate OAuth metadata
AuthServer-->>BetterAuth: Metadata object
BetterAuth-->>Well-Known: Formatted response
Well-Known-->>Client: JSON response
Client->>Well-Known: GET /.well-known/oauth-protected-resource
Well-Known->>BetterAuth: oAuthProtectedResourceMetadata(auth)
BetterAuth->>AuthServer: Generate resource metadata
AuthServer-->>BetterAuth: Resource metadata
BetterAuth-->>Well-Known: Formatted response
Well-Known-->>Client: JSON response
Client->>Well-Known: GET /.well-known/agent-configuration
Well-Known->>AuthServer: auth.api.getAgentConfiguration()
AuthServer-->>Well-Known: Agent config
Well-Known-->>Client: JSON response
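The first two endpoints in the diagram serve RFC 8414-style authorization server metadata. A self-contained sketch of the kind of document `oAuthDiscoveryMetadata` produces; the endpoint paths and supported grant types here are illustrative placeholders, not Better Auth's actual values:

```typescript
// Minimal RFC 8414-style authorization server metadata builder.
// Field names follow the spec; paths and grants are placeholders.
interface AuthServerMetadata {
  issuer: string;
  authorization_endpoint: string;
  token_endpoint: string;
  device_authorization_endpoint: string;
  grant_types_supported: string[];
}

export function buildDiscoveryMetadata(issuer: string): AuthServerMetadata {
  return {
    issuer,
    authorization_endpoint: `${issuer}/oauth2/authorize`,
    token_endpoint: `${issuer}/oauth2/token`,
    device_authorization_endpoint: `${issuer}/oauth2/device/code`,
    grant_types_supported: [
      "authorization_code",
      "urn:ietf:params:oauth:grant-type:device_code",
    ],
  };
}
```

The `force-dynamic` routing noted in the diagram matters because the metadata must reflect the live deployment URL rather than a build-time snapshot.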
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
Possibly related PRs
Suggested reviewers
🤖 Hi @ssdeanx, I've received your request, and I'm working on it now! You can track my progress in the logs for more details.
Reviewer's Guide
Adjusts libsql observation memory configuration to use resource-scoped vector retrieval with tuned token thresholds and buffering behavior, and bumps ai-related dependencies to the latest patch versions.
Class diagram for updated LibsqlMemory observational configuration
classDiagram
class Memory {
+options MemoryOptions
+config MemoryConfig
}
class MemoryConfig {
+maxParallelCalls number
}
class MemoryOptions {
+readOnly boolean
+observationalMemory ObservationalMemoryOptions
+lastMessages number
+template string
}
class ObservationalMemoryOptions {
+enabled boolean
+scope string
+model string
+retrieval RetrievalConfig
+shareTokenBudget boolean
+observation ObservationConfig
+reflection ReflectionConfig
}
class RetrievalConfig {
+vector boolean
+scope string
}
class ObservationConfig {
+instruction string
+messageTokens number
+bufferTokens number
+bufferActivation number
+blockAfter number
+modelSettings ObservationModelSettings
}
class ReflectionConfig {
+instruction string
+observationTokens number
+bufferActivation number
+blockAfter number
+modelSettings ReflectionModelSettings
}
class ObservationModelSettings {
+temperature number
+maxOutputTokens number
+topK number
+topP number
}
class ReflectionModelSettings {
+temperature number
+maxOutputTokens number
+topK number
+topP number
}
Memory --> MemoryConfig
Memory --> MemoryOptions
MemoryOptions --> ObservationalMemoryOptions
MemoryOptions --> RetrievalConfig : uses
ObservationalMemoryOptions --> RetrievalConfig
ObservationalMemoryOptions --> ObservationConfig
ObservationalMemoryOptions --> ReflectionConfig
ObservationConfig --> ObservationModelSettings
ReflectionConfig --> ReflectionModelSettings
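Combining the class diagram with the values visible in this PR's diff, the observational-memory options resolve to roughly the following shape (the `enabled` flag and any field not shown in the diff are assumptions; reflection settings are omitted for the same reason):

```typescript
// Observational-memory options from the class diagram, filled with the
// values visible in this PR's diff. Unshown fields are assumptions.
const observationalMemory = {
  enabled: true, // assumption: not visible in the diff
  scope: "resource",
  model: "google/gemini-2.5-flash",
  retrieval: { vector: true, scope: "resource" },
  shareTokenBudget: true,
  observation: {
    messageTokens: 50_000,
    bufferTokens: 5_000,
    bufferActivation: 0.85,
  },
};
console.log(observationalMemory.retrieval.scope);
```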
Flow diagram for observation and reflection buffering behavior
flowchart TD
A[Start token update] --> B{Mode}
B -->|Observation| C[Current message tokens]
C --> D{Tokens >= messageTokens * bufferActivation}
D -->|No| E[Continue async observation]
D -->|Yes| F{Tokens >= messageTokens * blockAfter}
F -->|No| G[Activate observation buffer]
G --> H[Prefer flushing buffer soon]
F -->|Yes| I[Force synchronous observation]
B -->|Reflection| J[Current observation tokens]
J --> K{Tokens >= observationTokens * bufferActivation}
K -->|No| L[Continue async reflection]
K -->|Yes| M{Tokens >= observationTokens * blockAfter}
M -->|No| N[Activate reflection buffer]
N --> O[Prefer flushing buffer soon]
M -->|Yes| P[Force synchronous reflection]
E --> Q[End]
H --> Q
I --> Q
L --> Q
O --> Q
P --> Q
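The branching above reduces to two threshold comparisons per mode; a hedged sketch (Mastra's real internals may differ, and treating `blockAfter` as a multiplier on the threshold is inferred from the flowchart):

```typescript
type MemoryMode = "async" | "buffered" | "synchronous";

// Sketch of the activation logic in the flowchart above. The same rule
// applies to observation (messageTokens) and reflection (observationTokens).
function decideMode(
  tokens: number,
  threshold: number, // messageTokens or observationTokens
  bufferActivation: number, // e.g. 0.85
  blockAfter: number // multiplier past which processing goes synchronous
): MemoryMode {
  if (tokens >= threshold * blockAfter) return "synchronous";
  if (tokens >= threshold * bufferActivation) return "buffered";
  return "async";
}
```

The ordering matters: the `blockAfter` check must come first, since any token count that forces synchronous processing also exceeds the buffer-activation point.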
File-Level Changes
Code Review Summary
Status: 1 Issue Found | Recommendation: Address before merge
Overview
Issue Details
WARNING
Other Observations (not in diff)
Issues found in unchanged code that cannot receive inline comments:
Files Reviewed (19 files)
Reviewed by grok-code-fast-1:optimized:free · 275,586 tokens
🤖 I'm sorry @ssdeanx, but I was unable to process your request. Please see the logs for more details.
Hey - I've found 1 issue, and left some high level feedback:
- The `shareTokenBudget` option is now set to `true` but the inline comment still says "Don't share token budget..."—update the comment to accurately describe the new behavior to avoid confusion.
- The indentation of `maxParallelCalls: 20` no longer matches the surrounding configuration object; consider realigning it for consistency with the existing formatting style.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `shareTokenBudget` option is now set to `true` but the inline comment still says "Don't share token budget..."—update the comment to accurately describe the new behavior to avoid confusion.
- The indentation of `maxParallelCalls: 20` no longer matches the surrounding configuration object; consider realigning it for consistency with the existing formatting style.
## Individual Comments
### Comment 1
<location path="src/mastra/config/libsql.ts" line_range="69" />
<code_context>
model: 'google/gemini-2.5-flash',
- shareTokenBudget: false, // Don't share token budget between observation and reflection to preserve context
+ retrieval: { vector: true, scope: 'resource' },
+ shareTokenBudget: true, // Don't share token budget between observation and reflection to preserve context
observation: {
instruction: 'You are an assistant that observes and remembers important information from the conversation. Pay attention to details, context, and any information that might be useful for future reference.',
</code_context>
<issue_to_address>
**issue:** The `shareTokenBudget` comment contradicts the new boolean value.
The flag and its comment now disagree: `true` implies the budget is shared, while the comment says it isn’t. Please either revert to `false` if that behavior is still desired, or update the comment to accurately describe the new behavior to avoid confusion for future readers.
</issue_to_address>
  model: 'google/gemini-2.5-flash',
- shareTokenBudget: false, // Don't share token budget between observation and reflection to preserve context
+ retrieval: { vector: true, scope: 'resource' },
+ shareTokenBudget: true, // Don't share token budget between observation and reflection to preserve context
issue: The shareTokenBudget comment contradicts the new boolean value.
The flag and its comment now disagree: true implies the budget is shared, while the comment says it isn’t. Please either revert to false if that behavior is still desired, or update the comment to accurately describe the new behavior to avoid confusion for future readers.
Code Review
This pull request updates AI-related dependencies and reconfigures the LibsqlMemory settings, including reducing parallel calls, changing the memory scope to 'resource', and adjusting token thresholds for observation and reflection. Prompt templates were also expanded to include more context. Feedback was provided regarding a contradictory comment in the configuration file where the code change was not reflected in the documentation.
  model: 'google/gemini-2.5-flash',
- shareTokenBudget: false, // Don't share token budget between observation and reflection to preserve context
+ retrieval: { vector: true, scope: 'resource' },
+ shareTokenBudget: true, // Don't share token budget between observation and reflection to preserve context
Pull request overview
Updates Mastra LibSQL-backed memory configuration (observational memory behavior + working memory template) and bumps AI SDK dependencies to the latest patch versions.
Changes:
- Tuned LibSQL `Memory` observational-memory settings (scope, retrieval, token/buffering thresholds, parallel embed calls).
- Extended working-memory template to include additional user/context sections.
- Bumped `@ai-sdk/react` and `ai` patch versions (and lockfile accordingly).
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| src/mastra/config/libsql.ts | Adjusts observational-memory configuration and working-memory template fields/sections. |
| package.json | Patch bumps for @ai-sdk/react and ai (plus override). |
| package-lock.json | Lockfile updates matching the dependency bumps. |
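The "(plus override)" note refers to an npm `overrides` entry keeping transitive copies of `ai` pinned to the same version as the direct dependency; a hypothetical `package.json` fragment (version ranges are placeholders, not the PR's actual numbers):

```json
{
  "dependencies": {
    "@ai-sdk/react": "^2.0.0",
    "ai": "^5.0.0"
  },
  "overrides": {
    "ai": "$ai"
  }
}
```

The `$ai` reference tells npm to resolve the override to whatever version the top-level `ai` dependency uses, which is why the lockfile changes track the bump.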
  model: 'google/gemini-2.5-flash',
- shareTokenBudget: false, // Don't share token budget between observation and reflection to preserve context
+ retrieval: { vector: true, scope: 'resource' },
+ shareTokenBudget: true, // Don't share token budget between observation and reflection to preserve context
shareTokenBudget is set to true, but the inline comment says "Don't share token budget between observation and reflection". Either flip the value to false or update the comment to match the behavior, otherwise future readers will misconfigure this option.
Suggested change:
- shareTokenBudget: true, // Don't share token budget between observation and reflection to preserve context
+ shareTokenBudget: true, // Share token budget between observation and reflection to balance context usage
- messageTokens: 60_000,
+ messageTokens: 50_000,
+ bufferTokens: 5_000,
+ // Activate to retain 30% of threshold
The comment about bufferActivation doesn't match the configured value. bufferActivation: 0.85 means the buffer activates at 85% of the threshold, but the comment says "retain 30% of threshold". Please update the comment (or the value) so they stay consistent.
Suggested change:
- // Activate to retain 30% of threshold
+ // Activate at 85% of the threshold
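For the values under review, the buffer activation point is 85% of the message-token threshold:

```typescript
// Quick arithmetic check for the reviewed configuration values.
const messageTokens = 50_000;
const bufferActivation = 0.85;
const activationPoint = Math.round(messageTokens * bufferActivation);
console.log(activationPoint); // 42500
```

So the observation buffer kicks in around 42,500 message tokens, well above the 30% the stale comment implied.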
- Implemented `DeviceAuthorizationForm` component for user code input and verification.
- Created `DeviceApprovalPage` to handle device approval requests.
- Added `DeviceAuthorizationPage` for device verification process.
- Integrated `checkDeviceAuthorizationCode` and `normalizeDeviceUserCode` for handling user codes.

test: set up testing utilities for authentication
- Added `auth.test.ts` to configure Better Auth with test utilities.
- Created `test-setup.ts` to export authentication context for testing.

feat: add Storybook stories and components
- Developed `Button` component with various sizes and styles.
- Created `Header` component with login/logout functionality.
- Implemented `Page` component to demonstrate usage of `Header` and `Button`.
- Added Storybook stories for `Button`, `Header`, and `Page` components.
- Included styles for `Button`, `Header`, and `Page` components in respective CSS files.

docs: create configuration documentation for Storybook
- Added `Configure.mdx` to provide guidance on configuring Storybook for projects.
Diff excerpts from the new Storybook CSS (fragmentary; duplicated and out-of-order rows from the review view have been collapsed):

@@ -0,0 +1,30 @@
.storybook-button {
  line-height: 1;
  font-family: 'Nunito Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
}
.storybook-button--primary {
  background-color: #555ab9;
  color: white;
}
.storybook-button--secondary {
  box-shadow: rgba(0, 0, 0, 0.15) 0px 0px 0px 1px inset;
  background-color: transparent;
  color: #333;
}
.storybook-button--small {
  padding: 10px 16px;
  font-size: 12px;
}
.storybook-button--medium {
  padding: 11px 20px;
  font-size: 14px;
}
.storybook-button--large {
  line-height: 20px;
}

.storybook-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  border-bottom: 1px solid rgba(0, 0, 0, 0.1);
}
.storybook-header h1 {
  font-family: 'Nunito Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
}

@@ -0,0 +1,68 @@
.storybook-page .tip-wrapper {
  margin-bottom: 8px;
}
.storybook-page .tip-wrapper svg {
  line-height: 12px;
}
.storybook-page .tip {
  font-family: 'Nunito Sans', 'Helvetica Neue', Helvetica, Arial, sans-serif;
}
.storybook-page h2 {
  color: white;
}
Summary by Sourcery
Tune libsql-backed conversational memory behavior and upgrade AI SDK dependencies.
New Features:
- Add an `other` field in the memory prompt template.
- Add an `Important Information` section.
Enhancements:
- Upgrade `@ai-sdk/react` and `ai` packages (and override) to the latest patch versions.