
LCORE-1583: Allowed passthrough attributes in responses#1483

Open
asimurka wants to merge 1 commit into lightspeed-core:main from asimurka:allow_responses_attributes

Conversation

@asimurka
Contributor

@asimurka asimurka commented Apr 10, 2026

Description

Allows the passthrough attribute max_output_tokens, whose validation was recently fixed upstream.
Removes the @skip tag from the e2e test that exercises passthrough attributes.
The prompt attribute is removed because it relies on the Prompts API (it will be tested separately).
The reasoning attribute has an upstream bug and will be tracked in a separate ticket.
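The passthrough change can be illustrated with a minimal sketch (hypothetical names; the real handler lives in src/app/endpoints/responses.py, and this stand-in is not the actual model): attributes the service now supports are forwarded unchanged instead of being forced to None.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ResponsesRequest:
    """Hypothetical, simplified stand-in for the real request model."""

    input: str
    max_output_tokens: Optional[int] = None


def build_params(request: ResponsesRequest) -> dict:
    """Forward max_output_tokens as provided; previously it was forced to None."""
    params: dict = {"input": request.input}
    if request.max_output_tokens is not None:
        params["max_output_tokens"] = request.max_output_tokens
    return params
```

With this shape, a request that sets max_output_tokens reaches the downstream client with the limit intact, and a request that omits it sends no such key at all.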

Type of change

  • Refactor
  • New feature
  • Bug fix
  • CVE fix
  • Optimization
  • Documentation Update
  • Configuration Update
  • Bump-up service version
  • Bump-up dependent library
  • Bump-up library or tool used for development (does not change the final image)
  • CI configuration change
  • Konflux configuration change
  • Unit tests improvement
  • Integration tests improvement
  • End to end tests improvement
  • Benchmarks improvement

Tools used to create PR

Identify any AI code assistants used in this PR (for transparency and review context)

  • Assisted-by: N/A

Related Tickets & Documents

Checklist before requesting a review

  • I have performed a self-review of my code.
  • PR has passed all pre-merge test jobs.
  • If it is a core feature, I have added thorough tests.

Testing

  • Please provide detailed steps to perform tests related to this code change.
  • How were the fix/results from this change verified? Please provide relevant screenshots or results.

Summary by CodeRabbit


  • Bug Fixes

    • Responses now preserve provided request parameters (e.g., output token limits) instead of clearing them.
  • Tests

    • Updated end-to-end response tests and streaming scenarios; removed outdated/disabled scenarios and cleaned up passthrough parameter checks.

@asimurka asimurka marked this pull request as draft April 10, 2026 13:07
@coderabbitai
Contributor

coderabbitai Bot commented Apr 10, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3fa9af7e-ab82-420d-bc10-346a91ef97e6

📥 Commits

Reviewing files that changed from the base of the PR and between 7422fa5 and 688708b.

📒 Files selected for processing (3)
  • src/app/endpoints/responses.py
  • tests/e2e/features/responses.feature
  • tests/e2e/features/responses_streaming.feature
💤 Files with no reviewable changes (2)
  • tests/e2e/features/responses_streaming.feature
  • tests/e2e/features/responses.feature
📜 Recent review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
  • GitHub Check: unit_tests (3.12)
  • GitHub Check: unit_tests (3.13)
  • GitHub Check: Pyright
  • GitHub Check: ruff
  • GitHub Check: mypy
  • GitHub Check: Pylinter
  • GitHub Check: E2E: server mode / ci / group 2
  • GitHub Check: E2E: library mode / ci / group 3
  • GitHub Check: E2E: library mode / ci / group 1
  • GitHub Check: E2E: server mode / ci / group 3
  • GitHub Check: E2E: server mode / ci / group 1
  • GitHub Check: E2E: library mode / ci / group 2
  • GitHub Check: E2E Tests for Lightspeed Evaluation job
  • GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Use absolute imports for internal modules: from authentication import get_auth_dependency
Import FastAPI dependencies with: from fastapi import APIRouter, HTTPException, Request, status, Depends
Import Llama Stack client with: from llama_stack_client import AsyncLlamaStackClient
Check constants.py for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use logger = get_logger(__name__) from log.py for module logging
Type aliases defined at module level for clarity
Use Final[type] as type hint for all constants
All functions require docstrings with brief descriptions
Complete type annotations for parameters and return types in functions
Use typing_extensions.Self for model validators in Pydantic models
Use modern union type syntax str | int instead of Union[str, int]
Use Optional[Type] for optional type hints
Use snake_case with descriptive, action-oriented function names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead
Use async def for I/O operations and external API calls
Handle APIConnectionError from Llama Stack in error handling
Use standard log levels with clear purposes: debug, info, warning, error
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use ABC for abstract base classes with @abstractmethod decorators
Use @model_validator and @field_validator for Pydantic model validation
Complete type annotations for all class attributes; use specific types, not Any
Follow Google Python docstring conventions with Parameters, Returns, Raises, and Attributes sections

Files:

  • src/app/endpoints/responses.py
src/app/endpoints/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Use FastAPI HTTPException with appropriate status codes for API endpoints

Files:

  • src/app/endpoints/responses.py
src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Pydantic models extend ConfigurationBase for config, BaseModel for data models

Files:

  • src/app/endpoints/responses.py
🧠 Learnings (3)
📚 Learning: 2026-04-16T19:08:31.451Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1524
File: src/app/endpoints/responses.py:501-529
Timestamp: 2026-04-16T19:08:31.451Z
Learning: In `src/app/endpoints/responses.py`, `_sanitize_response_dict(response_dict, configured_mcp_labels)` intentionally mutates `response_dict` in-place. The dict is always a fresh throwaway produced by `model_dump()` on the immediately preceding line (both in the streaming `async for` loop and in the non-streaming path); no other reference to it exists. The AGENTS.md guideline "avoid in-place parameter modification" applies to mutating a caller's long-lived/shared data structures, not to ephemeral serialization dicts. Do not flag this as an anti-pattern in future reviews.

Applied to files:

  • src/app/endpoints/responses.py
📚 Learning: 2026-04-15T18:53:57.901Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1510
File: src/models/requests.py:742-769
Timestamp: 2026-04-15T18:53:57.901Z
Learning: In lightspeed-core/lightspeed-stack `src/models/requests.py`, the `ResponsesRequest.validate_body_size` validator uses `json.dumps(values)` with default parameters (ASCII escaping, spaced separators) intentionally. The slight overcount (~0.2% overhead) makes the 65,536-character limit conservatively strict, which is the desired safe direction. An exact wire-format match is impossible since Python's json.dumps output is never byte-identical to the client payload. Do not suggest switching to `ensure_ascii=False` or compact separators for this validator.

Applied to files:

  • src/app/endpoints/responses.py
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.

Applied to files:

  • src/app/endpoints/responses.py
🔇 Additional comments (1)
src/app/endpoints/responses.py (1)

237-243: reasoning passthrough still stripped; contradicts PR intent.

Lines 238-240 continue to null out responses_request.reasoning before the deep copy on Line 242, so the field can never reach ResponsesApiParams.model_validate(...) or client.responses.create(...). Per PR description, reasoning is intentionally deferred due to an upstream bug, but the in-place mutation of the caller's request object prior to copying also violates the coding guideline on avoiding in-place parameter modification. At minimum, move the deep copy above the mutation so the original request is not altered.

As per coding guidelines "Avoid in-place parameter modification anti-patterns; return new data structures instead".

🔧 Suggested ordering fix
-    # Known LLS bug: https://redhat.atlassian.net/browse/LCORE-1583
-    if responses_request.reasoning is not None:
-        logger.warning("reasoning is not yet supported in LCORE and will be ignored")
-        responses_request.reasoning = None
-
-    responses_request = responses_request.model_copy(deep=True)
+    responses_request = responses_request.model_copy(deep=True)
+    # Known LLS bug: reasoning passthrough tracked separately
+    if responses_request.reasoning is not None:
+        logger.warning("reasoning is not yet supported in LCORE and will be ignored")
+        responses_request.reasoning = None
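The ordering concern behind this suggestion can be demonstrated in isolation. This sketch uses plain dataclasses and copy.deepcopy as a stand-in for the Pydantic model and model_copy(deep=True); the names are illustrative, not the real handler's:

```python
import copy
from dataclasses import dataclass
from typing import Optional


@dataclass
class Request:
    """Stand-in for the Pydantic request model."""

    reasoning: Optional[dict] = None


def handle(request: Request) -> Request:
    # Copy first, then mutate only the copy (mirrors model_copy(deep=True)).
    local = copy.deepcopy(request)
    if local.reasoning is not None:
        local.reasoning = None  # unsupported field dropped on the copy only
    return local


original = Request(reasoning={"effort": "low"})
handled = handle(original)
```

Because the deep copy happens before the nullification, the caller's object keeps its reasoning value while the handler works on a stripped copy, which is exactly what the reordered diff above achieves.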

Walkthrough

The handler for the responses endpoint no longer special-cases responses_request.max_output_tokens for LCORE; it only nullifies unsupported responses_request.reasoning, deep-copies the request, and proceeds with existing config and prompt transformations. Two E2E feature tests were updated to remove prompt and reasoning passthrough fields and an associated skip.

Changes

Cohort / File(s) Summary
Endpoint handler
src/app/endpoints/responses.py
Removed LCORE-specific handling for responses_request.max_output_tokens (warning and forced None); retained reasoning nullification, deep-copy, config check, and prompt/instruction transformations.
E2E feature tests
tests/e2e/features/responses.feature, tests/e2e/features/responses_streaming.feature
Removed @skip scenario for LCORE-1583 and deleted prompt and reasoning passthrough fields from the affected request payloads and response assertions.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title directly references the ticket (LCORE-1583) and accurately describes the main change: allowing passthrough attributes in responses, which matches the core objective of the PR.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
✨ Simplify code
  • Create PR with simplified code

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@asimurka asimurka force-pushed the allow_responses_attributes branch from 7cd8248 to e89b907 on April 10, 2026 14:07
@tisnik
Contributor

tisnik commented Apr 13, 2026

PR Reviewer Guide 🔍

(Review updated until commit e89b907)

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
🧪 PR contains tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

AGENTS.md compliance

The PR title indicates allowing the reasoning attribute, but the handler still unconditionally strips and ignores incoming reasoning, which conflicts with instructions to support it.

if responses_request.reasoning is not None:
    logger.warning("reasoning is not yet supported in LCORE and will be ignored")
    responses_request.reasoning = None
Missing parameter definition

Although support for max_output_tokens was restored in the endpoint, there is no corresponding field in the ResponsesApiParams model, so requests including max_output_tokens will be rejected.

reasoning: Optional[Reasoning] = Field(
    default=None,
    description="Reasoning configuration for the response",
)
safety_identifier: Optional[str] = Field(
    default=None,
    description="Stable identifier for safety monitoring and abuse detection",
)
Duplicate key

The use_as_library_client key is defined twice with conflicting values; only the latter takes effect, making the configuration ambiguous.

use_as_library_client: true
library_client_config_path: run.yaml
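Most YAML loaders silently keep the last occurrence of a duplicated mapping key, which is why only the latter value takes effect. A small line-based check (a sketch, not a full YAML parser; it only inspects unindented top-level keys) can flag such duplicates before the file is loaded:

```python
def find_duplicate_top_level_keys(yaml_text: str) -> list[str]:
    """Return top-level mapping keys that appear more than once."""
    seen: set[str] = set()
    duplicates: list[str] = []
    for line in yaml_text.splitlines():
        # Only consider unindented "key: value" lines; skip comments and nesting.
        if not line or line[0] in " \t#" or ":" not in line:
            continue
        key = line.split(":", 1)[0].strip()
        if key in seen and key not in duplicates:
            duplicates.append(key)
        seen.add(key)
    return duplicates


config = """\
use_as_library_client: false
use_as_library_client: true
library_client_config_path: run.yaml
"""
```

Running the check on the fragment above reports use_as_library_client as duplicated, matching the YAMLlint key-duplicates error shown later in this review.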

@tisnik
Contributor

tisnik commented Apr 13, 2026

Persistent review updated to latest commit e89b907


@tisnik
Contributor

tisnik commented Apr 14, 2026

@coderabbitai help

@coderabbitai
Contributor

coderabbitai Bot commented Apr 14, 2026

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

CodeRabbit commands

These commands are invoked using PR/Issue comments.

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai evaluate custom pre-merge check --instructions <custom-checks-instructions> --name <custom-checks-title> [--mode <error|warning>] to validate the custom pre-merge checks instructions. Defaults to error when --mode is omitted.
  • @coderabbitai ignore pre-merge checks to override pre-merge checks and get an approval on PR.
  • @coderabbitai run pre-merge checks to run pre-merge checks on the pull request.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve merge conflict to automatically resolve merge conflicts.
  • @coderabbitai autofix to automatically fix issues identified in unresolved review comments.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai generate configuration to create a PR that adds the current resolved configuration as .coderabbit.yaml (or show it if already present).
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore or @coderabbit ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

Status, support, documentation and community

  • Visit our status page to check the current availability of CodeRabbit.
  • Create a ticket on our support page for assistance with any issues or questions.
  • Visit our documentation site for detailed information on how to use CodeRabbit.
  • Join our Discord community to connect with other users and get help from the community.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@lightspeed-stack.yaml`:
- Around line 13-15: The YAML contains a duplicated key use_as_library_client
with conflicting values (false and true); remove one of the entries so the map
contains a single use_as_library_client key with the intended boolean value,
ensuring the remaining value reflects the desired behavior and resolving the
parser/lint error.

In `@src/app/endpoints/responses.py`:
- Around line 165-170: The code currently nulls out responses_request.reasoning
(responses_request.reasoning = None) and mutates the input before copying,
preventing the field from reaching ResponsesApiParams.model_validate(...) and
client.responses.create(...); remove the in-place assignment so reasoning is
preserved, log a warning if present but do not modify responses_request, and
ensure you perform the deep copy (responses_request =
responses_request.model_copy(deep=True)) before any non-mutating checks or logs
so the original object is not altered and the reasoning value is passed through
to downstream validation and client.responses.create.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 8abc4275-a00c-4ee0-8518-6e7140b44683

📥 Commits

Reviewing files that changed from the base of the PR and between 7cd8248 and e89b907.

📒 Files selected for processing (6)
  • lightspeed-stack.yaml
  • run.yaml
  • src/app/endpoints/responses.py
  • src/utils/types.py
  • tests/e2e/features/responses.feature
  • tests/e2e/test_list.txt
💤 Files with no reviewable changes (3)
  • run.yaml
  • tests/e2e/test_list.txt
  • tests/e2e/features/responses.feature
📜 Review details
🧰 Additional context used
📓 Path-based instructions (2)
src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.py: Use absolute imports for internal modules (e.g., from authentication import get_auth_dependency)
Use from llama_stack_client import AsyncLlamaStackClient for Llama Stack imports
Check constants.py for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use logger = get_logger(__name__) from log.py for module logging
Type aliases defined at module level for clarity
All functions require docstrings with brief descriptions
Use complete type annotations for function parameters and return types
Use union types with modern syntax: str | int instead of Union[str, int]
Use Optional[Type] for optional types in type annotations
Use snake_case with descriptive, action-oriented names for functions (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns: return new data structures instead of modifying parameters
Use async def for I/O operations and external API calls
Handle APIConnectionError from Llama Stack in error handling
Use logger.debug() for detailed diagnostic information
Use logger.info() for general information about program execution
Use logger.warning() for unexpected events or potential problems
Use logger.error() for serious problems that prevented function execution
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with descriptive names and standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use complete type annotations for all class attributes; avoid using Any
Follow Google Python docstring conventions for all modules, classes, and functions
Include Parameters:, Returns:, Raises: sections in function docstrings as needed

Files:

  • src/app/endpoints/responses.py
  • src/utils/types.py
src/app/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

src/app/**/*.py: Use from fastapi import APIRouter, HTTPException, Request, status, Depends for FastAPI dependencies
Use FastAPI HTTPException with appropriate status codes for API endpoint error handling

Files:

  • src/app/endpoints/responses.py
🧠 Learnings (3)
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.

Applied to files:

  • src/app/endpoints/responses.py
📚 Learning: 2025-12-18T10:21:09.038Z
Learnt from: are-ces
Repo: lightspeed-core/lightspeed-stack PR: 935
File: run.yaml:114-115
Timestamp: 2025-12-18T10:21:09.038Z
Learning: In Llama Stack version 0.3.x, telemetry provider configuration is not supported under the `providers` section in run.yaml configuration files. Telemetry can be enabled with just `telemetry.enabled: true` without requiring an explicit provider block.

Applied to files:

  • lightspeed-stack.yaml
📚 Learning: 2026-04-05T12:19:36.009Z
Learnt from: CR
Repo: lightspeed-core/lightspeed-stack PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-04-05T12:19:36.009Z
Learning: Applies to src/**/*.py : Use `from llama_stack_client import AsyncLlamaStackClient` for Llama Stack imports

Applied to files:

  • lightspeed-stack.yaml
🪛 YAMLlint (1.38.0)
lightspeed-stack.yaml

[error] 15-15: duplication of key "use_as_library_client" in mapping

(key-duplicates)

🔇 Additional comments (2)
lightspeed-stack.yaml (1)

16-18: Library mode wiring looks correct once the duplicate key is removed.

library_client_config_path: run.yaml plus disabled URL/API key matches the intended library-client startup path.

src/utils/types.py (1)

217-220: Good contract alignment for safety_identifier passthrough.

This field addition cleanly matches the request/response models and allows forwarding without special-case logic.

Comment thread lightspeed-stack.yaml Outdated
Comment thread src/app/endpoints/responses.py
@asimurka asimurka force-pushed the allow_responses_attributes branch 2 times, most recently from 3d52579 to 021e671 on April 21, 2026 11:01
"model": "{PROVIDER}/{MODEL}",
"instructions": "You are a helpful assistant.",
"prompt": {"id": "e2e_responses_passthrough_prompt"},
"reasoning": {"effort": "low"},
Contributor Author

@asimurka asimurka Apr 21, 2026


reasoning validation is still broken in 0.6.0, and it will most likely need additional logic based on the reasoning capabilities of the configured model. So it is not a passthrough attribute either.

"status": "completed",
"model": "{PROVIDER}/{MODEL}",
"instructions": "You are a helpful assistant.",
"prompt": {"id": "e2e_responses_passthrough_prompt"},
Contributor Author

@asimurka asimurka Apr 21, 2026


prompt is not really a "passthrough" parameter. It relies on the underlying Prompts API.
This will be tested in a separate feature file.
https://redhat.atlassian.net/browse/LCORE-1956

@asimurka asimurka changed the title from "LCORE-1583: Allowed reasoning and max_output_tokens attributes" to "LCORE-1583: Allowed passthrough attributes in responses" on Apr 21, 2026
@asimurka asimurka force-pushed the allow_responses_attributes branch from 021e671 to 7422fa5 on April 21, 2026 12:10
@asimurka asimurka marked this pull request as ready for review April 21, 2026 12:11
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
src/app/endpoints/responses.py (1)

237-243: ⚠️ Potential issue | 🔴 Critical

reasoning is still being stripped — contradicts the PR intent.

The PR title/commit message says reasoning should be passed through, but lines 238-240 still log a warning and set responses_request.reasoning = None, preventing the field from reaching ResponsesApiParams.model_validate(...) and client.responses.create(...). Additionally, this mutates the input before the model_copy(deep=True) on line 242. Remove the nullification (and move any logging after the deep copy) so reasoning is actually forwarded.

✅ Suggested fix
-    # Known LLS bug: https://redhat.atlassian.net/browse/LCORE-1583
-    if responses_request.reasoning is not None:
-        logger.warning("reasoning is not yet supported in LCORE and will be ignored")
-        responses_request.reasoning = None
-
-    responses_request = responses_request.model_copy(deep=True)
+    responses_request = responses_request.model_copy(deep=True)

As per coding guidelines: "Avoid in-place parameter modification anti-patterns; return new data structures instead".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/app/endpoints/responses.py` around lines 237 - 243, The code currently
nullifies responses_request.reasoning before making a deep copy, preventing the
reasoning field from reaching ResponsesApiParams.model_validate(...) and
client.responses.create(...); remove the in-place assignment
(responses_request.reasoning = None) and instead perform responses_request =
responses_request.model_copy(deep=True) first, then if you still need to warn
preserve the original value (or check the copied object's reasoning) and call
logger.warning(...) after the deep copy so the reasoning field is forwarded
unchanged to ResponsesApiParams.model_validate and client.responses.create.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@src/app/endpoints/responses.py`:
- Around line 237-243: The code currently nullifies responses_request.reasoning
before making a deep copy, preventing the reasoning field from reaching
ResponsesApiParams.model_validate(...) and client.responses.create(...); remove
the in-place assignment (responses_request.reasoning = None) and instead perform
responses_request = responses_request.model_copy(deep=True) first, then if you
still need to warn preserve the original value (or check the copied object's
reasoning) and call logger.warning(...) after the deep copy so the reasoning
field is forwarded unchanged to ResponsesApiParams.model_validate and
client.responses.create.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

Run ID: 6672ed14-0dcc-46f1-a732-70ecec27dee5

📥 Commits

Reviewing files that changed from the base of the PR and between 7cd8248 and 7422fa5.

📒 Files selected for processing (2)
  • src/app/endpoints/responses.py
  • tests/e2e/features/responses.feature
💤 Files with no reviewable changes (1)
  • tests/e2e/features/responses.feature
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: Konflux kflux-prd-rh02 / lightspeed-stack-on-pull-request
  • GitHub Check: unit_tests (3.13)
  • GitHub Check: mypy
  • GitHub Check: E2E: server mode / ci / group 1
  • GitHub Check: E2E: library mode / ci / group 3
  • GitHub Check: E2E Tests for Lightspeed Evaluation job
  • GitHub Check: E2E: library mode / ci / group 1
  • GitHub Check: E2E: server mode / ci / group 3
  • GitHub Check: E2E: server mode / ci / group 2
  • GitHub Check: E2E: library mode / ci / group 2
🧰 Additional context used
📓 Path-based instructions (3)
**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

**/*.py: Use absolute imports for internal modules: from authentication import get_auth_dependency
Import FastAPI dependencies with: from fastapi import APIRouter, HTTPException, Request, status, Depends
Import Llama Stack client with: from llama_stack_client import AsyncLlamaStackClient
Check constants.py for shared constants before defining new ones
All modules start with descriptive docstrings explaining purpose
Use logger = get_logger(__name__) from log.py for module logging
Type aliases defined at module level for clarity
Use Final[type] as type hint for all constants
All functions require docstrings with brief descriptions
Complete type annotations for parameters and return types in functions
Use typing_extensions.Self for model validators in Pydantic models
Use modern union type syntax str | int instead of Union[str, int]
Use Optional[Type] for optional type hints
Use snake_case with descriptive, action-oriented function names (get_, validate_, check_)
Avoid in-place parameter modification anti-patterns; return new data structures instead
Use async def for I/O operations and external API calls
Handle APIConnectionError from Llama Stack in error handling
Use standard log levels with clear purposes: debug, info, warning, error
All classes require descriptive docstrings explaining purpose
Use PascalCase for class names with standard suffixes: Configuration, Error/Exception, Resolver, Interface
Use ABC for abstract base classes with @abstractmethod decorators
Use @model_validator and @field_validator for Pydantic model validation
Complete type annotations for all class attributes; use specific types, not Any
Follow Google Python docstring conventions with Parameters, Returns, Raises, and Attributes sections

Files:

  • src/app/endpoints/responses.py
src/app/endpoints/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Use FastAPI HTTPException with appropriate status codes for API endpoints

Files:

  • src/app/endpoints/responses.py
src/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Pydantic models extend ConfigurationBase for config, BaseModel for data models

Files:

  • src/app/endpoints/responses.py
🧠 Learnings (3)
📚 Learning: 2026-04-16T19:08:31.451Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1524
File: src/app/endpoints/responses.py:501-529
Timestamp: 2026-04-16T19:08:31.451Z
Learning: In `src/app/endpoints/responses.py`, `_sanitize_response_dict(response_dict, configured_mcp_labels)` intentionally mutates `response_dict` in-place. The dict is always a fresh throwaway produced by `model_dump()` on the immediately preceding line (both in the streaming `async for` loop and in the non-streaming path); no other reference to it exists. The AGENTS.md guideline "avoid in-place parameter modification" applies to mutating a caller's long-lived/shared data structures, not to ephemeral serialization dicts. Do not flag this as an anti-pattern in future reviews.

Applied to files:

  • src/app/endpoints/responses.py
📚 Learning: 2026-04-15T18:53:57.901Z
Learnt from: Lifto
Repo: lightspeed-core/lightspeed-stack PR: 1510
File: src/models/requests.py:742-769
Timestamp: 2026-04-15T18:53:57.901Z
Learning: In lightspeed-core/lightspeed-stack `src/models/requests.py`, the `ResponsesRequest.validate_body_size` validator uses `json.dumps(values)` with default parameters (ASCII escaping, spaced separators) intentionally. The slight overcount (~0.2% overhead) makes the 65,536-character limit conservatively strict, which is the desired safe direction. An exact wire-format match is impossible since Python's json.dumps output is never byte-identical to the client payload. Do not suggest switching to `ensure_ascii=False` or compact separators for this validator.

Applied to files:

  • src/app/endpoints/responses.py
📚 Learning: 2026-04-06T20:18:07.852Z
Learnt from: major
Repo: lightspeed-core/lightspeed-stack PR: 1463
File: src/app/endpoints/rlsapi_v1.py:266-271
Timestamp: 2026-04-06T20:18:07.852Z
Learning: In the lightspeed-stack codebase, within `src/app/endpoints/` inference/MCP endpoints, treat `tools: Optional[list[Any]]` in MCP tool definitions as an intentional, consistent typing pattern (used across `query`, `responses`, `streaming_query`, `rlsapi_v1`). Do not raise or suggest this as a typing issue during code review; changing it in isolation could break endpoint typing consistency across the codebase.

Applied to files:

  • src/app/endpoints/responses.py

@asimurka asimurka requested a review from radofuchs April 22, 2026 06:50
@asimurka asimurka force-pushed the allow_responses_attributes branch from 7422fa5 to 688708b on April 22, 2026 08:24
