add explain api #1003
Walkthrough

This PR extends CAC config evaluation to support detailed reasoning traces.

Changes: CAC Evaluation Explanation Trace Support
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (3)
smithy/models/config.smithy (2)
206-208: ⚡ Quick win — Add documentation for the `explain` query parameter.

The new `explain` parameter lacks an `@documentation` annotation explaining its purpose and how it differs from `show_reasoning`. API consumers will benefit from clear guidance on when to use each option.

📝 Suggested documentation:

```diff
+ @documentation("When true, includes detailed reasoning trace with merge strategy, per-layer override details, and per-key value history in the response. Provides more comprehensive debugging information than show_reasoning.")
  @httpQuery("explain")
  @notProperty
  explain: Boolean
```

🤖 Prompt for AI Agents:
> Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `smithy/models/config.smithy` around lines 206-208: add an `@documentation` annotation to the `explain` Boolean member to describe its purpose and how it differs from `show_reasoning`. Explain that `explain` toggles inclusion of a machine-generated explanation of results (e.g., why items matched) while `show_reasoning` controls whether stepwise reasoning metadata is returned, and include usage examples or the expected output shape if possible to guide API consumers.
202-208: ⚡ Quick win — Clarify the relationship between `explain` and `show_reasoning`.

Both `explain` and `show_reasoning` can be set simultaneously. Based on the implementation in `helpers.rs` (line 179), `explain` takes precedence. This behavior might be unexpected for API consumers. Consider either:

- Documenting that `explain` supersedes `show_reasoning` when both are true, or
- Adding validation to make them mutually exclusive if that's the intended design.
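A minimal sketch of the mutual-exclusion option mentioned above. The helper name and error type are hypothetical, not existing code in `helpers.rs`; the real handler would map the error to an HTTP 400 response:

```rust
// Hypothetical sketch: reject requests that set both query flags.
// `validate_explain_flags` is illustrative, not an existing helper.
fn validate_explain_flags(
    explain: Option<bool>,
    show_reasoning: Option<bool>,
) -> Result<(), String> {
    if explain.unwrap_or(false) && show_reasoning.unwrap_or(false) {
        // In the real handler this would become a 400 validation error.
        return Err("`explain` and `show_reasoning` are mutually exclusive".to_string());
    }
    Ok(())
}

fn main() {
    assert!(validate_explain_flags(Some(true), Some(true)).is_err());
    assert!(validate_explain_flags(Some(true), None).is_ok());
    assert!(validate_explain_flags(None, Some(true)).is_ok());
}
```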
🤖 Prompt for AI Agents:

> Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `smithy/models/config.smithy` around lines 202-208: the model exposes two query flags, `explain` and `show_reasoning`, but the runtime currently makes `explain` take precedence. Either add server-side validation in the code path that parses these query params so that setting both to true returns a 400 validation error (with a clear message that the flags are mutually exclusive) and update the Smithy model comments to document that they are mutually exclusive, or, if the current precedence is intended, update the docs to state that `explain` supersedes `show_reasoning`. Reference the `explain` and `show_reasoning` symbols when making these changes and update any tests exercising this behavior.

crates/cac_client/src/eval.rs (1)
220-226: ⚡ Quick win — Silent failure on malformed transition could hide bugs.

The code silently skips transitions that lack a "key" field via `continue`. While defensive, this could mask data corruption or logic errors upstream in `build_key_transitions`. Consider logging a warning when a transition is missing the "key" field to aid debugging.

📝 Suggested improvement:

```diff
 let Some(key) = transition
     .get("key")
     .and_then(Value::as_str)
     .map(ToOwned::to_owned)
 else {
+    log::warn!("Skipping transition without 'key' field: {:?}", transition);
     continue;
 };
```

🤖 Prompt for AI Agents:
> Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `crates/cac_client/src/eval.rs` around lines 220-226: the code currently silently skips transitions missing a "key" via `else { continue; }`, which can hide upstream bugs. Before continuing, emit a warning that includes the offending transition (and any contextual identifiers), using the project's logging/tracing macro such as `log::warn!` or `tracing::warn!`, so malformed data from `build_key_transitions` is visible in logs for debugging.
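A runnable, stdlib-only version of the warn-and-skip pattern suggested above. `eprintln!` stands in for `log::warn!`, and `HashMap` stands in for the `serde_json::Value` transitions the real code handles:

```rust
use std::collections::HashMap;

// Collect keys from transitions, warning (rather than silently skipping)
// when a transition lacks the "key" field. `eprintln!` is a stand-in for
// `log::warn!`; the real code in eval.rs operates on serde_json::Value.
fn collect_keys(transitions: &[HashMap<String, String>]) -> Vec<String> {
    let mut keys = Vec::new();
    for transition in transitions {
        let Some(key) = transition.get("key").cloned() else {
            eprintln!("Skipping transition without 'key' field: {:?}", transition);
            continue;
        };
        keys.push(key);
    }
    keys
}

fn main() {
    let good = HashMap::from([("key".to_string(), "timeout".to_string())]);
    let bad = HashMap::from([("value".to_string(), "3".to_string())]);
    // The malformed transition is skipped, but now leaves a log line behind.
    assert_eq!(collect_keys(&[good, bad]), vec!["timeout".to_string()]);
}
```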
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0ec097c6-d129-42a1-9d5d-bf528b320ead
📒 Files selected for processing (5)
- crates/cac_client/src/eval.rs
- crates/cac_client/src/lib.rs
- crates/context_aware_config/src/api/config/helpers.rs
- crates/superposition_types/src/api/config.rs
- smithy/models/config.smithy
Pull request overview
Adds an `explain` option to the resolved-config API to return a richer "explanation/trace" of how overrides were applied (in addition to the existing legacy `show_reasoning` metadata).
Changes:
- Extend Smithy model + Rust query types to accept `explain` as a query parameter.
- Route `explain=true` requests to a new CAC client evaluation path that includes `metadata_trace` in the resolved config.
- Add CAC client internals for collecting per-layer and per-key transition history, plus unit tests.
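For illustration, a resolved response with `explain=true` might carry a trace along these lines. This is a guess at the shape: the PR description mentions `layers` and `per_key_history`, but the exact `metadata_trace` schema is not shown in this diff:

```json
{
  "config": { "timeout_ms": 5000 },
  "metadata_trace": {
    "merge_strategy": "MERGE",
    "layers": [
      { "context": "default", "overrides": { "timeout_ms": 30000 } },
      { "context": "country == \"IN\"", "overrides": { "timeout_ms": 5000 } }
    ],
    "per_key_history": {
      "timeout_ms": [30000, 5000]
    }
  }
}
```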
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 4 comments.
Summary per file:
| File | Description |
|---|---|
| smithy/models/config.smithy | Adds explain query parameter to GetResolvedConfig input model. |
| crates/superposition_types/src/api/config.rs | Adds explain: Option<bool> to ResolveConfigQuery. |
| crates/context_aware_config/src/api/config/helpers.rs | Uses explain to select eval_cac_with_explanation vs legacy reasoning vs normal evaluation. |
| crates/cac_client/src/lib.rs | Re-exports eval_cac_with_explanation. |
| crates/cac_client/src/eval.rs | Implements trace/explanation capture (metadata_trace) and adds unit tests. |
Diff context (`smithy/models/config.smithy`):

```
show_reasoning: Boolean

@httpQuery("explain")
@notProperty
```
Diff context:

```rust
let key_transitions = build_key_transitions(
    &required_overrides,
    overriden_value,
    merge_strategy,
);
```
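The call above suggests `build_key_transitions` records, per key, the sequence of values the key takes as override layers apply. A stdlib-only sketch of that idea (hypothetical signature; the real function in `eval.rs` works on `serde_json` values and honors the chosen merge strategy):

```rust
use std::collections::BTreeMap;

// Hypothetical sketch: record the value history each key goes through as
// successive override layers are applied (later layers win).
fn build_key_transitions(layers: &[BTreeMap<String, i64>]) -> BTreeMap<String, Vec<i64>> {
    let mut history: BTreeMap<String, Vec<i64>> = BTreeMap::new();
    for layer in layers {
        for (key, value) in layer {
            history.entry(key.clone()).or_default().push(*value);
        }
    }
    history
}

fn main() {
    let layers = vec![
        BTreeMap::from([("timeout".to_string(), 30)]), // default layer
        BTreeMap::from([("timeout".to_string(), 10)]), // country override
    ];
    let transitions = build_key_transitions(&layers);
    assert_eq!(transitions["timeout"], vec![30, 10]); // final value is 10
}
```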
Diff context (`crates/cac_client/src/eval.rs`, excerpt):

```rust
#[test]
fn eval_cac_with_reasoning_adds_trace_without_breaking_metadata() {
    let config = base_config();
    let query = Map::from_iter([
        ("clientId".to_string(), json!("android")),
        ("country".to_string(), json!("IN")),
    ]);

    let resolved = eval_cac_with_explanation(&config, &query, MergeStrategy::MERGE)
        .expect("reasoning config should resolve");
```
Diff context (`smithy/models/config.smithy`):

```
@httpQuery("explain")
@notProperty
explain: Boolean
```
Problem

We don't have a way to show the cascading behavior of the resolve API.

Solution

Add `layers` and `per_key_history` to the resolve response; if `include_trace` is true, you get back the entire trace.
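As a concrete illustration of the cascading behavior described above, here is a stdlib-only sketch with made-up layer names; the actual resolve logic lives in the CAC client and operates on JSON overrides:

```rust
use std::collections::BTreeMap;

// Sketch: apply override layers in order, recording both the final value
// and which layer last set each key (the "winning" layer).
fn resolve_with_trace(
    layers: &[(String, BTreeMap<String, String>)],
) -> (BTreeMap<String, String>, BTreeMap<String, String>) {
    let mut resolved = BTreeMap::new();
    let mut winners = BTreeMap::new();
    for (layer_name, overrides) in layers {
        for (key, value) in overrides {
            resolved.insert(key.clone(), value.clone());
            winners.insert(key.clone(), layer_name.clone());
        }
    }
    (resolved, winners)
}

fn main() {
    let layers = vec![
        (
            "default".to_string(),
            BTreeMap::from([("timeout".to_string(), "30s".to_string())]),
        ),
        (
            "country == IN".to_string(),
            BTreeMap::from([("timeout".to_string(), "10s".to_string())]),
        ),
    ];
    let (resolved, winners) = resolve_with_trace(&layers);
    assert_eq!(resolved["timeout"], "10s");          // cascaded final value
    assert_eq!(winners["timeout"], "country == IN"); // layer that won
}
```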