HYPERFLEET-720 - fix: update adapter configs#40
rh-amarin wants to merge 1 commit into openshift-hyperfleet:main
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: (no approvers yet). The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Walkthrough

This pull request standardizes testdata adapter configs and task configs across many adapters by flattening CRD-like top-level structures (removing apiVersion/kind/metadata/spec), promoting params to top level, and converting keys from camelCase to snake_case (e.g., apiCall → api_call, retryAttempts → retry_attempts, bySelectors → by_selectors, subscriptionId → subscription_id). Maestro/TLS, manifest/discovery, post/payloads, and delete options are reorganized into flatter, snake_case layouts. The deploy script (deploy-scripts/lib/adapter.sh) was refactored for clearer control flow, logging, validation, and Helm invocation handling.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@testdata/adapter-configs/nodepools-np-configmap/adapter-task-config.yaml`:
- Around line 59-60: The manifest.ref in adapter-task-config.yaml is pointing to
a developer's absolute local path; update the manifest.ref value to the
container-style path used by other adapter configs (replace the /Users/... path
with the container path pattern and filename, e.g. the in-container location of
adapter-task-resource-configmap.yaml) so the manifest references the resource
inside the container image; change the manifest.ref entry (in
adapter-task-config.yaml) to that container path.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fe098303-7816-4831-916c-749a843de84e
📒 Files selected for processing (15)
- testdata/adapter-configs/cl-maestro/adapter-config.yaml
- testdata/adapter-configs/cl-maestro/adapter-task-config.yaml
- testdata/adapter-configs/cl-maestro/values.yaml
- testdata/adapter-configs/clusters-cl-deployment/adapter-config.yaml
- testdata/adapter-configs/clusters-cl-deployment/adapter-task-config.yaml
- testdata/adapter-configs/clusters-cl-deployment/values.yaml
- testdata/adapter-configs/clusters-cl-job/adapter-config.yaml
- testdata/adapter-configs/clusters-cl-job/adapter-task-config.yaml
- testdata/adapter-configs/clusters-cl-job/values.yaml
- testdata/adapter-configs/clusters-cl-namespace/adapter-config.yaml
- testdata/adapter-configs/clusters-cl-namespace/adapter-task-config.yaml
- testdata/adapter-configs/clusters-cl-namespace/values.yaml
- testdata/adapter-configs/nodepools-np-configmap/adapter-config.yaml
- testdata/adapter-configs/nodepools-np-configmap/adapter-task-config.yaml
- testdata/adapter-configs/nodepools-np-configmap/values.yaml
```diff
 spec:
   adapter:
-    version: "0.1.0"
+    #version: "0.1.0"
```
Does this mean this version isn’t required in the configuration?
I aligned it with the adapter examples, making it "0.2.0".
```
&& resources.namespace0.statusFeedback.values.exists(v, has(v.name) && v.name == "phase" && has(v.fieldValue))
   ? resources.namespace0.statusFeedback.values.filter(v, v.name == "phase")[0].fieldValue.string
   : "Unknown"
# Health: Adapter execution status — surfaces errors from any phase
```
Just noticed — this Health condition in cl-maestro uses the newer, richer pattern (checking executionStatus, resourcesSkipped, structured error messages with phase/step/message), but the other four adapters (cl-deployment, cl-job, cl-namespace, np-configmap) all still use the simpler version that just checks adapter.?errorReason / adapter.?errorMessage.
Is that intentional? Totally fine if cl-maestro is meant to be the reference and the others will catch up later — just want to make sure it's not an oversight since they're all being touched in the same PR anyway.
I will focus here on just the minimal changes for adapting to the new adapter config properties; we can deal with the semantics later if that is OK.
Since the field format changed, the adapter deployment Helm command also needs updating: hyperfleet-e2e/deploy-scripts/lib/adapter.sh, line 170 in d695db9
0fcfc57 to e696f32
Actionable comments posted: 3
♻️ Duplicate comments (2)
testdata/adapter-configs/np-configmap/adapter-task-config.yaml (1)
Lines 59-60: ⚠️ Potential issue | 🔴 Critical. Replace the developer-local manifest path.
Line 60 has reverted to an absolute path from a local workstation. That will break in CI and inside the adapter container; use the in-container path pattern used by the other configs instead.
🐛 Proposed fix
```diff
 manifest:
-  ref: /Users/amarin/work/workspaces/github/hyperfleet/hyperfleet-e2e/ae2e/align-adapter-config/testdata/adapter-configs/nodepools-np-configmap/adapter-task-resource-configmap.yaml
+  ref: "/etc/adapter/configmap.yaml"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testdata/adapter-configs/np-configmap/adapter-task-config.yaml` around lines 59 - 60, The manifest.ref in adapter-task-config.yaml currently points to a developer-local absolute path; change the value of manifest.ref to the in-container relative path pattern used by other configs (e.g., the path under the container workspace or testdata bundle) so the adapter can load the resource in CI and inside the container; locate the manifest.ref entry in the file and replace the absolute /Users/... path with the project-relative in-container path used elsewhere in these configs.

testdata/adapter-configs/cl-maestro/adapter-config.yaml (1)
Lines 11-23: ⚠️ Potential issue | 🔴 Critical. Rename the Helm overrides alongside these client keys.
These settings moved to `clients.hyperfleet_api.*` and `clients.broker.subscription_id`. If `deploy-scripts/lib/adapter.sh` still writes the old camelCase paths, the adapter will boot with literal `CHANGE_ME` values for the API endpoint and broker subscription.

Verify that the deploy script now sets the snake_case keys. Expected: matches for `clients.hyperfleet_api.*` / `clients.broker.subscription_id`, and no matches for the old camelCase paths.

```bash
#!/bin/bash
set -euo pipefail
file="$(fd '^adapter\.sh$' deploy-scripts -tf | head -n1)"
[ -n "$file" ]
sed -n '150,220p' "$file"
rg -n 'clients\.hyperfleet_api\.base_url|clients\.hyperfleet_api\.retry_attempts|clients\.hyperfleet_api\.retry_backoff|clients\.broker\.subscription_id|hyperfleetApi|baseUrl|retryAttempts|retryBackoff|subscriptionId' "$file"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testdata/adapter-configs/cl-maestro/adapter-config.yaml` around lines 11 - 23, The deploy script deploy-scripts/lib/adapter.sh is still emitting old camelCase config paths (e.g. hyperfleetApi, baseUrl, retryAttempts, retryBackoff, subscriptionId); update it so the Helm/Env overrides it writes use the new snake_case keys clients.hyperfleet_api.* and clients.broker.subscription_id instead, removing any writes or matches for the old camelCase names; search for occurrences of hyperfleetApi, baseUrl, retryAttempts, retryBackoff, subscriptionId in adapter.sh and replace the output keys/templating to emit clients.hyperfleet_api.base_url, clients.hyperfleet_api.retry_attempts, clients.hyperfleet_api.retry_backoff and clients.broker.subscription_id accordingly so the adapter-config.yaml receives the correct values at deploy time.
🧹 Nitpick comments (1)
testdata/adapter-configs/cl-maestro/adapter-task-config.yaml (1)
Lines 11-14: Give the event and fetched generations different names.

`generation` is sourced twice from different places, so it is unclear whether `observed_generation` is tied to the incoming event or the fresh `/clusters` read. Renaming the captured field would make the intent explicit and avoid subtle drift if those values ever diverge.

♻️ Proposed refactor

```diff
-  - name: "generation"
+  - name: "clusterGeneration"
     field: "generation"
```

If the fetched cluster generation is the intended value, reference `clusterGeneration` explicitly where you need it.

Also applies to: 33-34, 291-292
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml` around lines 11 - 14, The config currently defines a field named "generation" sourced from event.generation which conflicts with another "generation" coming from the fetched /clusters read; rename the event-sourced field to make intent explicit (e.g., change name "generation" sourced from "event.generation" to "event_generation" or "incoming_generation") and ensure the fetched cluster value uses a distinct name (e.g., "clusterGeneration" or "observed_generation") wherever referenced (also update the other occurrences noted at lines 33-34 and 291-292) so code that expects the fetched cluster generation can reference clusterGeneration explicitly and avoid ambiguity.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@testdata/adapter-configs/cl-job/adapter-task-config.yaml`:
- Around line 139-157: The condition checks in the expressions are incorrectly
looking for condition.type == "Available" on resources.testJob; change these to
check for Job condition types "Complete" (for success) and "Failed" (for
failure) and update the ternary branches accordingly in the status, reason, and
message expressions (refer to resources.testJob, .status.conditions,
.filter(...) and .exists(...) usages). Specifically: replace the
existence/filter tests that use c.type == "Available" with c.type == "Complete"
for the success path, keep/ensure the failure path uses c.type == "Failed", and
adjust the reason/message text to reflect "testJobComplete"/"testJobFailed" (or
similar) and "testJob completed"/"testJob failed" while preserving the fallback
branches that use resources.?testJob.?status.hasValue() and pending defaults.
In `@testdata/adapter-configs/cl-job/values.yaml`:
- Around line 20-23: The deploy scripts pass camelCase Helm override keys that
no longer match the values file; update the Helm --set arguments in adapter.sh
and sentinel.sh to use the snake_case keys used in values.yaml (project_id,
subscription_id, topic, dead_letter_topic) instead of camelCase (e.g., change
broker.projectId -> broker.project_id, broker.subscriptionId ->
broker.subscription_id, broker.deadLetterTopic -> broker.dead_letter_topic) so
the broker config is properly injected during helm install/upgrade.
In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml`:
- Around line 99-105: The hyperfleet.io/cluster-name annotation is using the
wrong template variable (.clusterId) causing incorrect values; update the
annotation under annotations (key hyperfleet.io/cluster-name) to use
.clusterName instead of .clusterId so it matches the captured clusterName value
used elsewhere.
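The rename the comments above keep circling back to boils down to the `--set` flags the deploy scripts emit. A minimal sketch of the snake_case overrides, assuming placeholder values; `build_broker_overrides` is an illustrative helper, not a function from adapter.sh or sentinel.sh:

```shell
# Hypothetical sketch: emit the snake_case broker override flags that
# match the renamed keys in values.yaml (project_id, subscription_id,
# dead_letter_topic). Arguments are placeholder values.
build_broker_overrides() {
  # $1=project id  $2=subscription id  $3=dead-letter topic
  printf -- '--set broker.project_id=%s --set broker.subscription_id=%s --set broker.dead_letter_topic=%s' \
    "$1" "$2" "$3"
}

build_broker_overrides my-project my-sub my-dlq
```

The resulting string would then be spliced into the `helm upgrade --install <release> <chart>` invocation in place of the old camelCase `--set broker.projectId=...` flags.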
---
Duplicate comments:
In `@testdata/adapter-configs/cl-maestro/adapter-config.yaml`:
- Around line 11-23: The deploy script deploy-scripts/lib/adapter.sh is still
emitting old camelCase config paths (e.g. hyperfleetApi, baseUrl, retryAttempts,
retryBackoff, subscriptionId); update it so the Helm/Env overrides it writes use
the new snake_case keys clients.hyperfleet_api.* and
clients.broker.subscription_id instead, removing any writes or matches for the
old camelCase names; search for occurrences of hyperfleetApi, baseUrl,
retryAttempts, retryBackoff, subscriptionId in adapter.sh and replace the output
keys/templating to emit clients.hyperfleet_api.base_url,
clients.hyperfleet_api.retry_attempts, clients.hyperfleet_api.retry_backoff and
clients.broker.subscription_id accordingly so the adapter-config.yaml receives
the correct values at deploy time.
In `@testdata/adapter-configs/np-configmap/adapter-task-config.yaml`:
- Around line 59-60: The manifest.ref in adapter-task-config.yaml currently
points to a developer-local absolute path; change the value of manifest.ref to
the in-container relative path pattern used by other configs (e.g., the path
under the container workspace or testdata bundle) so the adapter can load the
resource in CI and inside the container; locate the manifest.ref entry in the
file and replace the absolute /Users/... path with the project-relative
in-container path used elsewhere in these configs.
---
Nitpick comments:
In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml`:
- Around line 11-14: The config currently defines a field named "generation"
sourced from event.generation which conflicts with another "generation" coming
from the fetched /clusters read; rename the event-sourced field to make intent
explicit (e.g., change name "generation" sourced from "event.generation" to
"event_generation" or "incoming_generation") and ensure the fetched cluster
value uses a distinct name (e.g., "clusterGeneration" or "observed_generation")
wherever referenced (also update the other occurrences noted at lines 33-34 and
291-292) so code that expects the fetched cluster generation can reference
clusterGeneration explicitly and avoid ambiguity.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 73d93cd9-a9b7-4d41-a075-9ff4b507d890
📒 Files selected for processing (15)
- testdata/adapter-configs/cl-deployment/adapter-config.yaml
- testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
- testdata/adapter-configs/cl-deployment/values.yaml
- testdata/adapter-configs/cl-job/adapter-config.yaml
- testdata/adapter-configs/cl-job/adapter-task-config.yaml
- testdata/adapter-configs/cl-job/values.yaml
- testdata/adapter-configs/cl-maestro/adapter-config.yaml
- testdata/adapter-configs/cl-maestro/adapter-task-config.yaml
- testdata/adapter-configs/cl-maestro/values.yaml
- testdata/adapter-configs/cl-namespace/adapter-config.yaml
- testdata/adapter-configs/cl-namespace/adapter-task-config.yaml
- testdata/adapter-configs/cl-namespace/values.yaml
- testdata/adapter-configs/np-configmap/adapter-config.yaml
- testdata/adapter-configs/np-configmap/adapter-task-config.yaml
- testdata/adapter-configs/np-configmap/values.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- testdata/adapter-configs/cl-maestro/values.yaml
e696f32 to beade25
Actionable comments posted: 3
♻️ Duplicate comments (2)
testdata/adapter-configs/cl-maestro/adapter-task-config.yaml (1)
Lines 99-105: ⚠️ Potential issue | 🟡 Minor. Use `clusterName` for `hyperfleet.io/cluster-name`.

Line 104 writes the ID into the name annotation even though `clusterName` was captured above, so anything reading this annotation gets the wrong value.

💡 Proposed fix

```diff
-    hyperfleet.io/cluster-name: "{{ .clusterId }}"
+    hyperfleet.io/cluster-name: "{{ .clusterName }}"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml` around lines 99 - 105, The annotation hyperfleet.io/cluster-name currently uses "{{ .clusterId }}" but should use the captured clusterName variable; update the annotations block so hyperfleet.io/cluster-name is set to "{{ .clusterName }}" (replace references to .clusterId with .clusterName) to ensure consumers read the actual cluster name.

testdata/adapter-configs/cl-job/adapter-task-config.yaml (1)
Lines 139-157: ⚠️ Potential issue | 🔴 Critical. Derive Job availability from `Complete`/`Failed`, not `Available`.

Kubernetes Jobs do not publish an `Available` condition. This block stays `"Unknown"`, and the downstream `cl-deployment` wait on `cl-job` availability never unblocks.

🐛 Proposed fix

```diff
   - type: "Available"
     status:
       expression: |
         has(resources.testJob) ?
-          ( resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Available")
-            ? resources.testJob.status.conditions.filter(c, c.type == "Available")[0].status : "Unknown")
+          ( resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Complete" && c.status == "True")
+            ? "True"
+            : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed" && c.status == "True")
+            ? "False"
+            : "Unknown")
           : "Unknown"
     reason:
       expression: |
-        resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Available")
-        ? resources.testJob.status.conditions.filter(c, c.type == "Available")[0].reason
-        : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed") ? "testJobFailed"
-        : resources.?testJob.?status.hasValue() ? "testJobInProgress" : "testJobPending"
+        resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Complete" && c.status == "True")
+        ? "JobComplete"
+        : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed" && c.status == "True") ? "JobFailed"
+        : resources.?testJob.?status.hasValue() ? "JobInProgress" : "JobPending"
     message:
       expression: |
-        resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Available")
-        ? resources.testJob.status.conditions.filter(c, c.type == "Available")[0].message
-        : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed") ? "testJob failed"
-        : resources.?testJob.?status.hasValue() ? "testJob in progress" : "testJob is pending"
+        resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Complete" && c.status == "True")
+        ? "testJob completed successfully"
+        : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed" && c.status == "True") ? "testJob failed"
+        : resources.?testJob.?status.hasValue() ? "testJob in progress" : "testJob is pending"
```

Does the Kubernetes Job status API expose an `Available` condition, or are the standard Job conditions `Complete` and `Failed`?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testdata/adapter-configs/cl-job/adapter-task-config.yaml` around lines 139 - 157, The current expressions look for a condition with type "Available" on resources.testJob (e.g., resources.testJob.status.conditions.filter(..., c.type == "Available")), but Kubernetes Jobs use "Complete" and "Failed"; update the three expressions (status, reason, message) to derive availability from conditions with type "Complete" and "Failed" instead: check for a "Complete" condition first to return Success/complete status and message, then check for a "Failed" condition to return failure status/reason/message (e.g., map to "Failed"/"testJobFailed"), and otherwise fall back to an in-progress/pending value using resources.?testJob.?status.hasValue(); keep existing use of resources.testJob, status.conditions.orValue([]), exists(...) and filter(...) helpers but replace comparisons to c.type == "Available" with c.type == "Complete" and add the analogous checks for c.type == "Failed".
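The precedence the fix asks for (success from a True `Complete` condition, failure from a True `Failed` condition, otherwise unknown) reduces to a small decision table. A sketch of that logic in shell; `job_phase` is a made-up name used only to illustrate the mapping, not code from the PR:

```shell
# Hypothetical sketch: map a Job's True-status condition types to the
# value the adapter should report for its "Available" condition.
# $1 is a space-separated list of condition types with status "True".
job_phase() {
  case " $1 " in
    *" Complete "*) echo "True" ;;    # Job finished successfully
    *" Failed "*)   echo "False" ;;   # Job failed
    *)              echo "Unknown" ;; # still running, or no status yet
  esac
}

job_phase "Complete"   # -> True
job_phase "Failed"     # -> False
```

Checking `Complete` before `Failed` mirrors the proposed CEL ternary ordering, so a Job carrying both conditions reports success first.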
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@testdata/adapter-configs/cl-deployment/values.yaml`:
- Around line 17-20: Replace all camelCase Helm keys with snake_case to match
the values.yaml: change occurrences of "projectId" to "project_id",
"subscriptionId" to "subscription_id" and "deadLetterTopic" to
"dead_letter_topic" wherever Helm --set is used in your deploy scripts (the
adapter.sh Helm --set args and the sentinel.sh Helm --set arg) and in the test
payload file that still contains camelCase; ensure the exact string replacements
are applied so Helm creates fields that match the values.yaml keys.
In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml`:
- Around line 42-45: The fixture hardcodes placementClusterName expression to
"cluster1", which prevents routing tests from using the MAESTRO_CONSUMER
override; change the expression for the placementClusterName entry to reference
the test environment variable (e.g. use "${MAESTRO_CONSUMER}" or a parameter
placeholder) so test setup can control routing, updating the
placementClusterName mapping in the adapter-task-config.yaml (the
placementClusterName name/expression pair) accordingly.
In `@testdata/adapter-configs/np-configmap/values.yaml`:
- Around line 16-20: Update the Helm value key names used when rendering adapter
values: replace occurrences of broker.googlepubsub.projectId with
broker.googlepubsub.project_id, broker.googlepubsub.subscriptionId with
broker.googlepubsub.subscription_id, and broker.googlepubsub.deadLetterTopic
with broker.googlepubsub.dead_letter_topic so the script passes the renamed
snake_case keys from values.yaml (look for these exact key strings in
deploy-scripts/lib/adapter.sh).
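One way to honor the `MAESTRO_CONSUMER` override described above is a default-with-fallback at render time. This is a sketch under the assumption that the fixture is rendered by the test setup; `render_placement` is an illustrative name, and `cluster1` is kept only as the fallback:

```shell
# Hypothetical sketch: derive the Maestro placement target from the
# environment, falling back to the current hardcoded literal when
# MAESTRO_CONSUMER is unset.
render_placement() {
  local consumer="${MAESTRO_CONSUMER:-cluster1}"
  printf 'placementClusterName: "%s"\n' "$consumer"
}

render_placement                 # -> placementClusterName: "cluster1"
MAESTRO_CONSUMER=consumer-a
render_placement                 # -> placementClusterName: "consumer-a"
```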
---
Duplicate comments:
In `@testdata/adapter-configs/cl-job/adapter-task-config.yaml`:
- Around line 139-157: The current expressions look for a condition with type
"Available" on resources.testJob (e.g.,
resources.testJob.status.conditions.filter(..., c.type == "Available")), but
Kubernetes Jobs use "Complete" and "Failed"; update the three expressions
(status, reason, message) to derive availability from conditions with type
"Complete" and "Failed" instead: check for a "Complete" condition first to
return Success/complete status and message, then check for a "Failed" condition
to return failure status/reason/message (e.g., map to "Failed"/"testJobFailed"),
and otherwise fall back to an in-progress/pending value using
resources.?testJob.?status.hasValue(); keep existing use of resources.testJob,
status.conditions.orValue([]), exists(...) and filter(...) helpers but replace
comparisons to c.type == "Available" with c.type == "Complete" and add the
analogous checks for c.type == "Failed".
In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml`:
- Around line 99-105: The annotation hyperfleet.io/cluster-name currently uses
"{{ .clusterId }}" but should use the captured clusterName variable; update the
annotations block so hyperfleet.io/cluster-name is set to "{{ .clusterName }}"
(replace references to .clusterId with .clusterName) to ensure consumers read
the actual cluster name.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c6c88262-34cf-44a4-9396-f7692577e824
📒 Files selected for processing (15)
- testdata/adapter-configs/cl-deployment/adapter-config.yaml
- testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
- testdata/adapter-configs/cl-deployment/values.yaml
- testdata/adapter-configs/cl-job/adapter-config.yaml
- testdata/adapter-configs/cl-job/adapter-task-config.yaml
- testdata/adapter-configs/cl-job/values.yaml
- testdata/adapter-configs/cl-maestro/adapter-config.yaml
- testdata/adapter-configs/cl-maestro/adapter-task-config.yaml
- testdata/adapter-configs/cl-maestro/values.yaml
- testdata/adapter-configs/cl-namespace/adapter-config.yaml
- testdata/adapter-configs/cl-namespace/adapter-task-config.yaml
- testdata/adapter-configs/cl-namespace/values.yaml
- testdata/adapter-configs/np-configmap/adapter-config.yaml
- testdata/adapter-configs/np-configmap/adapter-task-config.yaml
- testdata/adapter-configs/np-configmap/values.yaml
🚧 Files skipped from review as they are similar to previous changes (5)
- testdata/adapter-configs/cl-maestro/values.yaml
- testdata/adapter-configs/cl-job/adapter-config.yaml
- testdata/adapter-configs/cl-namespace/values.yaml
- testdata/adapter-configs/cl-deployment/adapter-config.yaml
- testdata/adapter-configs/np-configmap/adapter-config.yaml
beade25 to a2d479d
Actionable comments posted: 3
♻️ Duplicate comments (2)
testdata/adapter-configs/cl-maestro/adapter-task-config.yaml (1)
Lines 42-44: ⚠️ Potential issue | 🟠 Major. Parameterize `placementClusterName` instead of hardcoding `"cluster1"`.

Line 43 hardcodes the Maestro target, which makes routing behavior environment-insensitive and brittle for scenarios that expect dynamic consumer selection.
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml` around lines 42 - 44, The placementClusterName parameter is hardcoded to "cluster1"; replace that literal with a parameterized reference so the adapter can be configured per-environment (update the expression for name: "placementClusterName" to read from the template/variable/environment value rather than a fixed string), ensure a sensible default or required flag is documented in the description, and adjust any consuming code/templating that expects this field so it reads the new variable-based value (look for occurrences of placementClusterName in templates and adapter config readers to update usage).

testdata/adapter-configs/cl-job/adapter-task-config.yaml (1)
Lines 139-157: ⚠️ Potential issue | 🔴 Critical. Use Job `Complete`/`Failed` conditions; `Available` is invalid for Jobs.

Lines 143, 148, and 154 check `c.type == "Available"` on a Job. That condition is not produced for Jobs, so this status can stay `Unknown` and block downstream adapters waiting on job readiness.

Proposed fix

```diff
   - type: "Available"
     status:
       expression: |
         has(resources.testJob) ?
-          ( resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Available")
-            ? resources.testJob.status.conditions.filter(c, c.type == "Available")[0].status : "Unknown")
+          ( resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Complete" && c.status == "True")
+            ? "True"
+            : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed" && c.status == "True")
+            ? "False"
+            : "Unknown")
           : "Unknown"
     reason:
       expression: |
-        resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Available")
-        ? resources.testJob.status.conditions.filter(c, c.type == "Available")[0].reason
-        : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed") ? "testJobFailed"
-        : resources.?testJob.?status.hasValue() ? "testJobInProgress" : "testJobPending"
+        resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Complete" && c.status == "True")
+        ? "JobComplete"
+        : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed" && c.status == "True") ? "JobFailed"
+        : resources.?testJob.?status.hasValue() ? "JobInProgress" : "JobPending"
     message:
       expression: |
-        resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Available")
-        ? resources.testJob.status.conditions.filter(c, c.type == "Available")[0].message
-        : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed") ? "testJob failed"
-        : resources.?testJob.?status.hasValue() ? "testJob in progress" : "testJob is pending"
+        resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Complete" && c.status == "True")
+        ? "testJob completed successfully"
+        : resources.?testJob.?status.?conditions.orValue([]).exists(c, c.type == "Failed" && c.status == "True") ? "testJob failed"
+        : resources.?testJob.?status.hasValue() ? "testJob in progress" : "testJob is pending"
```

Verification (read-only): the first command should return no `Available` checks for `testJob` after the fix.

```bash
#!/bin/bash
set -euo pipefail
# Verify cl-job no longer checks Job condition type "Available"
rg -n -C2 'testJob.*Available|c\.type == "Available"|filter\(c, c\.type == "Available"\)' testdata/adapter-configs/cl-job/adapter-task-config.yaml
# Show downstream dependency that consumes cl-job reported "Available" condition
rg -n -C2 "adapter=='cl-job'|type=='Available'" testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
```

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testdata/adapter-configs/cl-job/adapter-task-config.yaml` around lines 139 - 157, The YAML is incorrectly checking Job condition type "Available" for resources.testJob (expressions using exists(c, c.type == "Available") and filter(c, c.type == "Available")), which Jobs do not emit; update those checks to use Job conditions "Complete" and "Failed" instead: replace exists(... == "Available") and filter(... == "Available") with the appropriate checks for "Complete" (treat as success) and "Failed" (treat as failure), and adjust the reason/message branches so the success branch returns the Complete condition's reason/message, the failure branch returns Failed reason/message (or the provided fallback strings), and keep the existing fallbacks that use resources.?testJob.?status.hasValue() and "Unknown"/"testJobPending" as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deploy-scripts/lib/adapter.sh`:
- Around line 218-221: The current conditional treats any failure of
discover_adapters as a successful no-op; change the control flow so you
distinguish a discovery error from a valid empty result: call discover_adapters
and check its exit status first (if non-zero) then log an error and return a
non-zero exit code, and only if it succeeds but yields an empty adapters list
log a warning and return 0. Update the block that uses discover_adapters (the
conditional around adapters=$(discover_adapters)) accordingly, and make the
identical fix for the second occurrence around lines 292-295 so discovery
failures are not masked as success.
- Around line 39-49: The adapter name is only checked for emptiness and can
contain path-traversal characters which may reach filesystem operations like rm
-rf "${dest_adapter_dir}"; add a strict character whitelist check for
adapter_name (allow only letters, digits and hyphens) and reject/log and return
non-zero when it fails, using the same validation wherever adapter_name is
parsed (references: variable adapter_name, adapter_configs_dir, adapter_dirs
array population, and the rm -rf call that uses dest_adapter_dir); ensure the
error message mentions invalid characters so callers know why it was rejected.
- Around line 266-267: The helm release check using grep -q "^${release_name}"
can match prefixes (e.g., adapter-x matching adapter-xy); update the condition
in the adapter release check to require an exact release name by anchoring the
match to either end-of-line or the following whitespace, e.g. replace the grep
with grep -E -q "^${release_name}([[:space:]]|$)" (or use grep -F -x if you
prefer exact-line matching) so the test for release_name is exact before calling
log_warning.
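The three adapter.sh findings above share a shape: validate input early, anchor matches, and separate "failed" from "empty". A combined sketch under the assumption that `log_error`, `log_warning`, and `discover_adapters` exist in the script; the function bodies here are illustrative stand-ins, not the real helpers:

```shell
# Sketch only: helper names mirror the review text, not necessarily the
# actual functions in deploy-scripts/lib/adapter.sh.
log_error()   { echo "ERROR: $*" >&2; }
log_warning() { echo "WARN: $*" >&2; }

# 1. Whitelist adapter names (letters, digits, hyphens) before they can
#    reach filesystem operations like rm -rf "${dest_adapter_dir}".
validate_adapter_name() {
  case "$1" in
    *[!a-zA-Z0-9-]*|"") log_error "invalid characters in adapter name: '$1'"; return 1 ;;
  esac
}

# 2. Exact-match a Helm release name so adapter-x never matches adapter-xy.
#    $2 stands in for the `helm list` output.
release_exists() {
  printf '%s\n' "$2" | grep -E -q "^$1([[:space:]]|\$)"
}

# 3. Treat a discovery failure as an error; only an empty result is a no-op.
deploy_all() {
  local adapters
  if ! adapters=$(discover_adapters); then
    log_error "adapter discovery failed"
    return 1
  fi
  if [ -z "$adapters" ]; then
    log_warning "no adapters discovered; nothing to deploy"
    return 0
  fi
  printf '%s\n' "$adapters"
}
```

Note the `local adapters` declaration on its own line: combining `local` with the command substitution would mask the exit status of `discover_adapters`, which is exactly the bug the first comment describes.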
---
Duplicate comments:
In `@testdata/adapter-configs/cl-job/adapter-task-config.yaml`:
- Around line 139-157: The YAML is incorrectly checking Job condition type
"Available" for resources.testJob (expressions using exists(c, c.type ==
"Available") and filter(c, c.type == "Available")), which Jobs do not emit;
update those checks to use Job conditions "Complete" and "Failed" instead:
replace exists(... == "Available") and filter(... == "Available") with the
appropriate checks for "Complete" (treat as success) and "Failed" (treat as
failure), and adjust the reason/message branches so the success branch returns
the Complete condition's reason/message, the failure branch returns Failed
reason/message (or the provided fallback strings), and keep the existing
fallbacks that use resources.?testJob.?status.hasValue() and
"Unknown"/"testJobPending" as before.
In `@testdata/adapter-configs/cl-maestro/adapter-task-config.yaml`:
- Around line 42-44: The placementClusterName parameter is hardcoded to
"cluster1"; replace that literal with a parameterized reference so the adapter
can be configured per-environment (update the expression for name:
"placementClusterName" to read from the template/variable/environment value
rather than a fixed string), ensure a sensible default or required flag is
documented in the description, and adjust any consuming code/templating that
expects this field so it reads the new variable-based value (look for
occurrences of placementClusterName in templates and adapter config readers to
update usage).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ae951c1d-fc95-47c9-a3c4-e818a1947fcf
📒 Files selected for processing (16)
- deploy-scripts/lib/adapter.sh
- testdata/adapter-configs/cl-deployment/adapter-config.yaml
- testdata/adapter-configs/cl-deployment/adapter-task-config.yaml
- testdata/adapter-configs/cl-deployment/values.yaml
- testdata/adapter-configs/cl-job/adapter-config.yaml
- testdata/adapter-configs/cl-job/adapter-task-config.yaml
- testdata/adapter-configs/cl-job/values.yaml
- testdata/adapter-configs/cl-maestro/adapter-config.yaml
- testdata/adapter-configs/cl-maestro/adapter-task-config.yaml
- testdata/adapter-configs/cl-maestro/values.yaml
- testdata/adapter-configs/cl-namespace/adapter-config.yaml
- testdata/adapter-configs/cl-namespace/adapter-task-config.yaml
- testdata/adapter-configs/cl-namespace/values.yaml
- testdata/adapter-configs/np-configmap/adapter-config.yaml
- testdata/adapter-configs/np-configmap/adapter-task-config.yaml
- testdata/adapter-configs/np-configmap/values.yaml
🚧 Files skipped from review as they are similar to previous changes (3)
- testdata/adapter-configs/cl-deployment/adapter-config.yaml
- testdata/adapter-configs/cl-deployment/values.yaml
- testdata/adapter-configs/np-configmap/adapter-config.yaml
# Validate adapter name is not empty (prevents issues from trailing commas)
if [[ -z "${adapter_name}" ]]; then
  log_error "Empty adapter name in CLUSTER_TIER0_ADAPTERS_DEPLOYMENT (check for trailing commas)" >&2
  return 1
fi
if [[ -d "${adapter_configs_dir}/${adapter_name}" ]]; then
  adapter_dirs+=("clusters|${adapter_name}")
else
  log_error "Cluster adapter directory not found: ${adapter_configs_dir}/${adapter_name}" >&2
  return 1
fi
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, locate and examine the file
if [[ -f "deploy-scripts/lib/adapter.sh" ]]; then
echo "=== File found. Checking file size ==="
wc -l deploy-scripts/lib/adapter.sh
else
echo "File not found at expected location. Searching..."
find . -name "adapter.sh" -type f 2>/dev/null
fi
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 151
🏁 Script executed:
#!/bin/bash
# Read the relevant sections of the file with context
echo "=== Lines 35-75 (covering lines 39-49 and 60-69) ==="
sed -n '35,75p' deploy-scripts/lib/adapter.sh
echo -e "\n=== Lines 120-145 (covering line 136 rm -rf usage) ==="
sed -n '120,145p' deploy-scripts/lib/adapter.sh
echo -e "\n=== Lines 210-230 (covering lines 218-221) ==="
sed -n '210,230p' deploy-scripts/lib/adapter.sh
echo -e "\n=== Lines 260-275 (covering lines 266-267) ==="
sed -n '260,275p' deploy-scripts/lib/adapter.sh
echo -e "\n=== Lines 285-300 (covering lines 292-295) ==="
sed -n '285,300p' deploy-scripts/lib/adapter.sh
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 4367
🏁 Script executed:
#!/bin/bash
# Check if adapter names flow into filesystem operations
echo "=== Searching for rm -rf operations involving adapter_name ==="
grep -n "rm -rf" deploy-scripts/lib/adapter.sh
echo -e "\n=== Searching for dest_adapter_dir construction ==="
grep -n "dest_adapter_dir" deploy-scripts/lib/adapter.sh
echo -e "\n=== Searching for adapter name validation or sanitization ==="
grep -n "validate_adapter_name\|adapter_name.*=~\|adapter_name.*grep" deploy-scripts/lib/adapter.sh
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 1083
Harden adapter name validation to prevent path traversal in filesystem operations
Adapter names are only validated for emptiness but not character content. Names containing path traversal sequences (e.g., ../..) pass validation, and the substring safety check on line 131 is insufficient to block them. This allows unsafe paths to flow into rm -rf "${dest_adapter_dir}" on line 136.
Add character validation to restrict adapter names to safe characters (alphanumeric and hyphens):
Suggested fix
+validate_adapter_name() {
+ [[ "$1" =~ ^[a-z0-9]([a-z0-9-]*[a-z0-9])?$ ]]
+}
+
if [[ -n "${cluster_adapters}" ]]; then
IFS=',' read -ra cluster_adapter_array <<< "${cluster_adapters}"
for adapter_name in "${cluster_adapter_array[@]}"; do
adapter_name=$(echo "${adapter_name}" | xargs)
if [[ -z "${adapter_name}" ]]; then
log_error "Empty adapter name in CLUSTER_TIER0_ADAPTERS_DEPLOYMENT (check for trailing commas)" >&2
return 1
fi
+ if ! validate_adapter_name "${adapter_name}"; then
+ log_error "Invalid adapter name '${adapter_name}' in CLUSTER_TIER0_ADAPTERS_DEPLOYMENT"
+ return 1
+ fi
if [[ -d "${adapter_configs_dir}/${adapter_name}" ]]; then
adapter_dirs+=("clusters|${adapter_name}")
else
log_error "Cluster adapter directory not found: ${adapter_configs_dir}/${adapter_name}" >&2
return 1
@@
if [[ -n "${nodepool_adapters}" ]]; then
IFS=',' read -ra nodepool_adapter_array <<< "${nodepool_adapters}"
for adapter_name in "${nodepool_adapter_array[@]}"; do
adapter_name=$(echo "${adapter_name}" | xargs)
if [[ -z "${adapter_name}" ]]; then
log_error "Empty adapter name in NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT (check for trailing commas)" >&2
return 1
fi
+ if ! validate_adapter_name "${adapter_name}"; then
+ log_error "Invalid adapter name '${adapter_name}' in NODEPOOL_TIER0_ADAPTERS_DEPLOYMENT"
+ return 1
+ fi
if [[ -d "${adapter_configs_dir}/${adapter_name}" ]]; then
adapter_dirs+=("nodepools|${adapter_name}")
else
log_error "NodePool adapter directory not found: ${adapter_configs_dir}/${adapter_name}" >&2
return 1
Also applies to: 60-69
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy-scripts/lib/adapter.sh` around lines 39 - 49, The adapter name is only
checked for emptiness and can contain path-traversal characters which may reach
filesystem operations like rm -rf "${dest_adapter_dir}"; add a strict character
whitelist check for adapter_name (allow only letters, digits and hyphens) and
reject/log and return non-zero when it fails, using the same validation wherever
adapter_name is parsed (references: variable adapter_name, adapter_configs_dir,
adapter_dirs array population, and the rm -rf call that uses dest_adapter_dir);
ensure the error message mentions invalid characters so callers know why it was
rejected.
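The suggested whitelist is easy to exercise in isolation. A minimal sketch, assuming a DNS-label-style rule (lowercase alphanumerics with inner hyphens); the helper name and sample inputs are illustrative, not taken from the repo:

```shell
# Hypothetical helper mirroring the suggested whitelist: lowercase
# alphanumerics with inner hyphens only, so path components like ".."
# or names with spaces can never reach rm -rf "${dest_adapter_dir}".
validate_adapter_name() {
  [[ "$1" =~ ^[a-z0-9]([a-z0-9-]*[a-z0-9])?$ ]]
}

for name in "cl-maestro" "np-configmap" "../etc" "bad name" "-leading"; do
  if validate_adapter_name "${name}"; then
    echo "accepted: ${name}"
  else
    echo "rejected: ${name}"
  fi
done
# accepted: cl-maestro
# accepted: np-configmap
# rejected: ../etc
# rejected: bad name
# rejected: -leading
```

Anchoring the regex with `^...$` is what makes this safe: a partial match anywhere in the name is not enough to pass.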
if ! adapters=$(discover_adapters); then
  log_warning "No adapters found to deploy"
  return 0
fi
Do not treat discovery errors as success/no-op
Both flows return success when discover_adapters fails, which hides real config errors (missing dirs, malformed adapter list) and can make CI pass with adapters silently skipped.
Suggested control-flow fix
- if ! adapters=$(discover_adapters); then
- log_warning "No adapters found to deploy"
- return 0
- fi
+ if ! adapters=$(discover_adapters); then
+ log_error "Adapter discovery failed; aborting deployment"
+ return 1
+ fi
@@
- if ! adapters=$(discover_adapters); then
- log_warning "No adapters found to uninstall"
- return 0
- fi
+ if ! adapters=$(discover_adapters); then
+ log_error "Adapter discovery failed; aborting uninstall"
+ return 1
+ fi
Also applies to: 292-295
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy-scripts/lib/adapter.sh` around lines 218 - 221, The current
conditional treats any failure of discover_adapters as a successful no-op;
change the control flow so you distinguish a discovery error from a valid empty
result: call discover_adapters and check its exit status first (if non-zero)
then log an error and return a non-zero exit code, and only if it succeeds but
yields an empty adapters list log a warning and return 0. Update the block that
uses discover_adapters (the conditional around adapters=$(discover_adapters))
accordingly, and make the identical fix for the second occurrence around lines
292-295 so discovery failures are not masked as success.
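The distinction described above (a hard discovery failure vs. a successful-but-empty result) can be sketched with stand-in functions; `discover_ok`, `discover_fail`, and `deploy_with` below are illustrative names, not the script's real ones:

```shell
# Stand-ins for the discovery step: one succeeds but finds nothing,
# one fails outright (e.g. a malformed adapter list).
discover_ok()   { echo ""; return 0; }
discover_fail() { return 1; }

deploy_with() {
  local adapters
  # Check the exit status of discovery first: a non-zero exit is an
  # error and must propagate, never be swallowed as a no-op.
  if ! adapters=$("$1"); then
    echo "ERROR: discovery failed" >&2
    return 1
  fi
  # Only a successful-but-empty result is a benign no-op.
  if [[ -z "${adapters}" ]]; then
    echo "WARN: no adapters found"
    return 0
  fi
  echo "deploying: ${adapters}"
}

deploy_with discover_ok                         # prints "WARN: no adapters found"
deploy_with discover_fail || echo "caught non-zero exit"
```

Because `adapters=$("$1")` preserves the command substitution's exit status, the single `if !` test still sees the failure; the key is to branch on the empty string separately afterwards.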
if ! helm list -n "${NAMESPACE}" 2>/dev/null | grep -q "^${release_name}"; then
  log_warning "Release '${release_name}' not found in namespace '${NAMESPACE}'"
🧩 Analysis chain
🏁 Script executed:
# First, let's locate and examine the adapter.sh file
find . -name "adapter.sh" -type f
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 110
🏁 Script executed:
# Now let's read the relevant section of the file, specifically around lines 266-267
wc -l ./deploy-scripts/lib/adapter.sh
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 114
🏁 Script executed:
# Read lines 260-280 to get context around the target lines
sed -n '260,280p' ./deploy-scripts/lib/adapter.sh
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 896
🏁 Script executed:
# Let's check if helm list -q is a valid flag and look for other helm list usages in the codebase
rg "helm list" ./deploy-scripts/ -A 1
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 975
🏁 Script executed:
# Let's check what helm list output looks like - examine the api.sh implementation more closely
sed -n '1,100p' ./deploy-scripts/lib/api.sh | grep -A 10 "helm list"
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 61
🏁 Script executed:
# Let's find the exact line in api.sh that uses helm list
rg "helm list" ./deploy-scripts/lib/api.sh -B 2 -A 3
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 300
🏁 Script executed:
# Let's also check the sentinel.sh file to see if it has the same pattern
rg "helm list" ./deploy-scripts/lib/sentinel.sh -B 2 -A 3
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 291
🏁 Script executed:
# Let's verify helm list -q is a valid flag by checking helm documentation or other usages
# First, let's see if there's any helm command documentation in the repo
find . -name "*.md" -type f -exec grep -l "helm list" {} \;
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 105
🏁 Script executed:
# Check the README for helm list documentation
grep -B 3 -A 3 "helm list" ./deploy-scripts/README.md
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 219
🏁 Script executed:
# Let's verify the exact helm list output format by checking if there are any test files or examples
find . -name "*.sh" -type f -exec grep -l "helm list" {} \; | head -5
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 202
🏁 Script executed:
# Let's read the entire context around the api.sh implementation to understand the pattern better
rg -B 10 "helm list" ./deploy-scripts/lib/api.sh | head -20
Repository: openshift-hyperfleet/hyperfleet-e2e
Length of output: 300
Use exact Helm release matching
grep -q "^${release_name}" lacks an end-of-line anchor, causing false positives. For example, release name adapter-x will match both adapter-x and adapter-xy. Use exact matching on the release name.
Suggested fix
- if ! helm list -n "${NAMESPACE}" 2>/dev/null | grep -q "^${release_name}"; then
+ if ! helm list -n "${NAMESPACE}" -q 2>/dev/null | grep -Fxq -- "${release_name}"; then
log_warning "Release '${release_name}' not found in namespace '${NAMESPACE}'"
return 0
fi
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- if ! helm list -n "${NAMESPACE}" 2>/dev/null | grep -q "^${release_name}"; then
-   log_warning "Release '${release_name}' not found in namespace '${NAMESPACE}'"
+ if ! helm list -n "${NAMESPACE}" -q 2>/dev/null | grep -Fxq -- "${release_name}"; then
+   log_warning "Release '${release_name}' not found in namespace '${NAMESPACE}'"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deploy-scripts/lib/adapter.sh` around lines 266 - 267, The helm release check
using grep -q "^${release_name}" can match prefixes (e.g., adapter-x matching
adapter-xy); update the condition in the adapter release check to require an
exact release name by anchoring the match to either end-of-line or the following
whitespace, e.g. replace the grep with grep -E -q
"^${release_name}([[:space:]]|$)" (or use grep -F -x if you prefer exact-line
matching) so the test for release_name is exact before calling log_warning.
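The prefix-match pitfall is easy to reproduce with two release names that share a prefix (the names below are made up for the demo). `helm list -q` prints one bare release name per line, which is exactly the shape that makes exact-line matching with `grep -Fxq` work:

```shell
# Two hypothetical release names, one a prefix of the other, as
# "helm list -q" would print them: one name per line.
releases=$'adapter-x\nadapter-xy'

# Unanchored prefix match: "^adapter-x" matches both lines.
echo "${releases}" | grep -c "^adapter-x"     # → 2

# Fixed-string, whole-line match: only the exact release matches.
echo "${releases}" | grep -Fxc "adapter-x"    # → 1
```

`-F` also disables regex interpretation, so release names containing dots or other metacharacters cannot accidentally widen the match.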
    has(resources.testJob) ? "True" : "False"
reason:
  expression: |
    has(resources.testJob.spec)
Category: Bug
The reason expression on line 130 checks has(resources.testJob.spec) while the status
expression on line 127 checks has(resources.testJob). These would produce contradictory
results when the resource exists but .spec is not yet populated — status says "True", reason
says "JobPending".
All other adapters in this PR use has(resources.&lt;name&gt;) consistently for both fields. Looks like an accidental change — should this be has(resources.testJob) to match?
- has(resources.testJob.spec)
+ has(resources.testJob)
Force-pushed from a2d479d to 4a2cc0f (Compare)
Caution: Failed to replace (edit) comment. This is likely due to insufficient permissions or the comment being deleted.
https://issues.redhat.com/browse/HYPERFLEET-720
Updates the adapter configs for the e2e tests following the config standard
This should be merged only after merging openshift-hyperfleet/hyperfleet-adapter#67
Summary by CodeRabbit