ci: add security and quality hardening workflows #135
kaio6fellipe wants to merge 17 commits into main from
Conversation
Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
…orts formatter Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Lambda integration test executed with success... client_payload:
📝 Walkthrough

Adds repository ownership; introduces multiple CI/security workflows (CodeQL, Scorecard, gitleaks, govulncheck); tightens workflow token permissions (moving to job-level scopes); expands Go linting rules; and refactors CLI and internal packages (command bootstrap, GitHub GraphQL, requirement checks, runner, stacks/HCL parsing) with lint suppressions and small formatting edits.
Sequence Diagram(s)

sequenceDiagram
participant CLI as CLI (runCommand)
participant Runner as Runner
participant Git as Git (repo)
participant GH as GitHub Client (REST/GraphQL)
participant Notifier as Notifier/Comments
CLI->>CLI: loadAndValidateConfig
CLI->>GH: checkPRRequirements (may return nil)
CLI->>Runner: Execute (delegates to executeStep)
Runner->>Git: changed stacks / exec steps
Runner-->>CLI: execution results
CLI->>GH: setPendingCommitStatus
CLI->>Notifier: postResultsAndStatus (comment/status/auto-merge)
GH->>GH: graphQLRequest / queryPRNodeID (for auto-merge)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)
OpenSSF Scorecard — 8/10 ✅
Actionable comments posted: 4
🧹 Nitpick comments (4)
.golangci.yml (1)
43-52: Redundant revive exclusion for test files. At line 48, revive is already fully disabled for _test.go. The second rule at lines 49–52 (scoped to text: "exported") is dead config — it can never match anything that wasn’t already filtered by the first rule. Drop it to avoid future confusion.

Proposed diff:

      - path: _test\.go
        linters:
          - errcheck
          - gosec
          - gocyclo
          - revive
    - - path: _test\.go
    -   text: "exported"
    -   linters:
    -     - revive

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In .golangci.yml around lines 43-52: remove the redundant revive exclusion block. The first rule that disables revive for path pattern _test\.go already covers all test files, so delete the second rule that also targets _test\.go with text: "exported" and lists revive; locate the two blocks referencing _test\.go and remove the one containing text: "exported" and the revive linter entry to avoid dead config and confusion.

.github/workflows/scorecard.yml (1)
11-11: Tighten workflow-level permissions to {}. The job already declares the exact permissions it needs (lines 17–21), so the workflow-level default can be empty rather than read-all. This matches the least-privilege pattern used in codeql.yml and security.yml in this same PR, and is the convention OSSF Scorecard itself recommends (the "Token-Permissions" check flags read-all at the top level).

Proposed diff:

    -permissions: read-all
    +permissions: {}

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/scorecard.yml at line 11: the top-level workflow permissions are set to "read-all", which is overly broad; change the top-level "permissions:" entry to an empty object "{}" so the workflow defaults to no permissions and let the job-level permissions (the explicit permissions declared for the job) remain as-is; update the top-level permissions key in the workflow YAML to {} to enforce least-privilege while preserving the job-level permission block.

.github/workflows/scorecard-pr.yml (1)
114-122: Avoid bc — use shell arithmetic or awk.
bc is available on ubuntu-latest today, but it’s not POSIX-required and has been dropped from base images of other runners in the past. Since the threshold is a single-decimal value, a tiny awk call is both robust and one fewer implicit dependency:

Proposed diff:

    - if [ "$(echo "${SCORE} < ${THRESHOLD}" | bc -l)" -eq 1 ]; then
    + if awk -v s="${SCORE}" -v t="${THRESHOLD}" 'BEGIN { exit !(s < t) }'; then
        echo "::error::OpenSSF Scorecard score ${SCORE} is below threshold ${THRESHOLD}"
        exit 1
      fi

Additionally, if SCORE comes back as null (jq returned no score), line 118 will evaluate null < 7.0 and still pass with bc. Consider validating SCORE is numeric before the comparison.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/scorecard-pr.yml around lines 114-122: the current workflow uses bc to compare SCORE and THRESHOLD (SCORE comes from steps.scorecard.outputs.score and THRESHOLD from env.SCORE_THRESHOLD), which adds a non-POSIX dependency and also lets non-numeric values like "null" slip through. Replace the bc invocation with a robust numeric check using awk (or integer-scaled shell arithmetic): first validate SCORE is numeric (e.g., ensure it matches a numeric pattern) and fail early if not, then compare SCORE and THRESHOLD with a single awk expression (or multiply by 10 and use shell integer comparison) and call echo "::error::..." and exit 1 when SCORE < THRESHOLD; update the conditional around SCORE/THRESHOLD accordingly so both numeric validation and the comparison are handled without bc.

.github/workflows/security.yml (1)
44-48: govulncheck will fail the workflow on any advisory, including transitive/unreachable ones.
govulncheck ./... exits non-zero for every finding, regardless of call-graph reachability of vulnerable symbols. Since this runs on every PR to main, unrelated PRs may be blocked whenever a new advisory drops upstream. Consider either:
- accepting the fail-fast behavior (current state) and committing to prompt dep bumps via Renovate, or
- decoupling it (e.g., continue-on-error: true on PRs, fail only on schedule/push to main), or
- scoping to main packages instead of ./... if test-only deps trigger noise.

No change required — just flagging the operational trade-off.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/security.yml around lines 44 - 48, The workflow currently runs govulncheck with "govulncheck ./..." in the "Run govulncheck" step which will fail the job for any advisory (including transitive/unreachable findings); update that step to one of the operational options: add continue-on-error: true to the "Run govulncheck" step for PR triggers and keep strict fails on push/schedule, or change the run command to target only main packages (e.g., replace "./..." with your module's main package patterns) to reduce noise, or move the govulncheck invocation to a scheduled/push-only job so PRs aren't blocked; reference the step named "Run govulncheck" and the tool "govulncheck" when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/scorecard-pr.yml:
- Around line 91-96: Guard against null comment bodies and paginate results when
searching for an existing marker: when iterating the array returned by
github.rest.issues.listComments (the variable comments) change the find
predicate that uses c.body.includes(marker) to first ensure c.body is a string
(e.g., typeof c.body === "string" && c.body.includes(marker)) so it won't throw
on null bodies; also switch to github.paginate(github.rest.issues.listComments,
...) so the full comment history is searched and the existing variable detection
(existing) won't miss prior bot comments on long-lived PRs.
- Around line 1-27: The workflow triggers on pull_request and thus the comment
step will 403 for fork PRs; update the scorecard-check job so the step that
posts a comment only runs for same-repo PRs by adding a conditional that
compares the PR head repo full name to the current repo (i.e., run the "Comment
on PR" step only when github.event.pull_request.head.repo.full_name ==
github.repository), or alternatively split the responsibilities into two
workflows: keep the scoring run on pull_request and move only the
comment/PR-write step into a separate workflow triggered by pull_request_target
to safely obtain write permissions.
- Around line 65-80: The table row breaks when c.reason contains newlines or
backticks; in the checks mapping sanitize c.reason before truncation by taking
(c.reason || ''), first replace pipe characters as already done, then normalize
newlines to spaces (e.g. replace(/\r?\n|\r/g, ' ')), optionally collapse
repeated whitespace, and remove or escape backticks (e.g. replace(/`/g, '') or
replace(/`/g, '\\`')), assign this to raw and then perform the length
check/truncation for reason so the produced `raw`/`reason` values (used in the
`checks` template) never contain newlines or unescaped backticks.
In @.github/workflows/security.yml:
- Around line 25-28: The Gitleaks Action usage
gitleaks/gitleaks-action@ff98106e4c7b2bc287b24eaf42907196329070c7 requires an
organization-level GITLEAKS_LICENSE secret; fix by either adding
GITLEAKS_LICENSE to repository secrets and exposing it in the job env
(GITLEAKS_LICENSE: ${{ secrets.GITLEAKS_LICENSE }}) where the action is invoked,
or replace the action invocation with a step that runs the gitleaks
binary/container directly (e.g., invoking the gitleaks CLI or Docker image) so
no license secret is required; update the workflow step that currently names
"Run Gitleaks" accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f896a312-4a4c-4e8c-bdd4-4a7e2863c734
📒 Files selected for processing (6)
- .github/CODEOWNERS
- .github/workflows/codeql.yml
- .github/workflows/scorecard-pr.yml
- .github/workflows/scorecard.yml
- .github/workflows/security.yml
- .golangci.yml
You are seeing this message because GitHub Code Scanning has recently been set up for this repository, or this pull request contains the workflow file for the Code Scanning tool. For more information about GitHub Code Scanning, check out the documentation.
The gitleaks GitHub Action requires a paid license for organization repositories. Replace with direct CLI binary installation. Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Convert all workflows to permissions: {} at workflow level with
per-job grants. This maximizes the OpenSSF Scorecard Token-Permissions
check (0/10 -> 10/10).
Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Fix goimports import grouping in lock packages, rename unused Cobra params to _, add doc comment for WorkflowStatusInProgress, and annotate intentional gosec findings (G204, G301, G304, G306) with nolint. Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Extract loadAndValidateConfig, checkPRRequirements, and postResultsAndStatus helpers. Pure structural refactoring with no behavioral changes. Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Extract validateRepository, validateRequirements, validateBranchAndWorkflow, and validateWorkflowSteps helpers. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
…utoMerge, ParseStackHcl Extract executeStep, graphQLRequest, queryPRNodeID, extractHclStringAttribute, and extractHclListAttribute helpers. Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
…verStackHclOrdered Extract checkSingleRequirement, walkAndCollectStacks, and resolveAndBuildEntries helpers. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com> Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Extract setPendingCommitStatus to reduce runCommand complexity, annotate G704 SSRF findings, add doc comments for exported constants, and apply De Morgan's law in checkSingleRequirement. Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
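As an aside on the De Morgan rewrite mentioned in the commit above: the law turns a negated conjunction into a disjunction of negations, which linters often prefer as flatter, easier-to-read conditions. A minimal illustration in Go (the boolean fields here are invented for illustration; they are not the actual checkSingleRequirement condition):

```go
package main

import "fmt"

func main() {
	// De Morgan's law: !(a && b) is equivalent to (!a || !b).
	// The rewrite replaces a negated compound condition with a
	// flat disjunction of simple negations.
	cases := []struct{ passed, required bool }{
		{true, true}, {true, false}, {false, true}, {false, false},
	}
	for _, tc := range cases {
		before := !(tc.passed && tc.required)
		after := !tc.passed || !tc.required
		fmt.Println(before == after) // true for every input combination
	}
}
```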
🌊 Neptune Plan Results
Terraform Stacks: Neptune completed the plan with status: ✅
🌊 Neptune Apply Results
Terraform Stacks: Neptune completed the apply with status: ✅
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/config/validate.go (1)
101-119: ⚠️ Potential issue | 🟠 Major: Reject empty phase maps and empty step lists before iterating.
Because the plan/apply checks run inside the phase loop, a workflow with wf.Phases == nil or an empty phase map passes validation. Similarly, a phase with zero steps bypasses the line 117 check.

Proposed fix:

     func validateWorkflowSteps(cfg *domain.NeptuneConfig) error {
     	for _, wf := range cfg.Workflows.Workflows {
    +		if _, hasPlan := wf.Phases["plan"]; !hasPlan {
    +			return errors.New("phases should include at least plan and apply phases")
    +		}
    +		if _, hasApply := wf.Phases["apply"]; !hasApply {
    +			return errors.New("phases should include at least plan and apply phases")
    +		}
     		for phaseName, phase := range wf.Phases {
     			for _, dep := range phase.DependsOn {
     				if _, ok := wf.Phases[dep]; !ok {
     					return fmt.Errorf("phase %s depends on %s, but %s is not a valid workflow phase", phaseName, dep, dep)
     				}
     			}
    -			if _, hasPlan := wf.Phases["plan"]; !hasPlan {
    -				return errors.New("phases should include at least plan and apply phases")
    -			}
    -			if _, hasApply := wf.Phases["apply"]; !hasApply {
    -				return errors.New("phases should include at least plan and apply phases")
    -			}
    +			if len(phase.Steps) == 0 {
    +				return fmt.Errorf("phase %s must include at least one step", phaseName)
    +			}
     			for _, step := range phase.Steps {

As per coding guidelines, internal/config validation must check "presence of both plan and apply phases."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/config/validate.go` around lines 101 - 119, In validateWorkflowSteps, ensure you reject workflows with nil or empty phase maps and phases with empty step lists before iterating: first check cfg.Workflows.Workflows entries for wf.Phases == nil or len(wf.Phases) == 0 and return an error stating both plan and apply are required; then validate presence of "plan" and "apply" in wf.Phases (using wf.Phases["plan"] and wf.Phases["apply"]) before looping phase.Steps; inside the phase loop, verify phase.Steps is non-nil and len(phase.Steps) > 0 and return a clear error if empty, then continue the existing dependency and step.Run checks.
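The guard described above can be exercised in isolation. Below is a minimal sketch with simplified stand-in types; Phase, Workflow, and validatePhases are invented for illustration and differ from the real domain.NeptuneConfig types:

```go
package main

import (
	"errors"
	"fmt"
)

// Phase and Workflow are simplified stand-ins for the real domain types.
type Phase struct {
	DependsOn []string
	Steps     []string
}

type Workflow struct {
	Phases map[string]Phase
}

// validatePhases checks required phases up front, so a nil or empty
// phase map fails before the per-phase loop is ever entered, and each
// phase must carry at least one step.
func validatePhases(wf Workflow) error {
	for _, required := range []string{"plan", "apply"} {
		if _, ok := wf.Phases[required]; !ok {
			return errors.New("phases should include at least plan and apply phases")
		}
	}
	for name, phase := range wf.Phases {
		for _, dep := range phase.DependsOn {
			if _, ok := wf.Phases[dep]; !ok {
				return fmt.Errorf("phase %s depends on %s, but %s is not a valid workflow phase", name, dep, dep)
			}
		}
		if len(phase.Steps) == 0 {
			return fmt.Errorf("phase %s must include at least one step", name)
		}
	}
	return nil
}

func main() {
	// A nil phase map is rejected before any iteration.
	fmt.Println(validatePhases(Workflow{}) != nil) // true

	ok := Workflow{Phases: map[string]Phase{
		"plan":  {Steps: []string{"terraform plan"}},
		"apply": {DependsOn: []string{"plan"}, Steps: []string{"terraform apply"}},
	}}
	fmt.Println(validatePhases(ok)) // <nil>
}
```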
🧹 Nitpick comments (2)
internal/github/graphql.go (1)
27-67: Minor: prefer surfacing HTTP status before JSON decode for unauthenticated/5xx responses.
On non-2xx responses (e.g., 401/403/429/5xx from GitHub or a proxy) the body may not be valid GraphQL JSON, so you'll return "decode GraphQL response: ..." and lose the actual status. Checking resp.StatusCode (or at least including it in the decode-error path) would make auto-merge failures easier to diagnose. Current ordering is acceptable for the GraphQL happy path, so treating this as optional.

♻️ Suggested tweak:

     var gqlResp graphQLResponse
     if err := json.NewDecoder(resp.Body).Decode(&gqlResp); err != nil {
    -	return nil, fmt.Errorf("decode GraphQL response: %w", err)
    +	return nil, fmt.Errorf("decode GraphQL response (status %d): %w", resp.StatusCode, err)
     }
     if len(gqlResp.Errors) > 0 {
     	return nil, fmt.Errorf("GraphQL errors: %s", gqlResp.Errors[0].Message)
     }
     if resp.StatusCode != http.StatusOK {
     	return nil, fmt.Errorf("GraphQL status %d", resp.StatusCode)
     }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @internal/github/graphql.go around lines 27-67: the current graphQLRequest decodes the response body before checking resp.StatusCode, which can hide non-2xx HTTP status codes (401/403/5xx) when the body isn't valid GraphQL JSON; change graphQLRequest to check resp.StatusCode immediately after receiving resp and return an error that includes the numeric status (and optionally a small snippet of the body) if it is not http.StatusOK, only then attempt json.NewDecoder to populate graphQLResponse for successful 200 responses; reference graphQLRequest, resp.StatusCode and graphQLResponse when locating the change.

cmd/command.go (1)
107-128: Drop the unused env return from loadAndValidateConfig.
runCommand discards the second return value (cfg, _ := loadAndValidateConfig(cmd)), and no other caller exists. Simplify the signature to return only *domain.NeptuneConfig.

♻️ Proposed refactor:

    -func loadAndValidateConfig(cmd *cobra.Command) (*domain.NeptuneConfig, map[string]string) {
    +func loadAndValidateConfig(cmd *cobra.Command) *domain.NeptuneConfig {
     	env, err := config.LoadEnv()
     	if err != nil {
     		log.For("cli").Error("Error", "err", err)
     		os.Exit(1)
     	}
     	cfg, err := loadConfig(env)
     	if err != nil {
     		log.For("cli").Error("Error", "err", err)
     		os.Exit(1)
     	}
     	if err := config.Validate(cfg); err != nil {
     		notifyAndExit(cfg, err.Error(), 1)
     	}
     	level, err := effectiveLogLevel(cmd, cfg.LogLevel)
     	if err != nil {
     		log.For("cli").Error("Error", "err", err)
     		os.Exit(1)
     	}
     	log.Init(level)
    -	return cfg, env
    +	return cfg
     }

And at the call site:

    -	cfg, _ := loadAndValidateConfig(cmd)
    +	cfg := loadAndValidateConfig(cmd)

Also applies to: 217-217
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/command.go` around lines 107 - 128, The function loadAndValidateConfig currently returns (*domain.NeptuneConfig, map[string]string) but the env map is never used; change loadAndValidateConfig to return only *domain.NeptuneConfig by removing the env variable from the signature and all related return values, and update its callers (e.g., runCommand which currently does cfg, _ := loadAndValidateConfig(cmd)) to use cfg := loadAndValidateConfig(cmd); keep all internal logic (LoadEnv, loadConfig, Validate, effectiveLogLevel, log.Init) intact but drop the env return and any variable assigned solely to support that return.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/config/validate.go`:
- Around line 21-24: The Validate function dereferences cfg without checking for
nil, causing a panic; update Validate (in internal/config/validate.go) to first
check if cfg == nil and return a descriptive error suitable for GitHub comments
(e.g., "config is nil or failed to load") instead of proceeding to call
validateRepository, then continue with existing validation flow for
cfg.Repository; ensure the error type/message matches other validation errors
returned by Validate.
In `@internal/run/runner.go`:
- Around line 58-63: The current code prefixes both stdout and stderr together
causing empty stdout to become a fake `"[stack] "` entry; update the logic in
runner.go around the handling of runOut.Output and runOut.Error so each stream
is prefixed independently only when non-empty (i.e., add separate checks and
only prepend "["+stack+"] " to runOut.Output if runOut.Output != "" and
separately to runOut.Error if runOut.Error != ""), leaving the other field
untouched.
In `@internal/stacks/stackhcl.go`:
- Around line 47-57: The extractHclListAttribute loop currently skips empty
strings (s == "") which silently drops entries; change extractHclListAttribute
to treat empty string elements as invalid by returning an error instead of
omitting them: when v.Type() == cty.String and v.AsString() yields "", return a
descriptive fmt.Errorf (mentioning the attribute name variable name and that
empty strings are not allowed) so depends_on = ["", "foo"] fails rather than
producing ["foo"]; update any tests or callers that rely on silent dropping
accordingly.
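The stream-prefixing fix described for internal/run/runner.go above can be illustrated with a small sketch; prefixStreams is a hypothetical helper, not the actual runner code:

```go
package main

import "fmt"

// prefixStreams prepends "[stack] " to each output stream independently,
// so an empty stream stays empty instead of becoming a bare "[stack] " entry.
func prefixStreams(stack, stdout, stderr string) (string, string) {
	if stdout != "" {
		stdout = "[" + stack + "] " + stdout
	}
	if stderr != "" {
		stderr = "[" + stack + "] " + stderr
	}
	return stdout, stderr
}

func main() {
	// Empty stdout remains empty; only the non-empty stderr is prefixed.
	out, errOut := prefixStreams("network", "", "exit status 1")
	fmt.Printf("%q %q\n", out, errOut) // "" "[network] exit status 1"
}
```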
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9d245054-8f00-4b4d-8476-a6d4e26f5758
📒 Files selected for processing (28)
- .github/workflows/cross-repo-test.yml
- .github/workflows/docs.yml
- .github/workflows/e2e.yml
- .github/workflows/integration.yml
- .github/workflows/label-old-prs.yml
- .github/workflows/labeler.yml
- .github/workflows/lint.yml
- .github/workflows/neptune.yml
- .github/workflows/release.yml
- .github/workflows/scorecard.yml
- .github/workflows/security.yml
- .github/workflows/test.yml
- cmd/command.go
- cmd/stacks.go
- cmd/unlock.go
- cmd/version.go
- internal/config/validate.go
- internal/domain/lock.go
- internal/git/config.go
- internal/git/rebase.go
- internal/github/graphql.go
- internal/github/requirements.go
- internal/lock/gcs.go
- internal/lock/s3.go
- internal/notifications/github/comment.go
- internal/run/runner.go
- internal/stacks/local.go
- internal/stacks/stackhcl.go
✅ Files skipped from review due to trivial changes (10)
- internal/lock/gcs.go
- cmd/version.go
- internal/git/rebase.go
- internal/lock/s3.go
- internal/domain/lock.go
- cmd/stacks.go
- internal/notifications/github/comment.go
- internal/stacks/local.go
- internal/git/config.go
- cmd/unlock.go
🚧 Files skipped from review as they are similar to previous changes (2)
- .github/workflows/security.yml
- .github/workflows/scorecard.yml
Test files use fake RSA private key stubs ("key", "x") for unit tests.
Allowlist all *_test.go files to prevent false positives.
Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Force-pushed from 1130ad6 to e4c4a3c
Fork PRs don't have write access to the GITHUB_TOKEN, so the comment step would fail with 403. Skip it for forks — the scorecard check and threshold enforcement still run. Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Summary
@kaio6fellipe as default owner

Context
Replicates security and quality configurations from event-driven-bookinfo. All actions SHA-pinned. Least-privilege permissions throughout. Keeps existing Renovate config (no Dependabot).
Spec: docs/superpowers/specs/2026-04-20-security-quality-hardening-design.md in code-agent-hub.

Test plan
🤖 Generated with Claude Code
Co-Authored-By: Claude Opus 4.6 noreply@anthropic.com
Summary by CodeRabbit
Chores
Refactor