
ci: add security and quality hardening workflows#135

Open
kaio6fellipe wants to merge 17 commits into main from ci/security-quality-hardening

Conversation

Collaborator

@kaio6fellipe kaio6fellipe commented Apr 20, 2026

Summary

  • Add CodeQL SAST workflow for Go analysis (weekly + push/PR to main)
  • Add OSSF Scorecard supply chain analysis (weekly + push to main)
  • Add Scorecard PR Check with 7.0 threshold enforcement and PR comments
  • Add security scanning workflow (gitleaks secret scan + govulncheck v1.1.4)
  • Add CODEOWNERS with @kaio6fellipe as default owner
  • Enhance golangci-lint config: add gosec, revive, gocyclo, misspell, unconvert linters and goimports formatter
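As a rough sketch, the enhanced linter config described above might look like the following (a hypothetical fragment; the PR's actual `.golangci.yml` keys and layout may differ):

```yaml
# Hypothetical sketch of the linter additions described above; the PR's
# actual .golangci.yml may structure these sections differently.
linters:
  enable:
    - gosec      # security-oriented static checks
    - revive     # style/documentation lint (golint successor)
    - gocyclo    # cyclomatic complexity limits
    - misspell   # common English misspellings
    - unconvert  # redundant type conversions

formatters:
  enable:
    - goimports  # import grouping and formatting
```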

Context

Replicates security and quality configurations from event-driven-bookinfo. All actions SHA-pinned. Least-privilege permissions throughout. Keeps existing Renovate config (no Dependabot).
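The least-privilege pattern referred to here is conventionally an empty workflow-level `permissions` block with explicit per-job grants, roughly as below (job name, scopes, and the SHA placeholder are illustrative, not the PR's exact content):

```yaml
permissions: {}  # no default token scopes at the workflow level

jobs:
  analyze:
    runs-on: ubuntu-latest
    permissions:
      contents: read          # checkout only
      security-events: write  # e.g. upload SARIF results
    steps:
      - uses: actions/checkout@<pinned-sha>  # placeholder; pin a real commit SHA
```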

Spec: docs/superpowers/specs/2026-04-20-security-quality-hardening-design.md in code-agent-hub.

Test plan

  • CodeQL workflow triggers on push to main and runs successfully
  • Scorecard workflow runs on manual dispatch
  • Scorecard PR check posts comment and enforces threshold on this PR
  • Gitleaks scan passes on this PR
  • Govulncheck scan passes on this PR
  • golangci-lint with new linters passes (or surfaces known violations to address separately)

🤖 Generated with Claude Code

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Summary by CodeRabbit

  • Chores

    • Added repository code ownership and multiple security CI workflows (CodeQL, Scorecard on PR/weekly, secret/vulnerability scans).
    • Scoped and tightened CI token permissions to minimize default access.
    • Added gitleaks configuration and expanded linter/formatter rules for improved code quality.
  • Refactor

    • Internal command, runner, stack parsing, validation, and GitHub integration logic reorganized for clearer structure and reuse.

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
…orts formatter

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
@github-actions

Lambda integration test executed successfully...

client_payload:

@coderabbitai

coderabbitai bot commented Apr 20, 2026

Warning

Rate limit exceeded

@kaio6fellipe has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 37 minutes and 19 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 37 minutes and 19 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 354b8117-f66b-4318-b206-8481b61217b9

📥 Commits

Reviewing files that changed from the base of the PR and between 1130ad6 and f41bee9.

📒 Files selected for processing (2)
  • .github/workflows/scorecard-pr.yml
  • .gitleaks.toml
📝 Walkthrough

Walkthrough

Adds repository ownership; introduces multiple CI/security workflows (CodeQL, Scorecard, gitleaks, govulncheck); tightens workflow token permissions (move to job-level scopes); expands Go linting rules; and refactors CLI and internal packages (command bootstrap, GitHub GraphQL, requirement checks, runner, stacks/HCL parsing) with lint suppressions and small formatting edits.

Changes

| Cohort / File(s) | Summary |
|---|---|
| **Repository governance**<br>`.github/CODEOWNERS` | Adds a CODEOWNERS file assigning ownership of all files to @kaio6fellipe. |
| **New security & quality workflows**<br>`.github/workflows/codeql.yml`, `.github/workflows/scorecard.yml`, `.github/workflows/scorecard-pr.yml`, `.github/workflows/security.yml` | Adds CodeQL, scheduled and PR Scorecard workflows (including PR score enforcement/commenting), and a Security Scanning workflow running gitleaks and govulncheck. |
| **Workflow permissions changes**<br>`.github/workflows/cross-repo-test.yml`, `.github/workflows/docs.yml`, `.github/workflows/e2e.yml`, `.github/workflows/integration.yml`, `.github/workflows/label-old-prs.yml`, `.github/workflows/labeler.yml`, `.github/workflows/lint.yml`, `.github/workflows/neptune.yml`, `.github/workflows/release.yml`, `.github/workflows/test.yml` | Replaced many workflow-level permissions with `permissions: {}` and restored required scopes on specific jobs to limit token privileges. |
| **Lint/config**<br>`.golangci.yml`, `.gitleaks.toml` | Enables additional golangci-lint checks and goimports formatter configuration; adds a gitleaks config with a test-only RSA allowlist. |
| **CLI bootstrap & PR/status refactor**<br>`cmd/command.go`, `cmd/stacks.go`, `cmd/unlock.go`, `cmd/version.go` | Centralized config loading/validation and PR requirement/status/comment/auto-merge logic into helpers; minor Cobra RunE signature cleanups and lint suppression comments. |
| **GitHub client & requirements**<br>`internal/github/graphql.go`, `internal/github/requirements.go` | Split GraphQL auto-merge into reusable request/query helpers; added `graphQLRequest` and `queryPRNodeID`; centralized single-requirement check helper replacing inline switch logic. |
| **Runner and step execution**<br>`internal/run/runner.go` | Introduced `executeStep` to encapsulate per-step/stack execution and simplified the `Execute` flow; added `//nolint:gosec` on exec usage. |
| **Stacks discovery & HCL parsing**<br>`internal/stacks/local.go`, `internal/stacks/stackhcl.go` | Split stack discovery into collection and resolution helpers; added HCL attribute extractor helpers and refactored `ParseStackHcl`; added some `//nolint:gosec` annotations. |
| **Validation, git, lock, notifications, formatting edits**<br>`internal/config/validate.go`, `internal/domain/lock.go`, `internal/git/...`, `internal/lock/gcs.go`, `internal/lock/s3.go`, `internal/notifications/github/comment.go` | Decomposed validation into helpers, added doc comments, multiple `//nolint:gosec` suppressions, and minor import/blank-line formatting changes. |

Sequence Diagram(s)

sequenceDiagram
  participant CLI as CLI (runCommand)
  participant Runner as Runner
  participant Git as Git (repo)
  participant GH as GitHub Client (REST/GraphQL)
  participant Notifier as Notifier/Comments

  CLI->>CLI: loadAndValidateConfig
  CLI->>GH: checkPRRequirements (may return nil)
  CLI->>Runner: Execute (delegates to executeStep)
  Runner->>Git: changed stacks / exec steps
  Runner-->>CLI: execution results
  CLI->>GH: setPendingCommitStatus
  CLI->>Notifier: postResultsAndStatus (comment/status/auto-merge)
  GH->>GH: graphQLRequest / queryPRNodeID (for auto-merge)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇
I hopped through diffs with whiskers bright,
CI candles lit the nightly night,
Linters sang, the Scorecard chimed,
Securely threaded, one commit at a time,
A carrot-coded cheer — hop on, delight! 🥕

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Description check | ❓ Inconclusive | The PR description covers the key changes (CodeQL, Scorecard, security scanning, CODEOWNERS, linter enhancements) and includes context and a test plan, but lacks detailed sections matching the template structure (What/Why/Who clearly separated, linked issues, and checklist completion). | Structure the description using the template sections explicitly: clarify 'What' and 'Why', confirm documentation/test status in the checklist, and optionally link related issues or design specs more formally. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
|---|---|---|
| Title check | ✅ Passed | The title 'ci: add security and quality hardening workflows' accurately and concisely summarizes the main change: adding multiple security and quality workflows and configurations. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 92.31%, which is sufficient. The required threshold is 80.00%. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions

github-actions bot commented Apr 20, 2026

OpenSSF Scorecard — 8/10 ✅

| Check | Score | Details |
|---|---|---|
| Binary-Artifacts | 10/10 | no binaries found in the repo |
| CI-Tests | 10/10 | 13 out of 13 merged PRs checked by a CI test -- score normalized to 10 |
| Code-Review | 0/10 | Found 0/30 approved changesets -- score normalized to 0 |
| Dangerous-Workflow | 10/10 | no dangerous workflow patterns detected |
| License | 10/10 | license file detected |
| Pinned-Dependencies | 9/10 | dependency not pinned by hash detected -- score normalized to 9 |
| Security-Policy | 10/10 | security policy file detected |
| Token-Permissions | 10/10 | GitHub workflow tokens follow principle of least privilege |
| Vulnerabilities | 6/10 | 4 existing vulnerabilities detected |

Threshold: 7 | Full report


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
.golangci.yml (1)

43-52: Redundant revive exclusion for test files.

At Line 48, revive is already fully disabled for _test.go. The second rule at Lines 49–52 (scoped to text: "exported") is dead config — it can never match anything that wasn’t already filtered by the first rule. Drop it to avoid future confusion.

Proposed diff
       - path: _test\.go
         linters:
           - errcheck
           - gosec
           - gocyclo
           - revive
-      - path: _test\.go
-        text: "exported"
-        linters:
-          - revive
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.golangci.yml around lines 43 - 52, Remove the redundant revive exclusion
block in .golangci.yml: the first rule that disables revive for path pattern
`_test\.go` already covers all test files, so delete the second rule that also
targets `_test\.go` with `text: "exported"` and lists `revive`; locate the two
blocks referencing `_test\.go` and remove the one containing `text: "exported"`
and the `revive` linter entry to avoid dead config and confusion.
.github/workflows/scorecard.yml (1)

11-11: Tighten workflow-level permissions to {}.

The job already declares the exact permissions it needs (Lines 17–21), so the workflow-level default can be empty rather than read-all. This matches the least-privilege pattern used in codeql.yml and security.yml in this same PR, and is the convention OSSF Scorecard itself recommends (the "Token-Permissions" check flags read-all at the top level).

Proposed diff
-permissions: read-all
+permissions: {}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/scorecard.yml at line 11, The top-level workflow
permissions are set to "read-all", which is overly broad; change the top-level
"permissions:" entry to an empty object "{}" so the workflow defaults to no
permissions and let the job-level permissions (the explicit permissions declared
for the job) remain as-is; update the top-level `permissions` key in the
workflow YAML to {} to enforce least-privilege while preserving the job-level
permission block.
.github/workflows/scorecard-pr.yml (1)

114-122: Avoid bc — use shell arithmetic or awk.

bc is available on ubuntu-latest today, but it’s not POSIX-required and has been dropped from base images of other runners in the past. Since the threshold is a single-decimal value, a tiny awk call is both robust and one fewer implicit dependency:

Proposed diff
-          if [ "$(echo "${SCORE} < ${THRESHOLD}" | bc -l)" -eq 1 ]; then
+          if awk -v s="${SCORE}" -v t="${THRESHOLD}" 'BEGIN { exit !(s < t) }'; then
             echo "::error::OpenSSF Scorecard score ${SCORE} is below threshold ${THRESHOLD}"
             exit 1
           fi

Additionally, if SCORE comes back as null (jq returned no score) Line 118 will evaluate null < 7.0 and still pass with bc. Consider validating SCORE is numeric before the comparison.
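A self-contained sketch of that combined check, validating that the score is numeric before the awk comparison (the function name and example values are illustrative, not CodeRabbit's exact proposal):

```shell
# Hedged sketch: reject non-numeric scores (e.g. "null"), then compare
# against the threshold with awk instead of bc.
score_below_threshold() {
  score="$1"
  threshold="$2"
  # Reject empty strings, non-digit/dot characters, a lone ".", or two dots.
  case "$score" in
    ''|*[!0-9.]*|.|*.*.*) echo "invalid"; return ;;
  esac
  # awk exits 0 when score < threshold, non-zero otherwise.
  if awk -v s="$score" -v t="$threshold" 'BEGIN { exit !(s < t) }'; then
    echo "below"
  else
    echo "ok"
  fi
}

score_below_threshold 6.5 7.0   # below
score_below_threshold 8.0 7.0   # ok
score_below_threshold null 7.0  # invalid
```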

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/scorecard-pr.yml around lines 114 - 122, The current
workflow uses bc to compare SCORE and THRESHOLD (SCORE comes from
steps.scorecard.outputs.score and THRESHOLD from env.SCORE_THRESHOLD), which
adds a non-POSIX dependency and also lets non-numeric values like "null" slip
through; replace the bc invocation with a robust numeric check using awk (or
integer-scaled shell arithmetic): first validate SCORE is numeric (e.g., ensure
it matches a numeric pattern or awk's tonumber succeeds) and fail early if not,
then compare SCORE and THRESHOLD with a single awk expression (or multiply by 10
and use shell integer comparison) and call echo "::error::..." and exit 1 when
SCORE < THRESHOLD; update the conditional around SCORE/THRESHOLD accordingly so
both numeric validation and the comparison are handled without bc.
.github/workflows/security.yml (1)

44-48: govulncheck will fail the workflow on any advisory, including transitive/unreachable ones.

govulncheck ./... exits non-zero for every finding, regardless of call-graph reachability of vulnerable symbols. Since this runs on every PR to main, unrelated PRs may be blocked whenever a new advisory drops upstream. Consider either:

  • accepting the fail-fast behavior (current state) and committing to prompt dep bumps via Renovate, or
  • decoupling it (e.g., continue-on-error: true on PRs, fail only on schedule/push to main), or
  • scoping to main packages instead of ./... if test-only deps trigger noise.

No change required — just flagging the operational trade-off.
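For the decoupling option, one hedged way to express it in the workflow (step layout illustrative; `continue-on-error` accepts an expression) would be:

```yaml
# Hypothetical: let govulncheck findings warn on PRs but block on push/schedule.
- name: Run govulncheck
  run: govulncheck ./...
  continue-on-error: ${{ github.event_name == 'pull_request' }}
```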

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/security.yml around lines 44 - 48, The workflow currently
runs govulncheck with "govulncheck ./..." in the "Run govulncheck" step which
will fail the job for any advisory (including transitive/unreachable findings);
update that step to one of the operational options: add continue-on-error: true
to the "Run govulncheck" step for PR triggers and keep strict fails on
push/schedule, or change the run command to target only main packages (e.g.,
replace "./..." with your module's main package patterns) to reduce noise, or
move the govulncheck invocation to a scheduled/push-only job so PRs aren't
blocked; reference the step named "Run govulncheck" and the tool "govulncheck"
when making the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/scorecard-pr.yml:
- Around line 91-96: Guard against null comment bodies and paginate results when
searching for an existing marker: when iterating the array returned by
github.rest.issues.listComments (the variable comments) change the find
predicate that uses c.body.includes(marker) to first ensure c.body is a string
(e.g., typeof c.body === "string" && c.body.includes(marker)) so it won't throw
on null bodies; also switch to github.paginate(github.rest.issues.listComments,
...) so the full comment history is searched and the existing variable detection
(existing) won't miss prior bot comments on long-lived PRs.
- Around line 1-27: The workflow triggers on pull_request and thus the comment
step will 403 for fork PRs; update the scorecard-check job so the step that
posts a comment only runs for same-repo PRs by adding a conditional that
compares the PR head repo full name to the current repo (i.e., run the "Comment
on PR" step only when github.event.pull_request.head.repo.full_name ==
github.repository), or alternatively split the responsibilities into two
workflows: keep the scoring run on pull_request and move only the
comment/PR-write step into a separate workflow triggered by pull_request_target
to safely obtain write permissions.
- Around line 65-80: The table row breaks when c.reason contains newlines or
backticks; in the checks mapping sanitize c.reason before truncation by taking
(c.reason || ''), first replace pipe characters as already done, then normalize
newlines to spaces (e.g. replace(/\r?\n|\r/g, ' ')), optionally collapse
repeated whitespace, and remove or escape backticks (e.g. replace(/`/g, '') or
replace(/`/g, '\\`')), assign this to raw and then perform the length
check/truncation for reason so the produced `raw`/`reason` values (used in the
`checks` template) never contain newlines or unescaped backticks.

In @.github/workflows/security.yml:
- Around line 25-28: The Gitleaks Action usage
gitleaks/gitleaks-action@ff98106e4c7b2bc287b24eaf42907196329070c7 requires an
organization-level GITLEAKS_LICENSE secret; fix by either adding
GITLEAKS_LICENSE to repository secrets and exposing it in the job env
(GITLEAKS_LICENSE: ${{ secrets.GITLEAKS_LICENSE }}) where the action is invoked,
or replace the action invocation with a step that runs the gitleaks
binary/container directly (e.g., invoking the gitleaks CLI or Docker image) so
no license secret is required; update the workflow step that currently names
"Run Gitleaks" accordingly.

---

Nitpick comments:
In @.github/workflows/scorecard-pr.yml:
- Around line 114-122: The current workflow uses bc to compare SCORE and
THRESHOLD (SCORE comes from steps.scorecard.outputs.score and THRESHOLD from
env.SCORE_THRESHOLD), which adds a non-POSIX dependency and also lets
non-numeric values like "null" slip through; replace the bc invocation with a
robust numeric check using awk (or integer-scaled shell arithmetic): first
validate SCORE is numeric (e.g., ensure it matches a numeric pattern or awk's
tonumber succeeds) and fail early if not, then compare SCORE and THRESHOLD with
a single awk expression (or multiply by 10 and use shell integer comparison) and
call echo "::error::..." and exit 1 when SCORE < THRESHOLD; update the
conditional around SCORE/THRESHOLD accordingly so both numeric validation and
the comparison are handled without bc.

In @.github/workflows/scorecard.yml:
- Line 11: The top-level workflow permissions are set to "read-all", which is
overly broad; change the top-level "permissions:" entry to an empty object "{}"
so the workflow defaults to no permissions and let the job-level permissions
(the explicit permissions declared for the job) remain as-is; update the
top-level `permissions` key in the workflow YAML to {} to enforce
least-privilege while preserving the job-level permission block.

In @.github/workflows/security.yml:
- Around line 44-48: The workflow currently runs govulncheck with "govulncheck
./..." in the "Run govulncheck" step which will fail the job for any advisory
(including transitive/unreachable findings); update that step to one of the
operational options: add continue-on-error: true to the "Run govulncheck" step
for PR triggers and keep strict fails on push/schedule, or change the run
command to target only main packages (e.g., replace "./..." with your module's
main package patterns) to reduce noise, or move the govulncheck invocation to a
scheduled/push-only job so PRs aren't blocked; reference the step named "Run
govulncheck" and the tool "govulncheck" when making the change.

In @.golangci.yml:
- Around line 43-52: Remove the redundant revive exclusion block in
.golangci.yml: the first rule that disables revive for path pattern `_test\.go`
already covers all test files, so delete the second rule that also targets
`_test\.go` with `text: "exported"` and lists `revive`; locate the two blocks
referencing `_test\.go` and remove the one containing `text: "exported"` and the
`revive` linter entry to avoid dead config and confusion.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f896a312-4a4c-4e8c-bdd4-4a7e2863c734

📥 Commits

Reviewing files that changed from the base of the PR and between a812697 and c74f206.

📒 Files selected for processing (6)
  • .github/CODEOWNERS
  • .github/workflows/codeql.yml
  • .github/workflows/scorecard-pr.yml
  • .github/workflows/scorecard.yml
  • .github/workflows/security.yml
  • .golangci.yml

@github-advanced-security

You are seeing this message because GitHub Code Scanning has recently been set up for this repository, or this pull request contains the workflow file for the Code Scanning tool.

What Enabling Code Scanning Means:

  • The 'Security' tab will display more code scanning analysis results (e.g., for the default branch).
  • Depending on your configuration and choice of analysis tool, future pull requests will be annotated with code scanning analysis results.
  • You will be able to see the analysis results for the pull request's branch on this overview once the scans have completed and the checks have passed.

For more information about GitHub Code Scanning, check out the documentation.

kaio6fellipe and others added 8 commits April 20, 2026 04:16
The gitleaks GitHub Action requires a paid license for organization
repositories. Replace with direct CLI binary installation.

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
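A direct CLI invocation could look roughly like the following workflow step (the release version, archive name, and flags shown are illustrative placeholders; the actual commit may pin different values):

```yaml
- name: Run gitleaks (CLI)
  run: |
    # Download a pinned release tarball (version is a placeholder; verify
    # the checksum of whatever release you actually pin).
    curl -sSfL -o gitleaks.tar.gz \
      "https://github.com/gitleaks/gitleaks/releases/download/v8.18.4/gitleaks_8.18.4_linux_x64.tar.gz"
    tar -xzf gitleaks.tar.gz gitleaks
    ./gitleaks detect --source . --config .gitleaks.toml --redact --verbose
```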
Convert all workflows to permissions: {} at workflow level with
per-job grants. This maximizes the OpenSSF Scorecard Token-Permissions
check (0/10 -> 10/10).

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Fix goimports import grouping in lock packages, rename unused Cobra
params to _, add doc comment for WorkflowStatusInProgress, and annotate
intentional gosec findings (G204, G301, G304, G306) with nolint.

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Extract loadAndValidateConfig, checkPRRequirements, and
postResultsAndStatus helpers. Pure structural refactoring with
no behavioral changes.

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Extract validateRepository, validateRequirements,
validateBranchAndWorkflow, and validateWorkflowSteps helpers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
…utoMerge, ParseStackHcl

Extract executeStep, graphQLRequest, queryPRNodeID,
extractHclStringAttribute, and extractHclListAttribute helpers.

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
…verStackHclOrdered

Extract checkSingleRequirement, walkAndCollectStacks, and
resolveAndBuildEntries helpers.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
Extract setPendingCommitStatus to reduce runCommand complexity,
annotate G704 SSRF findings, add doc comments for exported constants,
and apply De Morgan's law in checkSingleRequirement.

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
@github-actions

Lambda integration test executed successfully...

client_payload:

@github-actions

🌊 Neptune Plan Results

Terraform Stacks: stack-a, stack-b

Neptune completed the plan with status:

For more details, see the GitHub Actions run

Command ✅ terraform init -input=false (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/null versions matching "~> 3.0"...
- Finding hashicorp/local versions matching "~> 2.0"...
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)
- Installing hashicorp/local v2.8.0...
- Installed hashicorp/local v2.8.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Command ✅ terraform init -input=false (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/local versions matching "~> 2.0"...
- Finding hashicorp/null versions matching "~> 3.0"...
- Installing hashicorp/local v2.8.0...
- Installed hashicorp/local v2.8.0 (signed by HashiCorp)
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Command ✅ terraform plan -input=false -out=tfplan (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.out will be created
  + resource "local_file" "out" {
      + content              = "stack-a output"
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./out.txt"
      + id                   = (known after apply)
    }

  # null_resource.stack_a will be created
  + resource "null_resource" "stack_a" {
      + id       = (known after apply)
      + triggers = {
          + "stack" = "stack-a"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Command ✅ terraform plan -input=false -out=tfplan (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.out will be created
  + resource "local_file" "out" {
      + content              = "stack-b output"
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./out.txt"
      + id                   = (known after apply)
    }

  # null_resource.stack_b will be created
  + resource "null_resource" "stack_b" {
      + id       = (known after apply)
      + triggers = {
          + "stack" = "stack-b"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

To apply these changes, comment:

@neptbot apply

@github-actions

🌊 Neptune Apply Results

Terraform Stacks: stack-a, stack-b

Neptune completed the apply with status:

For more details, see the GitHub Actions run

Command ✅ terraform apply -input=false tfplan (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] null_resource.stack_a: Creating...
null_resource.stack_a: Creation complete after 0s [id=8756971114946707542]
local_file.out: Creating...
local_file.out: Creation complete after 0s [id=a57cdd3e7db0022635e46b54e3ae6357944d9ed4]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Command ✅ terraform apply -input=false tfplan (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] local_file.out: Creating...
null_resource.stack_b: Creating...
null_resource.stack_b: Creation complete after 0s [id=770126907783639099]
local_file.out: Creation complete after 0s [id=2d8eff4b1ddeb10a7cfb78093bc91edb5b3318f8]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
internal/config/validate.go (1)

101-119: ⚠️ Potential issue | 🟠 Major

Reject empty phase maps and empty step lists before iterating.

Because the plan/apply checks run inside the phase loop, a workflow with wf.Phases == nil or an empty phase map passes validation. Similarly, a phase with zero steps bypasses the Line 117 check.

Proposed fix
 func validateWorkflowSteps(cfg *domain.NeptuneConfig) error {
 	for _, wf := range cfg.Workflows.Workflows {
+		if _, hasPlan := wf.Phases["plan"]; !hasPlan {
+			return errors.New("phases should include at least plan and apply phases")
+		}
+		if _, hasApply := wf.Phases["apply"]; !hasApply {
+			return errors.New("phases should include at least plan and apply phases")
+		}
 		for phaseName, phase := range wf.Phases {
 			for _, dep := range phase.DependsOn {
 				if _, ok := wf.Phases[dep]; !ok {
 					return fmt.Errorf("phase %s depends on %s, but %s is not a valid workflow phase", phaseName, dep, dep)
 				}
 			}
-			if _, hasPlan := wf.Phases["plan"]; !hasPlan {
-				return errors.New("phases should include at least plan and apply phases")
-			}
-			if _, hasApply := wf.Phases["apply"]; !hasApply {
-				return errors.New("phases should include at least plan and apply phases")
-			}
+			if len(phase.Steps) == 0 {
+				return fmt.Errorf("phase %s must include at least one step", phaseName)
+			}
 			for _, step := range phase.Steps {

As per coding guidelines, internal/config validation must check “presence of both plan and apply phases.”

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/config/validate.go` around lines 101 - 119, In
validateWorkflowSteps, ensure you reject workflows with nil or empty phase maps
and phases with empty step lists before iterating: first check
cfg.Workflows.Workflows entries for wf.Phases == nil or len(wf.Phases) == 0 and
return an error stating both plan and apply are required; then validate presence
of "plan" and "apply" in wf.Phases (using wf.Phases["plan"] and
wf.Phases["apply"]) before looping phase.Steps; inside the phase loop, verify
phase.Steps is non-nil and len(phase.Steps) > 0 and return a clear error if
empty, then continue the existing dependency and step.Run checks.
🧹 Nitpick comments (2)
internal/github/graphql.go (1)

27-67: Minor: prefer surfacing HTTP status before JSON decode for unauthenticated/5xx responses.

On non-2xx responses (e.g., 401/403/429/5xx from GitHub or a proxy) the body may not be valid GraphQL JSON, so you'll return decode GraphQL response: ... and lose the actual status. Checking resp.StatusCode (or at least including it in the decode-error path) would make auto-merge failures easier to diagnose. Current ordering is acceptable for the GraphQL happy path, so treating this as optional.

♻️ Suggested tweak
 	var gqlResp graphQLResponse
 	if err := json.NewDecoder(resp.Body).Decode(&gqlResp); err != nil {
-		return nil, fmt.Errorf("decode GraphQL response: %w", err)
+		return nil, fmt.Errorf("decode GraphQL response (status %d): %w", resp.StatusCode, err)
 	}
 	if len(gqlResp.Errors) > 0 {
 		return nil, fmt.Errorf("GraphQL errors: %s", gqlResp.Errors[0].Message)
 	}
 	if resp.StatusCode != http.StatusOK {
 		return nil, fmt.Errorf("GraphQL status %d", resp.StatusCode)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/github/graphql.go` around lines 27 - 67, The current graphQLRequest
decodes the response body before checking resp.StatusCode, which can hide
non-2xx HTTP status codes (401/403/5xx) when the body isn't valid GraphQL JSON;
change graphQLRequest to check resp.StatusCode immediately after receiving resp
and return an error that includes the numeric status (and optionally a small
snippet of the body) if it is not http.StatusOK, only then attempt
json.NewDecoder to populate graphQLResponse for successful 200 responses;
reference graphQLRequest, resp.StatusCode and graphQLResponse when locating the
change.
cmd/command.go (1)

107-128: Drop the unused env return from loadAndValidateConfig.

runCommand discards the second return value (cfg, _ := loadAndValidateConfig(cmd)), and no other caller exists. Simplify the signature to return only *domain.NeptuneConfig.

♻️ Proposed refactor
-func loadAndValidateConfig(cmd *cobra.Command) (*domain.NeptuneConfig, map[string]string) {
+func loadAndValidateConfig(cmd *cobra.Command) *domain.NeptuneConfig {
 	env, err := config.LoadEnv()
 	if err != nil {
 		log.For("cli").Error("Error", "err", err)
 		os.Exit(1)
 	}
 	cfg, err := loadConfig(env)
 	if err != nil {
 		log.For("cli").Error("Error", "err", err)
 		os.Exit(1)
 	}
 	if err := config.Validate(cfg); err != nil {
 		notifyAndExit(cfg, err.Error(), 1)
 	}
 	level, err := effectiveLogLevel(cmd, cfg.LogLevel)
 	if err != nil {
 		log.For("cli").Error("Error", "err", err)
 		os.Exit(1)
 	}
 	log.Init(level)
-	return cfg, env
+	return cfg
 }

And at the call site:

-	cfg, _ := loadAndValidateConfig(cmd)
+	cfg := loadAndValidateConfig(cmd)

Also applies to: 217-217

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/command.go` around lines 107 - 128, The function loadAndValidateConfig
currently returns (*domain.NeptuneConfig, map[string]string) but the env map is
never used; change loadAndValidateConfig to return only *domain.NeptuneConfig by
removing the env variable from the signature and all related return values, and
update its callers (e.g., runCommand which currently does cfg, _ :=
loadAndValidateConfig(cmd)) to use cfg := loadAndValidateConfig(cmd); keep all
internal logic (LoadEnv, loadConfig, Validate, effectiveLogLevel, log.Init)
intact but drop the env return and any variable assigned solely to support that
return.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/config/validate.go`:
- Around line 21-24: The Validate function dereferences cfg without checking for
nil, causing a panic; update Validate (in internal/config/validate.go) to first
check if cfg == nil and return a descriptive error suitable for GitHub comments
(e.g., "config is nil or failed to load") instead of proceeding to call
validateRepository, then continue with existing validation flow for
cfg.Repository; ensure the error type/message matches other validation errors
returned by Validate.

In `@internal/run/runner.go`:
- Around line 58-63: The current code prefixes both stdout and stderr together
causing empty stdout to become a fake `"[stack] "` entry; update the logic in
runner.go around the handling of runOut.Output and runOut.Error so each stream
is prefixed independently only when non-empty (i.e., add separate checks and
only prepend "["+stack+"] " to runOut.Output if runOut.Output != "" and
separately to runOut.Error if runOut.Error != ""), leaving the other field
untouched.
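The per-stream prefixing described above can be isolated into a small helper. This is a minimal sketch, assuming a hypothetical prefixStreams function rather than the runner's real structure; the behavior it demonstrates is that an empty stream stays empty instead of becoming a bare "[stack] " entry.

```go
package main

import "fmt"

// prefixStreams prefixes stdout and stderr with the stack name independently,
// and only when the stream is non-empty, so an empty stdout never turns into
// a fake "[stack] " line.
func prefixStreams(stack, stdout, stderr string) (string, string) {
	if stdout != "" {
		stdout = "[" + stack + "] " + stdout
	}
	if stderr != "" {
		stderr = "[" + stack + "] " + stderr
	}
	return stdout, stderr
}

func main() {
	out, errOut := prefixStreams("stack-a", "Plan: 2 to add", "")
	fmt.Printf("stdout=%q stderr=%q\n", out, errOut)
	// prints: stdout="[stack-a] Plan: 2 to add" stderr=""
}
```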

In `@internal/stacks/stackhcl.go`:
- Around line 47-57: The extractHclListAttribute loop currently skips empty
strings (s == "") which silently drops entries; change extractHclListAttribute
to treat empty string elements as invalid by returning an error instead of
omitting them: when v.Type() == cty.String and v.AsString() yields "", return a
descriptive fmt.Errorf (mentioning the attribute name variable name and that
empty strings are not allowed) so depends_on = ["", "foo"] fails rather than
producing ["foo"]; update any tests or callers that rely on silent dropping
accordingly.
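The fail-on-empty behavior for list attributes can be illustrated without pulling in HCL. This sketch uses plain strings in place of cty values, and extractList is a hypothetical name; the real extractHclListAttribute walks cty.Value elements, but the validation principle is the same: depends_on = ["", "foo"] should error, not silently become ["foo"].

```go
package main

import "fmt"

// extractList rejects empty string elements instead of silently dropping
// them, mirroring the suggested change to extractHclListAttribute.
func extractList(attr string, raw []string) ([]string, error) {
	out := make([]string, 0, len(raw))
	for _, s := range raw {
		if s == "" {
			return nil, fmt.Errorf("attribute %q must not contain empty strings", attr)
		}
		out = append(out, s)
	}
	return out, nil
}

func main() {
	if _, err := extractList("depends_on", []string{"", "foo"}); err != nil {
		fmt.Println("rejected:", err)
	}
	vals, _ := extractList("depends_on", []string{"foo", "bar"})
	fmt.Println(vals)
}
```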

---

Outside diff comments:
In `@internal/config/validate.go`:
- Around line 101-119: In validateWorkflowSteps, ensure you reject workflows
with nil or empty phase maps and phases with empty step lists before iterating:
first check cfg.Workflows.Workflows entries for wf.Phases == nil or
len(wf.Phases) == 0 and return an error stating both plan and apply are
required; then validate presence of "plan" and "apply" in wf.Phases (using
wf.Phases["plan"] and wf.Phases["apply"]) before looping phase.Steps; inside the
phase loop, verify phase.Steps is non-nil and len(phase.Steps) > 0 and return a
clear error if empty, then continue the existing dependency and step.Run checks.

---

Nitpick comments:
In `@cmd/command.go`:
- Around line 107-128: The function loadAndValidateConfig currently returns
(*domain.NeptuneConfig, map[string]string) but the env map is never used; change
loadAndValidateConfig to return only *domain.NeptuneConfig by removing the env
variable from the signature and all related return values, and update its
callers (e.g., runCommand which currently does cfg, _ :=
loadAndValidateConfig(cmd)) to use cfg := loadAndValidateConfig(cmd); keep all
internal logic (LoadEnv, loadConfig, Validate, effectiveLogLevel, log.Init)
intact but drop the env return and any variable assigned solely to support that
return.

In `@internal/github/graphql.go`:
- Around line 27-67: The current graphQLRequest decodes the response body before
checking resp.StatusCode, which can hide non-2xx HTTP status codes (401/403/5xx)
when the body isn't valid GraphQL JSON; change graphQLRequest to check
resp.StatusCode immediately after receiving resp and return an error that
includes the numeric status (and optionally a small snippet of the body) if it
is not http.StatusOK, only then attempt json.NewDecoder to populate
graphQLResponse for successful 200 responses; reference graphQLRequest,
resp.StatusCode and graphQLResponse when locating the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9d245054-8f00-4b4d-8476-a6d4e26f5758

📥 Commits

Reviewing files that changed from the base of the PR and between c74f206 and bb206ec.

📒 Files selected for processing (28)
  • .github/workflows/cross-repo-test.yml
  • .github/workflows/docs.yml
  • .github/workflows/e2e.yml
  • .github/workflows/integration.yml
  • .github/workflows/label-old-prs.yml
  • .github/workflows/labeler.yml
  • .github/workflows/lint.yml
  • .github/workflows/neptune.yml
  • .github/workflows/release.yml
  • .github/workflows/scorecard.yml
  • .github/workflows/security.yml
  • .github/workflows/test.yml
  • cmd/command.go
  • cmd/stacks.go
  • cmd/unlock.go
  • cmd/version.go
  • internal/config/validate.go
  • internal/domain/lock.go
  • internal/git/config.go
  • internal/git/rebase.go
  • internal/github/graphql.go
  • internal/github/requirements.go
  • internal/lock/gcs.go
  • internal/lock/s3.go
  • internal/notifications/github/comment.go
  • internal/run/runner.go
  • internal/stacks/local.go
  • internal/stacks/stackhcl.go
✅ Files skipped from review due to trivial changes (10)
  • internal/lock/gcs.go
  • cmd/version.go
  • internal/git/rebase.go
  • internal/lock/s3.go
  • internal/domain/lock.go
  • cmd/stacks.go
  • internal/notifications/github/comment.go
  • internal/stacks/local.go
  • internal/git/config.go
  • cmd/unlock.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • .github/workflows/security.yml
  • .github/workflows/scorecard.yml

Comment thread internal/config/validate.go
Comment thread internal/run/runner.go
Comment thread internal/stacks/stackhcl.go
@github-actions

Lambda integration test executed with success...

client_payload:

@github-actions

🌊 Neptune Plan Results

Terraform Stacks: stack-a, stack-b

Neptune completed the plan with status:

For more details, see the GitHub Actions run

Command ✅ terraform init -input=false (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/null versions matching "~> 3.0"...
- Finding hashicorp/local versions matching "~> 2.0"...
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)
- Installing hashicorp/local v2.8.0...
- Installed hashicorp/local v2.8.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Command ✅ terraform init -input=false (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/null versions matching "~> 3.0"...
- Finding hashicorp/local versions matching "~> 2.0"...
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)
- Installing hashicorp/local v2.8.0...
- Installed hashicorp/local v2.8.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Command ✅ terraform plan -input=false -out=tfplan (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.out will be created
  + resource "local_file" "out" {
      + content              = "stack-a output"
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./out.txt"
      + id                   = (known after apply)
    }

  # null_resource.stack_a will be created
  + resource "null_resource" "stack_a" {
      + id       = (known after apply)
      + triggers = {
          + "stack" = "stack-a"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Command ✅ terraform plan -input=false -out=tfplan (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.out will be created
  + resource "local_file" "out" {
      + content              = "stack-b output"
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./out.txt"
      + id                   = (known after apply)
    }

  # null_resource.stack_b will be created
  + resource "null_resource" "stack_b" {
      + id       = (known after apply)
      + triggers = {
          + "stack" = "stack-b"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

To apply these changes, comment:

@neptbot apply

@github-actions

🌊 Neptune Apply Results

Terraform Stacks: stack-a, stack-b

Neptune completed the apply with status:

For more details, see the GitHub Actions run

Command ✅ terraform apply -input=false tfplan (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] null_resource.stack_a: Creating...
null_resource.stack_a: Creation complete after 0s [id=2603312870217486968]
local_file.out: Creating...
local_file.out: Creation complete after 0s [id=a57cdd3e7db0022635e46b54e3ae6357944d9ed4]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Command ✅ terraform apply -input=false tfplan (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] null_resource.stack_b: Creating...
null_resource.stack_b: Creation complete after 0s [id=5944507945580223394]
local_file.out: Creating...
local_file.out: Creation complete after 0s [id=2d8eff4b1ddeb10a7cfb78093bc91edb5b3318f8]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Test files use fake RSA private key stubs ("key", "x") for unit tests.
Allowlist all *_test.go files to prevent false positives.

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
@kaio6fellipe force-pushed the ci/security-quality-hardening branch from 1130ad6 to e4c4a3c on April 20, 2026 13:21
@github-actions

Lambda integration test executed with success...

client_payload:

@github-actions

🌊 Neptune Plan Results

Terraform Stacks: stack-a, stack-b

Neptune completed the plan with status:

For more details, see the GitHub Actions run

Command ✅ terraform init -input=false (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/null versions matching "~> 3.0"...
- Finding hashicorp/local versions matching "~> 2.0"...
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)
- Installing hashicorp/local v2.8.0...
- Installed hashicorp/local v2.8.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Command ✅ terraform init -input=false (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/local versions matching "~> 2.0"...
- Finding hashicorp/null versions matching "~> 3.0"...
- Installing hashicorp/local v2.8.0...
- Installed hashicorp/local v2.8.0 (signed by HashiCorp)
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Command ✅ terraform plan -input=false -out=tfplan (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.out will be created
  + resource "local_file" "out" {
      + content              = "stack-a output"
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./out.txt"
      + id                   = (known after apply)
    }

  # null_resource.stack_a will be created
  + resource "null_resource" "stack_a" {
      + id       = (known after apply)
      + triggers = {
          + "stack" = "stack-a"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Command ✅ terraform plan -input=false -out=tfplan (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.out will be created
  + resource "local_file" "out" {
      + content              = "stack-b output"
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./out.txt"
      + id                   = (known after apply)
    }

  # null_resource.stack_b will be created
  + resource "null_resource" "stack_b" {
      + id       = (known after apply)
      + triggers = {
          + "stack" = "stack-b"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

To apply these changes, comment:

@neptbot apply

@github-actions

🌊 Neptune Apply Results

Terraform Stacks: stack-a, stack-b

Neptune completed the apply with status:

For more details, see the GitHub Actions run

Command ✅ terraform apply -input=false tfplan (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] null_resource.stack_a: Creating...
local_file.out: Creating...
null_resource.stack_a: Creation complete after 0s [id=6907285088179767625]
local_file.out: Creation complete after 0s [id=a57cdd3e7db0022635e46b54e3ae6357944d9ed4]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Command ✅ terraform apply -input=false tfplan (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] null_resource.stack_b: Creating...
local_file.out: Creating...
null_resource.stack_b: Creation complete after 0s [id=1467427234591663862]
local_file.out: Creation complete after 0s [id=2d8eff4b1ddeb10a7cfb78093bc91edb5b3318f8]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Fork PRs don't have write access to the GITHUB_TOKEN, so the comment
step would fail with 403. Skip it for forks — the scorecard check and
threshold enforcement still run.

Signed-off-by: Kaio Fellipe <kaio6fellipe@gmail.com>
@github-actions

Lambda integration test executed with success...

client_payload:

@github-actions

🌊 Neptune Plan Results

Terraform Stacks: stack-a, stack-b

Neptune completed the plan with status:

For more details, see the GitHub Actions run

Command ✅ terraform init -input=false (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/null versions matching "~> 3.0"...
- Finding hashicorp/local versions matching "~> 2.0"...
- Installing hashicorp/local v2.8.0...
- Installed hashicorp/local v2.8.0 (signed by HashiCorp)
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Command ✅ terraform init -input=false (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/null versions matching "~> 3.0"...
- Finding hashicorp/local versions matching "~> 2.0"...
- Installing hashicorp/local v2.8.0...
- Installed hashicorp/local v2.8.0 (signed by HashiCorp)
- Installing hashicorp/null v3.2.4...
- Installed hashicorp/null v3.2.4 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Command ✅ terraform plan -input=false -out=tfplan (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.out will be created
  + resource "local_file" "out" {
      + content              = "stack-a output"
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./out.txt"
      + id                   = (known after apply)
    }

  # null_resource.stack_a will be created
  + resource "null_resource" "stack_a" {
      + id       = (known after apply)
      + triggers = {
          + "stack" = "stack-a"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

Command ✅ terraform plan -input=false -out=tfplan (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.out will be created
  + resource "local_file" "out" {
      + content              = "stack-b output"
      + content_base64sha256 = (known after apply)
      + content_base64sha512 = (known after apply)
      + content_md5          = (known after apply)
      + content_sha1         = (known after apply)
      + content_sha256       = (known after apply)
      + content_sha512       = (known after apply)
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./out.txt"
      + id                   = (known after apply)
    }

  # null_resource.stack_b will be created
  + resource "null_resource" "stack_b" {
      + id       = (known after apply)
      + triggers = {
          + "stack" = "stack-b"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Saved the plan to: tfplan

To perform exactly these actions, run the following command to apply:
    terraform apply "tfplan"

To apply these changes, comment:

@neptbot apply

@github-actions

🌊 Neptune Apply Results

Terraform Stacks: stack-a, stack-b

Neptune completed the apply with status:

For more details, see the GitHub Actions run

Command ✅ terraform apply -input=false tfplan (stack: stack-a)

Click to see the command output
stderr:


stdout:
[stack-a] null_resource.stack_a: Creating...
null_resource.stack_a: Creation complete after 0s [id=1084977630623992541]
local_file.out: Creating...
local_file.out: Creation complete after 0s [id=a57cdd3e7db0022635e46b54e3ae6357944d9ed4]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Command ✅ terraform apply -input=false tfplan (stack: stack-b)

Click to see the command output
stderr:


stdout:
[stack-b] null_resource.stack_b: Creating...
local_file.out: Creating...
null_resource.stack_b: Creation complete after 0s [id=891254168306432343]
local_file.out: Creation complete after 0s [id=2d8eff4b1ddeb10a7cfb78093bc91edb5b3318f8]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.
