Sync fork repo #18
Conversation
* docs(readme): HDD requirements. Update requirements for full and archive nodes
* docs(README): Wording
* docs(README): Bump up requirements
…peer connectivity and otherwise use default --nat value (#272)
* Add Reth Node Configuration
* Parameterize features
* update images to use alpine / static versions
* Restructure client files into separate directories
* Allow images to work w/ devnets
* bump reth commit
* fix the image name to include base-org
* Allow overriding via env var

* Update Reth for Fjord fixes
* Remove arm reth build
Just updating node support landing site on Discord for users with issues
* chore: update reth (v1.0.5), op-node (v1.9.0)
* bump rust container version
use env var for GETH_DATA_DIR if set
* feat(docs): Add additional recommended specs for running a node
* docs(readme): Drop page anchor to snapshot section
* feat: add nethermind to base
* feat: add supported clients to readme
* fix: add missing flags on nethermind config
* Update version
* Update commit
* Update commit
* Update nethermind/Dockerfile (Co-authored-by: Michael de Hoog <michael.dehoog@gmail.com>)
* feat: add qemu to nethermind docker build
* fix: update release/1.29.0 commit
* fix: build on nethermind
* fix: build on nethermind, buildx command
* fix: build on nethermind, labels breaking buildx
* fix: build on nethermind, labels breaking buildx again
* fix: build on nethermind, buildx
* fix: build on nethermind, buildx labels
* fix: build on nethermind, buildx labels, now as cmd
* fix: build on nethermind, buildx labels
* fix: update with sed targetarch
* fix: upgrade build action on nethermind to use buildx
* fix: update dockerfile
* fix: update commit release 1.29.0
* fix: update commit release 1.29.0 and solve docker build issues
* fix: update pr action
* feat: add workflow dispatch
* feat: update nethermind to latest version
* Update docker.yml
* Update Nethermind version
* More explicit version for base .NET container
* Less explicit version for runtime
* Remove Qemu pr.yml
* Update docker.yml
* Update pr.yml

Co-authored-by: Ben Adams <thundercat@illyriad.co.uk>
Co-authored-by: Michael de Hoog <michael.dehoog@gmail.com>
* op-node (1.9.4), op-geth (v1.101411.0), op-reth (1.1.0)
* Update Rust version

* Upgrade: op-node (v1.10.2), op-geth (v1.101411.4), and reth (v1.1.4) for Holocene
* Install just in Dockerfiles

* Update Nethermind to 1.30.3 (and op node)
* Update .NET
* Update other version changes
* Retrigger

* add pruning args to env file
* add disclaimers
* update
Signed-off-by: StepSecurity Bot <bot@stepsecurity.io> Co-authored-by: stepsecurity-app[bot] <188008098+stepsecurity-app[bot]@users.noreply.github.com>
Co-authored-by: danyalprout <672580+danyalprout@users.noreply.github.com>
Nethermind 1.36.0 requires .NET SDK 10.0.100 (via global.json), but the Dockerfile was still using 9.0 images, causing the build to fail with exit code 145.
* chore: updated op-geth, optimism, base
* chore: revert op-geth to v1.101608.0 and op-node to op-node/v1.16.6

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: cody-wang-cb <156240243+cody-wang-cb@users.noreply.github.com>
Co-authored-by: Cody Wang <cody.wang@coinbase.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: danyalprout <672580+danyalprout@users.noreply.github.com>
* bump: base/base to 0.7.4
* update tag commit
NuGet's vulnerability database was updated after Nethermind 1.36.2's release to flag Microsoft.AspNetCore.DataProtection 10.0.1 as critically vulnerable (GHSA-9mv3-2cwr-p262). Since Nethermind treats warnings as errors, dotnet restore fails with NU1904 on every PR. Disable NuGet audit at build time with -p:NuGetAudit=false to unblock CI. The fix is merged upstream (NethermindEth/nethermind#11331) and included in 1.37.0 (pre-release). Once a stable Nethermind release ships with the patched dependency, we bump NETHERMIND_TAG and remove this flag.
* fix: default CLIENT to reth in .env

  The default .env ships CLIENT=geth, which combined with USE_BASE_CONSENSUS=true crash-loops on a clean checkout. This contradicts the README (reth is documented as default) and the Azul migration guide (geth + base-consensus is unsupported). Change the CLIENT fallback from geth to reth so that a clean checkout works out of the box.

  Fixes #1010

* fix: update default client from geth to reth in docker-compose.yml

Co-authored-by: Serhat Dolmac <srhtsrht17@gmail.com>
Co-authored-by: Codex <noreply@openai.com>
📝 Walkthrough

This PR refactors the node infrastructure to support multiple execution clients (Reth, Geth, Nethermind) through parameterized Docker builds and environment-driven configuration. It introduces a dependency updater tool for automated version management and restructures CI/CD workflows for multi-architecture publishing and dependency automation.

Changes

Multi-Client Architecture
Dependency Update Automation
CI/CD Workflow Automation
Documentation & Metadata
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 3 failed (2 warnings, 1 inconclusive)
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.12.1): `level=error msg="Running error: context loading failed: no go files to analyze: running`
Actionable comments posted: 18
🧹 Nitpick comments (1)
reth/Dockerfile (1)
59-61: 💤 Low value. Add `--no-install-recommends` to apt-get.

Similar to `geth/Dockerfile`, adding `--no-install-recommends` reduces image size. Note that line 61 correctly has the asterisk for apt lists cleanup (unlike `geth/Dockerfile`).

Proposed fix

```diff
 RUN apt-get update && \
-    apt-get install -y jq curl supervisor && \
+    apt-get install -y --no-install-recommends jq curl supervisor && \
     rm -rf /var/lib/apt/lists/*
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@reth/Dockerfile` around lines 59 - 61, The RUN instruction installs packages without --no-install-recommends which increases image size; update the Dockerfile's RUN that currently calls "apt-get install -y jq curl supervisor" to include --no-install-recommends (i.e., "apt-get install -y --no-install-recommends jq curl supervisor") while keeping the apt-get update and the subsequent "rm -rf /var/lib/apt/lists/*" cleanup intact so the image size is reduced without changing other behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In @.env:
- Around line 1-2: Replace the self-referential default for CLIENT with a plain
default by changing the CLIENT assignment to a fixed value (set CLIENT=reth) and
keep HOST_DATA_DIR referencing CLIENT (HOST_DATA_DIR=./${CLIENT}-data) so shell
or docker compose can still override CLIENT at runtime; update the lines that
define CLIENT and HOST_DATA_DIR accordingly.
In @.env.mainnet:
- Line 45: Remove the hard-coded JWT secret value for
BASE_NODE_L2_ENGINE_AUTH_RAW and replace it with a placeholder or instruction to
load a per-deployment secret (e.g., set
BASE_NODE_L2_ENGINE_AUTH_RAW=<REPLACE_WITH_SECRET>), and update any
documentation or startup scripts to obtain the real value from environment
injection or a secret manager so every deployment uses a unique engine auth
secret rather than the checked-in token.
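A minimal sketch of generating a per-deployment secret instead of the checked-in value (the `.env.local` override file name is illustrative; assumes `openssl` is available):

```shell
# Generate a unique 32-byte (64 hex chars) engine-auth secret and store it in
# an untracked override file instead of the tracked .env.mainnet.
secret="$(openssl rand -hex 32)"
echo "BASE_NODE_L2_ENGINE_AUTH_RAW=${secret}" >> .env.local
echo "wrote ${#secret}-char secret to .env.local"
```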
In @.env.sepolia:
- Around line 43-45: Replace the checked-in engine JWT value in
BASE_NODE_L2_ENGINE_AUTH_RAW with a non-secret placeholder and stop committing
real tokens; remove the literal "688f5d..." value, document that
BASE_NODE_L2_ENGINE_AUTH_RAW must be injected at deploy-time via a secret
manager or CI/CD environment variable, and ensure the runtime reads the actual
auth token from the file referenced by BASE_NODE_L2_ENGINE_AUTH or from an env
var instead of this repo value so that the unique symbol
BASE_NODE_L2_ENGINE_AUTH_RAW is no longer a shared secret in source control.
In @.github/workflows/docker.yml:
- Around line 11-17: The NAMESPACE environment variable is hard-coded to
"ghcr.io/base" which breaks GHCR publishing for forks; update the env block to
set NAMESPACE using the GitHub repository owner expression (replace the current
NAMESPACE value with a value derived from ${{ github.repository_owner }} so the
GHCR namespace uses the fork owner's account), ensuring other image name vars
(GETH_IMAGE_NAME, RETH_IMAGE_NAME, NETHERMIND_IMAGE_NAME) remain unchanged and
continue to reference NAMESPACE when publishing.
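A minimal sketch of the suggested env block, assuming the image-name variables stay unchanged and only the namespace becomes owner-derived:

```yaml
env:
  REGISTRY: ghcr.io
  # Derive the GHCR namespace from whoever owns the repository running the
  # workflow, so forks publish under their own account instead of ghcr.io/base.
  NAMESPACE: ghcr.io/${{ github.repository_owner }}
  GETH_IMAGE_NAME: node-geth
  RETH_IMAGE_NAME: node-reth
  NETHERMIND_IMAGE_NAME: node-nethermind
```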
In @.github/workflows/pr.yml:
- Around line 26-29: The Checkout steps currently use
github.event.pull_request.head.sha which is undefined for workflow_dispatch;
update each Checkout step (the three "Checkout" uses of actions/checkout in the
geth, reth, and nethermind jobs) to provide a fallback ref such as github.sha or
github.ref when github.event.pull_request.head.sha is missing (e.g., set ref to
github.event.pull_request.head.sha || github.sha or similar expression supported
by GitHub Actions). Ensure all three checkout steps are changed consistently so
manual dispatches use the fallback.
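One possible shape for the fallback, to be applied to all three checkout steps (the `@v4` pin is illustrative; `||` in Actions expressions returns the right-hand value when the left is empty):

```yaml
- name: Checkout
  uses: actions/checkout@v4
  with:
    # head.sha is only populated on pull_request events; fall back to the
    # triggering commit for workflow_dispatch runs.
    ref: ${{ github.event.pull_request.head.sha || github.sha }}
```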
In `@base-consensus-entrypoint`:
- Around line 4-22: The get_public_ip function currently overwrites
operator-supplied BASE_NODE_P2P_ADVERTISE_IP, hard-fails on HTTP plaintext
lookups, and uses insecure providers; change logic so the caller's
BASE_NODE_P2P_ADVERTISE_IP is honored first (only run autodetection when that
var is unset or a new opt-in var like BASE_NODE_P2P_ADVERTISE_IP_AUTO is true),
replace the plaintext providers with HTTPS endpoints, do not exit/startup-fail
if autodetect fails—return non-zero or empty and let caller fall back to
configured defaults, and add a short timeout/retry and basic validation (keep
the IPv4 regex) inside get_public_ip to make detection optional and resilient.
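A hedged sketch of that shape (the provider URLs are illustrative, not the exact endpoints the entrypoint must use):

```shell
get_public_ip() {
  # Honor an operator-supplied address first; only autodetect when unset.
  if [ -n "${BASE_NODE_P2P_ADVERTISE_IP:-}" ]; then
    echo "$BASE_NODE_P2P_ADVERTISE_IP"
    return 0
  fi
  # HTTPS-only lookups with a short timeout so a slow provider cannot stall startup.
  for p in "https://api.ipify.org" "https://ifconfig.me/ip"; do
    ip="$(curl -fsS --max-time 5 "$p" 2>/dev/null)" || continue
    if echo "$ip" | grep -Eq '^[0-9]{1,3}(\.[0-9]{1,3}){3}$'; then
      echo "$ip"
      return 0
    fi
  done
  # Non-fatal: signal failure and let the caller fall back to its default --nat behavior.
  return 1
}
```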
In `@dependency_updater/dependency_updater.go`:
- Around line 284-296: The branch-tracking path in dependency_updater.go does
not validate the configured branch or the API response before indexing
branchCommit[0]; update the logic in the block that calls
client.Repositories.ListCommits (used with dependencies[dependencyType].Branch)
to first ensure dependencies[dependencyType].Branch is non-empty (return an
error if missing) and after the ListCommits call verify that branchCommit is
non-nil and len(branchCommit) > 0 before accessing branchCommit[0].SHA (return a
clear error if the commit list is empty); make these checks around the existing
branchCommit, commit, and ListCommits usage to prevent panics and incorrect
fallbacks.
- Around line 156-158: The git commit is being executed in the process CWD
instead of the target repository; set the command's working directory to
repoPath before running it. Locate the exec.Command("git", "commit", "-am",
commitTitle, "-m", commitDescription) call (the cmd variable) and assign cmd.Dir
= repoPath, then run cmd.Run() and propagate the error as before.
- Around line 90-100: The GitHub client is created without an HTTP timeout and
the retry loop uses context.Background() instead of the propagated ctx; fix by
constructing an http.Client with a reasonable Timeout and pass it into
github.NewClient(...) (so the client variable uses a timed HTTP client) and
replace retry.Do0(context.Background(), ...) with retry.Do0(ctx, ...) so the
retry loop respects cancellations; update any places that create or use
client/WithAuthToken(token) to use the new client variable and ensure ctx is
used for API calls and retries (references: github.NewClient,
client.WithAuthToken, ctx, retry.Do0).
In `@geth/Dockerfile`:
- Around line 30-32: Update the RUN that performs "apt-get update && apt-get
install -y jq curl supervisor" to add the "--no-install-recommends" flag to
apt-get install and change the cleanup path from "/var/lib/apt/lists" to
"/var/lib/apt/lists/*" so the apt cache is removed correctly; i.e., modify the
RUN line that installs jq, curl, and supervisor to use "apt-get install -y
--no-install-recommends" and end with "rm -rf /var/lib/apt/lists/*".
In `@geth/geth-entrypoint`:
- Line 29: The mkdir invocation in geth-entrypoint uses an unquoted variable;
update the command that calls mkdir -p with the GETH_DATA_DIR variable so the
variable name GETH_DATA_DIR is wrapped in double quotes (i.e., quote the
$GETH_DATA_DIR argument) to safely handle paths containing spaces and special
characters.
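A tiny sketch of the quoting fix (the sample path is hypothetical):

```shell
# Unquoted, a path with spaces would be word-split into multiple mkdir
# arguments; quoting keeps it as one argument.
GETH_DATA_DIR="/tmp/geth data dir"
mkdir -p "$GETH_DATA_DIR"
```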
- Line 31: The script writes BASE_NODE_L2_ENGINE_AUTH_RAW directly to
$BASE_NODE_L2_ENGINE_AUTH without validation; update the entrypoint to validate
BASE_NODE_L2_ENGINE_AUTH_RAW before writing (e.g., check it is set/non-empty and
optionally valid format) and fail fast with a clear error if missing/invalid;
modify the block that currently runs echo "$BASE_NODE_L2_ENGINE_AUTH_RAW" >
"$BASE_NODE_L2_ENGINE_AUTH" so it first verifies BASE_NODE_L2_ENGINE_AUTH_RAW
(and decodes it if the project expects encoded content) and only then writes the
file, and ensure any failure logs an explanatory message and exits.
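One way to sketch that guard (the function name and the 64-hex-char check, mirroring a 32-byte secret, are assumptions to adjust if the project expects a different encoding):

```shell
write_engine_auth() {
  # Fail fast if the raw secret is missing or malformed, instead of silently
  # writing an empty or bogus JWT file.
  if ! echo "${BASE_NODE_L2_ENGINE_AUTH_RAW:-}" | grep -Eq '^[0-9a-f]{64}$'; then
    echo "error: BASE_NODE_L2_ENGINE_AUTH_RAW must be a 32-byte hex string" >&2
    return 1
  fi
  echo "$BASE_NODE_L2_ENGINE_AUTH_RAW" > "${BASE_NODE_L2_ENGINE_AUTH:-/tmp/engine-auth-jwt}"
}
```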
In `@nethermind/Dockerfile`:
- Around line 34-51: The image currently runs supervisord and the services as
root; create a non-root user and run everything as that user: add steps in the
Dockerfile to create a dedicated user/group (e.g., nethermind), chown the app,
log and any expected mount dirs (data, JWT) to that user, and set USER to that
account before the final CMD; also update supervisord.conf (and/or the program
entries) so each program (op-node, execution-entrypoint, consensus-entrypoint)
runs as that non-root user (use the user= directive or ensure entrypoints drop
privileges), and ensure entrypoints do not require root-only operations or
adjust ownership/permissions accordingly so supervisord can launch them without
root.
In `@nethermind/nethermind-entrypoint`:
- Line 49: The command argument uses the unquoted environment variable
--Optimism.SequencerUrl=$OP_SEQUENCER_HTTP which can cause word-splitting for
URLs with special characters; update the invocation to quote the variable (e.g.,
--Optimism.SequencerUrl="$OP_SEQUENCER_HTTP") so the full value is passed as a
single argument, ensuring the change is applied where
--Optimism.SequencerUrl=$OP_SEQUENCER_HTTP appears.
In `@op-node-entrypoint`:
- Around line 25-28: The validation message in the conditional that checks
OP_NODE_NETWORK and OP_NODE_ROLLUP_CONFIG is misleading; update the echo in the
if block that currently runs when [[ -z "$OP_NODE_NETWORK" && -z
"$OP_NODE_ROLLUP_CONFIG" ]] to mention both env vars (e.g., "expected
OP_NODE_NETWORK or OP_NODE_ROLLUP_CONFIG to be set") so the error reflects the
actual logic and clearly tells users that at least one of OP_NODE_NETWORK or
OP_NODE_ROLLUP_CONFIG must be provided.
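A sketch of the corrected check, with a message naming both variables (wrapped in a hypothetical function for illustration):

```shell
require_network_config() {
  # At least one of the two configuration sources must be provided.
  if [ -z "${OP_NODE_NETWORK:-}" ] && [ -z "${OP_NODE_ROLLUP_CONFIG:-}" ]; then
    echo "expected OP_NODE_NETWORK or OP_NODE_ROLLUP_CONFIG to be set" >&2
    return 1
  fi
}
```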
- Around line 46-47: Check that BASE_NODE_L2_ENGINE_AUTH_RAW is set and
non-empty before writing it to BASE_NODE_L2_ENGINE_AUTH and exporting
OP_NODE_L2_ENGINE_AUTH; add a guard in op-node-entrypoint (e.g., test -n
"$BASE_NODE_L2_ENGINE_AUTH_RAW" or use parameter expansion) that prints a clear
error and exits non-zero if the variable is missing, then perform the echo to
"$BASE_NODE_L2_ENGINE_AUTH" and export OP_NODE_L2_ENGINE_AUTH only after
validation.
In `@reth/reth-entrypoint`:
- Around line 137-146: The current startup flags expose the RPC publicly by
binding to 0.0.0.0 while setting --http.corsdomain="*" and --ws.origins="*",
which allows any website to access sensitive APIs; update the flags so either
(A) restrict origins to a specific allowed domain(s) instead of "*" (change
--http.corsdomain and --ws.origins), or (B) bind RPC to localhost (set
--http.addr and --ws.addr to 127.0.0.1) if you must keep "*" origins for local
dev, and additionally remove or restrict sensitive APIs (omit debug, txpool,
miner from --http.api and --ws.api) when the service is public; apply these
changes where the flags are defined (--http.corsdomain, --ws.origins,
--http.addr, --ws.addr, --http.api, --ws.api).
- Around line 24-29: The script prints the full RETH_FB_WEBSOCKET_URL which may
contain credentials; stop logging the full URL. Keep setting
ADDITIONAL_ARGS="... --websocket-url=$RETH_FB_WEBSOCKET_URL" but replace the
echo that prints $RETH_FB_WEBSOCKET_URL with a generic message (e.g., "Enabling
Flashblocks support (websocket URL provided)") or log a redacted form (only
hostname) instead; update the echo lines around the RETH_FB_WEBSOCKET_URL check
and leave ADDITIONAL_ARGS and the conditional logic unchanged.
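If a redacted form is preferred over a fully generic message, a helper along these lines could log only the scheme and host (the function name and sed approach are illustrative):

```shell
redact_url() {
  # Keep scheme and host only; strip userinfo ("user:pass@"), port, and path.
  echo "$1" | sed -E 's#^([a-z]+://)([^@/]*@)?([^/:]+).*#\1\3#'
}
```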
---
Nitpick comments:
In `@reth/Dockerfile`:
- Around line 59-61: The RUN instruction installs packages without
--no-install-recommends which increases image size; update the Dockerfile's RUN
that currently calls "apt-get install -y jq curl supervisor" to include
--no-install-recommends (i.e., "apt-get install -y --no-install-recommends jq
curl supervisor") while keeping the apt-get update and the subsequent "rm -rf
/var/lib/apt/lists/*" cleanup intact so the image size is reduced without
changing other behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro Plus
Run ID: cb89cedf-31eb-4e49-8045-805fc8703bda
⛔ Files ignored due to path filters (1)
`dependency_updater/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (38)
`.dockerignore`, `.env`, `.env.mainnet`, `.env.sepolia`, `.github/workflows/docker.yml`, `.github/workflows/pr.yml`, `.github/workflows/stale.yml`, `.github/workflows/update-dependencies.yml`, `.gitignore`, `CONTRIBUTING.md`, `Dockerfile`, `LICENSE`, `README.md`, `SECURITY.md`, `base-consensus-entrypoint`, `consensus-entrypoint`, `dependency_updater/dependency_updater.go`, `dependency_updater/go.mod`, `dependency_updater/version.go`, `dependency_updater/version_test.go`, `docker-compose.yml`, `geth/Dockerfile`, `geth/geth-entrypoint`, `go.mod`, `logo.webp`, `mainnet/genesis-l2.json`, `mainnet/rollup.json`, `nethermind/Dockerfile`, `nethermind/nethermind-entrypoint`, `op-node-entrypoint`, `reth/Dockerfile`, `reth/README.md`, `reth/reth-entrypoint`, `sepolia/genesis-l2.json`, `sepolia/rollup.json`, `supervisord.conf`, `versions.env`, `versions.json`
💤 Files with no reviewable changes (3)
- mainnet/rollup.json
- sepolia/rollup.json
- Dockerfile
```
CLIENT=${CLIENT:-reth}
HOST_DATA_DIR=./${CLIENT}-data
```
🧩 Analysis chain
🌐 Web query:
For Docker Compose and dotenv parsers, is `CLIENT=${CLIENT:-reth}` in a `.env` file portable/recommended, and does `HOST_DATA_DIR=./${CLIENT}-data` reliably expand from previously defined variables in `.env`?
💡 Result:
1) Is `CLIENT=${CLIENT:-reth}` in a `.env` file portable/recommended?
- For Docker Compose specifically, the `${VAR:-default}` syntax is supported for Compose interpolation in values (Docker documents the Bash-like default forms `${VAR:-default}` and `${VAR-default}`) [1][2]. Docker also documents that `.env` is used to define variables "made available for interpolation" when running `docker compose up` [1][3]. So for Compose, `CLIENT=${CLIENT:-reth}` is valid and works as expected for interpolation.
- However, portability across dotenv parsers in general is not guaranteed. Many dotenv loaders parse `.env` as literal key/value pairs and do not implement shell-style parameter expansion (or implement only a subset). Docker Compose's behavior is a Docker Compose CLI feature; Docker explicitly notes interpolation in `.env` is not supported when using `docker run --env-file` [4]. Therefore, you should treat `CLIENT=${CLIENT:-reth}` as "Compose-compatible", not universally portable across every dotenv implementation.
- Recommendation: If you need this defaulting/expansion logic, put it in the Compose interpolation layer (i.e., in Compose YAML fields that support `${...}` interpolation), or ensure you use a dotenv parser that explicitly supports `${VAR:-default}` expansion. Avoid assuming all dotenv parsers expand `${...}` in `.env` files.

2) Does `HOST_DATA_DIR=./${CLIENT}-data` reliably expand from previously defined variables in `.env`?
- For Docker Compose `.env` usage: yes, in the sense that Compose treats `.env` variables as available for interpolation in subsequent substitutions during Compose processing (the `.env` file is loaded for interpolation before Compose resolves `${...}` expressions) [1][3]. So `CLIENT` defined earlier in the same `.env` should be resolvable when Compose later interpolates `${CLIENT}` inside `HOST_DATA_DIR=./${CLIENT}-data` [1][2].
- Important nuance: this expansion is part of Docker Compose interpolation processing; it is not the same as how the `env_file:` directive injects environment variables into containers. Docker Compose docs/behavior distinguish interpolation (Compose YAML CLI processing) from container environment loading. As a practical consequence, don't rely on `env_file:` alone for interpolation; only Compose's interpolation step will evaluate `${...}` expressions in the places it supports [4][5].

Bottom line: `CLIENT=${CLIENT:-reth}` and `HOST_DATA_DIR=./${CLIENT}-data` are supported with Docker Compose interpolation rules (including `${VAR:-default}`) [1][2]. They are not universally portable across arbitrary dotenv parsers; only use them if you know the parser supports Bash-like `${...}` expansion in `.env` files [4].
Citations:
- 1: https://docs.docker.com/compose/how-tos/environment-variables/variable-interpolation/
- 2: https://docs.docker.com/reference/compose-file/interpolation/
- 3: https://docs.docker.com/compose/how-tos/environment-variables/variable-interpolation
- 4: https://docs.docker.com/compose/environment-variables/set-environment-variables
- 5: https://stackoverflow.com/questions/48211138/docker-variable-expansion-in-compose-environment
🏁 Script executed:
```shell
# First, let's understand the repository structure and find Docker Compose files
git ls-files | head -20
```

Repository: LiskHQ/lisk-node
Length of output: 491
🏁 Script executed:
```shell
# Look for docker-compose files or similar
fd -e yml -e yaml | grep -i compose | head -10
```

Repository: LiskHQ/lisk-node
Length of output: 79
🏁 Script executed:
```shell
# Check the full .env file
cat .env
```

Repository: LiskHQ/lisk-node
Length of output: 138
🏁 Script executed:
```shell
# Search for linting configuration that might flag this
fd -e ".*rc*" -e "package.json" -e "pyproject.toml" -e ".lintrc*" | head -20
```

Repository: LiskHQ/lisk-node
Length of output: 42
🏁 Script executed:
```shell
# Search for references to CLIENT variable in the codebase
rg "CLIENT" --type-list | head -10
rg "CLIENT" -l | head -20
```

Repository: LiskHQ/lisk-node
Length of output: 512
🏁 Script executed:
```shell
# Examine docker-compose.yml to understand how CLIENT and HOST_DATA_DIR are used
cat docker-compose.yml
```

Repository: LiskHQ/lisk-node
Length of output: 1181
🏁 Script executed:
```shell
# Search for any lint or dotenv configuration (including hidden files)
find . -maxdepth 2 -name "*lint*" -o -name "*dotenv*" -o -name ".*rc" -o -name ".env*" 2>/dev/null | grep -v node_modules
```

Repository: LiskHQ/lisk-node
Length of output: 319
🏁 Script executed:
```shell
# Check README files to understand CLIENT variable usage
cat README.md | grep -A 5 -B 5 "CLIENT" | head -30
```

Repository: LiskHQ/lisk-node
Length of output: 790
🏁 Script executed:
```shell
# Search more broadly for lint tools in dependencies or comments
rg "lint|dotenv|env-cli" --type-list 2>/dev/null || true
rg "CLIENT\s*=" | head -20
```

Repository: LiskHQ/lisk-node
Length of output: 294
🏁 Script executed:
```shell
# Check GitHub workflows for any linting of .env files
cat .github/workflows/*.yml | grep -A 5 -B 5 "lint\|env" | head -40
```

Repository: LiskHQ/lisk-node
Length of output: 1470
🏁 Script executed:
```shell
# Check markdownlint configuration to see if it applies to .env
cat coderabbit-0.markdownlint-cli2.jsonc 2>/dev/null | head -20
```

Repository: LiskHQ/lisk-node
Length of output: 462
🏁 Script executed:
```shell
# Search for any dotenv or env linting in the codebase
rg "dotenv|env.*lint" -i
```

Repository: LiskHQ/lisk-node
Length of output: 42
Simplify to a plain default value.
The self-referential CLIENT=${CLIENT:-reth} syntax is non-portable across generic dotenv parsers. While Docker Compose does support ${VAR:-default} interpolation, a plain CLIENT=reth is simpler and equally effective here, as shell environment variables will still override it at runtime (as documented in the README: CLIENT=reth docker compose up).
Suggested patch

```diff
-CLIENT=${CLIENT:-reth}
+CLIENT=reth
 HOST_DATA_DIR=./${CLIENT}-data
```
🧰 Tools
🪛 dotenv-linter (4.0.0)
[warning] 1-1: [SubstitutionKey] The CLIENT key is not assigned properly
(SubstitutionKey)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.env around lines 1 - 2, Replace the self-referential default for CLIENT
with a plain default by changing the CLIENT assignment to a fixed value (set
CLIENT=reth) and keep HOST_DATA_DIR referencing CLIENT
(HOST_DATA_DIR=./${CLIENT}-data) so shell or docker compose can still override
CLIENT at runtime; update the lines that define CLIENT and HOST_DATA_DIR
accordingly.
```
OP_NODE_NETWORK=base-mainnet
BASE_NODE_L2_ENGINE_RPC=ws://execution:8551
BASE_NODE_L2_ENGINE_AUTH=/tmp/engine-auth-jwt
BASE_NODE_L2_ENGINE_AUTH_RAW=688f5d737bad920bdfb2fc2f488d6b6209eebda1dae949a8de91398d932c517a
```
Do not ship a fixed engine auth secret in the tracked mainnet config.
A checked-in JWT secret makes every deployment share the same engine credential unless operators remember to override it. This weakens the auth boundary for the engine API and is easy to avoid with a per-deployment secret.
Suggested fix

```diff
-BASE_NODE_L2_ENGINE_AUTH_RAW=688f5d737bad920bdfb2fc2f488d6b6209eebda1dae949a8de91398d932c517a
+BASE_NODE_L2_ENGINE_AUTH_RAW=<set-a-unique-32-byte-hex-secret-in-a-local-override>
```
🧰 Tools
🪛 Betterleaks (1.1.2)
[high] 45-45: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🪛 dotenv-linter (4.0.0)
[warning] 45-45: [UnorderedKey] The BASE_NODE_L2_ENGINE_AUTH_RAW key should go before the BASE_NODE_L2_ENGINE_RPC key
(UnorderedKey)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.env.mainnet at line 45, Remove the hard-coded JWT secret value for
BASE_NODE_L2_ENGINE_AUTH_RAW and replace it with a placeholder or instruction to
load a per-deployment secret (e.g., set
BASE_NODE_L2_ENGINE_AUTH_RAW=<REPLACE_WITH_SECRET>), and update any
documentation or startup scripts to obtain the real value from environment
injection or a secret manager so every deployment uses a unique engine auth
secret rather than the checked-in token.
```
BASE_NODE_L2_ENGINE_RPC=http://execution:8551
BASE_NODE_L2_ENGINE_AUTH=/tmp/engine-auth-jwt
BASE_NODE_L2_ENGINE_AUTH_RAW=688f5d737bad920bdfb2fc2f488d6b6209eebda1dae949a8de91398d932c517a
```
Do not ship a shared engine JWT in source control.
This value becomes the auth secret for the engine API. Keeping one real token in a checked-in .env means every deployment shares the same credential, so any leak or exposed authrpc endpoint loses isolation across nodes.
Suggested fix

```diff
 BASE_NODE_L2_ENGINE_RPC=http://execution:8551
 BASE_NODE_L2_ENGINE_AUTH=/tmp/engine-auth-jwt
-BASE_NODE_L2_ENGINE_AUTH_RAW=688f5d737bad920bdfb2fc2f488d6b6209eebda1dae949a8de91398d932c517a
+# Generate a unique 32-byte hex value per deployment.
+BASE_NODE_L2_ENGINE_AUTH_RAW=<generate-unique-32-byte-hex>
```

🧰 Tools
🪛 Betterleaks (1.1.2)
[high] 45-45: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🪛 dotenv-linter (4.0.0)
[warning] 44-44: [UnorderedKey] The BASE_NODE_L2_ENGINE_AUTH key should go before the BASE_NODE_L2_ENGINE_RPC key
(UnorderedKey)
[warning] 45-45: [UnorderedKey] The BASE_NODE_L2_ENGINE_AUTH_RAW key should go before the BASE_NODE_L2_ENGINE_RPC key
(UnorderedKey)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.env.sepolia around lines 43 - 45, Replace the checked-in engine JWT value
in BASE_NODE_L2_ENGINE_AUTH_RAW with a non-secret placeholder and stop
committing real tokens; remove the literal "688f5d..." value, document that
BASE_NODE_L2_ENGINE_AUTH_RAW must be injected at deploy-time via a secret
manager or CI/CD environment variable, and ensure the runtime reads the actual
auth token from the file referenced by BASE_NODE_L2_ENGINE_AUTH or from an env
var instead of this repo value so that the unique symbol
BASE_NODE_L2_ENGINE_AUTH_RAW is no longer a shared secret in source control.
```yaml
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
  NAMESPACE: ghcr.io/base
  GETH_DEPRECATED_IMAGE_NAME: node
  GETH_IMAGE_NAME: node-geth
  RETH_IMAGE_NAME: node-reth
  NETHERMIND_IMAGE_NAME: node-nethermind
```
🧩 Analysis chain
🌐 Web query:
For GitHub Actions / GHCR docs: can `GITHUB_TOKEN` from a fork publish to `ghcr.io/<different-org>`, and is `${{ github.repository_owner }}` the recommended way to derive the image namespace?
💡 Result:
Yes, but only to the namespace that GitHub Packages/ghcr has granted to that workflow’s token (and forks have limited default access). 1) Can GITHUB_TOKEN from a fork publish to ghcr.io/<different-org>? - In general, the GITHUB_TOKEN that GitHub Actions creates is scoped to the repository that contains the workflow, and for forks it is only granted read access to the parent repository by default [1]. - For publishing (write access) to a package namespace in GHCR, the workflow must have packages: write permissions on the job, and the target GHCR package (namespace/repository) must allow Actions from the repository to manage it (“Manage Actions access”) / be linked appropriately; otherwise you can get permission_denied: write_package [2]. - The GitHub Docs explicitly note that using the GITHUB_TOKEN is the “easiest way” to publish from a workflow when the repository is linked/connected to the package, and also warns that permissions depend on that linking/connection [3]. So: if your fork workflow tries to push to a GHCR namespace (e.g., another org) that the GITHUB_TOKEN does not have write permission for, it will fail. If you configure the package/repository access so that Actions from the forked repository are granted write access to the target container image/package, then pushing can work; otherwise, you’ll need a different credential (for example, a PAT or GitHub App) [1][2]. 2) Is ${{ github.repository_owner }} the recommended way to derive the image namespace? - github.repository_owner is the repository owner username in the Actions context [4]. It is therefore a reasonable variable to use when you want the namespace to match the repo that triggered/contains the workflow. - GitHub’s own GHCR/Docker publishing examples show using github.repository (which includes owner) as the basis for ghcr.io/<owner>/<image> style tags [5]. (They also show github.actor for the login username when using docker/login-action.) 
However, for "recommended" namespace selection for GHCR:
- The container registry docs emphasize that you must publish to the namespace/org you intend to scope the image under (you can choose whether it's scoped to a personal account or an organization) [6].
- Since forks can have restricted token access to the parent/org, ${{ github.repository_owner }} reflects the owner of the repository running the workflow, which may or may not be the org you actually have permission to publish into. In cross-org scenarios, it's better to explicitly set the GHCR namespace/image prefix to the org you want, rather than assuming the workflow's repository owner is writable.

Practical recommendation:
- To push the image under the same owner as the repository running the workflow, ${{ github.repository_owner }} (or ${{ github.repository }} for OWNER/REPO) is appropriate [4][5].
- To push the image under a different org, set an explicit GHCR_NAMESPACE (e.g. my-target-org) and ensure the target GHCR package/repo has granted Actions access for your fork workflow; do not rely on github.repository_owner alone [1][2][6].

Code sketch (explicit namespace for cross-org):

- name: Set image tag
  run: |
    echo "IMAGE=ghcr.io/${{ env.GHCR_NAMESPACE }}/my-image:${{ github.sha }}" >> $GITHUB_ENV
Citations:
- 1: https://docs.github.com/en/packages/managing-github-packages-using-github-actions-workflows/publishing-and-installing-a-package-with-github-actions
- 2: https://stackoverflow.com/questions/70646920/github-token-permission-denied-write-package-when-build-and-push-docker-in-githu
- 3: https://docs.github.com/en/packages/guides/pushing-and-pulling-docker-images
- 4: https://docs.github.com/en/actions/reference/contexts-reference
- 5: https://docs.github.com/en/actions/how-tos/use-cases-and-examples/publishing-packages/publishing-docker-images
- 6: https://docs.github.com/enterprise-cloud@latest/packages/working-with-a-github-packages-registry/working-with-the-container-registry
Use ${{ github.repository_owner }} to derive the GHCR namespace.
This workflow hard-codes NAMESPACE: ghcr.io/base, but in a fork context the GITHUB_TOKEN only has write access to the fork owner's namespace, so release publishing fails even though the build itself succeeds. Use ${{ github.repository_owner }} instead to publish under the fork owner's account.
Suggested fix
- NAMESPACE: ghcr.io/base
+ NAMESPACE: ghcr.io/${{ github.repository_owner }}

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In @.github/workflows/docker.yml around lines 11 - 17, The NAMESPACE environment
variable is hard-coded to "ghcr.io/base" which breaks GHCR publishing for forks;
update the env block to set NAMESPACE using the GitHub repository owner
expression (replace the current NAMESPACE value with a value derived from ${{
github.repository_owner }} so the GHCR namespace uses the fork owner's account),
ensuring other image name vars (GETH_IMAGE_NAME, RETH_IMAGE_NAME,
NETHERMIND_IMAGE_NAME) remain unchanged and continue to reference NAMESPACE when
publishing.
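One wrinkle worth noting when deriving the namespace from the owner: GHCR image names must be lowercase, while a repository owner name may contain uppercase letters. A minimal sketch of a normalization step (the fallback owner name here is made up; `GITHUB_REPOSITORY_OWNER` is the environment-variable form of `${{ github.repository_owner }}`):

```shell
# Sketch: derive a lowercase GHCR namespace from the repository owner.
# GHCR rejects uppercase image names, so normalize with tr.
owner="${GITHUB_REPOSITORY_OWNER:-MyForkOwner}"   # made-up fallback for illustration
NAMESPACE="ghcr.io/$(printf '%s' "$owner" | tr '[:upper:]' '[:lower:]')"
echo "NAMESPACE=$NAMESPACE"
```

In a workflow, the same command could run in a step that appends `NAMESPACE=...` to `$GITHUB_ENV` so later steps can reference it.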
- name: Checkout
  uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0
  with:
    ref: ${{ github.event.pull_request.head.sha }}
🧩 Analysis chain
🌐 Web query:
For GitHub Actions docs: is github.event.pull_request.head.sha available during workflow_dispatch, and is github.sha the correct fallback ref for manual runs?
💡 Result:
For workflow_dispatch, github.event.pull_request.head.sha is not available, because the workflow_dispatch event payload does not include a pull_request object. GitHub's "Events that trigger workflows" docs explicitly note that github.event.pull_request.head.sha is for pull_request events, and describe pull_request-specific SHA behavior separately from workflow_dispatch [1].

For manual workflow runs, github.sha is the correct fallback ref/commit SHA context to use: for workflow_dispatch, GitHub documents that GITHUB_SHA is the "last commit on the GITHUB_REF branch or tag", and GITHUB_REF is the "branch or tag that received dispatch" [1]. In contrast, github.event.pull_request.head.sha is only a pull_request-specific head-commit SHA [1].

Practical rule:
- If github.event_name == 'pull_request', you can use github.event.pull_request.head.sha to refer to the PR head commit [1].
- If github.event_name == 'workflow_dispatch', use github.sha (i.e., the dispatch's ref/commit) rather than github.event.pull_request.head.sha [1].
Citations:
🏁 Script executed:
cat -n .github/workflows/pr.yml

Repository: LiskHQ/lisk-node
Length of output: 4073
Use a fallback checkout ref for manual runs.
The workflow is configured to trigger on both pull_request and workflow_dispatch (lines 3-5), but all three checkout steps read github.event.pull_request.head.sha, which is only available during PR events. Manual dispatches will fail to check out because this variable is undefined.
Apply this fix to all three jobs (geth, reth, and nethermind):
Suggested fix
- ref: ${{ github.event.pull_request.head.sha }}
+ ref: ${{ github.event.pull_request.head.sha || github.sha }}

Applies to lines 29, 62, and 93.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Current:

- name: Checkout
  uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0
  with:
    ref: ${{ github.event.pull_request.head.sha }}

Suggested:

- name: Checkout
  uses: actions/checkout@f43a0e5ff2bd294095638e18286ca9a3d1956744 # v3.6.0
  with:
    ref: ${{ github.event.pull_request.head.sha || github.sha }}
🤖 Prompt for AI Agents
In @.github/workflows/pr.yml around lines 26 - 29, The Checkout steps currently
use github.event.pull_request.head.sha which is undefined for workflow_dispatch;
update each Checkout step (the three "Checkout" uses of actions/checkout in the
geth, reth, and nethermind jobs) to provide a fallback ref such as github.sha or
github.ref when github.event.pull_request.head.sha is missing (e.g., set ref to
github.event.pull_request.head.sha || github.sha or similar expression supported
by GitHub Actions). Ensure all three checkout steps are changed consistently so
manual dispatches use the fallback.
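The `||` operator in the Actions expression behaves much like shell default-value expansion: an empty or missing left-hand side falls through to the right-hand side. A rough standalone shell analogy (both SHAs below are invented, not from the workflow):

```shell
# Analogy for ${{ github.event.pull_request.head.sha || github.sha }}:
# prefer the PR head SHA when the event provides one, otherwise fall back
# to the commit the run was dispatched on.
PR_HEAD_SHA=""            # empty on workflow_dispatch runs
GITHUB_SHA="abc123"       # always populated by the runner
CHECKOUT_REF="${PR_HEAD_SHA:-$GITHUB_SHA}"
echo "checking out $CHECKOUT_REF"
```

On a pull_request run, `PR_HEAD_SHA` would be non-empty and win; on a manual dispatch the fallback is used, which is exactly what the suggested workflow fix achieves.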
exec ./nethermind \
  --config="$OP_NODE_NETWORK" \
  --datadir="$NETHERMIND_DATA_DIR" \
  --Optimism.SequencerUrl=$OP_SEQUENCER_HTTP \
Quote the $OP_SEQUENCER_HTTP variable.
The variable is used unquoted in the command arguments. While URLs typically don't contain spaces, quoting prevents potential word-splitting issues if the URL contains special characters.
Proposed fix
- --Optimism.SequencerUrl=$OP_SEQUENCER_HTTP \
+ --Optimism.SequencerUrl="$OP_SEQUENCER_HTTP" \

📝 Committable suggestion
Current:
--Optimism.SequencerUrl=$OP_SEQUENCER_HTTP \

Suggested:
--Optimism.SequencerUrl="$OP_SEQUENCER_HTTP" \
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@nethermind/nethermind-entrypoint` at line 49, The command argument uses the
unquoted environment variable --Optimism.SequencerUrl=$OP_SEQUENCER_HTTP which
can cause word-splitting for URLs with special characters; update the invocation
to quote the variable (e.g., --Optimism.SequencerUrl="$OP_SEQUENCER_HTTP") so
the full value is passed as a single argument, ensuring the change is applied
where --Optimism.SequencerUrl=$OP_SEQUENCER_HTTP appears.
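A standalone demonstration (not from the entrypoint) of why the quoting matters: an unquoted expansion is word-split on whitespace, so a value containing spaces becomes several arguments. The URL below is a deliberately pathological made-up example.

```shell
# Count how many positional arguments a command actually receives.
count_args() { echo "$#"; }

url="http://host/path with spaces"
unquoted=$(count_args --Optimism.SequencerUrl=$url)    # splits into 3 words
quoted=$(count_args --Optimism.SequencerUrl="$url")    # stays 1 argument
echo "unquoted: $unquoted args, quoted: $quoted args"
```

Real sequencer URLs rarely contain spaces, but quoting costs nothing and also guards against glob characters being expanded against the current directory.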
if [[ -z "$OP_NODE_NETWORK" && -z "$OP_NODE_ROLLUP_CONFIG" ]]; then
  echo "expected OP_NODE_NETWORK to be set" 1>&2
  exit 1
fi
Error message doesn't match validation logic.
The condition allows the script to proceed if either OP_NODE_NETWORK or OP_NODE_ROLLUP_CONFIG is set, but the error message only mentions OP_NODE_NETWORK. This could confuse users when OP_NODE_ROLLUP_CONFIG is a valid alternative.
Proposed fix
if [[ -z "$OP_NODE_NETWORK" && -z "$OP_NODE_ROLLUP_CONFIG" ]]; then
- echo "expected OP_NODE_NETWORK to be set" 1>&2
+ echo "expected OP_NODE_NETWORK or OP_NODE_ROLLUP_CONFIG to be set" 1>&2
exit 1
fi

📝 Committable suggestion
Current:

if [[ -z "$OP_NODE_NETWORK" && -z "$OP_NODE_ROLLUP_CONFIG" ]]; then
  echo "expected OP_NODE_NETWORK to be set" 1>&2
  exit 1
fi

Suggested:

if [[ -z "$OP_NODE_NETWORK" && -z "$OP_NODE_ROLLUP_CONFIG" ]]; then
  echo "expected OP_NODE_NETWORK or OP_NODE_ROLLUP_CONFIG to be set" 1>&2
  exit 1
fi
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@op-node-entrypoint` around lines 25 - 28, The validation message in the
conditional that checks OP_NODE_NETWORK and OP_NODE_ROLLUP_CONFIG is misleading;
update the echo in the if block that currently runs when [[ -z
"$OP_NODE_NETWORK" && -z "$OP_NODE_ROLLUP_CONFIG" ]] to mention both env vars
(e.g., "expected OP_NODE_NETWORK or OP_NODE_ROLLUP_CONFIG to be set") so the
error reflects the actual logic and clearly tells users that at least one of
OP_NODE_NETWORK or OP_NODE_ROLLUP_CONFIG must be provided.
echo "$BASE_NODE_L2_ENGINE_AUTH_RAW" > "$BASE_NODE_L2_ENGINE_AUTH"
export OP_NODE_L2_ENGINE_AUTH=$BASE_NODE_L2_ENGINE_AUTH
Missing validation for BASE_NODE_L2_ENGINE_AUTH_RAW.
The script writes BASE_NODE_L2_ENGINE_AUTH_RAW to a file without first validating that the variable is set. With set -eu, an unset variable will cause the script to fail with an unclear error. Other entrypoints in this PR (e.g., nethermind-entrypoint) explicitly validate this variable.
Proposed fix
+if [[ -z "${BASE_NODE_L2_ENGINE_AUTH_RAW:-}" ]]; then
+ echo "expected BASE_NODE_L2_ENGINE_AUTH_RAW to be set" 1>&2
+ exit 1
+fi
echo "$BASE_NODE_L2_ENGINE_AUTH_RAW" > "$BASE_NODE_L2_ENGINE_AUTH"
export OP_NODE_L2_ENGINE_AUTH=$BASE_NODE_L2_ENGINE_AUTH

📝 Committable suggestion
Current:

echo "$BASE_NODE_L2_ENGINE_AUTH_RAW" > "$BASE_NODE_L2_ENGINE_AUTH"
export OP_NODE_L2_ENGINE_AUTH=$BASE_NODE_L2_ENGINE_AUTH

Suggested:

if [[ -z "${BASE_NODE_L2_ENGINE_AUTH_RAW:-}" ]]; then
  echo "expected BASE_NODE_L2_ENGINE_AUTH_RAW to be set" 1>&2
  exit 1
fi
echo "$BASE_NODE_L2_ENGINE_AUTH_RAW" > "$BASE_NODE_L2_ENGINE_AUTH"
export OP_NODE_L2_ENGINE_AUTH=$BASE_NODE_L2_ENGINE_AUTH
🤖 Prompt for AI Agents
In `@op-node-entrypoint` around lines 46 - 47, Check that
BASE_NODE_L2_ENGINE_AUTH_RAW is set and non-empty before writing it to
BASE_NODE_L2_ENGINE_AUTH and exporting OP_NODE_L2_ENGINE_AUTH; add a guard in
op-node-entrypoint (e.g., test -n "$BASE_NODE_L2_ENGINE_AUTH_RAW" or use
parameter expansion) that prints a clear error and exits non-zero if the
variable is missing, then perform the echo to "$BASE_NODE_L2_ENGINE_AUTH" and
export OP_NODE_L2_ENGINE_AUTH only after validation.
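The key detail behind the suggested guard: under `set -u`, a bare reference to an unset variable aborts the script with a terse "unbound variable" error, while the `${VAR:-}` expansion substitutes an empty string, so the check itself is safe. A standalone sketch of the pattern (the secret value below is invented):

```shell
set -u

# Guard: fail with a clear message instead of an unbound-variable crash.
check_auth() {
  if [ -z "${BASE_NODE_L2_ENGINE_AUTH_RAW:-}" ]; then
    echo "expected BASE_NODE_L2_ENGINE_AUTH_RAW to be set" 1>&2
    return 1
  fi
}

check_auth || echo "guard rejected missing value"   # variable not yet set
BASE_NODE_L2_ENGINE_AUTH_RAW="0xdeadbeef"           # made-up JWT secret
check_auth && echo "guard accepted value"
```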
if [[ -n "${RETH_FB_WEBSOCKET_URL:-}" ]]; then
  ADDITIONAL_ARGS="$ADDITIONAL_ARGS --websocket-url=$RETH_FB_WEBSOCKET_URL"
  echo "Enabling Flashblocks support with endpoint: $RETH_FB_WEBSOCKET_URL"
else
  echo "Running in vanilla node mode (no Flashblocks URL provided)"
fi
Do not log the full Flashblocks websocket URL.
These endpoints commonly carry credentials or signed query params. Writing the full URL to stdout leaks that material into container logs.
Suggested fix
- echo "Enabling Flashblocks support with endpoint: $RETH_FB_WEBSOCKET_URL"
+ echo "Enabling Flashblocks support"

🤖 Prompt for AI Agents
In `@reth/reth-entrypoint` around lines 24 - 29, The script prints the full
RETH_FB_WEBSOCKET_URL which may contain credentials; stop logging the full URL.
Keep setting ADDITIONAL_ARGS="... --websocket-url=$RETH_FB_WEBSOCKET_URL" but
replace the echo that prints $RETH_FB_WEBSOCKET_URL with a generic message
(e.g., "Enabling Flashblocks support (websocket URL provided)") or log a
redacted form (only hostname) instead; update the echo lines around the
RETH_FB_WEBSOCKET_URL check and leave ADDITIONAL_ARGS and the conditional logic
unchanged.
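If some visibility is still wanted in the logs, the middle ground mentioned above is logging only the hostname. A sketch using plain POSIX parameter expansion (the URL below is a made-up example):

```shell
# Log only the hostname so credentials or signed query parameters in the
# websocket URL never reach container logs.
RETH_FB_WEBSOCKET_URL="wss://user:token@flashblocks.example.com/ws?sig=abc"
host="${RETH_FB_WEBSOCKET_URL#*://}"   # drop the scheme
host="${host##*@}"                     # drop any userinfo
host="${host%%[/?]*}"                  # drop the path and query string
echo "Enabling Flashblocks support (host: $host)"
```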
--ws \
--ws.origins="*" \
--ws.addr=0.0.0.0 \
--ws.port="$WS_PORT" \
--ws.api=web3,debug,eth,net,txpool \
--http \
--http.corsdomain="*" \
--http.addr=0.0.0.0 \
--http.port="$RPC_PORT" \
--http.api=web3,debug,eth,net,txpool,miner \
Avoid wildcard CORS and WS origins on a public RPC bind.
Binding RPC to 0.0.0.0 while also allowing * browser origins exposes the node API to any website if the port is reachable from a user's browser. That is especially risky with debug, txpool, and miner enabled.
Suggested direction
- --ws.origins="*" \
+ --ws.origins="${RETH_WS_ORIGINS:-http://localhost}" \
@@
- --http.corsdomain="*" \
+ --http.corsdomain="${RETH_HTTP_CORS:-http://localhost}" \

🤖 Prompt for AI Agents
In `@reth/reth-entrypoint` around lines 137 - 146, The current startup flags
expose the RPC publicly by binding to 0.0.0.0 while setting
--http.corsdomain="*" and --ws.origins="*", which allows any website to access
sensitive APIs; update the flags so either (A) restrict origins to a specific
allowed domain(s) instead of "*" (change --http.corsdomain and --ws.origins), or
(B) bind RPC to localhost (set --http.addr and --ws.addr to 127.0.0.1) if you
must keep "*" origins for local dev, and additionally remove or restrict
sensitive APIs (omit debug, txpool, miner from --http.api and --ws.api) when the
service is public; apply these changes where the flags are defined
(--http.corsdomain, --ws.origins, --http.addr, --ws.addr, --http.api, --ws.api).
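The `${VAR:-default}` form used in the suggested direction lets operators widen the origins deliberately via the environment while keeping a safe default. A standalone sketch (the variable names `RETH_WS_ORIGINS` and `RETH_HTTP_CORS` are invented for this suggestion, not existing reth flags or entrypoint settings):

```shell
# Default to localhost-only origins; an operator can override via env.
WS_ORIGINS="${RETH_WS_ORIGINS:-http://localhost}"
HTTP_CORS="${RETH_HTTP_CORS:-http://localhost}"
echo "--ws.origins=$WS_ORIGINS --http.corsdomain=$HTTP_CORS"
```

This way the container stays safe out of the box, and anyone who genuinely needs `*` must opt in explicitly rather than inheriting it silently.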
Merge latest changes from upstream repo