📝 Walkthrough

Convert the repository into a Rust workspace; add domain, queue, rpc-client, webhook-server, and worker crates; implement payload normalizers, queue backends, an HTTP backend client, an Axum webhook server, and a worker loop with retries; add Docker/dev scripts, a pre-commit hook, and examples.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant WebhookServer as Webhook Server
    participant Queue as Event Queue
    participant Worker
    participant Backend as Backend API

    Client->>WebhookServer: POST /webhook/:provider (raw JSON + signature)
    WebhookServer->>WebhookServer: verify signature (provider-specific)
    WebhookServer->>WebhookServer: normalize_webhook_payload(raw)
    alt normalization succeeds
        WebhookServer->>Queue: publish(TransactionEvent)
        Queue-->>WebhookServer: OK
        WebhookServer-->>Client: 200 OK
    else normalization fails
        WebhookServer-->>Client: 400 Bad Request
    end
    Worker->>Queue: consume()
    Queue-->>Worker: TransactionEvent
    Worker->>Backend: POST /webhooks/payment (serialized TransactionEvent, Idempotency-Key)
    alt backend OK
        Backend-->>Worker: 200 OK
    else backend failure
        Worker->>Worker: retry with exponential backoff (up to 5 attempts)
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~65 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (8)
.githooks/pre-commit (1)

1-5: ⚠️ Potential issue | 🟡 Minor

Hook activation is undocumented — developers won't benefit from it without manual setup.
Git doesn't use `.githooks/` automatically; each developer must run:

```sh
git config core.hooksPath .githooks
```

This step should be mentioned in the README (e.g., in a "Development Setup" section) or in a one-time `make setup` / `scripts/setup.sh` target; otherwise the hook silently does nothing for anyone who doesn't know to configure it. Would you like me to open a new issue to track adding a setup script or README section that configures `core.hooksPath`?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.githooks/pre-commit around lines 1 - 5, The pre-commit hook script (REPO_ROOT variable in the script) will not run for developers unless core.hooksPath is configured; add a documented setup step and/or an automated setup target that runs git config core.hooksPath .githooks so the hook is activated. Update the repository README (Development Setup) to describe the one-time command and add a new make target or scripts/setup.sh that executes git config core.hooksPath .githooks (and optionally checks/creates the .githooks directory), and mention this change near any existing contributor/dev onboarding notes.

.env.example (1)
1-5: ⚠️ Potential issue | 🟡 Minor

Two `.env.example` defaults may trip up local developers.

- `APP_ENV=staging` — developers who copy this file and run locally without changing `APP_ENV` will unknowingly point at a live staging environment. A `development` or `local` default is safer.
- `REDIS_URL=redis://redis:6379` — the hostname `redis` is the Docker Compose service name and won't resolve outside a Compose network. Add a comment indicating the local equivalent (`redis://localhost:6379`).
📝 Suggested fix

```diff
-APP_ENV=staging
+APP_ENV=development
 BACKEND_STAGING_URL=https://staging.ourpocket.com
 BACKEND_PROD_URL=https://api.ourpocket.com
-REDIS_URL=redis://redis:6379
+# For Docker Compose: redis://redis:6379
+REDIS_URL=redis://localhost:6379
 REDIS_QUEUE_KEY=ourpocket:transactions
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.env.example around lines 1 - 5, Change the potentially dangerous defaults in .env.example: set APP_ENV to a local-safe value like "development" (instead of "staging") so local copies don't point at staging, and update the REDIS_URL entry to include a clarifying inline comment that the Compose service name "redis" only resolves in Docker (e.g., indicate the local equivalent is "redis://localhost:6379"); keep the REDIS_QUEUE_KEY as-is but ensure the comment is adjacent to REDIS_URL so developers see the intended local override.

README.md (1)
12-12: ⚠️ Potential issue | 🟡 Minor

Inaccurate description — the coverage script fails (not skips) when the tool is missing.
The phrasing "if installed" implies a graceful no-op, but `scripts/coverage.sh` actually exits with a non-zero error code when `cargo-llvm-cov` is absent.

📝 Suggested wording fix

```diff
- `./scripts/coverage.sh` runs coverage with `cargo llvm-cov` if installed.
+ `./scripts/coverage.sh` runs coverage with `cargo llvm-cov` (requires `cargo install cargo-llvm-cov`).
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 12, The README line incorrectly implies ./scripts/coverage.sh is a no-op when cargo-llvm-cov is absent; update the wording to state the script requires cargo-llvm-cov and will exit with a non-zero error if the tool is not installed (referencing ./scripts/coverage.sh and cargo-llvm-cov) — e.g., change "runs coverage with `cargo llvm-cov` if installed" to a sentence that clearly says the script requires `cargo-llvm-cov` and will fail/exit with an error when it is missing.

crates/webhook-server/Cargo.toml (1)
1-4: ⚠️ Potential issue | 🟡 Minor

Set `rust-version = "1.85"` in Cargo.toml per Rust 2024 edition migration guidance.

While the current Docker image (`rust:slim-trixie`, Rust 1.93.1) already exceeds the minimum requirement for `edition = "2024"` (Rust 1.85.0+), the Rust project recommends explicitly setting `rust-version = "1.85"` in `Cargo.toml` to document the MSRV and prevent accidental use of older toolchains. Additionally, consider pinning the Docker base image tag (e.g., `rust:1.93.1-slim-trixie` instead of the floating `rust:slim-trixie`) for reproducible builds.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/webhook-server/Cargo.toml` around lines 1 - 4, Add the minimum supported Rust version to Cargo.toml by adding rust-version = "1.85" inside the [package] table (alongside name, version, edition) so the manifest documents the MSRV; also update any Docker base-image references elsewhere to a pinned tag (e.g., change rust:slim-trixie to rust:1.93.1-slim-trixie) for reproducible builds.

crates/domain/src/paystack.rs (1)
56-87: ⚠️ Potential issue | 🟡 Minor

Test uses a string `"id"` value, masking the real-world type mismatch.

The test payload has `"id": "cust-psk-1"` (a string), but real Paystack webhooks send `"id": 46123456` (an integer). Add a test with a numeric `customer.id` to verify the production path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/paystack.rs` around lines 56 - 87, Add a new test variant to validate numeric customer IDs by copying or extending normalize_paystack_basic_payload to pass a payload where "customer": { "id": 46123456 } (an integer) and asserting normalize_paystack_payload returns the same event fields (transaction_ref, user_id, application_id, status, amount, category) while ensuring user_id handling in normalize_paystack_payload accepts numeric IDs (convert to string if necessary). Target the test function name (e.g., normalize_paystack_numeric_customer_id) and the normalize_paystack_payload function to verify/adjust parsing logic so it correctly handles integer customer.id values.

crates/domain/src/paystack.rs (1)
11-15: ⚠️ Potential issue | 🟡 Minor

The fallback `data["data"]["reference"]` is unreachable.

`data` is already resolved from `json.get("data")` (line 9), so `data["data"]["reference"]` looks up `json["data"]["data"]["reference"]` — a path that doesn't exist in Paystack payloads. This fallback will never match.

Proposed fix — remove the dead fallback or replace it with the intended path:

```diff
 let transaction_ref = data["reference"]
     .as_str()
-    .or_else(|| data["data"]["reference"].as_str())
     .unwrap_or("")
     .to_string();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/paystack.rs` around lines 11 - 15, The fallback data["data"]["reference"] in the transaction_ref extraction is unreachable because data is already json.get("data"); remove that dead fallback and instead try the top-level reference field from the original JSON (e.g., check json["reference"] as a second option) or simply drop the extra lookup; update the transaction_ref construction (the let transaction_ref = ... expression) to use data["reference"].as_str().or_else(|| json["reference"].as_str()).unwrap_or("").to_string() (or just data["reference"].as_str().unwrap_or("").to_string() if you prefer to remove the top-level check).

crates/domain/src/transaction.rs (1)
14-26: ⚠️ Potential issue | 🟡 Minor

`normalize_payload` is exported but unused in the webhook dispatch flow.

`normalize_webhook_payload` in `lib.rs` dispatches to `normalize_flutterwave_payload` or `normalize_paystack_payload` — it never calls `normalize_payload`. The default case returns an error instead of falling back to this function. If this is intended as a generic/fallback normalizer, wire it in as the default case of the match. Otherwise, remove the function and its re-export, as it's only referenced in its own test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/transaction.rs` around lines 14 - 26, The exported function normalize_payload is never used by normalize_webhook_payload (which only dispatches to normalize_flutterwave_payload or normalize_paystack_payload) so either wire normalize_payload as the default/fallback normalizer in the match inside normalize_webhook_payload in lib.rs or remove normalize_payload and its re-export and associated test; to wire it, update the default arm of the match in normalize_webhook_payload to call normalize_payload(raw) (ensure signature matches and import visibility), otherwise delete the normalize_payload function and its pub re-export so there are no unused exports or dead tests referencing it.

crates/queue/src/lib.rs (1)
158-202: ⚠️ Potential issue | 🟡 Minor

Redis test silently skips all post-connection failures, masking real bugs.

The early-return pattern at lines 185-199 means that if `publish` or `consume` returns an error for any reason other than Redis unavailability (e.g., serialization failure, wrong key, BRPOP error), the test silently passes without reaching the `assert_eq!`. Only the initial availability probes (lines 164-172) should return early; subsequent steps should use `expect` or `unwrap`:

💡 Suggested fix

```diff
-    if queue1.publish(event.clone()).await.is_err() {
-        return;
-    }
+    queue1.publish(event.clone()).await.expect("publish should succeed");
     drop(queue1);
     let queue2 = match RedisQueue::new(&redis_url, queue_key) {
         Ok(q) => q,
         Err(_) => return,
     };
-    let consumed = match queue2.consume().await {
-        Ok(ev) => ev,
-        Err(_) => return,
-    };
+    let consumed = queue2.consume().await.expect("consume should succeed");
```
🧹 Nitpick comments (12)
docker-compose.yml (1)
25-28: Consider adding a healthcheck for Redis so dependent services start only when Redis is ready.

`depends_on` only waits for the container to start, not for Redis to be accepting connections. A healthcheck avoids race conditions on startup.

♻️ Proposed improvement

```diff
 redis:
   image: redis:7-alpine
   ports:
     - "6379:6379"
+  healthcheck:
+    test: ["CMD", "redis-cli", "ping"]
+    interval: 5s
+    timeout: 3s
+    retries: 5
```

Then update `depends_on` in webhook-server and worker to use a condition:

```yaml
depends_on:
  redis:
    condition: service_healthy
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker-compose.yml` around lines 25 - 28, Add a Docker healthcheck for the redis service in docker-compose.yml so other services wait for Redis to accept connections; update the redis service block (service name "redis") to include a healthcheck using a small command like "redis-cli ping" with sensible interval/retries/timeouts, and then change the depends_on clauses in the "webhook-server" and "worker" service definitions to use condition: service_healthy for redis so they only start when the redis healthcheck passes.

scripts/start-all.sh (1)
4-8: Add a signal trap to clean up child processes.

If the script is interrupted (Ctrl-C) or terminated, the backgrounded `cargo run` processes will be orphaned. Add a `trap` to forward signals.

♻️ Proposed fix

```diff
 cd "$(dirname "$0")/.."
+
+cleanup() {
+  kill "$SERVER_PID" "$WORKER_PID" 2>/dev/null
+  wait "$SERVER_PID" "$WORKER_PID" 2>/dev/null
+}
+trap cleanup EXIT INT TERM
+
 cargo run -p webhook-server &
 SERVER_PID=$!
 cargo run -p worker &
 WORKER_PID=$!
 wait "$SERVER_PID" "$WORKER_PID"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/start-all.sh` around lines 4 - 8, Add a signal trap that forwards termination signals to the background cargo processes and ensures they are cleaned up: after launching the processes and setting SERVER_PID and WORKER_PID, define a cleanup function that checks those variables and sends a TERM (and if necessary a KILL) to "$SERVER_PID" and "$WORKER_PID" and then waits for them, and register that function with trap for INT, TERM, and EXIT so signals (Ctrl-C) are forwarded and children are not orphaned; update the script around the SERVER_PID/WORKER_PID logic and the final wait call to use this cleanup trap.

crates/rpc-client/Cargo.toml (1)
6-15: Consider removing `async-trait` — native async traits are sufficient for this codebase.

Since Rust 1.75, `async fn` in traits is natively supported; the `async-trait` crate is required only for dyn-dispatched trait objects. The `BackendClient` trait is used exclusively via static dispatch (generic bounds like `C: BackendClient`), never as `Box<dyn>` or a trait object. With the Rust 2024 edition, you can replace the `#[async_trait]` macros on the trait definition and implementations with native async syntax — no functional change, just simpler code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc-client/Cargo.toml` around lines 6 - 15, Remove the unnecessary async-trait dependency from crates/rpc-client/Cargo.toml and replace all uses of the #[async_trait] macro on the BackendClient trait and its implementations with native async trait syntax: remove the attribute, declare async fns directly in the trait (async fn ...) and in each impl use native async fn implementations for the same method names; ensure there are no Box<dyn BackendClient> usages (only generic C: BackendClient) before removing the crate.

crates/rpc-client/src/lib.rs (1)
57-66: `TransactionRequestDto` duplicates `TransactionEvent` field-for-field.

`TransactionEvent` already derives `Serialize`. Unless the wire format intentionally diverges from the domain model, you can serialize `TransactionEvent` directly and remove the DTO and the manual mapping.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc-client/src/lib.rs` around lines 57 - 66, TransactionRequestDto duplicates TransactionEvent field-for-field; remove TransactionRequestDto and stop manual mapping by serializing TransactionEvent directly wherever TransactionRequestDto is constructed or passed (search for uses of TransactionRequestDto and the mapping code that builds it), ensure TransactionEvent derives Serialize (it already does) and update any function signatures or RPC payload types expecting TransactionRequestDto to accept/serialize TransactionEvent instead, adjusting tests and callers accordingly.

crates/domain/src/lib.rs (2)
9-11: JSON is parsed twice — once here for provider routing, then again inside each normalizer.

Both `normalize_webhook_payload` and the provider-specific functions call `serde_json::from_str(raw)`. Consider refactoring the normalizers to accept a pre-parsed `serde_json::Value` to avoid the redundant parse. Not urgent, but worth a follow-up if payloads are large.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/lib.rs` around lines 9 - 11, The code currently parses the raw webhook JSON twice: in normalize_webhook_payload and again inside each provider-specific normalizer; refactor by parsing raw once into a serde_json::Value in normalize_webhook_payload and change provider normalizer function signatures (e.g., any functions named like normalize_{provider}_payload or similar) to accept a &serde_json::Value (or owned Value) instead of &str, then call those normalizers with the pre-parsed Value and remove their internal serde_json::from_str calls; update all callers (including normalize_webhook_payload’s provider routing) to pass the Value and adjust error handling/return types accordingly.
9-17: Include the unrecognized provider value in the error for debuggability.

When provider detection fails, the error "unsupported provider" gives no hint about what was actually received. This makes troubleshooting bad payloads harder.

Proposed fix

```diff
-        _ => Err(anyhow::Error::msg("unsupported provider")),
+        other => Err(anyhow::anyhow!("unsupported provider: {other:?}")),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/lib.rs` around lines 9 - 17, The error returned from normalize_webhook_payload when the provider is not matched should include the actual provider string for debugging; update the final match arm in normalize_webhook_payload to return an anyhow error that interpolates the provider value (use the existing provider variable, which is lowercased) into the message instead of the static "unsupported provider" text so logs show e.g. "unsupported provider: {provider}".

crates/domain/src/transaction.rs (1)
3-12: `f64` for a monetary `amount` can lose precision on large values.

Floating-point cannot represent all decimal currency amounts exactly. For a pass-through relay this is likely acceptable, but if any arithmetic (discounts, fees, rounding) is added later, values like `0.1` or amounts exceeding 2^53 will be imprecise. Consider using an integer (cents/kobo) or a decimal type if exact monetary arithmetic is anticipated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/transaction.rs` around lines 3 - 12, The TransactionEvent struct uses f64 for the amount field which can lose precision for currency; change the type of amount in TransactionEvent to an exact representation (e.g., i64 representing minor units/cents or rust_decimal::Decimal) and update serializers/deserializers and any callers/converters that construct or consume TransactionEvent to use the new type; specifically, modify the amount field on TransactionEvent, adjust serde annotations or feature flags if using rust_decimal (e.g., serde feature) or ensure conversion helpers exist where amounts are produced/parsed so existing code compiles and preserves exact monetary values.

crates/domain/src/flutterwave.rs (1)
6-50: Consider extracting shared normalization logic between Flutterwave and Paystack.

Both normalizers follow an identical structure: parse JSON → resolve `data` → extract fields with fallbacks → build `TransactionEvent`. The only differences are the field names. A shared helper (or a config-driven normalizer) could reduce duplication. Not blocking, but worth considering as more providers are added.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/flutterwave.rs` around lines 6 - 50, The Flutterwave normalizer (normalize_flutterwave_payload) duplicates logic used by the Paystack normalizer: parse JSON, resolve `data`, extract fields with fallbacks, and construct TransactionEvent; refactor by extracting shared helpers (e.g., a resolve_data(&Value) -> &Value, extract_str_field(&Value, keys: &[&str]) -> String, extract_f64_field(&Value, keys: &[&str]) -> f64) and a small builder function that takes the resolved data and a mapping of provider-specific keys to populate TransactionEvent; then update normalize_flutterwave_payload (and the Paystack normalizer) to use these helpers to reduce duplication while preserving the same fallbacks and use of TransactionEvent and metadata.

crates/domain/src/paystack.rs (1)
31-34: `as_f64()` already handles JSON integers — the `or_else` with `as_i64` is redundant.

`serde_json::Value::as_f64()` returns `Some` for both JSON floats and JSON integers that fit in `f64`. The `or_else` branch is dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/paystack.rs` around lines 31 - 34, The amount extraction uses redundant branches because serde_json::Value::as_f64() already returns Some for JSON integers; remove the unnecessary or_else(...) chain and simplify the amount assignment to use data["amount"].as_f64().unwrap_or(0.0) (referencing the amount variable in paystack.rs) so the dead-code path with as_i64() is eliminated.

crates/webhook-server/src/main.rs (1)
41-57: Error context is silently discarded — add logging before mapping to status codes.

Both the normalization error and the publish error are mapped to bare status codes with `map_err(|_| ...)`. In production, you'll have no visibility into why requests are failing. Log the error before converting.

Proposed improvement

```diff
-    let event = normalize_webhook_payload(&body).map_err(|_| StatusCode::BAD_REQUEST)?;
+    let event = normalize_webhook_payload(&body).map_err(|err| {
+        eprintln!("webhook normalization error: {err:#}");
+        StatusCode::BAD_REQUEST
+    })?;
     state
         .queue
         .publish(event)
         .await
-        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
+        .map_err(|err| {
+            eprintln!("queue publish error: {err:#}");
+            StatusCode::INTERNAL_SERVER_ERROR
+        })?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/webhook-server/src/main.rs` around lines 41 - 57, The webhook_handler currently discards error details by using map_err(|_| ...) for normalize_webhook_payload and state.queue.publish; update webhook_handler to capture the error values from normalize_webhook_payload(...) and state.queue.publish(...).await, log the error details (e.g., with tracing::error! or log::error!) including contextual info (request body or event type) before converting to StatusCode::BAD_REQUEST or StatusCode::INTERNAL_SERVER_ERROR, and then map or return the appropriate status code; reference normalize_webhook_payload for the parsing step and state.queue.publish for the publish step so you log the exact errors from those calls.

crates/queue/src/lib.rs (2)
43-43: `Error::msg(err.to_string())` discards the error chain — prefer idiomatic `anyhow` conversion.

This pattern appears throughout the file (lines 43, 58, 68, 79, 86, 106, 108). All the source error types (`redis::RedisError`, `serde_json::Error`, `mpsc::SendError`) implement `std::error::Error`, so `anyhow` can wrap them directly without stringifying. Use `.map_err(anyhow::Error::from)`, or simply `?` in contexts where the function already returns `Result<_, anyhow::Error>`:

♻️ Example fix (applies to all occurrences)

```diff
-        let client = redis::Client::open(redis_url).map_err(|err| Error::msg(err.to_string()))?;
+        let client = redis::Client::open(redis_url).map_err(anyhow::Error::from)?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/queue/src/lib.rs` at line 43, The current calls like redis::Client::open(redis_url).map_err(|err| Error::msg(err.to_string())) (and the similar patterns around serde_json::from_str and tx.send) convert errors to strings and drop the original error chain; replace these with idiomatic anyhow propagation—either drop the map_err and use `?` if the surrounding function returns Result<_, anyhow::Error>, or change the mapper to `.map_err(anyhow::Error::from)` (or `Into::into`) so the original error type (e.g., redis::RedisError, serde_json::Error, mpsc::SendError) is preserved; update calls in the code paths that construct the redis client (redis::Client::open), JSON parsing (serde_json::from_str), and channel sends (tx.send) accordingly.
9-17: Consider replacing `#[async_trait]` with native `async fn` in traits (Rust ≥ 1.75).

`async_trait` allocates a `Box<dyn Future>` on every call. Since Rust 1.75 (December 28, 2023), `async fn` in traits is stable natively through return-position `impl Trait` in traits (RPITIT), eliminating that overhead and simplifying the code.

This requires removing the `async_trait` crate dependency and the six `#[async_trait]` attributes (on the trait definitions at lines 9 and 14 and the trait implementations at lines 52, 62, 72, 92), while keeping the async method signatures unchanged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/queue/src/lib.rs` around lines 9 - 17, Remove the #[async_trait] usages and dependency and convert the traits and their impls to use native async trait methods: keep the async method signatures for EventQueue::publish and EventQueueConsumer::consume but delete the #[async_trait] attribute on the trait definitions and on all impl blocks that implement those traits (i.e., remove the six #[async_trait] occurrences mentioned), delete the async_trait crate from Cargo.toml, and ensure the impls implement async fn publish(...) and async fn consume(...) directly (toolchain >= 1.75).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.githooks/pre-commit:
- Around line 17-18: Replace the auto-staging step that uses "git add -u" with a
formatter check so we don't clobber a developer's intentional partial staging:
remove the "git add -u" invocation in the pre-commit script and instead run
"cargo fmt --all" in check mode (e.g., pass the -- --check flag to cargo fmt)
and fail the hook when formatting is required so the developer can re-stage
explicitly; update the pre-commit flow around the existing cargo fmt invocation
to exit non-zero on formatting issues and print instructions to run cargo fmt
manually.
In `@crates/domain/src/paystack.rs`:
- Around line 17-21: The code currently uses data["customer"]["id"].as_str()
which returns None for numeric IDs; update the user_id extraction to first try
as_str(), then try as_i64()/as_u64() (or as_u64 then as_i64) and convert that
numeric value to a string, and only then fall back to
data["customer"]["email"].as_str(); apply the same change for the analogous
user_id extraction in flutterwave.rs so numeric customer IDs are preserved.
In `@crates/queue/Cargo.toml`:
- Line 12: The redis dependency is pinned to an old 0.25 release; bump it to a
modern 1.0.x (recommend "1.0.3") in Cargo.toml so you get recent fixes and
security updates—no code changes expected since the code already uses modern
async APIs like get_multiplexed_async_connection() and query_async(), so simply
update the redis version string to 1.0.3 in the crate's dependencies entry.
In `@crates/queue/src/lib.rs`:
- Around line 36-50: RedisQueue currently opens a new TCP connection on every
publish/consume because it only stores redis::Client; change RedisQueue to cache
a reusable async connection (e.g., store redis::aio::ConnectionManager or a
redis::aio::MultiplexedConnection) as a field (replace or add a conn field)
created once in RedisQueue::new, then update EventQueue::publish to use
self.conn.clone() instead of calling get_multiplexed_async_connection(), and
ensure EventQueueConsumer::consume uses a dedicated non-multiplexed blocking
connection or a separate connection manager for blocking BRPOP to avoid
exhausting Redis server connections.
- Around line 92-111: In RedisQueue::consume (EventQueueConsumer impl) switch
from a multiplexed connection to a dedicated non-multiplexed connection by using
client.get_async_connection() instead of get_multiplexed_async_connection(), and
change the BRPOP call to use a finite timeout (e.g., arg(5) instead of arg(0))
so the blocking call returns periodically and the caller (worker_loop) can
observe shutdown/cancellation; keep the existing error mapping/serde logic but
acquire and use the non-multiplexed connection for the blocking BRPOP.
In `@crates/rpc-client/src/lib.rs`:
- Line 83: The HTTP POST currently ignores HTTP error statuses because
send().await? only fails on network errors; in the method containing
self.http_client.post(url).json(&payload).send().await? capture the response
into a variable, call response.error_for_status()? to convert 4xx/5xx into an
Err, and propagate that error (e.g., using ?). Update the code around the
self.http_client.post(...).json(&payload).send().await? expression to perform
this additional check so HTTP error responses are not treated as success.
- Around line 48-54: The HttpBackendClient::new currently constructs
reqwest::Client with reqwest::Client::new(), which leaves no request timeout and
can hang; update HttpBackendClient::new to build the client with a default
timeout (e.g., use
reqwest::Client::builder().timeout(Duration::from_secs(10)).build().unwrap() or
return a Result) so http_client has a finite timeout, and ensure you import
std::time::Duration and handle build() errors appropriately; reference
HttpBackendClient::new, http_client, and reqwest::Client::builder() when making
the change.
- Around line 93-128: Tests app_config_selects_environment_from_env and
backend_url_reads_from_env_with_fallbacks mutate process-wide env via
std::env::set_var/remove_var, which is unsafe for parallel tests; fix by
serializing or avoiding env mutation: annotate the two test functions
(app_config_selects_environment_from_env and
backend_url_reads_from_env_with_fallbacks) with a serializing attribute from the
serial_test crate (e.g., #[serial]) or refactor to call
AppConfig::from_env()/AppConfig::backend_url() with injected config rather than
using std::env::set_var/remove_var, and remove the unsafe blocks around env
manipulation if you choose serialization or eliminate env mutation entirely.
In `@crates/webhook-server/src/main.rs`:
- Around line 41-57: The webhook_handler currently accepts any POST body; add
signature verification before calling normalize_webhook_payload and publishing
to state.queue: extract provider headers (e.g., "X-Paystack-Signature" and
"verif-hash") from the incoming request, retrieve the corresponding secret from
the AppState (e.g., paystack_secret/flutterwave_secret on State(AppState<Q>)),
compute the HMAC (Paystack uses HMAC-SHA512 over the raw body) and perform a
constant-time comparison with the header value, returning
StatusCode::UNAUTHORIZED on mismatch or missing header; only then call
normalize_webhook_payload and state.queue.publish(event). Ensure you reference
webhook_handler, normalize_webhook_payload, and state.queue.publish when making
changes.
In `@crates/worker/Cargo.toml`:
- Line 4: The crate uses edition = "2024" in Cargo.toml but the repository lacks
a pinned toolchain; add a rust-toolchain.toml at the repo root containing
channel = "1.85" (or newer) to ensure Rust 1.85+ is used for builds and CI, so
tools compiling code tied to edition = "2024" will not fail on older compilers.
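As a sketch, the pinned toolchain file could look like the following (the `channel` value comes from the review; the `components` line is an optional assumption, useful if CI runs rustfmt/clippy):

```toml
# rust-toolchain.toml at the repository root
[toolchain]
channel = "1.85"
components = ["rustfmt", "clippy"]
```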
In `@crates/worker/src/main.rs`:
- Around line 23-32: The worker_loop currently returns on any error from
queue.consume() or backend_client.send_transaction(), so change it to catch
errors, log them (using the crate's logger, e.g., tracing::error/warn), and
retry transient failures with exponential backoff (use tokio::time::sleep,
doubling delay up to a cap and a configurable max_retries), while continuing the
outer loop after backoff instead of returning; keep a separate short-circuit for
non-transient/fatal errors if your EventQueueConsumer or BackendClient exposes
error kinds (match on error type or provide is_transient()), and only propagate
or terminate on those fatal cases. Ensure to reference worker_loop,
queue.consume, and backend_client.send_transaction when adding the retry/backoff
and logging.
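The doubling-with-cap backoff described in this prompt can be sketched with the standard library alone; the helper name `backoff_delay` and the concrete base/cap values are illustrative assumptions, not code from this PR:

```rust
use std::time::Duration;

// Illustrative sketch of the suggested retry backoff: double the delay per
// attempt, clamp the shift and the product so the helper never overflows.
fn backoff_delay(attempt: u32, base: Duration, cap: Duration) -> Duration {
    let shift = attempt.saturating_sub(1).min(16); // attempt 0/1 -> base delay
    let factor = 1u32 << shift;
    base.checked_mul(factor).map(|d| d.min(cap)).unwrap_or(cap)
}

fn main() {
    let base = Duration::from_millis(100);
    let cap = Duration::from_secs(5);
    for attempt in 1..=8 {
        println!("attempt {attempt}: {:?}", backoff_delay(attempt, base, cap));
    }
}
```

The shift clamp and `checked_mul` keep the helper total for any `attempt` value, including 0, which matters once `max_retries` becomes configurable.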
In `@docker-compose.yml`:
- Around line 1-11: The docker-compose service "webhook-server" lacks a
dependency on Redis, which can cause it to start before Redis is ready and lead
to connection failures in the Redis-backed queue (RedisQueue); update the
"webhook-server" service to include a depends_on entry referencing the Redis
service (e.g., add depends_on: - redis) so Docker Compose will start Redis
before webhook-server and prevent race conditions during startup.
In `@Dockerfile`:
- Around line 1-25: The Dockerfile hardcodes the binary name ("webhook-server")
and never declares the build arg BIN_NAME, causing services that pass BIN_NAME
(e.g., worker) to build/run the wrong binary; add ARG BIN_NAME with a default
(e.g., webhook-server) and replace occurrences of "webhook-server" in the cargo
build command (-p webhook-server), the copy source path
(/app/target/release/webhook-server), and the ENTRYPOINT (currently
/usr/local/bin/webhook-app) so they use the BIN_NAME value and keep the final
copied file/name consistent (e.g., copy from /app/target/release/${BIN_NAME} to
/usr/local/bin/${BIN_NAME} and set ENTRYPOINT to that path).
---
Duplicate comments:
In `@crates/domain/src/flutterwave.rs`:
- Around line 17-21: The current user_id extraction uses as_str() on
data["customer"]["id"] which returns None for numeric IDs; update the user_id
assignment (the user_id variable that reads data["customer"]["id"] and
data["customer"]["email"]) to first attempt to get the id as a number (e.g.,
as_i64/as_u64) and convert it to a string, then fall back to
data["customer"]["email"].as_str(), and finally default to "" — mirror the same
fix applied in paystack.rs so numeric customer.id values are handled correctly.
---
Nitpick comments:
In `@crates/domain/src/flutterwave.rs`:
- Around line 6-50: The Flutterwave normalizer (normalize_flutterwave_payload)
duplicates logic used by the Paystack normalizer: parse JSON, resolve `data`,
extract fields with fallbacks, and construct TransactionEvent; refactor by
extracting shared helpers (e.g., a resolve_data(&Value) -> &Value,
extract_str_field(&Value, keys: &[&str]) -> String, extract_f64_field(&Value,
keys: &[&str]) -> f64) and a small builder function that takes the resolved data
and a mapping of provider-specific keys to populate TransactionEvent; then
update normalize_flutterwave_payload (and the Paystack normalizer) to use these
helpers to reduce duplication while preserving the same fallbacks and use of
TransactionEvent and metadata.
In `@crates/domain/src/lib.rs`:
- Around line 9-11: The code currently parses the raw webhook JSON twice: in
normalize_webhook_payload and again inside each provider-specific normalizer;
refactor by parsing raw once into a serde_json::Value in
normalize_webhook_payload and change provider normalizer function signatures
(e.g., any functions named like normalize_{provider}_payload or similar) to
accept a &serde_json::Value (or owned Value) instead of &str, then call those
normalizers with the pre-parsed Value and remove their internal
serde_json::from_str calls; update all callers (including
normalize_webhook_payload’s provider routing) to pass the Value and adjust error
handling/return types accordingly.
- Around line 9-17: The error returned from normalize_webhook_payload when the
provider is not matched should include the actual provider string for debugging;
update the final match arm in normalize_webhook_payload to return an anyhow
error that interpolates the provider value (use the existing provider variable,
which is lowercased) into the message instead of the static "unsupported
provider" text so logs show e.g. "unsupported provider: {provider}".
In `@crates/domain/src/paystack.rs`:
- Around line 31-34: The amount extraction uses redundant branches because
serde_json::Value::as_f64() already returns Some for JSON integers; remove the
unnecessary or_else(...) chain and simplify the amount assignment to use
data["amount"].as_f64().unwrap_or(0.0) (referencing the amount variable in
paystack.rs) so the dead-code path with as_i64() is eliminated.
In `@crates/domain/src/transaction.rs`:
- Around line 3-12: The TransactionEvent struct uses f64 for the amount field
which can lose precision for currency; change the type of amount in
TransactionEvent to an exact representation (e.g., i64 representing minor
units/cents or rust_decimal::Decimal) and update serializers/deserializers and
any callers/converters that construct or consume TransactionEvent to use the new
type; specifically, modify the amount field on TransactionEvent, adjust serde
annotations or feature flags if using rust_decimal (e.g., serde feature) or
ensure conversion helpers exist where amounts are produced/parsed so existing
code compiles and preserves exact monetary values.
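A small standard-library demonstration of why the review flags `f64` for money; treating amounts as integer minor units (e.g. kobo or cents) is the assumed alternative here:

```rust
// Demonstrates the precision argument: f64 cannot represent many decimal
// amounts exactly, while integer minor units stay exact under addition.
fn main() {
    // Classic float rounding: 0.1 + 0.2 is not exactly 0.3 in f64.
    let float_sum = 0.1_f64 + 0.2_f64;
    println!("f64: 0.1 + 0.2 = {float_sum:.17}");
    assert!(float_sum != 0.3);

    // The same amounts as integer minor units remain exact.
    let a: i64 = 10; // 0.10 in minor units
    let b: i64 = 20; // 0.20 in minor units
    assert_eq!(a + b, 30);
    println!("minor units: 10 + 20 = {}", a + b);
}
```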
In `@crates/queue/src/lib.rs`:
- Line 43: The current calls like redis::Client::open(redis_url).map_err(|err|
Error::msg(err.to_string())) (and the similar patterns around
serde_json::from_str and tx.send) convert errors to strings and drop the
original error chain; replace these with idiomatic anyhow propagation—either
drop the map_err and use `?` if the surrounding function returns Result<_,
anyhow::Error>, or change the mapper to `.map_err(anyhow::Error::from)` (or
`Into::into`) so the original error type (e.g., redis::RedisError,
serde_json::Error, mpsc::SendError) is preserved; update calls in the code paths
that construct the redis client (redis::Client::open), JSON parsing
(serde_json::from_str), and channel sends (tx.send) accordingly.
- Around line 9-17: Remove the #[async_trait] usages and dependency and convert
the traits and their impls to use native async trait methods: keep the async
method signatures for EventQueue::publish and EventQueueConsumer::consume but
delete the #[async_trait] attribute on the trait definitions and on all impl
blocks that implement those traits (i.e., remove the six #[async_trait]
occurrences mentioned), delete the async_trait crate from Cargo.toml, and ensure
the impls implement async fn publish(...) and async fn consume(...) directly
(toolchain >= 1.75).
In `@crates/rpc-client/Cargo.toml`:
- Around line 6-15: Remove the unnecessary async-trait dependency from
crates/rpc-client/Cargo.toml and replace all uses of the #[async_trait] macro on
the BackendClient trait and its implementations with native async trait syntax:
remove the attribute, declare async fns directly in the trait (async fn ...) and
in each impl use native async fn implementations for the same method names;
ensure there are no Box<dyn BackendClient> usages (only generic C:
BackendClient) before removing the crate.
In `@crates/rpc-client/src/lib.rs`:
- Around line 57-66: TransactionRequestDto duplicates TransactionEvent
field-for-field; remove TransactionRequestDto and stop manual mapping by
serializing TransactionEvent directly wherever TransactionRequestDto is
constructed or passed (search for uses of TransactionRequestDto and the mapping
code that builds it), ensure TransactionEvent derives Serialize (it already
does) and update any function signatures or RPC payload types expecting
TransactionRequestDto to accept/serialize TransactionEvent instead, adjusting
tests and callers accordingly.
In `@crates/webhook-server/src/main.rs`:
- Around line 41-57: The webhook_handler currently discards error details by
using map_err(|_| ...) for normalize_webhook_payload and state.queue.publish;
update webhook_handler to capture the error values from
normalize_webhook_payload(...) and state.queue.publish(...).await, log the error
details (e.g., with tracing::error! or log::error!) including contextual info
(request body or event type) before converting to StatusCode::BAD_REQUEST or
StatusCode::INTERNAL_SERVER_ERROR, and then map or return the appropriate status
code; reference normalize_webhook_payload for the parsing step and
state.queue.publish for the publish step so you log the exact errors from those
calls.
In `@docker-compose.yml`:
- Around line 25-28: Add a Docker healthcheck for the redis service in
docker-compose.yml so other services wait for Redis to accept connections;
update the redis service block (service name "redis") to include a healthcheck
using a small command like "redis-cli ping" with sensible
interval/retries/timeouts, and then change the depends_on clauses in the
"webhook-server" and "worker" service definitions to use condition:
service_healthy for redis so they only start when the redis healthcheck passes.
In `@scripts/start-all.sh`:
- Around line 4-8: Add a signal trap that forwards termination signals to the
background cargo processes and ensures they are cleaned up: after launching the
processes and setting SERVER_PID and WORKER_PID, define a cleanup function that
checks those variables and sends a TERM (and if necessary a KILL) to
"$SERVER_PID" and "$WORKER_PID" and then waits for them, and register that
function with trap for INT, TERM, and EXIT so signals (Ctrl-C) are forwarded and
children are not orphaned; update the script around the SERVER_PID/WORKER_PID
logic and the final wait call to use this cleanup trap.
```rust
let user_id = data["customer"]["id"]
    .as_str()
    .or_else(|| data["customer"]["email"].as_str())
    .unwrap_or("")
    .to_string();
```
customer.id is an integer in real Paystack webhooks — as_str() will always return None.
Paystack's API returns customer.id as a numeric value (e.g., 46123456), not a string. as_str() on a JSON number yields None, so this will silently fall through to the email every time, losing the actual customer ID.
Handle numeric IDs explicitly:
Proposed fix

```diff
-    let user_id = data["customer"]["id"]
-        .as_str()
-        .or_else(|| data["customer"]["email"].as_str())
-        .unwrap_or("")
-        .to_string();
+    let user_id = data["customer"]["id"]
+        .as_str()
+        .map(String::from)
+        .or_else(|| data["customer"]["id"].as_i64().map(|v| v.to_string()))
+        .or_else(|| data["customer"]["email"].as_str().map(String::from))
+        .unwrap_or_default();
```

The same issue exists in crates/domain/src/flutterwave.rs at lines 17-21 — Flutterwave also returns customer.id as an integer.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let user_id = data["customer"]["id"]
    .as_str()
    .map(String::from)
    .or_else(|| data["customer"]["id"].as_i64().map(|v| v.to_string()))
    .or_else(|| data["customer"]["email"].as_str().map(String::from))
    .unwrap_or_default();
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/domain/src/paystack.rs` around lines 17 - 21, The code currently uses
data["customer"]["id"].as_str() which returns None for numeric IDs; update the
user_id extraction to first try as_str(), then try as_i64()/as_u64() (or as_u64
then as_i64) and convert that numeric value to a string, and only then fall back
to data["customer"]["email"].as_str(); apply the same change for the analogous
user_id extraction in flutterwave.rs so numeric customer IDs are preserved.
```rust
pub struct RedisQueue {
    client: redis::Client,
    queue_key: String,
}

impl RedisQueue {
    pub fn new(redis_url: &str, queue_key: impl Into<String>) -> Result<Self, Error> {
        let client = redis::Client::open(redis_url).map_err(|err| Error::msg(err.to_string()))?;

        Ok(Self {
            client,
            queue_key: queue_key.into(),
        })
    }
}
```
RedisQueue opens a new TCP connection on every publish and consume call.
Because RedisQueue holds only a redis::Client and no cached connection, both EventQueue::publish and EventQueueConsumer::consume call get_multiplexed_async_connection() on every invocation. For async connections, connection pooling isn't necessary—the MultiplexedConnection is cheaply cloneable and can be safely reused from multiple threads, so a single connection should be created and reused. Under any meaningful throughput this will exhaust Redis server connections.
Cache the connection (or a ConnectionManager for auto-reconnect) in the struct:
♻️ Suggested refactor

```diff
+use redis::aio::MultiplexedConnection;
+
 #[derive(Clone)]
 pub struct RedisQueue {
-    client: redis::Client,
+    conn: MultiplexedConnection,
     queue_key: String,
 }

 impl RedisQueue {
-    pub fn new(redis_url: &str, queue_key: impl Into<String>) -> Result<Self, Error> {
-        let client = redis::Client::open(redis_url).map_err(|err| Error::msg(err.to_string()))?;
-        Ok(Self {
-            client,
-            queue_key: queue_key.into(),
-        })
+    pub async fn new(redis_url: &str, queue_key: impl Into<String>) -> Result<Self, Error> {
+        let client = redis::Client::open(redis_url).map_err(|err| Error::msg(err.to_string()))?;
+        let conn = client
+            .get_multiplexed_async_connection()
+            .await
+            .map_err(|err| Error::msg(err.to_string()))?;
+        Ok(Self {
+            conn,
+            queue_key: queue_key.into(),
+        })
     }
 }
```

Then use self.conn.clone() in publish instead of creating a new connection. For consume (blocking BRPOP), see the separate comment about using a dedicated non-multiplexed connection.
Alternatively, use the connection-manager feature for transparent reconnection:
```rust
use redis::aio::ConnectionManager;

pub struct RedisQueue {
    conn: ConnectionManager,
    queue_key: String,
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/queue/src/lib.rs` around lines 36 - 50, RedisQueue currently opens a
new TCP connection on every publish/consume because it only stores
redis::Client; change RedisQueue to cache a reusable async connection (e.g.,
store redis::aio::ConnectionManager or a redis::aio::MultiplexedConnection) as a
field (replace or add a conn field) created once in RedisQueue::new, then update
EventQueue::publish to use self.conn.clone() instead of calling
get_multiplexed_async_connection(), and ensure EventQueueConsumer::consume uses
a dedicated non-multiplexed blocking connection or a separate connection manager
for blocking BRPOP to avoid exhausting Redis server connections.
```rust
async fn webhook_handler<Q>(
    State(state): State<AppState<Q>>,
    body: String,
) -> Result<StatusCode, StatusCode>
where
    Q: EventQueue + Clone + 'static,
{
    let event = normalize_webhook_payload(&body).map_err(|_| StatusCode::BAD_REQUEST)?;

    state
        .queue
        .publish(event)
        .await
        .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;

    Ok(StatusCode::OK)
}
```
Missing webhook signature verification — anyone can POST fake events.
Neither Paystack (X-Paystack-Signature, HMAC-SHA512) nor Flutterwave (verif-hash) signatures are validated. Without this, the /webhook endpoint is open to spoofed events, which could inject fraudulent transactions into the queue.
This should be addressed before going to production. At minimum, extract the signature header and verify the HMAC against the raw body using the provider's secret key.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/webhook-server/src/main.rs` around lines 41 - 57, The webhook_handler
currently accepts any POST body; add signature verification before calling
normalize_webhook_payload and publishing to state.queue: extract provider
headers (e.g., "X-Paystack-Signature" and "verif-hash") from the incoming
request, retrieve the corresponding secret from the AppState (e.g.,
paystack_secret/flutterwave_secret on State(AppState<Q>)), compute the HMAC
(Paystack uses HMAC-SHA512 over the raw body) and perform a constant-time
comparison with the header value, returning StatusCode::UNAUTHORIZED on mismatch
or missing header; only then call normalize_webhook_payload and
state.queue.publish(event). Ensure you reference webhook_handler,
normalize_webhook_payload, and state.queue.publish when making changes.
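The constant-time comparison step the prompt calls for can be sketched in plain Rust; real verification would additionally compute the HMAC with a crate such as `hmac`/`sha2`, and `constant_time_eq` is an illustrative name, not an API from this repo:

```rust
// Compares two byte slices without short-circuiting on the first mismatch,
// so response timing does not leak how many leading signature bytes matched.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(constant_time_eq(b"deadbeef", b"deadbeef"));
    assert!(!constant_time_eq(b"deadbeef", b"deadbeee"));
    assert!(!constant_time_eq(b"deadbeef", b"dead"));
    println!("constant-time comparison checks passed");
}
```

In production code, prefer a vetted helper such as `subtle::ConstantTimeEq` or the `Mac::verify_slice` method from the `hmac` crate over a hand-rolled loop.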
```dockerfile
FROM rust:slim-trixie AS builder

WORKDIR /app

# Copy manifests and code
COPY Cargo.toml Cargo.lock ./
COPY crates ./crates

RUN cargo build --release -p webhook-server

FROM debian:trixie-slim

RUN useradd -m appuser

COPY --from=builder /app/target/release/webhook-server /usr/local/bin/webhook-app

RUN chmod +x /usr/local/bin/webhook-app && chown appuser:appuser /usr/local/bin/webhook-app

USER appuser
ENV RUST_LOG=info

ENTRYPOINT ["/usr/local/bin/webhook-app"]
```
BIN_NAME build arg is never declared or used — worker container will run webhook-server instead.
docker-compose.yml passes BIN_NAME: worker for the worker service, but this Dockerfile never declares ARG BIN_NAME and hardcodes webhook-server in the build command (Line 10), binary copy path (Line 17), and entrypoint (Line 25). The worker container will silently build and run the wrong binary.
🐛 Proposed fix: parameterize with ARG
```diff
 FROM rust:slim-trixie AS builder
+ARG BIN_NAME=webhook-server
 WORKDIR /app

 # Copy manifests and code
 COPY Cargo.toml Cargo.lock ./
 COPY crates ./crates

-RUN cargo build --release -p webhook-server
+RUN cargo build --release -p ${BIN_NAME}

 FROM debian:trixie-slim
+ARG BIN_NAME=webhook-server
 RUN useradd -m appuser

-COPY --from=builder /app/target/release/webhook-server /usr/local/bin/webhook-app
+COPY --from=builder /app/target/release/${BIN_NAME} /usr/local/bin/webhook-app

 RUN chmod +x /usr/local/bin/webhook-app && chown appuser:appuser /usr/local/bin/webhook-app

 USER appuser
 ENV RUST_LOG=info

 ENTRYPOINT ["/usr/local/bin/webhook-app"]
```

📝 Committable suggestion
```dockerfile
FROM rust:slim-trixie AS builder
ARG BIN_NAME=webhook-server
WORKDIR /app
# Copy manifests and code
COPY Cargo.toml Cargo.lock ./
COPY crates ./crates
RUN cargo build --release -p ${BIN_NAME}
FROM debian:trixie-slim
ARG BIN_NAME=webhook-server
RUN useradd -m appuser
COPY --from=builder /app/target/release/${BIN_NAME} /usr/local/bin/webhook-app
RUN chmod +x /usr/local/bin/webhook-app && chown appuser:appuser /usr/local/bin/webhook-app
USER appuser
ENV RUST_LOG=info
ENTRYPOINT ["/usr/local/bin/webhook-app"]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Dockerfile` around lines 1 - 25, The Dockerfile hardcodes the binary name
("webhook-server") and never declares the build arg BIN_NAME, causing services
that pass BIN_NAME (e.g., worker) to build/run the wrong binary; add ARG
BIN_NAME with a default (e.g., webhook-server) and replace occurrences of
"webhook-server" in the cargo build command (-p webhook-server), the copy source
path (/app/target/release/webhook-server), and the ENTRYPOINT (currently
/usr/local/bin/webhook-app) so they use the BIN_NAME value and keep the final
copied file/name consistent (e.g., copy from /app/target/release/${BIN_NAME} to
/usr/local/bin/${BIN_NAME} and set ENTRYPOINT to that path).
- Update Redis from 0.25 to 1.0 with connection-manager feature
- Fix connection pooling (multiplexed for publish, sync for BRPOP)
- Add rust-toolchain.toml (pins Rust 1.85)
- Fix pre-commit hook (cargo fmt --check instead of git add -u)
- Fix Paystack/Flutterwave customer.id parsing (handle numeric IDs)
- Add depends_on: redis in docker-compose.yml
- Parameterize Dockerfile with BIN_NAME argument
- Remove problematic Redis integration test (API changed in 1.0)

Test: cargo test --workspace (14 passing)
Actionable comments posted: 3
🧹 Nitpick comments (4)
crates/domain/src/mno.rs (1)
56-62: Status default is inconsistent across normalizers. Airtel's fallback is `"pending"`, while MTN and Orange fall back to `"unknown"`. An unrecognized Airtel status will then be indistinguishable from a true `TP`/pending, causing downstream consumers to act on incomplete information. Align on `"unknown"` for unrecognized values.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/mno.rs` around lines 56 - 62: The status fallback for Airtel is currently set to "pending" in the match that constructs the status variable (the `let status = match json["transaction"]["status"].as_str() { ... }.to_string();` block), which conflicts with MTN/Orange; change the default arm (_) to return "unknown" instead of "pending" so unrecognized Airtel statuses are aligned and distinguishable downstream.

crates/webhook-server/src/main.rs (1)
124-161: Test only exercises the empty-secret bypass path. Because neither `FLUTTERWAVE_WEBHOOK_SECRET` nor `x-signature` is set, `verify_hmac` takes the "dev mode" return-true branch. Once signature verification is corrected per the comment above, this test will no longer cover the handler's real auth path — consider adding positive/negative signature tests with a known secret and computed HMAC.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/webhook-server/src/main.rs` around lines 124 - 161: The test webhook_handler_returns_ok_for_flutterwave_payload currently exercises the dev-mode bypass because neither FLUTTERWAVE_WEBHOOK_SECRET nor an x-signature header is provided; update the test suite to include two cases that exercise real HMAC verification: (1) a positive case that sets a known secret (e.g., via the environment or injecting into AppState), computes the correct HMAC of the request body and sets the x-signature header before sending the request to webhook_handler::<InMemoryQueue>, and asserts StatusCode::OK; and (2) a negative case that uses the same secret but an incorrect/modified signature and asserts a non-OK response. Ensure the tests reference webhook_handler, verify_hmac, FLUTTERWAVE_WEBHOOK_SECRET and x-signature so they cover real auth behavior rather than the empty-secret bypass.

crates/worker/src/main.rs (1)
75-78: Guard `calculate_backoff` against `attempt = 0` and `pow` overflow. `2u32.pow(attempt - 1)` underflows (panics in debug) if `attempt == 0`, and `base * 2u32.pow(n)` panics on `Duration` multiplication overflow for large `n`. Not exploitable today (callers start at 1 and `max_retries = 5`), but trivially breakable by a future tuning change. Consider clamping or using `saturating_*` helpers.

♻️ Defensive rewrite

```diff
 fn calculate_backoff(attempt: u32, base: Duration, max: Duration) -> Duration {
-    let exponential = base * 2u32.pow(attempt - 1);
-    std::cmp::min(exponential, max)
+    let shift = attempt.saturating_sub(1).min(30);
+    let factor = 1u32 << shift;
+    base.checked_mul(factor).map(|d| d.min(max)).unwrap_or(max)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/worker/src/main.rs` around lines 75 - 78: calculate_backoff currently panics for attempt == 0 and can overflow when computing 2.pow(n); update calculate_backoff to early-return base when attempt == 0, compute the exponential multiplier in a checked/saturating way (e.g., use a checked_shl on 1u128 with attempt.saturating_sub(1) and fall back to a saturated max if shifting would overflow), do the multiplication on primitive integers with saturating_mul and then convert/clamp the resulting nanoseconds back into a Duration, finally returning the minimum of that Duration and max; this prevents underflow on attempt-1 and Duration multiplication overflow while keeping the same min(exponential, max) behavior in calculate_backoff.

crates/domain/src/flutterwave.rs (1)
33-36: Remove redundant integer fallback for `amount`. `serde_json::Value::as_f64()` converts both float and integer JSON numbers to `f64`, making the `or_else(|| data["amount"].as_i64().map(|v| v as f64))` fallback unreachable.

♻️ Simplification

```diff
-    let amount = data["amount"]
-        .as_f64()
-        .or_else(|| data["amount"].as_i64().map(|v| v as f64))
-        .unwrap_or(0.0);
+    let amount = data["amount"].as_f64().unwrap_or(0.0);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/flutterwave.rs` around lines 33 - 36: The code creates `amount` from `data["amount"]` using `as_f64().or_else(...as_i64()...)`, which is redundant because `serde_json::Value::as_f64()` already returns integers as `f64`; remove the unreachable `or_else(|| data["amount"].as_i64().map(|v| v as f64))` fallback and set `amount` to `data["amount"].as_f64().unwrap_or(0.0)` (update the binding where `let amount = ...` is defined).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/domain/src/lib.rs`:
- Around line 13-25: The current normalize_webhook_payload reads a "provider"
field from the JSON body which is absent in real provider payloads; change its
signature to accept the path provider (e.g., fn
normalize_webhook_payload(provider: &str, raw: &str) -> Result<TransactionEvent,
anyhow::Error>) and remove the top-level JSON provider extraction; switch the
match to dispatch on the provided provider string ("flutterwave", "paystack",
"mtn-momo", "airtel-money", "orange-money") and forward raw to the appropriate
normalizer (normalize_flutterwave_payload, normalize_paystack_payload, etc.);
then update webhook_handler to call normalize_webhook_payload(&provider, &body)
so the handler passes the URL provider and you avoid double-parsing the body.
In `@crates/domain/src/mno.rs`:
- Around line 64-67: The Airtel and Orange amount parsing sets amount to 0.0
when the JSON provides a numeric value because both only try .as_str() then
parse; update the amount extraction in crates/domain/src/mno.rs (the Airtel and
Orange branches where the variable amount is assigned) to attempt string parsing
first and then fall back to reading a numeric JSON value (use the same
two-branch approach as the MTN normalizer: try
json["transaction"]["amount"].as_str().and_then(|s|
s.parse::<f64>().ok()).or_else(||
json["transaction"]["amount"].as_f64()).unwrap_or(0.0)); ensure you apply this
change to the amount declarations for both Airtel and Orange so numeric amounts
are preserved.
In `@crates/worker/src/main.rs`:
- Around line 39-56: The retry loop currently calls
tokio::time::sleep(delay).await even after the final attempt, delaying
dead-letter handling; update the loop that calls backend_client.send_transaction
in main (references: attempt, max_retries, calculate_backoff,
tokio::time::sleep, event.transaction_ref, last_error) to only sleep when
attempt < max_retries (i.e., skip the delay for the last iteration) so the code
immediately proceeds to the "All N attempts failed" branch when the final try
fails.
---
Nitpick comments:
In `@crates/domain/src/flutterwave.rs`:
- Around line 33-36: The code creates `amount` from `data["amount"]` using
`as_f64().or_else(...as_i64()...)` which is redundant because
`serde_json::Value::as_f64()` already returns integers as `f64`; remove the
unreachable `or_else(|| data["amount"].as_i64().map(|v| v as f64))` fallback and
set `amount` to `data["amount"].as_f64().unwrap_or(0.0)` (update the binding
where `let amount = ...` is defined).
In `@crates/domain/src/mno.rs`:
- Around line 56-62: The status fallback for Airtel is currently set to
"pending" in the match that constructs the status variable (the let status =
match json["transaction"]["status"].as_str() { ... } .to_string(); block), which
conflicts with MTN/Orange; change the default arm (_) to return "unknown"
instead of "pending" so unrecognized Airtel statuses are aligned and
distinguishable downstream.
In `@crates/webhook-server/src/main.rs`:
- Around line 124-161: The test
webhook_handler_returns_ok_for_flutterwave_payload currently exercises the
dev-mode bypass because neither FLUTTERWAVE_WEBHOOK_SECRET nor an x-signature
header is provided; update the test suite to include two cases that exercise
real HMAC verification: (1) a positive case that sets a known secret (e.g., via
the environment or injecting into AppState), computes the correct HMAC of the
request body and sets the x-signature header before sending the request to
webhook_handler::<InMemoryQueue>, and asserts StatusCode::OK; and (2) a negative
case that uses the same secret but an incorrect/modified signature and asserts a
non-OK response. Ensure the tests reference webhook_handler, verify_hmac,
FLUTTERWAVE_WEBHOOK_SECRET and x-signature so they cover real auth behavior
rather than the empty-secret bypass.
In `@crates/worker/src/main.rs`:
- Around line 75-78: calculate_backoff currently panics for attempt == 0 and can
overflow when computing 2.pow(n); update calculate_backoff to early-return base
when attempt == 0, compute the exponential multiplier in a checked/saturating
way (e.g., use a checked_shl on 1u128 with attempt.saturating_sub(1) and fall
back to a saturated max if shifting would overflow), do the multiplication on
primitive integers with saturating_mul (using base.as_nanos() or as_secs_f64
approach) and then convert/clamp the resulting nanoseconds back into a Duration,
finally return the minimum of that Duration and max; this prevents underflow on
attempt-1 and Duration multiplication overflow while keeping the same
min(exponential, max) behavior in calculate_backoff.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2cac8887-6f65-4f88-a717-0c78889bcd44
⛔ Files ignored due to path filters (1)
- Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (14)
- .githooks/pre-commit
- Dockerfile
- crates/domain/src/flutterwave.rs
- crates/domain/src/lib.rs
- crates/domain/src/mno.rs
- crates/domain/src/paystack.rs
- crates/queue/Cargo.toml
- crates/queue/src/lib.rs
- crates/rpc-client/src/lib.rs
- crates/webhook-server/Cargo.toml
- crates/webhook-server/src/main.rs
- crates/worker/src/main.rs
- docker-compose.yml
- rust-toolchain.toml
✅ Files skipped from review due to trivial changes (3)
- rust-toolchain.toml
- crates/queue/Cargo.toml
- docker-compose.yml
🚧 Files skipped from review as they are similar to previous changes (5)
- .githooks/pre-commit
- crates/webhook-server/Cargo.toml
- Dockerfile
- crates/domain/src/paystack.rs
- crates/queue/src/lib.rs
- queue: Cache MultiplexedConnection in RedisQueue instead of creating new connections on every publish/consume
- domain: Remove provider dispatch from JSON body; pass provider as parameter from URL path
- domain/mno: Add numeric fallback for Airtel/Orange amount parsing (handle both string and number)
- worker: Skip sleep after final retry attempt to avoid delaying dead-letter handling
- webhook-server: Extract correct signature headers per provider (X-Paystack-Signature, verif-hash)
- webhook-server: Update RedisQueue::new to async for connection caching
Actionable comments posted: 3
🧹 Nitpick comments (3)
crates/webhook-server/src/main.rs (1)
119-120: `.expect()` in a request handler panics the task.

`HmacSha256::new_from_slice` on `Hmac<Sha256>` never actually fails for any key length, so the `expect` is defensive — but if this helper is ever swapped to a variant that does have size constraints, a panic in `verify_hmac` aborts the handler task and returns `500` to the provider (which will likely retry, amplifying the problem). Prefer propagating `false`:

```diff
-    let mut mac =
-        HmacSha256::new_from_slice(secret.as_bytes()).expect("HMAC can take key of any size");
+    let mut mac = match HmacSha256::new_from_slice(secret.as_bytes()) {
+        Ok(mac) => mac,
+        Err(_) => return false,
+    };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/webhook-server/src/main.rs` around lines 119 - 120: avoid panicking in verify_hmac; replace the `HmacSha256::new_from_slice(...).expect(...)` call so that any error constructing the HMAC returns false instead of panicking, i.e., attempt to create the HMAC with `new_from_slice` and handle the Result via match/if-let/map_or to return false on Err, then proceed to compute and verify the tag, keeping the function signature and behavior (returning bool) intact so a construction error does not abort the handler task.

crates/domain/src/lib.rs (1)
13-16: Two `&str` params create an easy call-site mixup.

Both `raw` and `provider` are `&str`, so it's trivial to accidentally pass them in the wrong order (there's no compile-time protection). Since `provider` is the discriminator that selects the normalizer, consider putting it first for readability and to match the common convention of `(kind, payload)`. This would also align with the suggestion from the earlier review thread.

♻️ Suggested signature

```diff
 pub fn normalize_webhook_payload(
-    raw: &str,
     provider: &str,
+    raw: &str,
 ) -> Result<TransactionEvent, anyhow::Error> {
```

Update the single call site in `crates/webhook-server/src/main.rs` accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/lib.rs` around lines 13 - 16: change the signature of normalize_webhook_payload to take provider first then raw (fn normalize_webhook_payload(provider: &str, raw: &str) -> Result<TransactionEvent, anyhow::Error>) to avoid easy call-site mixups, and update the single call site that invokes normalize_webhook_payload(...) in your webhook server's main function to pass the provider first and the raw payload second; ensure any tests, docs, and imports referencing the old signature are updated accordingly.

crates/queue/src/lib.rs (1)
Verify each finding against the current code and only fix it if needed. In `@crates/domain/src/lib.rs` around lines 13 - 16, Change the signature of normalize_webhook_payload to take provider first then raw (fn normalize_webhook_payload(provider: &str, raw: &str) -> Result<TransactionEvent, anyhow::Error>) to avoid easy call-site mixups, and update the single call site that invokes normalize_webhook_payload(...) in your webhook server's main function to pass the provider first and the raw payload second; ensure any tests, docs, and imports referencing the old signature are updated accordingly.crates/queue/src/lib.rs (1)
148-160: Test constructs a misleading `InMemoryQueue` and doesn't actually assert the "closed" path.

The test creates `(sender, receiver)` on line 150, drops `sender`, then stores an unrelated sender from `mpsc::channel(1).0` into the struct (line 154). The stored sender belongs to a different channel than `receiver`, so the `sender` field is effectively dead weight and doesn't reflect how `InMemoryQueue::new` would ever be used. It happens to pass because `receiver`'s only sender was dropped, but the test would be clearer (and actually exercise the public API) by constructing via `new()` and dropping a clone appropriately — or at minimum, dropping the struct's own `sender` to trigger channel closure.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/queue/src/lib.rs` around lines 148 - 160: the test consume_returns_error_when_queue_closed constructs an InMemoryQueue but drops a different sender than the one stored in the queue, so it doesn't clearly exercise the closed-channel path; fix by constructing the queue with the same channel sender/receiver (or call the public InMemoryQueue::new() if available) and then drop the exact sender held in the queue (or drop a clone of that same sender until no senders remain) before calling InMemoryQueue::consume(), ensuring the stored sender field is the one closed so consume() returns an error.
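The closed-channel condition the test should exercise can be shown with a std-only sketch (tokio's mpsc behaves the same way for this case; the function name is illustrative, not the crate's API):

```rust
use std::sync::mpsc;

// Returns true when the channel reports closure after its only sender is
// dropped. This is the condition the InMemoryQueue test should trigger via
// the queue's *own* stored sender, not an unrelated one.
fn channel_reports_closed_after_sender_drop() -> bool {
    let (sender, receiver) = mpsc::channel::<String>();
    drop(sender); // drop the same sender this receiver belongs to
    receiver.recv().is_err() // the error consume() should surface
}
```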
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/queue/src/lib.rs`:
- Around line 101-110: The consume function currently calls a blocking Redis
BRPOP via get_blocking_connection() and redis::cmd(...).query(&mut conn) inside
async fn consume, which stalls the Tokio runtime; replace the sync blocking call
with an async Redis call by obtaining an async connection (use
client.get_async_connection() or similar) and call query_async() awaited inside
consume, then handle the server timeout by expecting Option<(String,String)>
(i.e., treat nil as None instead of a TypeError) and only deserialize when
Some(payload); if you cannot switch to async client, wrap the existing blocking
BRPOP call in tokio::task::spawn_blocking to avoid blocking the executor,
referencing the consume function, get_blocking_connection, queue_key, and the
BRPOP query site.
In `@crates/webhook-server/src/main.rs`:
- Around line 91-129: verify_signature and verify_hmac are incorrect: normalize
provider via provider.to_ascii_lowercase() (and mirror in extract_signature) so
"Paystack"/"Flutterwave" don't fall through; for Paystack use HMAC-SHA512
(Hmac<Sha512>) and compare hex-encoded digest timing-safely against the Paystack
header; for Flutterwave support two modes—if using legacy verif-hash compare the
header value directly (timing-safe) to FLUTTERWAVE_WEBHOOK_SECRET, otherwise
read the flutterwave-signature header and verify HMAC-SHA256 with base64-encoded
output (decode header and compare timing-safely); and remove the dev fail-open:
do not return true when secret is empty in production—either return false or
require the relevant FLUTTERWAVE_WEBHOOK_SECRET / PAYSTACK_WEBHOOK_SECRET to be
set when APP_ENV == "production".
In `@crates/worker/src/main.rs`:
- Around line 39-63: The retry loop currently retries event.clone() via
BackendClient::send_transaction without an idempotency key and the
HttpBackendClient::send_transaction implementation treats any HTTP response as
success; change the worker's send path to generate an idempotency key (e.g.,
based on event.transaction_ref) and pass it to send_transaction on every
attempt, and update HttpBackendClient::send_transaction to check
response.status().is_success() before returning Ok(()); ensure retries include
the idempotency key in the HTTP request (e.g., an Idempotency-Key header) so the
backend can deduplicate, and preserve the backoff logic in calculate_backoff
while treating non-success HTTP responses as Err so retries behave safely.
---
Nitpick comments:
In `@crates/domain/src/lib.rs`:
- Around line 13-16: Change the signature of normalize_webhook_payload to take
provider first then raw (fn normalize_webhook_payload(provider: &str, raw: &str)
-> Result<TransactionEvent, anyhow::Error>) to avoid easy call-site mixups, and
update the single call site that invokes normalize_webhook_payload(...) in your
webhook server's main function to pass the provider first and the raw payload
second; ensure any tests, docs, and imports referencing the old signature are
updated accordingly.
In `@crates/queue/src/lib.rs`:
- Around line 148-160: The test consume_returns_error_when_queue_closed
constructs an InMemoryQueue but drops a different sender than the one stored in
the queue, so it doesn't clearly exercise the closed-channel path; fix by
constructing the queue with the same channel sender/receiver (or call the public
InMemoryQueue::new() if available) and then drop the exact sender held in the
queue (or drop a clone of that same sender until no senders remain) before
calling InMemoryQueue::consume(), ensuring the stored sender field is the one
closed so consume() returns an error.
In `@crates/webhook-server/src/main.rs`:
- Around line 119-120: Avoid panicking in verify_hmac: replace the
HmacSha256::new_from_slice(...).expect(...) call so that any error constructing
the HMAC returns false instead of panicking; i.e., attempt to create the HMAC
(the variable mac) with new_from_slice and handle the Result via
match/if-let/map_or to return false on Err, then proceed to compute and verify
the tag, keeping the function signature and behavior (returning bool) intact so
a construction error does not abort the handler task.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 08cd59cb-5bc1-4c0d-a6f7-c838243fb7ae
📒 Files selected for processing (5)
- crates/domain/src/lib.rs
- crates/domain/src/mno.rs
- crates/queue/src/lib.rs
- crates/webhook-server/src/main.rs
- crates/worker/src/main.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/domain/src/mno.rs
```rust
fn verify_signature(provider: &str, body: &str, signature: &str) -> bool {
    match provider {
        "flutterwave" => {
            let secret = env::var("FLUTTERWAVE_WEBHOOK_SECRET").unwrap_or_default();
            verify_hmac(body, signature, &secret)
        }
        "paystack" => {
            let secret = env::var("PAYSTACK_WEBHOOK_SECRET").unwrap_or_default();
            verify_hmac(body, signature, &secret)
        }
        _ => {
            // In dev/testing, allow unknown providers
            env::var("APP_ENV").unwrap_or_else(|_| "production".to_string()) != "production"
        }
    }
}

fn verify_hmac(body: &str, signature: &str, secret: &str) -> bool {
    if secret.is_empty() {
        // Skip verification if no secret configured (dev mode)
        return true;
    }

    use hmac::{Hmac, Mac};
    use sha2::Sha256;

    type HmacSha256 = Hmac<Sha256>;

    let mut mac =
        HmacSha256::new_from_slice(secret.as_bytes()).expect("HMAC can take key of any size");
    mac.update(body.as_bytes());

    let result = mac.finalize();
    let expected = hex::encode(result.into_bytes());

    // Timing-safe comparison
    use subtle::ConstantTimeEq;
    expected.as_bytes().ct_eq(signature.as_bytes()).into()
}
```
🧩 Analysis chain
🌐 Web query:
Paystack webhook signature HMAC algorithm header name 2025
💡 Result:
The Paystack webhook signature uses the header name x-paystack-signature and the HMAC SHA512 algorithm. This has remained consistent through 2025 and into 2026 based on official documentation and recent SDK examples.
Citations:
- 1: https://paystack.com/docs/payments/webhooks/
- 2: https://docs-v2.paystack.com/docs/payments/webhooks
- 3: https://refunds-pilot--documentation-v2.netlify.app/payments/webhooks/
- 4: https://www.mintlify.com/efobi-dev/paystack/guides/webhooks
- 5: https://dev.to/ifedayo/handling-paystack-transactions-using-webhooks-4k61
🌐 Web query:
Flutterwave webhook verif-hash header signature verification
💡 Result:
Flutterwave webhook signature verification uses a user-configured secret hash (set in the dashboard, stored as e.g. FLW_SECRET_HASH). There are two documented methods across versions, both recommended for security since webhook endpoints are public.

1. Simple header match (older/legacy, e.g. v3.0 docs): Flutterwave sends the secret hash unchanged in the `verif-hash` header. Verify by direct string comparison with your stored secret hash; if it mismatches, discard (401). Example (Node.js/Express):

```js
const secretHash = process.env.FLW_SECRET_HASH;
const signature = req.headers['verif-hash'];
if (!signature || signature !== secretHash) {
  return res.status(401).end();
}
```

2. HMAC-SHA256 signature (current/newer, e.g. main docs, v4.0, readme.io): Flutterwave computes HMAC-SHA256 of the raw request body using the secret hash as key, base64-encodes it, and sends it in the `flutterwave-signature` header. Recompute the HMAC on your server from the raw body and compare. Example (Node.js):

```js
const crypto = require('crypto');
function isValidFlutterwaveWebhook(rawBody, signature, secretHash) {
  const hash = crypto.createHmac('sha256', secretHash).update(rawBody).digest('base64');
  return hash === signature;
}
// In route:
const flutterwaveSignature = req.headers['flutterwave-signature'];
const isValid = isValidFlutterwaveWebhook(req.rawBody, flutterwaveSignature, process.env.FLW_SECRET_HASH);
if (!isValid) return res.status(401).send('Invalid signature');
```

Note: use the raw body (configure body-parser to capture rawBody before JSON parsing). Respond with 200 OK quickly and offload processing to queues. Always re-verify events via API before acting, and enable retries in the dashboard. Recent docs (published 2025) emphasize HMAC as primary, but code samples show both methods — confirm via dashboard or test webhooks which header your account uses. Prioritize HMAC for stronger security against body tampering.
Citations:
- 1: https://developer.flutterwave.com/docs/webhooks
- 2: https://developer.flutterwave.com/v3.0/docs/webhooks
- 3: https://flutterwavedoc.readme.io/docs/webhooks
- 4: https://flutterwavedoc.readme.io/v4.0/docs/webhooks
🏁 Script executed:
`cd crates/webhook-server && git ls-files src/main.rs | head -1`
Repository: ourpocket/webhook
Length of output: 72

🏁 Script executed:
`cd crates/webhook-server && wc -l src/main.rs`
Repository: ourpocket/webhook
Length of output: 76

🏁 Script executed:
`cd crates/webhook-server && sed -n '50,80p' src/main.rs`
Repository: ourpocket/webhook
Length of output: 988

🏁 Script executed:
`cd crates/webhook-server && rg -n "extract_signature|x-signature|verif-hash|flutterwave-signature|x-paystack" src/main.rs`
Repository: ourpocket/webhook
Length of output: 317

🏁 Script executed:
`cd crates/webhook-server && sed -n '74,130p' src/main.rs`
Repository: ourpocket/webhook
Length of output: 1817
Signature algorithms are fundamentally incompatible with both Paystack and Flutterwave — all real production webhooks will fail verification.

Four critical problems:

1. Wrong HMAC algorithm for Paystack. Paystack requires HMAC-SHA512 (per official docs and SDKs), but `verify_hmac` uses `Hmac<Sha256>`. Every Paystack webhook signature will fail to verify.

2. Wrong verification method for Flutterwave. The code reads the `verif-hash` header (correct) but then runs it through `verify_hmac` with HMAC-SHA256. However:
   - If targeting Flutterwave v3.0 (legacy): `verif-hash` is a plain secret string sent unchanged from the dashboard — it must be compared directly against your stored secret, not against an HMAC digest of the body. The current HMAC approach will never match.
   - If targeting Flutterwave v4.0+ (current): the signature should use the `flutterwave-signature` header with HMAC-SHA256 base64-encoded, not hex-encoded. The header name and encoding are both wrong.

3. Fail-open in production when secrets are missing. Lines 108–112: `verify_hmac` returns `true` if the secret is empty. If `FLUTTERWAVE_WEBHOOK_SECRET` or `PAYSTACK_WEBHOOK_SECRET` are unset in a production deploy, all webhooks are silently accepted — an attacker can inject arbitrary events into the queue. This must fail closed: return `false` in production or refuse to start without secrets, mirroring the unknown-provider check already in place (line 103).

4. Case-sensitive provider dispatch. Both `extract_signature` (line 74) and `verify_signature` (line 92) match the provider string literally against lowercase literals. A URL like `/webhook/Paystack` (capital P) falls through to the `_` arm, triggering header lookup for `x-signature` (wrong) and potentially the dev bypass. Normalize with `provider.to_ascii_lowercase()`.

The unit test passes only because no secret is configured, so the `verify_hmac` branch is skipped entirely.
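Pulling those four fixes together, a minimal std-only sketch of the dispatch logic (the HMAC-SHA512 step is abstracted behind a caller-supplied closure because the real code would use the `hmac`/`sha2` crates; all names here are illustrative, not the crate's actual API):

```rust
// Sketch of fail-closed, case-insensitive signature dispatch. The
// `hmac_sha512_hex` closure stands in for the real HMAC computation.
fn verify_signature(
    provider: &str,
    body: &str,
    signature: &str,
    secret: &str,
    hmac_sha512_hex: impl Fn(&str, &str) -> String,
) -> bool {
    if secret.is_empty() {
        return false; // fail closed: a missing secret must never mean "accept"
    }
    match provider.to_ascii_lowercase().as_str() {
        // Paystack: HMAC-SHA512 over the raw body, hex-encoded.
        "paystack" => {
            constant_time_eq(hmac_sha512_hex(secret, body).as_bytes(), signature.as_bytes())
        }
        // Flutterwave v3: verif-hash carries the configured secret verbatim.
        "flutterwave" => constant_time_eq(secret.as_bytes(), signature.as_bytes()),
        _ => false, // unknown providers are rejected outright
    }
}

// Minimal constant-time comparison; real code should use subtle::ConstantTimeEq.
fn constant_time_eq(a: &[u8], b: &[u8]) -> bool {
    a.len() == b.len() && a.iter().zip(b).fold(0u8, |acc, (x, y)| acc | (x ^ y)) == 0
}
```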
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/webhook-server/src/main.rs` around lines 91 - 129, verify_signature
and verify_hmac are incorrect: normalize provider via
provider.to_ascii_lowercase() (and mirror in extract_signature) so
"Paystack"/"Flutterwave" don't fall through; for Paystack use HMAC-SHA512
(Hmac<Sha512>) and compare hex-encoded digest timing-safely against the Paystack
header; for Flutterwave support two modes—if using legacy verif-hash compare the
header value directly (timing-safe) to FLUTTERWAVE_WEBHOOK_SECRET, otherwise
read the flutterwave-signature header and verify HMAC-SHA256 with base64-encoded
output (decode header and compare timing-safely); and remove the dev fail-open:
do not return true when secret is empty in production—either return false or
require the relevant FLUTTERWAVE_WEBHOOK_SECRET / PAYSTACK_WEBHOOK_SECRET to be
set when APP_ENV == "production".
- queue: Wrap blocking BRPOP in tokio::task::spawn_blocking to avoid runtime thread starvation
- webhook-server: Fix signature verification:
  - Paystack: Use HMAC-SHA512 (not SHA256) per official docs
  - Flutterwave: Direct secret comparison for verif-hash (v3), not HMAC
  - Fail-closed: Reject webhooks when secrets are missing (no fail-open in prod)
  - Normalize provider to lowercase for case-insensitive matching
- rpc-client: Check HTTP status codes (only 2xx = success) and add Idempotency-Key header
- tests: Update webhook test to include verif-hash header and set test secret
Actionable comments posted: 3
♻️ Duplicate comments (4)
crates/rpc-client/src/lib.rs (2)
110-145: ⚠️ Potential issue | 🟠 Major

Avoid mutating process-wide environment variables in parallel tests.

The unsafe blocks satisfy Rust 2024 syntax, but they do not make process-wide env mutation safe or isolated under the parallel test harness. Prefer testing pure parsing helpers, or run these tests in a fully serialized test binary.

A safer direction is to extract pure helpers, for example:

```rust
fn environment_from_value(value: Option<&str>) -> Result<Environment> {
    match value.map(str::trim).map(str::to_ascii_lowercase).as_deref() {
        None | Some("staging") => Ok(Environment::Staging),
        Some("production" | "prod") => Ok(Environment::Production),
        Some(other) => anyhow::bail!("unsupported APP_ENV: {other}"),
    }
}
```

Then the tests can cover `"staging"`, `"production"`, fallbacks, and invalid values without `std::env::set_var`/`remove_var`.

Verification:

```bash
#!/bin/bash
# Check Rust edition and locate tests mutating process-wide environment state.
printf '\nCargo editions:\n'
rg -n 'edition\s*=' --glob 'Cargo.toml'
printf '\nEnvironment mutation in tests/code:\n'
rg -n -C3 'std::env::(set_var|remove_var)\s*\(' --type rust
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc-client/src/lib.rs` around lines 110 - 145, Tests mutate process-wide environment variables (unsafe set_var/remove_var) which is unsafe under parallel test harness; refactor by extracting a pure parser like environment_from_value(value: Option<&str>) -> Result<Environment> and have AppConfig::from_env call that helper (and backend_url continue to use AppConfig.environment). Update the tests app_config_selects_environment_from_env and backend_url_reads_from_env_with_fallbacks to call environment_from_value (and construct AppConfig with explicit Environment) instead of setting/removing std::env vars, and add unit tests for invalid/missing inputs to cover fallbacks without touching process-wide state.
49-52: ⚠️ Potential issue | 🟠 Major

Configure a finite timeout for backend requests.

`reqwest::Client::new()` has no default request timeout, so a stalled backend can block the worker before the retry loop gets an error to handle.

⏱️ Proposed fix

```diff
 use anyhow::Result;
 use domain::TransactionEvent;
 use serde::Serialize;
+use std::time::Duration;

 impl HttpBackendClient {
     pub fn new(config: AppConfig) -> Self {
         Self {
-            http_client: reqwest::Client::new(),
+            http_client: reqwest::Client::builder()
+                .timeout(Duration::from_secs(30))
+                .build()
+                .expect("failed to build HTTP client"),
             config,
         }
     }
 }
```

Verification:

```bash
#!/bin/bash
# Verify reqwest client construction uses a finite timeout.
rg -n -C4 'reqwest::Client::new\(\)|Client::builder\(\)|\.timeout\s*\(' --type rust
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rpc-client/src/lib.rs` around lines 49 - 52, The reqwest client in the constructor Pub fn new(config: AppConfig) uses reqwest::Client::new() which has no request timeout; change the http_client initialization in new() to use reqwest::Client::builder().timeout(Duration::from_secs(30)).build().expect(...) (or handle the Result) and add the necessary std::time::Duration import so all backend requests have a finite timeout; update the http_client field initialization in the same function (pub fn new) accordingly.crates/webhook-server/src/main.rs (1)
93-101: ⚠️ Potential issue | 🟡 Minor

Use constant-time comparison for the Flutterwave secret.

`signature == secret` compares a shared secret with normal string equality. The Paystack branch already uses `ct_eq`; apply the same pattern here.

🔒 Proposed comparison change

```diff
         }
         // Flutterwave v3: verif-hash is the plain secret, compare directly
-        signature == secret
+        use subtle::ConstantTimeEq;
+        signature.as_bytes().ct_eq(secret.as_bytes()).into()
     }
```

```bash
#!/bin/bash
# Description: Verify Flutterwave secret comparison and timing-safe comparison usage.
rg -n -C3 'signature == secret|ct_eq|flutterwave' crates/webhook-server/src/main.rs
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/webhook-server/src/main.rs` around lines 93 - 101, The Flutterwave branch in the webhook signature match uses normal string equality (signature == secret); change it to use the same constant-time comparison used by the Paystack branch (ct_eq) to avoid timing attacks: locate the "flutterwave" match arm (the block that reads FLUTTERWAVE_WEBHOOK_SECRET and returns signature == secret) and replace the equality check with the timing-safe ct_eq call (and add any required use/import if ct_eq is defined elsewhere in the file or module) so the code mirrors the Paystack comparison pattern.crates/queue/src/lib.rs (1)
57-77: ⚠️ Potential issue | 🟠 Major

Avoid reconnecting to Redis on every blocking poll.

`brpop_async` still opens a fresh synchronous Redis connection for every `consume()` call or timeout. Reuse a dedicated blocking connection or a small pool for the consumer path; keep the multiplexed connection for `LPUSH`.

```bash
#!/bin/bash
# Description: Verify whether the Redis consumer path still creates a new blocking
# connection for each BRPOP poll.
rg -n -C4 'fn brpop_async|get_connection\(|BRPOP|fn consume' crates/queue/src/lib.rs
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/queue/src/lib.rs` around lines 57 - 77, brpop_async currently calls client.get_connection() on every blocking poll which reconnects each time; change the consumer path to reuse a dedicated blocking connection or small sync pool instead of opening a new connection per call: add a persistent blocking connection/pool field on the queue struct (e.g., blocking_conn or sync_pool) initialized once when the queue is created, have brpop_async use that stored connection/pool (protect mutable access with a Mutex or use a small r2d2-like pool) instead of calling client.get_connection(), and update consume to rely on brpop_async using the reused connection so BRPOP no longer reconnects on each timeout.
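The "connect once, reuse on every poll" shape this comment asks for can be sketched with std types only. `BlockingConn` below stands in for `redis::Connection`; in the real crate the field would live on `RedisQueue`, and `brpop_async` would lock it inside `tokio::task::spawn_blocking` instead of calling `client.get_connection()`. All names are illustrative:

```rust
use std::sync::Mutex;

// Stand-in for a persistent redis::Connection; `polls` just records reuse.
struct BlockingConn {
    polls: u32,
}

struct QueueConsumer {
    // Created once at construction time, shared by every subsequent poll.
    blocking_conn: Mutex<BlockingConn>,
}

impl QueueConsumer {
    fn new() -> Self {
        // The single connection is established here, not per poll.
        Self {
            blocking_conn: Mutex::new(BlockingConn { polls: 0 }),
        }
    }

    // Every BRPOP-style poll locks and reuses the cached connection
    // rather than reconnecting.
    fn poll(&self) -> u32 {
        let mut conn = self.blocking_conn.lock().expect("lock poisoned");
        conn.polls += 1;
        conn.polls
    }
}
```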
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/rpc-client/src/lib.rs`:
- Around line 17-25: The from_env function currently silently maps unknown or
misspelled APP_ENV values to Staging; change from_env to return
anyhow::Result<Self> and validate the raw environment string (trim whitespace
and normalize case), accept only the explicit variants you support (e.g.,
"production"/"prod" -> Environment::Production and "staging"/"stage" ->
Environment::Staging), and return an Err(anyhow::anyhow!(...)) with the original
value for any other input; update call sites (e.g., where from_env is called in
crates/worker/src/main.rs) to propagate or handle the Result so the process
fails-fast on invalid APP_ENV.
In `@crates/webhook-server/src/main.rs`:
- Around line 58-69: The normalized event returned by normalize_webhook_payload
may contain default/empty values; before calling state.queue.publish(event)
validate that event.transaction_ref, event.user_id, and event.status are present
and non-empty (trimmed) and, if any are missing or empty, return
Err(StatusCode::BAD_REQUEST); perform this check immediately after normalization
and before the println/publish call so invalid events are rejected early and
never enqueued.
- Around line 111-114: The default match arm in verify_signature currently
allows unknown providers only in non-production, which causes supported
providers (mtn-momo, airtel-money, orange-money) to fall through; update
verify_signature to handle each of "mtn-momo", "airtel-money", and
"orange-money" explicitly by adding dedicated match arms that call
provider-specific verification routines (or implement verification logic inline)
and return the correct boolean, or if you aren’t ready to verify them yet,
remove/disable those provider values from the public webhook route so they do
not hit the default arm; refer to verify_signature and normalize_webhook_payload
to ensure provider names match and that the verification result aligns with
production behavior.
---
Duplicate comments:
In `@crates/queue/src/lib.rs`:
- Around line 57-77: brpop_async currently calls client.get_connection() on
every blocking poll which reconnects each time; change the consumer path to
reuse a dedicated blocking connection or small sync pool instead of opening a
new connection per call: add a persistent blocking connection/pool field on the
queue struct (e.g., blocking_conn or sync_pool) initialized once when the queue
is created, have brpop_async use that stored connection/pool (protect mutable
access with a Mutex or use a small r2d2-like pool) instead of calling
client.get_connection(), and update consume to rely on brpop_async using the
reused connection so BRPOP no longer reconnects on each timeout.
In `@crates/rpc-client/src/lib.rs`:
- Around line 110-145: Tests mutate process-wide environment variables (unsafe
set_var/remove_var) which is unsafe under parallel test harness; refactor by
extracting a pure parser like environment_from_value(value: Option<&str>) ->
Result<Environment> and have AppConfig::from_env call that helper (and
backend_url continue to use AppConfig.environment). Update the tests
app_config_selects_environment_from_env and
backend_url_reads_from_env_with_fallbacks to call environment_from_value (and
construct AppConfig with explicit Environment) instead of setting/removing
std::env vars, and add unit tests for invalid/missing inputs to cover fallbacks
without touching process-wide state.
- Around line 49-52: The reqwest client in the constructor Pub fn new(config:
AppConfig) uses reqwest::Client::new() which has no request timeout; change the
http_client initialization in new() to use
reqwest::Client::builder().timeout(Duration::from_secs(30)).build().expect(...)
(or handle the Result) and add the necessary std::time::Duration import so all
backend requests have a finite timeout; update the http_client field
initialization in the same function (pub fn new) accordingly.
In `@crates/webhook-server/src/main.rs`:
- Around line 93-101: The Flutterwave branch in the webhook signature match uses
normal string equality (signature == secret); change it to use the same
constant-time comparison used by the Paystack branch (ct_eq) to avoid timing
attacks: locate the "flutterwave" match arm (the block that reads
FLUTTERWAVE_WEBHOOK_SECRET and returns signature == secret) and replace the
equality check with the timing-safe ct_eq call (and add any required use/import
if ct_eq is defined elsewhere in the file or module) so the code mirrors the
Paystack comparison pattern.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: be894517-208b-477f-8c13-fbf5740762e7
📒 Files selected for processing (3)
- crates/queue/src/lib.rs
- crates/rpc-client/src/lib.rs
- crates/webhook-server/src/main.rs
```rust
pub fn from_env() -> Self {
    let env = std::env::var("APP_ENV").unwrap_or_else(|_| "staging".to_string());
    let environment = match env.to_lowercase().as_str() {
        "production" | "prod" => Environment::Production,
        _ => Environment::Staging,
    };

    Self { environment }
}
```
Fail fast on invalid APP_ENV values.
A typo like `APP_ENV=prodction` currently falls through to `staging`. Since the worker uses this to choose the backend URL, a deploy typo can silently route production events to staging.
🛡️ Proposed fix
```diff
 impl AppConfig {
-    pub fn from_env() -> Self {
-        let env = std::env::var("APP_ENV").unwrap_or_else(|_| "staging".to_string());
-        let environment = match env.to_lowercase().as_str() {
-            "production" | "prod" => Environment::Production,
-            _ => Environment::Staging,
-        };
+    pub fn from_env() -> Result<Self> {
+        let environment = match std::env::var("APP_ENV") {
+            Ok(env) => match env.trim().to_ascii_lowercase().as_str() {
+                "staging" => Environment::Staging,
+                "production" | "prod" => Environment::Production,
+                other => anyhow::bail!("unsupported APP_ENV: {other}"),
+            },
+            Err(_) => Environment::Staging,
+        };
-        Self { environment }
+        Ok(Self { environment })
     }
```

Call sites that already return anyhow::Result, such as crates/worker/src/main.rs, can then use:

```diff
-    let config = AppConfig::from_env();
+    let config = AppConfig::from_env()?;
```
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
pub fn from_env() -> Result<Self> {
    let environment = match std::env::var("APP_ENV") {
        Ok(env) => match env.trim().to_ascii_lowercase().as_str() {
            "staging" => Environment::Staging,
            "production" | "prod" => Environment::Production,
            other => anyhow::bail!("unsupported APP_ENV: {other}"),
        },
        Err(_) => Environment::Staging,
    };

    Ok(Self { environment })
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/rpc-client/src/lib.rs` around lines 17 - 25, The from_env function
currently silently maps unknown or misspelled APP_ENV values to Staging; change
from_env to return anyhow::Result<Self> and validate the raw environment string
(trim whitespace and normalize case), accept only the explicit variants you
support (e.g., "production"/"prod" -> Environment::Production and
"staging"/"stage" -> Environment::Staging), and return an
Err(anyhow::anyhow!(...)) with the original value for any other input; update
call sites (e.g., where from_env is called in crates/worker/src/main.rs) to
propagate or handle the Result so the process fails-fast on invalid APP_ENV.
```rust
let event = normalize_webhook_payload(&body, &provider).map_err(|_| StatusCode::BAD_REQUEST)?;

println!(
    "Received webhook from {} for ref: {}",
    provider, event.transaction_ref
);

state
    .queue
    .publish(event)
    .await
    .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
```
Validate normalized events before publishing.
The domain normalizers default missing fields to empty strings or 0.0, so a parseable webhook with missing required identifiers can still be queued and sent downstream. Reject at least empty transaction_ref, user_id, and status before publish.
🛡️ Proposed guard before enqueueing
```diff
 let event = normalize_webhook_payload(&body, &provider).map_err(|_| StatusCode::BAD_REQUEST)?;
+
+if event.transaction_ref.trim().is_empty()
+    || event.user_id.trim().is_empty()
+    || event.status.trim().is_empty()
+{
+    eprintln!("Invalid normalized webhook payload for provider: {}", provider);
+    return Err(StatusCode::BAD_REQUEST);
+}
 println!(
     "Received webhook from {} for ref: {}",
     provider, event.transaction_ref
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 let event = normalize_webhook_payload(&body, &provider).map_err(|_| StatusCode::BAD_REQUEST)?;
+
+if event.transaction_ref.trim().is_empty()
+    || event.user_id.trim().is_empty()
+    || event.status.trim().is_empty()
+{
+    eprintln!("Invalid normalized webhook payload for provider: {}", provider);
+    return Err(StatusCode::BAD_REQUEST);
+}
+
 println!(
     "Received webhook from {} for ref: {}",
     provider, event.transaction_ref
 );

 state
     .queue
     .publish(event)
     .await
     .map_err(|_| StatusCode::INTERNAL_SERVER_ERROR)?;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/webhook-server/src/main.rs` around lines 58 - 69, The normalized event
returned by normalize_webhook_payload may contain default/empty values; before
calling state.queue.publish(event) validate that event.transaction_ref,
event.user_id, and event.status are present and non-empty (trimmed) and, if any
are missing or empty, return Err(StatusCode::BAD_REQUEST); perform this check
immediately after normalization and before the println/publish call so invalid
events are rejected early and never enqueued.
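The same guard can also live as a method on the event type so the handler and the tests share one definition. A minimal sketch, using a reduced stand-in for the domain's `TransactionEvent` (the real type lives in `crates/domain`):

```rust
// Hypothetical reduced mirror of the domain TransactionEvent, limited to
// the fields the guard checks.
struct TransactionEvent {
    transaction_ref: String,
    user_id: String,
    status: String,
}

impl TransactionEvent {
    // Returns false when any required identifier is empty after trimming,
    // mirroring the guard proposed for the webhook handler.
    fn has_required_fields(&self) -> bool {
        !self.transaction_ref.trim().is_empty()
            && !self.user_id.trim().is_empty()
            && !self.status.trim().is_empty()
    }
}
```

The handler would then reject with `StatusCode::BAD_REQUEST` whenever `event.has_required_fields()` is false, before publishing.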
```rust
        _ => {
            // In dev/testing, allow unknown providers
            env::var("APP_ENV").unwrap_or_else(|_| "production".to_string()) != "production"
        }
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Description: Verify provider coverage between domain normalization and webhook
# signature verification.
rg -n -C2 '"mtn-momo"|"airtel-money"|"orange-money"|verify_signature|extract_signature' \
  crates/domain/src/lib.rs crates/webhook-server/src/main.rs
```

Repository: ourpocket/webhook

🏁 Script executed:

```shell
# Get file size first
wc -l crates/webhook-server/src/main.rs
```

🏁 Script executed:

```shell
# Get the full extract_signature and verify_signature functions
# First, find their line ranges
rg -n "^fn (extract_signature|verify_signature)" crates/webhook-server/src/main.rs
```

🏁 Script executed:

```shell
# Get context around lines 111-114
sed -n '105,120p' crates/webhook-server/src/main.rs
```

🏁 Script executed:

```shell
# Read the full extract_signature function
sed -n '74,90p' crates/webhook-server/src/main.rs
```

🏁 Script executed:

```shell
# Read the full verify_signature function
sed -n '91,115p' crates/webhook-server/src/main.rs
```
Add explicit verification handling for every supported provider.
normalize_webhook_payload in crates/domain/src/lib.rs supports mtn-momo, airtel-money, and orange-money, but the default case in verify_signature (lines 111-114) rejects them in production. These three providers fall through to the default match arm which allows them only in non-production environments. Either add explicit provider-specific verification for those providers or remove/disable them from the public webhook route until supported.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/webhook-server/src/main.rs` around lines 111 - 114, The default match
arm in verify_signature currently allows unknown providers only in
non-production, which causes supported providers (mtn-momo, airtel-money,
orange-money) to fall through; update verify_signature to handle each of
"mtn-momo", "airtel-money", and "orange-money" explicitly by adding dedicated
match arms that call provider-specific verification routines (or implement
verification logic inline) and return the correct boolean, or if you aren’t
ready to verify them yet, remove/disable those provider values from the public
webhook route so they do not hit the default arm; refer to verify_signature and
normalize_webhook_payload to ensure provider names match and that the
verification result aligns with production behavior.
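A minimal sketch of the explicit dispatch the prompt describes, with hypothetical placeholder verifiers. The real implementations would compute an HMAC over the raw request body with each provider's shared secret and compare it to the header value in constant time; the point here is only that no supported provider falls through to a permissive default arm.

```rust
// Sketch: every supported provider gets its own match arm, and the
// default arm always rejects, regardless of APP_ENV.
fn verify_signature(provider: &str, payload: &[u8], signature: &str) -> bool {
    match provider {
        "mtn-momo" => verify_mtn_momo(payload, signature),
        "airtel-money" => verify_airtel(payload, signature),
        "orange-money" => verify_orange(payload, signature),
        // Unknown providers are rejected even outside production.
        _ => false,
    }
}

// Placeholder verifiers (hypothetical): a real verifier would HMAC the
// payload with the provider's secret and compare in constant time.
fn verify_mtn_momo(_payload: &[u8], signature: &str) -> bool {
    !signature.is_empty()
}

fn verify_airtel(_payload: &[u8], signature: &str) -> bool {
    !signature.is_empty()
}

fn verify_orange(_payload: &[u8], signature: &str) -> bool {
    !signature.is_empty()
}
```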
This is amazing! How about we create a folder for holding all structs separately, then import it wherever we want?
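One way to sketch that suggestion: keep shared structs in a single module (in a Cargo workspace like this one, a dedicated crate such as `crates/domain` usually plays that role) and import them wherever they are needed. The names below are illustrative only.

```rust
// Hypothetical shared-types module; in the workspace this would be a
// separate crate re-exported from its lib.rs.
mod types {
    #[derive(Debug, Clone)]
    pub struct TransactionEvent {
        pub transaction_ref: String,
        pub user_id: String,
        pub status: String,
    }
}

// Any consumer imports the shared struct rather than redefining it.
use types::TransactionEvent;

fn describe(event: &TransactionEvent) -> String {
    format!("{} -> {}", event.transaction_ref, event.status)
}
```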
Summary by CodeRabbit

- New Features
- Chores
- Documentation
- Tests