
fix: migrate logging from zap to slog #183

Open
ravisuhag wants to merge 3 commits into main from fix/migrate-logging-to-slog

Conversation

@ravisuhag
Member

Summary

  • Replace go.uber.org/zap with Go's standard log/slog across the entire codebase
  • Add configurable LogLevel to server config (debug/info/warn/error)
  • Reduce verbose DB query logs by setting pgx logger to warn level

Changes

pkg/logger/

  • logger.go: replaced zap init() with Init(logLevel) that configures slog with JSON handler, source info, and configurable level
  • pgx.go: new slog adapter implementing pgx.Logger interface for database logging

config/config.go

  • Added LogLevel field (default: info)

internal/server/server.go

  • Call logger.Init(cfg.LogLevel) at startup
  • Replace all log.Fatalln/log.Fatalf with slog.Error + os.Exit(1)
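The log.Fatal replacement above follows a simple pattern: emit a structured error record first, then exit explicitly, since log.Fatal bypasses slog and prints an unstructured line. A hedged sketch (the exitOnErr helper name is invented for illustration):

```go
package main

import (
	"log/slog"
	"os"
)

// exitOnErr logs a structured error and terminates the process,
// mirroring the slog.Error + os.Exit(1) pattern from the PR.
// It is a no-op when err is nil.
func exitOnErr(msg string, err error) {
	if err == nil {
		return
	}
	slog.Error(msg, "error", err)
	os.Exit(1)
}

func main() {
	var err error // e.g. the result of starting the HTTP server
	exitOnErr("failed to start server", err) // nil: execution continues
}
```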

internal/store/postgres/postgres.go

  • Replace zap adapter with slog adapter
  • Set pgx log level to LogLevelWarn to reduce DB query noise

formats/json/

  • compatibility.go: replace logger.Logger.Warn(fmt.Sprintf(...)) with slog.Warn(..., "type", schemaTypes)
  • schema.go: replace zap warns with structured slog warns including error context

internal/middleware/recovery.go

  • Replace log.Printf with slog.ErrorContext

Removed dependencies: go.uber.org/zap, pgx/v4/log/zapadapter

Closes #178

Test plan

  • go build ./... passes
  • go test ./... passes (except pre-existing formats/protobuf)
  • Server starts with loglevel: debug — DB queries visible
  • Server starts with loglevel: warn — DB queries suppressed
  • Logs output as structured JSON to stderr

@coderabbitai

coderabbitai Bot commented Apr 21, 2026

Warning

Rate limit exceeded

@ravisuhag has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 40 minutes and 35 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 40 minutes and 35 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: e2f8f978-9b6a-4795-8ec6-2db4c9bf115a

📥 Commits

Reviewing files that changed from the base of the PR and between 7ce5801 and ddbd0a1.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (45)
  • .github/workflows/docs.yml
  • .github/workflows/lint.yml
  • .github/workflows/release-server.yml
  • .github/workflows/test-server.yaml
  • .golangci.yaml
  • cmd/check.go
  • cmd/create.go
  • cmd/delete.go
  • cmd/diff.go
  • cmd/download.go
  • cmd/edit.go
  • cmd/graph.go
  • cmd/info.go
  • cmd/list.go
  • cmd/namespace.go
  • cmd/print.go
  • config/build.go
  • config/config.go
  • config/load.go
  • core/namespace/namespace.go
  • core/namespace/service.go
  • core/schema/compatibility.go
  • core/schema/mocks/schema_cache.go
  • core/schema/schema.go
  • core/schema/service.go
  • core/search/search.go
  • core/search/service.go
  • formats/json/compatibility.go
  • formats/json/schema.go
  • formats/json/utils.go
  • formats/protobuf/compatibility.go
  • formats/protobuf/provider_test.go
  • go.mod
  • internal/api/api.go
  • internal/middleware/recovery.go
  • internal/server/server.go
  • internal/store/errors.go
  • internal/store/postgres/namespace_repository.go
  • internal/store/postgres/postgres.go
  • internal/store/postgres/postgres_test.go
  • internal/store/postgres/schema_repository.go
  • pkg/graph/graph.go
  • pkg/logger/logger.go
  • pkg/logger/pgx.go
  • ui/embed.go
📝 Walkthrough

Walkthrough

This PR refactors logging by migrating from Zap to Go’s structured log/slog. It adds Config.LogLevel (default "info") and introduces pkg/logger.Init(logLevel string) to configure a JSON slog logger. A new pkg/logger.PgxLogger adapts pgx logging to slog. Call sites across the codebase were updated to use slog or the new Init-based logger setup; Zap and its adapters were removed from dependencies and go.mod was updated (Go toolchain bumped to 1.25).

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 26.19% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Out of Scope Changes check ❓ Inconclusive While the PR includes some unrelated changes (workflow updates, linting fixes, cache improvements), the core logging migration commits align with PR #183 objectives. Verify that workflow/CI updates, linting, and cache-related changes in commits were intentional or should be separated into distinct PRs for cleaner review scope.
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The PR title accurately captures the main change: migrating from zap to slog logging library across the codebase.
Description check ✅ Passed The description is comprehensive and clearly related to the changeset, detailing the migration strategy and affected files.
Linked Issues check ✅ Passed The PR addresses issue #178 by implementing structured logging with slog, reducing verbose logs via configurable log levels, and suppressing DB query logs with pgx warn level setting.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
internal/store/postgres/postgres.go (1)

41-42: Prefer returning startup errors over panicking from the store layer.

This preserves structured logs and avoids an extra unstructured panic dump on stderr. If changing NewStore’s signature is too large for this PR, consider tracking it as follow-up.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/store/postgres/postgres.go` around lines 41 - 42, The store layer
should return initialization errors instead of panicking: change NewStore (the
constructor around slog.Error and panic(err)) to return (/*store type*/, error),
remove the panic(err) and replace it with returning the error after logging (use
slog.Error(...,"error", err)), and update callers of NewStore to handle the
returned error and propagate or fail gracefully; ensure all references to
panic(err) in postgres.go are removed and replaced with proper error returns.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@go.mod`:
- Line 3: go.mod was bumped to "go 1.25" which requires CI to run Go >=1.21;
update the CI workflow YAMLs that set the go-version (currently "1.20" or
"^1.20") to at least "1.21" or match "1.25" so builds/test use a compatible
toolchain. Locate the CI YAMLs that contain the go-version keys and change them
to a compatible version (e.g., "1.21" or "1.25") or a caret range that includes
1.25, then rerun CI to verify the slog-related build succeeds.

In `@internal/store/postgres/postgres.go`:
- Line 37: The code forces pgx verbosity to Warn by setting
cc.ConnConfig.LogLevel = pgx.LogLevelWarn which prevents Debug/Info DB query
events from reaching the PgxLogger adapter; update NewStore() (or its caller) to
accept the configured log level and map that to cc.ConnConfig.LogLevel (e.g.,
map your slog/config levels to pgx.LogLevelDebug/Info/Warn/Error) so pgx emits
the same verbosity as the application, or alternatively add documentation near
NewStore()/cc.ConnConfig.LogLevel explaining that pgx query logs are
intentionally suppressed below Warn.
- Around line 35-37: ParseConfig’s error is being ignored causing a possible
nil/invalid cc access when setting cc.ConnConfig; change the code around
pgxpool.ParseConfig(conn) so you capture the returned error, check it
immediately, and log/return it before touching cc.ConnConfig or
cc.ConnConfig.Logger (references: pgxpool.ParseConfig, cc, cc.ConnConfig,
logger.PgxLogger, cc.ConnConfig.LogLevel). Ensure the function returns or logs
the ParseConfig error (the same structured error path used later) to avoid panic
and only set ConnConfig.Logger/LogLevel after confirming cc is non-nil and err
== nil.

In `@pkg/logger/pgx.go`:
- Around line 13-17: PgxLogger.Log currently iterates and logs all entries from
the pgx `data` map (including sensitive keys like "args"), so change it to
redact or allowlist before building slog.Attr list: inside PgxLogger.Log inspect
`data` and either (a) use an allowlist of safe keys (e.g., "sql", "command_tag",
"duration", "rows") and only append those to `attrs`, or (b) for any key that is
not explicitly safe (e.g., "args", "password", "params"), append a placeholder
value like "<redacted>" instead of the real value; ensure you still preserve
types for safe fields and that the logic is applied before creating `attrs`.
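The allowlist option described in the comment above can be sketched as a small helper. The set of "safe" key names is an assumption about what pgx v4 puts in the data map, not a verified list:

```go
package main

import "fmt"

// safePgxKeys enumerates pgx data-map keys assumed safe to log verbatim.
var safePgxKeys = map[string]bool{
	"sql": true, "commandTag": true, "time": true, "rowCount": true,
}

// redactAttrs converts a pgx data map into slog key/value attributes,
// replacing any non-allowlisted value (query args, params) with a
// placeholder so sensitive values never reach the logs.
func redactAttrs(data map[string]interface{}) []any {
	attrs := make([]any, 0, len(data)*2)
	for k, v := range data {
		if !safePgxKeys[k] {
			v = "<redacted>"
		}
		attrs = append(attrs, k, v)
	}
	return attrs
}

func main() {
	fmt.Println(redactAttrs(map[string]interface{}{"args": []any{"secret"}}))
}
```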

---

Nitpick comments:
In `@internal/store/postgres/postgres.go`:
- Around line 41-42: The store layer should return initialization errors instead
of panicking: change NewStore (the constructor around slog.Error and panic(err))
to return (/*store type*/, error), remove the panic(err) and replace it with
returning the error after logging (use slog.Error(...,"error", err)), and update
callers of NewStore to handle the returned error and propagate or fail
gracefully; ensure all references to panic(err) in postgres.go are removed and
replaced with proper error returns.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7c260f5a-e979-49d8-aa98-527c079b6c39

📥 Commits

Reviewing files that changed from the base of the PR and between fc6be21 and 32c93e9.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (9)
  • config/config.go
  • formats/json/compatibility.go
  • formats/json/schema.go
  • go.mod
  • internal/middleware/recovery.go
  • internal/server/server.go
  • internal/store/postgres/postgres.go
  • pkg/logger/logger.go
  • pkg/logger/pgx.go

Comment thread go.mod
Comment on lines 35 to +37
cc, _ := pgxpool.ParseConfig(conn)
cc.ConnConfig.Logger = zapadapter.NewLogger(logger.Logger)
cc.ConnConfig.Logger = &logger.PgxLogger{}
cc.ConnConfig.LogLevel = pgx.LogLevelWarn

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's locate and read the file
find . -name "postgres.go" -path "*/store/postgres/*" | head -5

Repository: raystack/stencil

Length of output: 97


🏁 Script executed:

# Read the relevant section of the file
cat -n internal/store/postgres/postgres.go | head -50

Repository: raystack/stencil

Length of output: 1611


🏁 Script executed:

# Check pgx library usage and imports
rg "pgxpool|pgx" internal/store/postgres/postgres.go -B 2 -A 2

Repository: raystack/stencil

Length of output: 867


🏁 Script executed:

# Search for any test files or usage examples that show ParseConfig error handling
rg "ParseConfig" --type go -B 2 -A 2

Repository: raystack/stencil

Length of output: 431


🌐 Web query:

pgxpool.ParseConfig error handling pgx v4 nil pointer

💡 Result:

In pgx v4, pgxpool.ParseConfig parses a PostgreSQL connection string and returns a *pgxpool.Config or an error. It delegates initial parsing to pgx.ParseConfig (which calls pgconn.ParseConfig), then processes pool-specific parameters such as pool_max_conns from RuntimeParams, setting defaults if absent (e.g., MaxConns=4). Errors from invalid parameters (e.g., a non-integer pool_max_conns) are returned as pgconn.ParseConfigError via pgconn.NewParseConfigError. There is no evidence of nil-pointer panics in ParseConfig itself across source code and issues.

Nil-pointer dereferences in pgx v4 pgxpool typically occur elsewhere, such as:
  • Passing nil pointers to Scan methods, causing panics in pgtype scanning that previously hung the pool on Close due to unreleased connections (fixed in later v5, but similar behavior is possible in v4).
  • Using a nil *pgxpool.Pool (e.g., from a failed New/Connect where err is ignored and := shadows the variable).
  • Type-encoding issues with nil interfaces/pointers in queries (more common in v5).

To handle ParseConfig errors safely:

config, err := pgxpool.ParseConfig(connString)
if err != nil {
    // Handle error, e.g., log and return/fail
    return nil, fmt.Errorf("failed to parse config: %w", err)
}
// Modify config as needed, e.g., config.MaxConns = 10
pool, err := pgxpool.NewWithConfig(ctx, config)
if err != nil {
    return nil, fmt.Errorf("failed to create pool: %w", err)
}
defer pool.Close()

Always check err != nil after ParseConfig and NewWithConfig/New/ConnectConfig. Never ignore err or use the pool if err != nil. For query operations, always pass addresses (&var) to Scan to avoid nil-pointer panics.


🏁 Script executed:

# Check if there are any other ParseConfig calls in the codebase that handle errors correctly
rg "ParseConfig" --type go -A 5 | grep -A 3 "err"

Repository: raystack/stencil

Length of output: 213


Handle ParseConfig errors before touching ConnConfig.

If the connection string is invalid, ParseConfig returns an error that is silently ignored with _. Accessing cc.ConnConfig on line 36 when cc may be in an undefined state can panic before the intended structured error log on line 41.

Proposed fix
 func NewStore(conn string) *DB {
-	cc, _ := pgxpool.ParseConfig(conn)
+	cc, err := pgxpool.ParseConfig(conn)
+	if err != nil {
+		slog.Error("failed to parse database config", "error", err)
+		panic(err)
+	}
 	cc.ConnConfig.Logger = &logger.PgxLogger{}
 	cc.ConnConfig.LogLevel = pgx.LogLevelWarn
 
 	pgxPool, err := pgxpool.ConnectConfig(context.Background(), cc)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
cc, _ := pgxpool.ParseConfig(conn)
cc.ConnConfig.Logger = zapadapter.NewLogger(logger.Logger)
cc.ConnConfig.Logger = &logger.PgxLogger{}
cc.ConnConfig.LogLevel = pgx.LogLevelWarn
func NewStore(conn string) *DB {
cc, err := pgxpool.ParseConfig(conn)
if err != nil {
slog.Error("failed to parse database config", "error", err)
panic(err)
}
cc.ConnConfig.Logger = &logger.PgxLogger{}
cc.ConnConfig.LogLevel = pgx.LogLevelWarn
pgxPool, err := pgxpool.ConnectConfig(context.Background(), cc)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/store/postgres/postgres.go` around lines 35 - 37, ParseConfig’s
error is being ignored causing a possible nil/invalid cc access when setting
cc.ConnConfig; change the code around pgxpool.ParseConfig(conn) so you capture
the returned error, check it immediately, and log/return it before touching
cc.ConnConfig or cc.ConnConfig.Logger (references: pgxpool.ParseConfig, cc,
cc.ConnConfig, logger.PgxLogger, cc.ConnConfig.LogLevel). Ensure the function
returns or logs the ParseConfig error (the same structured error path used
later) to avoid panic and only set ConnConfig.Logger/LogLevel after confirming
cc is non-nil and err == nil.

Comment thread internal/store/postgres/postgres.go
Comment thread pkg/logger/pgx.go
@coveralls

coveralls commented Apr 21, 2026

Coverage Report for CI Build 24734032200

Warning

No base build found for commit fc6be21 on main.
Coverage changes can't be calculated without a base build.
If a base build is processing, this comment will update automatically when it completes.

Coverage: 17.686%

Details

  • Patch coverage: 126 uncovered changes across 24 files (28 of 154 lines covered, 18.18%).

Uncovered Changes

Top 10 Files by Coverage Impact

File Changed Covered %
pkg/logger/pgx.go 17 0 0.0%
core/schema/service.go 16 0 0.0%
pkg/logger/logger.go 16 0 0.0%
cmd/print.go 12 0 0.0%
internal/store/postgres/schema_repository.go 22 10 45.45%
internal/server/server.go 10 0 0.0%
cmd/diff.go 6 0 0.0%
internal/store/postgres/namespace_repository.go 11 5 45.45%
internal/api/api.go 8 4 50.0%
cmd/check.go 3 0 0.0%

Coverage Regressions

Requires a base build to compare against. How to fix this →


Coverage Stats

Coverage Status
Relevant Lines: 5100
Covered Lines: 902
Line Coverage: 17.69%
Coverage Strength: 0.21 hits per line

💛 - Coveralls

@ravisuhag ravisuhag mentioned this pull request Apr 21, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
formats/protobuf/provider_test.go (1)

58-62: ⚠️ Potential issue | 🟡 Minor

Unchecked error from rand.Read.

crypto/rand.Read returns an error that is being discarded. While Go 1.24+ documents that it never returns a non-nil error (panicking instead on failure), errcheck/linters may still flag this, and being explicit is safer for forward compatibility. Consider handling it:

Proposed fix
 func getRandomName() string {
 	b := make([]byte, 10)
-	rand.Read(b)
+	if _, err := rand.Read(b); err != nil {
+		panic(err)
+	}
 	return hex.EncodeToString(b)
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@formats/protobuf/provider_test.go` around lines 58 - 62, The getRandomName
function currently ignores the error returned by crypto/rand.Read; update
getRandomName to capture and handle that error (e.g., check err := rand.Read(b)
and if err != nil either return a deterministic fallback or fail fast with a
panic/log.Fatalf) so linters won’t flag the unchecked error; modify the function
named getRandomName to perform the err check and handle the error path
accordingly.
formats/protobuf/compatibility.go (1)

113-210: ⚠️ Potential issue | 🟡 Minor

Unrelated change in a logging-migration PR.

This file tightens reserved-range inclusivity checks (AND → OR) in compareMessages (line 118) and compareEnums (line 175), which is unrelated to the stated objective of migrating from zap to slog. Mixing behavioral changes to compatibility semantics into a logging refactor PR makes review/bisect harder and risks regressions slipping past reviewers focused on logging. Consider splitting this into its own PR with a dedicated description and, ideally, test cases that specifically exercise the new "either endpoint missing" path (the existing test fixtures in compatibility_test.go appear to already satisfy both the old AND and the new OR condition, so the stricter semantics may not be explicitly covered).

Also note the PR description states tests pass "except pre-existing formats/protobuf" — please confirm whether that failure is actually pre-existing or was introduced/affected by this semantic change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@formats/protobuf/compatibility.go` around lines 113 - 210, Revert the
tightened reserved-range check back to the original "both endpoints must be
present" semantics in compareMessages and compareEnums: in compareMessages
(function compareMessages) change the condition that checks
current.ReservedRanges() for start and end-1 to require both Has(start) AND
Has(end-1) (restore && instead of ||), and in compareEnums (function
compareEnums) require both Has(start) AND Has(end) for non-inclusive detection
(restore &&); keep the special-case handling when start==end unchanged and
ensure the nonInclusivereservedRange diagnostic is used as before. Also remove
this behavioral change from the logging-migration PR and split it into its own
PR with focused tests exercising the endpoint-missing paths so
compatibility_test.go covers the new semantics.
.github/workflows/test-server.yaml (1)

22-36: ⚠️ Potential issue | 🟠 Major

Update docker-compose.yml to postgres:15 to match test workflow.

The test workflow now uses postgres:15, but docker-compose.yml still specifies postgres:13. This causes CI/local developer divergence.

SQL migrations use only generic, backwards-compatible syntax (standard CREATE TABLE, JSONB, GIN indexes) — no postgres 13-specific features found, so the version bump is safe. pgx v4.18.3 (pinned in go.mod) is compatible with postgres 15 per the official upstream support policy.

Update docker-compose.yml line 4 from postgres:13 to postgres:15.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/test-server.yaml around lines 22 - 36, Change the postgres
service image in docker-compose from postgres:13 to postgres:15 to match the
test workflow; locate the docker-compose postgres service (the "postgres"
service block and its "image" key) and update the image value from "postgres:13"
to "postgres:15". Ensure no other references to postgres:13 remain in the
compose file.
♻️ Duplicate comments (1)
internal/store/postgres/postgres.go (1)

35-37: ⚠️ Potential issue | 🔴 Critical

Check ParseConfig before dereferencing the config.

pgxpool.ParseConfig still discards err; an invalid DSN can leave cc unusable before the structured connect log runs. Line 37 also forces pgx to Warn regardless of app log level, so either keep that suppression documented here or wire it to config if DB debug logs are expected.

🐛 Proposed fix
 func NewStore(conn string) *DB {
-	cc, _ := pgxpool.ParseConfig(conn)
+	cc, err := pgxpool.ParseConfig(conn)
+	if err != nil {
+		slog.Error("failed to parse database config", "error", err)
+		panic(err)
+	}
 	cc.ConnConfig.Logger = &logger.PgxLogger{}
+	// Keep pgx at warn to suppress noisy query logs regardless of app log level.
 	cc.ConnConfig.LogLevel = pgx.LogLevelWarn
 
 	pgxPool, err := pgxpool.ConnectConfig(context.Background(), cc)

Verification:

#!/bin/bash
# Description: Inspect pgx config parsing and log-level wiring.
# Expect: ParseConfig errors are checked before ConnConfig is used, and pgx log-level policy is documented or config-driven.
rg -n -C3 '\bpgxpool\.ParseConfig\(|ConnConfig\.Logger|ConnConfig\.LogLevel|LogLevel' --type go
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/store/postgres/postgres.go` around lines 35 - 37, Check the error
returned by pgxpool.ParseConfig before using the returned config (avoid
dereferencing cc when ParseConfig returns an error) and handle or return that
error from the function; locate the pgxpool.ParseConfig call and replace the
unchecked pattern that then sets cc.ConnConfig.Logger and cc.ConnConfig.LogLevel
so you only set cc.ConnConfig.Logger = &logger.PgxLogger{} and
cc.ConnConfig.LogLevel when cc is valid. Also make the pgx log level
configurable (wire ConnConfig.LogLevel to your app DB debug setting) or add a
clear comment documenting why LogLevel is forced to pgx.LogLevelWarn (adjust the
code paths that set ConnConfig.LogLevel to use the app config variable instead
of hardcoding).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/release-server.yml:
- Around line 5-6: The tags filter in release-server.yml has been broadened to
"- "v*"" which will trigger the workflow for any tag starting with "v"; to
restrict releases to semver tags change that value back to a semver pattern such
as "- "v*.*.*"" or use a stricter regex-like pattern (e.g.
"v[0-9]+.[0-9]+.[0-9]+(-.*)?") in the tags: field; if the wider matching is
intentional for pre-release/short tags, leave "v*" as-is, otherwise replace the
current "v*" entry in the tags: list with one of the semver patterns ("v*.*.*"
or the regex form).

In @.github/workflows/test-server.yaml:
- Around line 53-58: The "Install goveralls and send coverage" step currently
uses a floating go install (`@latest`) and runs only on successful jobs; change
the go install invocation to pin goveralls to a fixed released tag (replace
github.com/mattn/goveralls@latest with
github.com/mattn/goveralls@<specific-version>) so CI is reproducible, and add
if: always() to the step so coverage upload runs even when tests fail (leave
COVERALLS_TOKEN and -service=github as-is).

In `@core/schema/service.go`:
- Around line 127-134: The code calls s.repo.ListVersions and ignores its error,
then proceeds to delete storage and remove cache entries; if ListVersions fails
but Delete succeeds this leaves stale cache keys. Change the flow in the Delete
handler (where s.repo.ListVersions, s.repo.Delete, s.cache.Del and schemaKeyFunc
are used) to first call s.repo.ListVersions and if it returns an error return
that error immediately; only when ListVersions succeeds proceed to call
s.repo.Delete and, on successful delete, iterate versions and call
s.cache.Del(schemaKeyFunc(...)) to evict keys.

In `@formats/protobuf/compatibility.go`:
- Line 118: The check using current.ReservedRanges().Has(start) and Has(end-1)
only verifies endpoints and can miss interior gaps; update the logic in the
field-range validation (the block using variables current, start, end and
calling ReservedRanges().Has and dealing with protoreflect.FieldRange) to ensure
full coverage of [start,end) by either iterating i from start to end-1 and
asserting current.ReservedRanges().Has(i) for each i, or by checking that a
single existing current range fully contains start and end-1; apply the
analogous fix for enum reserved ranges (the code handling protoreflect.EnumRange
/ inclusive [start,end] at the other check) so the entire integer span is
validated rather than only endpoints.

In `@internal/api/api.go`:
- Around line 115-120: The handler currently calls
w.WriteHeader(http.StatusCreated) before json.NewEncoder(w).Encode(sc), so if
encoding fails you cannot change the committed status; fix by encoding to a
temporary buffer (e.g., bytes.Buffer) first or by encoding before calling
w.WriteHeader, then only call w.WriteHeader and copy the buffer to w on success;
if encoding fails log the error via slog (do not call http.Error after the
response is committed) and return. Apply the same pattern in the writeError
path: do not call http.Error after WriteHeader — instead prepare the error body
before committing the status and log any encoding/writing errors with slog.

In `@internal/store/postgres/postgres_test.go`:
- Line 20: The teardown currently ignores the result of m.Down(), which can
leave the test DB in a dirty state; update the test to check m.Down()'s returned
error and fail the test if it is non-nil (e.g., call t.Fatalf or require.NoError
with the error) so teardown errors are surfaced and tests do not continue
silently with a bad DB state; locate the call to m.Down() in the test (the
m.Down() invocation) and replace the silent discard with an explicit error
assertion.

In `@internal/store/postgres/schema_repository.go`:
- Around line 88-99: The delete should be combined with the orphan cleanup
inside a single DB transaction so both succeed or both roll back: replace the
separate r.db.Exec calls for deleteSchemaQuery and deleteOrphanedData in the
method that currently checks ct.RowsAffected() with a r.db.BeginFunc transaction
block, perform the DELETE (deleteSchemaQuery) and ensure rowsAffected > 0
(return store.NoRowsErr via wrapError if zero) then run the orphan cleanup
(deleteOrphanedData) inside the same transaction; propagate any error so the
transaction rolls back and cache invalidation logic in core/schema/service.go
only runs when both ops committed. Ensure this change is applied similarly to
the other occurrence around lines 109-120.
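The core/schema/service.go ordering suggested above (list versions first, abort on error, delete, then evict cache keys only after a successful delete) can be sketched with invented repo/cache interfaces — the shapes and key format below are illustrative, not the project's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

type repo interface {
	ListVersions(ns, name string) ([]int32, error)
	Delete(ns, name string) error
}

type cache interface{ Del(key string) }

// deleteSchema lists versions before deleting so a ListVersions failure
// aborts the whole operation; cache eviction runs only after a
// successful delete, so no stale keys survive a failed listing.
func deleteSchema(r repo, c cache, ns, name string) error {
	versions, err := r.ListVersions(ns, name)
	if err != nil {
		return fmt.Errorf("list versions: %w", err)
	}
	if err := r.Delete(ns, name); err != nil {
		return err
	}
	for _, v := range versions {
		c.Del(fmt.Sprintf("%s-%s-%d", ns, name, v)) // assumed key format
	}
	return nil
}

type fakeRepo struct{ failList bool }

func (f fakeRepo) ListVersions(ns, name string) ([]int32, error) {
	if f.failList {
		return nil, errors.New("db down")
	}
	return []int32{1, 2}, nil
}
func (f fakeRepo) Delete(ns, name string) error { return nil }

type fakeCache struct{ deleted []string }

func (f *fakeCache) Del(k string) { f.deleted = append(f.deleted, k) }

func main() {
	c := &fakeCache{}
	_ = deleteSchema(fakeRepo{}, c, "ns", "s")
	fmt.Println(len(c.deleted)) // evictions happen only on success
}
```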

---

Outside diff comments:
In @.github/workflows/test-server.yaml:
- Around line 22-36: Change the postgres service image in docker-compose from
postgres:13 to postgres:15 to match the test workflow; locate the docker-compose
postgres service (the "postgres" service block and its "image" key) and update
the image value from "postgres:13" to "postgres:15". Ensure no other references
to postgres:13 remain in the compose file.

In `@formats/protobuf/compatibility.go`:
- Around line 113-210: Revert the tightened reserved-range check back to the
original "both endpoints must be present" semantics in compareMessages and
compareEnums: in compareMessages (function compareMessages) change the condition
that checks current.ReservedRanges() for start and end-1 to require both
Has(start) AND Has(end-1) (restore && instead of ||), and in compareEnums
(function compareEnums) require both Has(start) AND Has(end) for non-inclusive
detection (restore &&); keep the special-case handling when start==end unchanged
and ensure the nonInclusivereservedRange diagnostic is used as before. Also
remove this behavioral change from the logging-migration PR and split it into
its own PR with focused tests exercising the endpoint-missing paths so
compatibility_test.go covers the new semantics.

In `@formats/protobuf/provider_test.go`:
- Around line 58-62: The getRandomName function currently ignores the error
returned by crypto/rand.Read; update getRandomName to capture and handle that
error (e.g., check err := rand.Read(b) and if err != nil either return a
deterministic fallback or fail fast with a panic/log.Fatalf) so linters won’t
flag the unchecked error; modify the function named getRandomName to perform the
err check and handle the error path accordingly.

---

Duplicate comments:
In `@internal/store/postgres/postgres.go`:
- Around line 35-37: Check the error returned by pgxpool.ParseConfig before
using the returned config (avoid dereferencing cc when ParseConfig returns an
error) and handle or return that error from the function; locate the
pgxpool.ParseConfig call and replace the unchecked pattern that then sets
cc.ConnConfig.Logger and cc.ConnConfig.LogLevel so you only set
cc.ConnConfig.Logger = &logger.PgxLogger{} and cc.ConnConfig.LogLevel when cc is
valid. Also make the pgx log level configurable (wire ConnConfig.LogLevel to
your app DB debug setting) or add a clear comment documenting why LogLevel is
forced to pgx.LogLevelWarn (adjust the code paths that set ConnConfig.LogLevel
to use the app config variable instead of hardcoding).
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: d264e4ff-4205-4118-b44d-45cbfa954317

📥 Commits

Reviewing files that changed from the base of the PR and between 32c93e9 and 7ce5801.

📒 Files selected for processing (38)
  • .github/workflows/docs.yml
  • .github/workflows/lint.yml
  • .github/workflows/release-server.yml
  • .github/workflows/test-server.yaml
  • .golangci.yaml
  • cmd/check.go
  • cmd/create.go
  • cmd/delete.go
  • cmd/diff.go
  • cmd/download.go
  • cmd/edit.go
  • cmd/graph.go
  • cmd/info.go
  • cmd/list.go
  • cmd/namespace.go
  • cmd/print.go
  • config/build.go
  • config/load.go
  • core/namespace/namespace.go
  • core/namespace/service.go
  • core/schema/compatibility.go
  • core/schema/mocks/schema_cache.go
  • core/schema/schema.go
  • core/schema/service.go
  • core/search/search.go
  • core/search/service.go
  • formats/json/utils.go
  • formats/protobuf/compatibility.go
  • formats/protobuf/provider_test.go
  • internal/api/api.go
  • internal/store/errors.go
  • internal/store/postgres/namespace_repository.go
  • internal/store/postgres/postgres.go
  • internal/store/postgres/postgres_test.go
  • internal/store/postgres/schema_repository.go
  • pkg/graph/graph.go
  • pkg/logger/pgx.go
  • ui/embed.go
✅ Files skipped from review due to trivial changes (18)
  • internal/store/errors.go
  • cmd/create.go
  • cmd/info.go
  • cmd/edit.go
  • pkg/graph/graph.go
  • ui/embed.go
  • config/build.go
  • cmd/download.go
  • config/load.go
  • core/search/search.go
  • cmd/check.go
  • cmd/delete.go
  • core/schema/compatibility.go
  • core/namespace/namespace.go
  • core/namespace/service.go
  • core/search/service.go
  • formats/json/utils.go
  • cmd/list.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • pkg/logger/pgx.go

Comment on lines 5 to +6

  tags:
-   - "v*.*.*"
+   - "v*"
⚠️ Potential issue | 🟡 Minor

Tag filter widened from semver to any v* tag.

Changing `v*.*.*` to `v*` means any tag starting with `v` (e.g. `v-test`, `valpha`, `v1`, `v2-rc`) will now trigger the release + Docker publish workflow. If that's intentional to support pre-release / short tags, ignore; otherwise consider restricting to semver-shaped tags, e.g.:

-      - "v*"
+      - "v[0-9]+.[0-9]+.[0-9]+"
+      - "v[0-9]+.[0-9]+.[0-9]+-*"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/release-server.yml around lines 5 - 6, The tags filter in
release-server.yml has been broadened to "- "v*"" which will trigger the
workflow for any tag starting with "v"; to restrict releases to semver tags
change that value back to a semver pattern such as "- "v*.*.*"" or use a
stricter regex-like pattern (e.g. "v[0-9]+.[0-9]+.[0-9]+(-.*)?") in the tags:
field; if the wider matching is intentional for pre-release/short tags, leave
"v*" as-is, otherwise replace the current "v*" entry in the tags: list with one
of the semver patterns ("v*.*.*" or the regex form).

Comment on lines +53 to +58
- name: Install goveralls and send coverage
env:
COVERALLS_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
go install github.com/mattn/goveralls@latest
goveralls -coverprofile=coverage.out -service=github
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify that `make test` produces coverage.out at the repo root and check the test target definition.
fd -HI -t f '^Makefile$' --max-depth 3
rg -nP -C2 '^(test|coverage)\s*:' Makefile 2>/dev/null || true
rg -n 'coverage\.out|-coverprofile' Makefile 2>/dev/null || true

Repository: raystack/stencil

Length of output: 518


🏁 Script executed:

#!/bin/bash
# Check if v0.0.12 is a valid version and look for available versions of goveralls
curl -s https://api.github.com/repos/mattn/goveralls/releases | jq -r '.[0:5] | .[] | "\(.tag_name)"' 2>/dev/null || echo "Unable to fetch releases"

Repository: raystack/stencil

Length of output: 84


Harden the coverage-upload step.

Two concrete issues worth addressing:

  1. go install github.com/mattn/goveralls@latest is a floating pin — pin to a specific version so CI behavior is reproducible if upstream ships a breaking change.
  2. Because there's no if: always(), a single test failure prevents coverage upload entirely. Coverage trends are lost on flaky tests. Consider if: always() to capture coverage even when tests fail (or accept this intentionally).

The assumption that make test produces coverage.out is correct: the test target in the Makefile explicitly uses -coverprofile=coverage.out. Using secrets.GITHUB_TOKEN as COVERALLS_TOKEN with -service=github is also the documented pattern, so that part is fine.

♻️ Suggested change
       - name: Install goveralls and send coverage
+        if: always()
         env:
           COVERALLS_TOKEN: ${{ secrets.GITHUB_TOKEN }}
         run: |
-          go install github.com/mattn/goveralls@latest
+          go install github.com/mattn/goveralls@v0.0.12
           goveralls -coverprofile=coverage.out -service=github
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/test-server.yaml around lines 53 - 58, The "Install
goveralls and send coverage" step currently uses a floating go install (`@latest`)
and runs only on successful jobs; change the go install invocation to pin
goveralls to a fixed released tag (replace github.com/mattn/goveralls@latest
with github.com/mattn/goveralls@<specific-version>) so CI is reproducible, and
add if: always() to the step so coverage upload runs even when tests fail (leave
COVERALLS_TOKEN and -service=github as-is).

Comment thread core/schema/service.go Outdated

 	prevRange := prevRanges.Get(i)
 	start, end := prevRange[0], prevRange[1]
-	if !(current.ReservedRanges().Has(start) && current.ReservedRanges().Has(end-1)) {
+	if !current.ReservedRanges().Has(start) || !current.ReservedRanges().Has(end-1) {
⚠️ Potential issue | 🟡 Minor

Endpoint-only check still allows gaps in the middle of the range.

Switching from AND to OR makes the check stricter and is a reasonable improvement, but checking only start and end-1 can still miss a case where the endpoints exist in current but an interior number (e.g., start+1) is no longer reserved — that range would incorrectly be treated as inclusive. If the intent is to ensure the previous reserved range is fully preserved, iterate over [start, end) (messages are half-open per protoreflect.FieldRange) and assert each number is present, or verify that a single current range fully covers [start, end).

Same consideration applies at line 175 for enums (where ranges are inclusive [start, end] per protoreflect.EnumRange).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@formats/protobuf/compatibility.go` at line 118, The check using
current.ReservedRanges().Has(start) and Has(end-1) only verifies endpoints and
can miss interior gaps; update the logic in the field-range validation (the
block using variables current, start, end and calling ReservedRanges().Has and
dealing with protoreflect.FieldRange) to ensure full coverage of [start,end) by
either iterating i from start to end-1 and asserting
current.ReservedRanges().Has(i) for each i, or by checking that a single
existing current range fully contains start and end-1; apply the analogous fix
for enum reserved ranges (the code handling protoreflect.EnumRange / inclusive
[start,end] at the other check) so the entire integer span is validated rather
than only endpoints.
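The full-span check described above can be sketched dependency-free; `has`, `rangeFullyReserved`, and the plain `[][2]int32` ranges are stand-ins for the protoreflect range types, not the actual API:

```go
package main

import "fmt"

// has reports whether n falls inside any reserved range. Field ranges are
// half-open [start, end); enum ranges are inclusive [start, end].
func has(ranges [][2]int32, n int32, inclusive bool) bool {
	for _, r := range ranges {
		if inclusive {
			if n >= r[0] && n <= r[1] {
				return true
			}
		} else if n >= r[0] && n < r[1] {
			return true
		}
	}
	return false
}

// rangeFullyReserved checks every number in a previous reserved range, not
// just the endpoints, so interior gaps are detected.
func rangeFullyReserved(current [][2]int32, start, end int32, inclusive bool) bool {
	last := end
	if !inclusive {
		last = end - 1
	}
	for n := start; n <= last; n++ {
		if !has(current, n, inclusive) {
			return false
		}
	}
	return true
}

func main() {
	// current reserves [1,3) and [5,10): numbers 3 and 4 are interior gaps,
	// yet the endpoints 1 and 9 of the previous range [1,10) still exist.
	current := [][2]int32{{1, 3}, {5, 10}}
	fmt.Println(rangeFullyReserved(current, 1, 10, false)) // false
	fmt.Println(rangeFullyReserved(current, 5, 10, false)) // true
}
```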

Comment thread internal/api/api.go
Comment on lines 115 to 120

 	w.Header().Set("Content-Type", "application/json")
 	w.WriteHeader(http.StatusCreated)
-	json.NewEncoder(w).Encode(sc)
+	if err := json.NewEncoder(w).Encode(sc); err != nil {
+		http.Error(w, err.Error(), http.StatusInternalServerError)
+	}
 }
⚠️ Potential issue | 🟡 Minor

http.Error after WriteHeader won't override the status.

w.WriteHeader(http.StatusCreated) at line 116 commits the response headers, so if Encode fails partway through, the subsequent http.Error(w, ..., 500) can't change the status (it'll just log a "superfluous response.WriteHeader" warning and append the error text to a response already returning 201). The same pattern exists in writeError (lines 155-157).

At this point the response is effectively unrecoverable; logging the encoder error (especially now that slog is available) and returning is usually more honest than pretending to send a 500.

Suggested change
-	w.Header().Set("Content-Type", "application/json")
-	w.WriteHeader(http.StatusCreated)
-	if err := json.NewEncoder(w).Encode(sc); err != nil {
-		http.Error(w, err.Error(), http.StatusInternalServerError)
-	}
+	w.Header().Set("Content-Type", "application/json")
+	w.WriteHeader(http.StatusCreated)
+	if err := json.NewEncoder(w).Encode(sc); err != nil {
+		slog.Error("failed to encode upload schema response", "err", err)
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/api/api.go` around lines 115 - 120, The handler currently calls
w.WriteHeader(http.StatusCreated) before json.NewEncoder(w).Encode(sc), so if
encoding fails you cannot change the committed status; fix by encoding to a
temporary buffer (e.g., bytes.Buffer) first or by encoding before calling
w.WriteHeader, then only call w.WriteHeader and copy the buffer to w on success;
if encoding fails log the error via slog (do not call http.Error after the
response is committed) and return. Apply the same pattern in the writeError
path: do not call http.Error after WriteHeader — instead prepare the error body
before committing the status and log any encoding/writing errors with slog.

 	m, err := postgres.NewHTTPFSMigrator(connectionString)
 	if assert.NoError(t, err) {
-		m.Down()
+		_ = m.Down()
⚠️ Potential issue | 🟡 Minor

Fail the teardown when rollback fails.

Silently ignoring m.Down() can leave test DB state behind and make later failures misleading.

🧪 Proposed fix
-		_ = m.Down()
+		if err := m.Down(); err != nil {
+			t.Fatalf("failed to roll back test migrations: %v", err)
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/store/postgres/postgres_test.go` at line 20, The teardown currently
ignores the result of m.Down(), which can leave the test DB in a dirty state;
update the test to check m.Down()'s returned error and fail the test if it is
non-nil (e.g., call t.Fatalf or require.NoError with the error) so teardown
errors are surfaced and tests do not continue silently with a bad DB state;
locate the call to m.Down() in the test (the m.Down() invocation) and replace
the silent discard with an explicit error assertion.

Comment on lines +88 to +99
ct, err := r.db.Exec(ctx, deleteSchemaQuery, ns, sc)
if err != nil {
return wrapError(err, "delete schema")
}
if ct.RowsAffected() == 0 {
return store.NoRowsErr.WithErr(nil, "schema")
}
// Idempotent operation to clean orphaned data.
r.db.Exec(ctx, deleteOrphanedData)
return wrapError(err, "delete schema")
if _, execErr := r.db.Exec(ctx, deleteOrphanedData); execErr != nil {
return wrapError(execErr, "delete schema")
}
return nil
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Inspect schema delete/cache invalidation flow.
# Expect: committed deletes cannot return cleanup errors that prevent cache invalidation.
rg -n -C4 'func \(r \*SchemaRepository\) Delete|func \(r \*SchemaRepository\) DeleteVersion|deleteOrphanedData|BeginFunc|cache\.Del' --type go

Repository: raystack/stencil

Length of output: 6121


Wrap primary delete and orphan cleanup in a single transaction to ensure cache invalidation is consistent with database state.

When deleteOrphanedData fails after the primary DELETE has already committed, the rows are gone from the database but core/schema/service.go skips cache invalidation because the error is returned. This leaves stale cached entries for deleted rows. Use r.db.BeginFunc to make both operations atomic—if cleanup fails, the entire transaction rolls back so the error correctly prevents cache invalidation.

Proposed fix
 func (r *SchemaRepository) Delete(ctx context.Context, ns string, sc string) error {
-	ct, err := r.db.Exec(ctx, deleteSchemaQuery, ns, sc)
-	if err != nil {
-		return wrapError(err, "delete schema")
-	}
-	if ct.RowsAffected() == 0 {
-		return store.NoRowsErr.WithErr(nil, "schema")
-	}
-	// Idempotent operation to clean orphaned data.
-	if _, execErr := r.db.Exec(ctx, deleteOrphanedData); execErr != nil {
-		return wrapError(execErr, "delete schema")
-	}
-	return nil
+	return r.db.BeginFunc(ctx, func(tx pgx.Tx) error {
+		ct, err := tx.Exec(ctx, deleteSchemaQuery, ns, sc)
+		if err != nil {
+			return wrapError(err, "delete schema")
+		}
+		if ct.RowsAffected() == 0 {
+			return store.NoRowsErr.WithErr(nil, "schema")
+		}
+		// Idempotent operation to clean orphaned data.
+		if _, execErr := tx.Exec(ctx, deleteOrphanedData); execErr != nil {
+			return wrapError(execErr, "delete schema")
+		}
+		return nil
+	})
 }
 
 func (r *SchemaRepository) DeleteVersion(ctx context.Context, ns string, sc string, version int32) error {
-	ct, err := r.db.Exec(ctx, deleteVersionQuery, ns, sc, version)
-	if err != nil {
-		return wrapError(err, "delete version")
-	}
-	if ct.RowsAffected() == 0 {
-		return store.NoRowsErr.WithErr(nil, "version")
-	}
-	// Idempotent operation to clean orphaned data.
-	if _, execErr := r.db.Exec(ctx, deleteOrphanedData); execErr != nil {
-		return wrapError(execErr, "delete version")
-	}
-	return nil
+	return r.db.BeginFunc(ctx, func(tx pgx.Tx) error {
+		ct, err := tx.Exec(ctx, deleteVersionQuery, ns, sc, version)
+		if err != nil {
+			return wrapError(err, "delete version")
+		}
+		if ct.RowsAffected() == 0 {
+			return store.NoRowsErr.WithErr(nil, "version")
+		}
+		// Idempotent operation to clean orphaned data.
+		if _, execErr := tx.Exec(ctx, deleteOrphanedData); execErr != nil {
+			return wrapError(execErr, "delete version")
+		}
+		return nil
+	})
 }

Also applies to: 109-120

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/store/postgres/schema_repository.go` around lines 88 - 99, The
delete should be combined with the orphan cleanup inside a single DB transaction
so both succeed or both roll back: replace the separate r.db.Exec calls for
deleteSchemaQuery and deleteOrphanedData in the method that currently checks
ct.RowsAffected() with a r.db.BeginFunc transaction block, perform the DELETE
(deleteSchemaQuery) and ensure rowsAffected > 0 (return store.NoRowsErr via
wrapError if zero) then run the orphan cleanup (deleteOrphanedData) inside the
same transaction; propagate any error so the transaction rolls back and cache
invalidation logic in core/schema/service.go only runs when both ops committed.
Ensure this change is applied similarly to the other occurrence around lines
109-120.

@ravisuhag ravisuhag force-pushed the fix/migrate-logging-to-slog branch from 5bd3110 to 9ba3de8 Compare April 21, 2026 16:21
Replace zap logger with Go's standard log/slog throughout the codebase,
establishing a single consistent logging approach.

- pkg/logger: replace zap init with slog JSON handler + configurable level
- pkg/logger/pgx.go: new slog adapter for pgx database logger with
  allowlisted safe keys to avoid logging sensitive bind arguments
- config: add LogLevel field (debug/info/warn/error)
- postgres: use slog adapter with warn-level DB logging
- formats/json: replace zap warns with structured slog warns
- middleware/recovery: use slog instead of stdlib log
- server: use logger.Init() and slog throughout

Closes #178
- Invalidate ristretto cache entries when schemas/versions are deleted,
  preventing stale data from being served after recreation
- Return not-found error when deleting non-existent namespaces,
  schemas, or versions by checking RowsAffected
- Check ListVersions error before proceeding with schema delete
- Log encode errors instead of attempting http.Error after WriteHeader
- Add Del method to Cache interface

Fixes #180
Fixes #145
- Upgrade CI: actions/checkout@v6, actions/setup-go@v6,
  golangci-lint-action@v9, goreleaser-action@v7, setup-protoc@v3
- Use go-version-file instead of hardcoded Go versions
- Add concurrency groups and path filters
- Update golangci-lint config to v2 schema
- Fix all errcheck issues (unchecked MarkFlagRequired, json.Encode, etc.)
- Fix staticcheck issues (deprecated ioutil/rand, redundant nil checks)
- Add comments on exported types
@ravisuhag ravisuhag force-pushed the fix/migrate-logging-to-slog branch from 9ba3de8 to ddbd0a1 Compare April 21, 2026 16:28
Development

Successfully merging this pull request may close these issues.

Logging improvement