
シフト変更通知機能#254

Open
taminororo wants to merge 29 commits into develop from
feat/kanba/249/migrate-notification-mock

Conversation

@taminororo
Collaborator

@taminororo taminororo commented Feb 20, 2026

Related Issues

Resolves
#246
#249

Overview

You can test this by supplying the Slack channel ID, user ID, and Bot ID.
SLACK_BOT_TOKEN can be obtained from https://api.slack.com/apps/A0A5MJSNKKP/oauth?
SLACK_CHANNEL_ID can be found in the channel details.

The notification flow is:

  1. UpdateShiftsFromGAS in shift_usecase.go detects shift change diffs and inserts them into the action_logs table
  2. notification_usecase.go groups the per-row records into outgoing messages
  3. slack_service.go fetches the TOKEN and builds the message text
  4. a function in di.go sends the messages every 5 minutes

Since waiting 5 minutes is tedious, you can test by running uc.ProcessUnsentNotifications in main.go.
The command is:
docker exec -it nutfes-seeft-api sh -c "cd /app && go run ./cmd/send-notifications"

An example of the Slack message content is at
https://nut-m-e-g.slack.com/archives/C08NXLCLA3T/p1771571370540459
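As a rough sketch of step 2 above (bundling unsent logs into one message per user per date), assuming simplified field names that are illustrative only, not the PR's actual entity:

```go
package main

import "fmt"

// ActionLog loosely mirrors an action_logs row; the field names here
// are assumptions for illustration.
type ActionLog struct {
	ID     int
	UserID int
	DateID int
}

// groupByUserAndDate buckets unsent logs so that each user receives
// one message per date, as step 2 of the flow describes.
func groupByUserAndDate(logs []ActionLog) map[string][]ActionLog {
	groups := make(map[string][]ActionLog)
	for _, l := range logs {
		key := fmt.Sprintf("%d_%d", l.UserID, l.DateID)
		groups[key] = append(groups[key], l)
	}
	return groups
}

func main() {
	logs := []ActionLog{
		{ID: 1, UserID: 10, DateID: 3},
		{ID: 2, UserID: 10, DateID: 3},
		{ID: 3, UserID: 11, DateID: 3},
	}
	// Two groups: user 10 gets one message with two changes, user 11 one message.
	for key, g := range groupByUserAndDate(logs) {
		fmt.Printf("%s: %d change(s)\n", key, len(g))
	}
}
```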

Screenshots

  • URL
    Screenshot

Test Items

Notes

Summary by CodeRabbit

  • New Features

    • Automated Slack notifications for users about shift changes (background worker + CLI).
    • Action logging to record create/update/delete events for shifts.
    • Slack user linkage for direct messaging.
  • Chores

    • Database migrations to store action logs and Slack IDs.
    • Added Slack and env-management dependencies; updated environment config for Slack credentials.
    • Removed an ignore rule so environment files under the env folder are no longer excluded from version control.


Copilot AI left a comment


Pull request overview

This PR introduces into the production repository a mechanism that detects shift changes, records the diffs in action_logs, periodically processes unsent logs, and notifies Slack (channel/DM).

Changes:

  • Add the action_logs table and users.slack_user_id (links notifications to users)
  • Create action_logs on GAS-driven shift updates, and have a worker send Slack notifications for unsent logs
  • Add a Slack sending service and a manual command that sends unsent notifications immediately

Reviewed changes

Copilot reviewed 14 out of 15 changed files in this pull request and generated 11 comments.

Show a summary per file
File Description
mysql/db/create5ActonLogs.sql Add the action_logs table for notifications
mysql/db/create6AddSlackUserIdToUsers.sql Add the users.slack_user_id column
mysql/db/update_action_log_for_shift_change_test.sql SQL that rewrites action_logs into an unsent UPDATE for notification testing
mysql/db/update_shift_one_row.sql SQL for verifying shift updates
api/lib/entity/action_log.go Add the ActionLog entity
api/lib/entity/user.go Add SlackUserID to User
api/lib/internals/repository/action_log_repository.go Add action_logs CRUD (create, fetch unsent, mark as sent)
api/lib/usecase/shift_usecase.go Create action_logs when shifts are updated
api/lib/usecase/notification_usecase.go Group unsent logs and send Slack notifications
api/lib/externals/slack/slack_service.go Slack API client initialization, sending, and Block construction
api/lib/di/di.go DI/startup for NotificationUseCase and the periodic worker
api/cmd/send-notifications/main.go Manual command that processes unsent notifications once, on demand
api/go.mod / api/go.sum Add Slack SDK / godotenv dependencies
api/env/dev.env Delete the development env file


Comment on lines +244 to +266
logsWithTime := make([]logWithTime, 0, len(logs))
for _, log := range logs {
	shiftRow, err := n.shiftRep.Find(ctx, strconv.Itoa(log.ShiftID))
	if err != nil {
		continue
	}

	var shiftID, taskID, userID, yearID, dateID, timeID, weatherID int
	var isAttendance bool
	var createdAt, updatedAt interface{}
	err = shiftRow.Scan(
		&shiftID, &taskID, &userID, &yearID, &dateID,
		&timeID, &weatherID, &isAttendance, &createdAt, &updatedAt,
	)
	if err != nil {
		continue
	}

	logsWithTime = append(logsWithTime, logWithTime{
		log:    log,
		timeID: timeID,
	})
}

Copilot AI Feb 20, 2026


sortLogsByTime drops any log whose shiftRep.Find / Scan failed via continue, but the caller still collects every log in the original group as sent and calls MarkAsSent. As a result, even logs that were never included in the notification get marked as sent and are permanently left unnotified. Either exclude the unprocessable logs from the group so they are not added to sentLogIDs, or treat the whole group as failed so it can be retried.

Comment thread api/lib/usecase/notification_usecase.go Outdated
Comment on lines +288 to +308
// Fetch each log's time_id
timeIDs := make([]int, 0, len(logs))
for _, log := range logs {
	shiftRow, err := n.shiftRep.Find(ctx, strconv.Itoa(log.ShiftID))
	if err != nil {
		continue
	}

	var shiftID, taskID, userID, yearID, dateID, timeID, weatherID int
	var isAttendance bool
	var createdAt, updatedAt interface{}
	err = shiftRow.Scan(
		&shiftID, &taskID, &userID, &yearID, &dateID,
		&timeID, &weatherID, &isAttendance, &createdAt, &updatedAt,
	)
	if err != nil {
		continue
	}

	timeIDs = append(timeIDs, timeID)
}

Copilot AI Feb 20, 2026


This use case calls shiftRep.Find / timeRep.Find / taskRep.Find multiple times per log (duplicated across sortLogsByTime, buildGroupedMessage, and buildChangesList), so as unsent logs accumulate, the N+1 database queries become a bottleneck. Either fetch the required data in a single JOINed query, or cache shift/time/task records by ID to avoid duplicate queries within the same batch.

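One way to implement the per-batch caching the review suggests, as a sketch — the find callback is a hypothetical stand-in for a repository Find (shiftRep/timeRep/taskRep), not the PR's actual API:

```go
package main

import "fmt"

// cachedFinder memoizes repository lookups by ID for the lifetime of
// one notification batch, collapsing repeated Find calls on the same
// shift/time/task into a single query.
type cachedFinder struct {
	cache map[int]string
	find  func(id int) (string, error) // hypothetical stand-in for a repository Find
}

func (c *cachedFinder) get(id int) (string, error) {
	if v, ok := c.cache[id]; ok {
		return v, nil // cache hit: no extra query
	}
	v, err := c.find(id)
	if err != nil {
		return "", err
	}
	c.cache[id] = v
	return v, nil
}

func main() {
	queries := 0
	f := &cachedFinder{
		cache: map[int]string{},
		find: func(id int) (string, error) {
			queries++ // counts actual "DB" hits
			return fmt.Sprintf("row-%d", id), nil
		},
	}
	for _, id := range []int{1, 1, 2, 1, 2} {
		f.get(id)
	}
	fmt.Println(queries) // one query per distinct ID
}
```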
Comment on lines +62 to +68
_, _, err := s.client.PostMessage(
	slackUserID,
	slack.MsgOptionBlocks(blocks...),
)
if err != nil {
	// On DM failure, only log it and do not return an error (the channel message was already delivered)
	log.Printf("dm send error for user %s: %v", slackUserID, err)

Copilot AI Feb 20, 2026


The DM send passes a user ID to PostMessage(slackUserID, ...), but the Slack API's channel parameter expects a channel ID / IM channel ID, so a user ID will not produce a DM. Fix this by first obtaining the IM channel ID via conversations.open (e.g., OpenConversation in slack-go) and then sending.

Suggested change
_, _, err := s.client.PostMessage(
	slackUserID,
	slack.MsgOptionBlocks(blocks...),
)
if err != nil {
	// On DM failure, only log it and do not return an error (the channel message was already delivered)
	log.Printf("dm send error for user %s: %v", slackUserID, err)
// Open (or fetch) the IM channel from the user ID
channel, err := s.client.OpenConversation(&slack.OpenConversationParameters{
	ReturnIM: true,
	Users:    []string{slackUserID},
})
if err != nil {
	// Even if opening the DM channel fails, don't treat it as an error, since the channel send succeeded
	log.Printf("failed to open IM channel for user %s: %v", slackUserID, err)
} else if channel != nil {
	_, _, err := s.client.PostMessage(
		channel.ID,
		slack.MsgOptionBlocks(blocks...),
	)
	if err != nil {
		// On DM failure, only log it and do not return an error (the channel message was already delivered)
		log.Printf("dm send error for user %s: %v", slackUserID, err)
	}

Comment thread api/lib/usecase/notification_usecase.go Outdated
Comment on lines +427 to +469
// Fetch the time
timeRow, err := n.timeRep.Find(ctx, strconv.Itoa(timeID))
if err != nil {
	continue
}
var time entity.Time
err = timeRow.Scan(&time.ID, &time.Time, &time.CreatedAt, &time.UpdatedAt)
if err != nil {
	continue
}

// Build the change description
var changeText string
switch log.ActionType {
case "CREATE":
	changeText = fmt.Sprintf("%s(新規)", task.Task)
case "UPDATE":
	// Extract old/new from diff_payload
	oldTask := "(不明)"
	newTask := task.Task
	if changes, ok := payload["changes"].([]interface{}); ok && len(changes) > 0 {
		if change, ok := changes[0].(map[string]interface{}); ok {
			if old, ok := change["old"].(string); ok {
				oldTask = old
			}
			if new, ok := change["new"].(string); ok {
				newTask = new
			}
		}
	}
	changeText = fmt.Sprintf("%s → %s", oldTask, newTask)
case "DELETE":
	oldTask := "(不明)"
	if deleted, ok := payload["deleted_task"].(string); ok {
		oldTask = deleted
	}
	changeText = fmt.Sprintf("%s(削除)", oldTask)
default:
	changeText = fmt.Sprintf("%s", task.Task)
}

changes = append(changes, changeText)
}

Copilot AI Feb 20, 2026


The changes list in the notification body contains only strings like old → new, so readers cannot tell which time slot each change belongs to (and timeRange just shows the overall range across all logs, with no mapping to the change list). When multiple slots change on the same day the content becomes ambiguous, so include the time in each change entry, or group changes by contiguous time ranges.

@@ -0,0 +1 @@
ALTER TABLE users ADD COLUMN slack_user_id VARCHAR(255); No newline at end of file

Copilot AI Feb 20, 2026


Adding the slack_user_id column is very likely to cause runtime errors wherever SELECT * FROM users is used, because the argument count of row.Scan(...) no longer matches (e.g., api/lib/internals/repository/user_repository.go uses SELECT * in several places). Either rewrite the user-fetching SQL to enumerate columns, like SELECT id, ..., or update every Scan call to also receive the new column.

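A sketch of the column-enumeration approach: keeping one shared column list means adding slack_user_id requires updating a single constant and the matching Scan calls, instead of hunting down every SELECT *. The columns besides id and slack_user_id are assumptions for illustration, not the repository's real users schema.

```go
package main

import "fmt"

// userColumns is the single source of truth for the users column list;
// row.Scan argument counts stay in sync with this one definition.
// Column names other than id and slack_user_id are hypothetical.
const userColumns = "id, name, email, slack_user_id"

// findUserByIDQuery builds a user-fetch query with explicit columns
// instead of SELECT *.
func findUserByIDQuery() string {
	return fmt.Sprintf("SELECT %s FROM users WHERE id = ?", userColumns)
}

func main() {
	fmt.Println(findUserByIDQuery())
}
```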
Contributor


This actually caused problems:
user initialization from GAS and admin login are now broken.
The cause is exactly what Copilot points out — after adding the slack_user_id column, the argument count of row.Scan(...) no longer matches wherever SELECT * FROM users is used, producing runtime errors.
It looks like every user-fetching SQL query will need the slack_user_id column added.

Comment thread api/lib/usecase/shift_usecase.go Outdated
Comment on lines +1488 to +1506
u.rep.Create(ctx, taskID, userID, yearID, dateID, timeID, weatherID, strconv.FormatBool(isAttendance))
// Fetch the latest shift ID
latestRow, err := u.rep.FindLatestRecord(ctx)
if err == nil {
	var newShift entity.ShiftAdmin
	if err := latestRow.Scan(&newShift.ID, &newShift.TaskID, &newShift.UserID, &newShift.YearID, &newShift.DateID, &newShift.TimeID, &newShift.WeatherID, &newShift.IsAttendance, &newShift.CreatedAt, &newShift.UpdatedAt); err == nil {
		// Record to action_log
		newTaskName := task.Task
		if newTaskName == "" {
			newTaskName = "(新規)"
		}
		diffPayload := map[string]interface{}{
			"changes": []map[string]string{
				{"field": "task_name", "old": "なし", "new": newTaskName},
			},
		}
		if u.actionLogRepo != nil {
			u.actionLogRepo.Create(ctx, newShift.ID, user.ID, dateIDInt, "CREATE", diffPayload)
		}

Copilot AI Feb 20, 2026


The code fetches an ID via FindLatestRecord after creating a new shift, but under concurrent execution this races: it can pick up "the latest shift created by another request" and record the wrong shift_id in the action_log. Obtain the created ID reliably at creation time (e.g., with RETURNING id) and create the action_log with that ID.

Suggested change
u.rep.Create(ctx, taskID, userID, yearID, dateID, timeID, weatherID, strconv.FormatBool(isAttendance))
// Fetch the latest shift ID
latestRow, err := u.rep.FindLatestRecord(ctx)
if err == nil {
	var newShift entity.ShiftAdmin
	if err := latestRow.Scan(&newShift.ID, &newShift.TaskID, &newShift.UserID, &newShift.YearID, &newShift.DateID, &newShift.TimeID, &newShift.WeatherID, &newShift.IsAttendance, &newShift.CreatedAt, &newShift.UpdatedAt); err == nil {
		// Record to action_log
		newTaskName := task.Task
		if newTaskName == "" {
			newTaskName = "(新規)"
		}
		diffPayload := map[string]interface{}{
			"changes": []map[string]string{
				{"field": "task_name", "old": "なし", "new": newTaskName},
			},
		}
		if u.actionLogRepo != nil {
			u.actionLogRepo.Create(ctx, newShift.ID, user.ID, dateIDInt, "CREATE", diffPayload)
		}
res, err := u.rep.Create(ctx, taskID, userID, yearID, dateID, timeID, weatherID, strconv.FormatBool(isAttendance))
if err == nil {
	if res != nil {
		if insertedID, err := res.LastInsertId(); err == nil {
			newShiftID := int(insertedID)
			// Record to action_log
			newTaskName := task.Task
			if newTaskName == "" {
				newTaskName = "(新規)"
			}
			diffPayload := map[string]interface{}{
				"changes": []map[string]string{
					{"field": "task_name", "old": "なし", "new": newTaskName},
				},
			}
			if u.actionLogRepo != nil {
				u.actionLogRepo.Create(ctx, newShiftID, user.ID, dateIDInt, "CREATE", diffPayload)
			}
		}

Comment on lines +1452 to +1481
dateIDInt, _ := strconv.Atoi(dateID)
if err := existRow.Scan(&existShift.ID, &existShift.TaskID, &existShift.UserID, &existShift.YearID, &existShift.DateID, &existShift.TimeID, &existShift.WeatherID, &existShift.IsAttendance, &existShift.CreatedAt, &existShift.UpdatedAt); err == nil && existShift.ID != 0 {
	// Update if it exists
	// Update if it exists (record to action_log only when the task changed)
	oldTaskID := existShift.TaskID
	newTaskID, _ := strconv.Atoi(taskID)
	if oldTaskID != newTaskID {
		// The task was changed
		oldTaskRow, _ := u.taskRep.Find(ctx, strconv.Itoa(oldTaskID))
		var oldTask entity.Task
		oldTaskName := "(不明)"
		if oldTaskRow != nil {
			if err := oldTaskRow.Scan(&oldTask.ID, &oldTask.Task, &oldTask.PlaceID, &oldTask.Url, &oldTask.BureauID, &oldTask.MaxMember, &oldTask.Color, &oldTask.Remark, &oldTask.YearID, &oldTask.CreatedAt, &oldTask.UpdatedAt); err == nil {
				oldTaskName = oldTask.Task
			}
		}

		newTaskName := task.Task
		if newTaskName == "" {
			newTaskName = "(不明)"
		}

		// Record to action_log
		diffPayload := map[string]interface{}{
			"changes": []map[string]string{
				{"field": "task_name", "old": oldTaskName, "new": newTaskName},
			},
		}
		if u.actionLogRepo != nil {
			u.actionLogRepo.Create(ctx, existShift.ID, user.ID, dateIDInt, "UPDATE", diffPayload)
		}

Copilot AI Feb 20, 2026


The conversion errors from dateIDInt, _ := strconv.Atoi(dateID) and newTaskID, _ := strconv.Atoi(taskID), as well as the error returned by actionLogRepo.Create(...), are silently discarded, so missing notification logs cannot be detected. On a conversion error, abort and return (or at least log it), and have Create propagate its error to the caller (or at minimum log it).

Comment on lines +25 to +31
// Skip loading the .env file when environment variables are already set
// Unnecessary when env_file is configured in docker-compose.yml etc.
goEnv := os.Getenv("GO_ENV")
if goEnv != "" {
	_ = godotenv.Load(fmt.Sprintf("../%s.env", goEnv))
	// Ignore the error (environment variables may already be set)
}

Copilot AI Feb 20, 2026


The .env loading condition and path are unlikely to work as-is. The implementation reads ../%s.env only when GO_ENV != "", but this repository's docker-compose references ./api/env/dev.env, so the paths don't match (and this PR deletes api/env/dev.env). Unify Slack configuration into a single mechanism that reliably loads at application startup (consolidate it into docker-compose's env_file, or Load from one fixed path).

Comment on lines +27 to +30
func main() {
_ = godotenv.Load("env/dev.env")

client, err := db.ConnectMySQL()

Copilot AI Feb 20, 2026


This command assumes env/dev.env exists, but this PR deletes api/env/dev.env, and docker-compose.yml / docker-compose.mac.yml also reference that file via env_file. The notification test / development environment will no longer start, so either (1) restore the env file, or (2) point both docker-compose and this command at a new config file (e.g., .env).

Comment on lines +1 to +9
CREATE TABLE action_logs (
id SERIAL PRIMARY KEY,
shift_id INTEGER,
user_id INTEGER,
date_id INTEGER,
action_type VARCHAR(50) NOT NULL, -- "CREATE", "UPDATE", "DELETE"
diff_payload JSONB,
is_sent BOOLEAN DEFAULT FALSE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP,

Copilot AI Feb 20, 2026


The other DDL files (e.g., create2Users.sql, create4Shifts.sql) consistently use CREATE TABLE IF NOT EXISTS plus an updated_at column with an update trigger, but this action_logs table has neither IF NOT EXISTS nor updated_at. For re-runnable init scripts and operational consistency, align it with the existing pattern (IF NOT EXISTS / updated_at / trigger).

@taminororo taminororo requested a review from KazumaSun February 26, 2026 09:50
@taminororo
Collaborator Author

taminororo commented Mar 24, 2026

After setting the Slack user ID and changing a shift from the spreadsheet, sending it to the backend via GAS delivered the message to the Slack DM channel.

Comment thread api/env/dev.env
Comment thread mysql/db/update_shift_one_row.sql Outdated
@coderabbitai

coderabbitai bot commented Apr 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Walkthrough

Adds Slack-based notifications and action-audit support: DB migrations for action_logs and users.slack_user_id, new ActionLog entity and repository, SlackService, NotificationUseCase (background worker + one-shot CLI), and updates to use-cases and DI to emit/read Slack user IDs and action logs. (50 words)

Changes

Cohort / File(s) Summary
Database Schema
mysql/db/create5ActonLogs.sql, mysql/db/create6AddSlackUserIdToUsers.sql
Add action_logs table (FKs to shifts/users/dates, JSON diff, is_sent, created_at) and nullable users.slack_user_id.
Entities
api/lib/entity/action_log.go, api/lib/entity/user.go
New ActionLog, DiffPayload, ChangeItem structs; add SlackUserID string with tag json:"-" to User.
Repositories
api/lib/internals/repository/action_log_repository.go
New ActionLogRepository with Create, GetUnsentLogs, GetUnsentLogsByUserAndDate, MarkAsSent implementations.
External Service
api/lib/externals/slack/slack_service.go
New SlackService reading env, building Slack blocks, sending DMs with rate-limit retry handling.
Use Cases
api/lib/usecase/notification_usecase.go, api/lib/usecase/shift_usecase.go
Add NotificationUseCase to process unsent logs (group by user/date, build messages, mark sent); ShiftUseCase now emits CREATE/UPDATE action logs and accepts ActionLogRepository.
Use Case Scans
api/lib/usecase/user_usecase.go, api/lib/usecase/mail_auth_usecase.go, api/lib/usecase/rescue_unified_usecase.go
Extend SQL Scan destinations to populate SlackUserID across user-related flows.
DI & Worker
api/lib/di/di.go
Wire SlackService and ActionLogRepository; conditionally construct NotificationUseCase and spawn a ticker worker running ProcessUnsentNotifications every 5 minutes.
CLI Entrypoint
api/cmd/send-notifications/main.go
New one-shot CLI to load env, connect DB, initialize deps and run notification processing once.
Config & Mods
api/env/dev.env, api/go.mod, .gitignore
Add SLACK_BOT_TOKEN and SLACK_CHANNEL_ID to dev env; add github.com/joho/godotenv and github.com/slack-go/slack to go.mod; removed api/env/* ignore rule.

Sequence Diagram

sequenceDiagram
    participant Worker as Worker (ticker)
    participant NUC as NotificationUseCase
    participant ALR as ActionLogRepository
    participant Repos as Repositories (User, Date, Shift, Task, Weather)
    participant SS as SlackService

    loop every 5 minutes
        Worker->>NUC: ProcessUnsentNotifications(ctx)
        NUC->>ALR: GetUnsentLogs()
        ALR-->>NUC: rows (unsent logs)
        NUC->>NUC: Scan into ActionLog entities
        alt logs exist
            NUC->>Repos: Preload time map
            NUC-->>NUC: time map
            NUC->>NUC: Group logs by userID_dateID
            loop per group
                NUC->>Repos: Load User, Date, Shifts, Tasks
                Repos-->>NUC: User, Date, Shifts, Tasks
                NUC->>NUC: Parse diffs, build changes text
                NUC->>Repos: GetWeather(shift)
                Repos-->>NUC: Weather (or fallback "不明")
                NUC->>SS: BuildMessageBlocks(params)
                SS-->>NUC: []slack.Block
                NUC->>SS: SendMessage(blocks, slackUserID)
                SS-->>NUC: success / error
                NUC->>ALR: MarkAsSent(logIDs)
                ALR-->>NUC: success / error
            end
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

Poem

🐰 Hop-hop, a ticker hums tonight,

Logs that whisper changes, clear and bright,
I parse diffs, stitch messages with care,
Slack-bound hops carry news through the air,
I twitch my nose — notifications take flight! ✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 16.67% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title 'シフト変更通知機能' (Shift Change Notification Feature) accurately describes the main feature being implemented across the entire changeset.
Description check ✅ Passed The PR description covers related issues (#246, #249), provides a clear explanation of the notification flow, includes setup instructions for Slack credentials, and offers testing commands.





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

🧹 Nitpick comments (2)
api/lib/di/di.go (1)

48-53: Add an explicit health signal for the notification worker.

Slack init failures and per-tick delivery failures are only logged here. Since the API keeps running, missing DMs can go unnoticed while unsent action_logs accumulate. A metric/health flag for “worker enabled” and backlog size would make this detectable before users report it.

Also applies to: 85-95

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/di/di.go` around lines 48 - 53, The SlackService initialization
(slack.NewSlackService) and notification worker should expose explicit health
signals: add a boolean "worker enabled" metric/flag set false when
slack.NewSlackService() fails (currently you set slackService = nil) and set
true when initialized, and emit/update a backlog size metric from the
notification worker when processing action_logs; update the same pattern used in
the later block (lines ~85-95) so both init failures and per-tick delivery
failures update the health flag and backlog gauge, and ensure the health flag is
accessible to your health-check/metrics system so alerts can be raised when
worker_enabled == false or backlog_size exceeds a threshold.
mysql/db/create5ActonLogs.sql (1)

4-5: Index the unsent-log polling paths.

GetUnsentLogs(...) and GetUnsentLogsByUserAndDate(...) filter on is_sent and sort by created_at. Without partial indexes, the 5-minute notification worker will degrade to recurring table scans as action_logs grows.

Possible indexes
CREATE INDEX idx_action_logs_unsent_created_at
    ON action_logs (created_at)
    WHERE is_sent = false;

CREATE INDEX idx_action_logs_unsent_user_date_created_at
    ON action_logs (user_id, date_id, created_at)
    WHERE is_sent = false;

Also applies to: 8-9

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mysql/db/create5ActonLogs.sql` around lines 4 - 5, GetUnsentLogs and
GetUnsentLogsByUserAndDate are causing table scans because they filter on
is_sent and order by created_at; add partial indexes on action_logs to support
these paths. Create an index on (created_at) with WHERE is_sent = false (e.g.,
idx_action_logs_unsent_created_at) and a composite partial index on (user_id,
date_id, created_at) with WHERE is_sent = false (e.g.,
idx_action_logs_unsent_user_date_created_at) so the 5-minute notification worker
can use index scans instead of full table scans.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@api/cmd/send-notifications/main.go`:
- Line 17: The code currently calls os.Exit(0) at the end of main which prevents
deferred calls like client.CloseDB() from running; remove the os.Exit(0) call
and let main return normally after logging completion so the existing defer
client.CloseDB() executes (locate the main function and the defer
client.CloseDB() usage and delete the os.Exit(0) invocation).

In `@api/env/dev.env`:
- Around line 8-9: Remove the placeholder Slack credentials from the dev.env by
setting SLACK_BOT_TOKEN and SLACK_CHANNEL_ID to empty strings (i.e., leave them
blank) so the runtime check in slack_service.go (the code that validates
SLACK_BOT_TOKEN and SLACK_CHANNEL_ID) will treat Slack as unconfigured; keep the
existing validation in slack_service.go (lines that check those env vars) as-is
so notification sending remains disabled until real credentials are provided.

In `@api/lib/entity/user.go`:
- Line 20: The User struct currently exposes the internal SlackUserID via JSON
tag `json:"slack_user_id"`, widening the public API; change the field tag on
SlackUserID in the `User` struct to `json:"-"` to keep it internal (or remove it
from the shared entity and expose it only via a dedicated DTO/endpoint used by
the settings flow), and update any marshaling/unmarshaling code or consumers to
use the new DTO or explicit getters/setters instead of relying on the shared
`entity.User`.

In `@api/lib/externals/slack/slack_service.go`:
- Around line 57-75: The current SendMessage flow treats any DM/send-to-user
error as a total failure even if the channel PostMessage already succeeded,
which causes duplicate channel posts on retry; update the logic in
slack_service.go (SendMessage) so you track channel delivery success separately:
after successfully posting to the channel (including after any RateLimitedError
retry handling around s.client.PostMessage and rateErr.RetryAfter), do not
return an error when subsequent DM/send-to-user calls fail—log the DM failure as
a warning (with error details) and return nil so
notification_usecase.processGroup treats delivery as successful; apply the same
change for the similar block around lines 80-98 to ensure channel success is not
undone by downstream DM errors.

In `@api/lib/usecase/notification_usecase.go`:
- Around line 59-63: The current flow uses n.actionLogRepo.GetUnsentLogs(ctx) to
read unsent logs, then sends Slack notifications and later marks them sent,
which risks duplicate delivery; change to claim rows before performing the Slack
side-effect by selecting/updating rows inside a transaction (e.g. using FOR
UPDATE SKIP LOCKED or an UPDATE ... RETURNING pattern) so only claimed rows are
returned for sending; specifically replace or augment GetUnsentLogs usage with a
new repository method that atomically claims and returns rows (or perform the
UPDATE ... RETURNING inside the use case) and then call MarkAsSent only after
successful send for those claimed IDs (update references to
n.actionLogRepo.GetUnsentLogs and any subsequent MarkAsSent logic).
- Around line 203-218: In loadTimeMap, timeRep.All(ctx) returns *sql.Rows that
is never closed, leaking DB connections; after obtaining timeRow check for nil
and defer timeRow.Close(), iterate as before and after the loop call
timeRow.Err() and return any error (wrap with context) instead of silently
ignoring it; ensure you still populate timeMap on successful scans and propagate
errors from timeRow.Err() so callers can handle DB/iteration failures.
- Around line 439-441: The "CREATE" branch currently uses the live DB row
(task.Task) which can be stale; instead parse the action log's diff_payload to
get the created task name and use that for changeText. In the switch handling
log.ActionType == "CREATE" (where changeText is set), parse log.DiffPayload (or
the diff_payload field on the log) as JSON/struct, extract the created task name
field (e.g. "task" or "Task"), fall back to task.Task only if parsing/field
missing, then set changeText = fmt.Sprintf("%s(新規)", extractedName). Ensure you
update the code paths that reference changeText, and keep references to
log.ActionType, changeText, task.Task and the diff_payload field to locate the
change.

In `@api/lib/usecase/shift_usecase.go`:
- Around line 1488-1493: The code races by calling u.rep.Create(...) without
checking its result and then calling u.rep.FindLatestRecord to find the inserted
shift; replace this pattern by modifying the Create method to return the
inserted shift ID (e.g., via INSERT ... RETURNING id) and use that returned ID
directly to load or attach the action log instead of calling FindLatestRecord;
update the call site in shift_usecase.go to capture the ID returned from
u.rep.Create (or a CreateReturnID helper) and then use that ID to query the
specific entity.ShiftAdmin (or pass the ID into any follow-up repository
methods) so concurrent inserts cannot cause the wrong record to be used.
- Around line 1450-1453: The call to u.rep.FindByUnique currently passes taskID
but the underlying query only filters by user_id, date_id, time_id, weather_id,
and also ignores its returned error; update the call so its signature matches
the actual 4-tuple (remove taskID) or change the repository query to include
task_id if that was intended (check the FindByUnique implementation), then
capture the returned error instead of discarding it (e.g., err :=
u.rep.FindByUnique(...)); handle errors explicitly: if err != nil and err !=
sql.ErrNoRows return the error; only when err == sql.ErrNoRows call
u.rep.Create(...); also ensure the subsequent existRow.Scan(...) error is
checked and treated as a real error (return it) rather than assuming “not found”
when Scan fails; reference symbols: FindByUnique, existRow, existShift, Scan,
u.rep.Create.
- Around line 1479-1484: The action log is created before the shift update and
both calls ignore errors; move/create the action log only after u.rep.Update
succeeds and handle errors from both calls, or better wrap u.rep.Update and
u.actionLogRepo.Create in a single DB transaction so they either both commit or
both roll back. Specifically, ensure you check the error returned by
u.rep.Update(...) (and return/handle it), then call u.actionLogRepo.Create(ctx,
existShift.ID, user.ID, dateIDInt, "UPDATE", diffPayload) and handle its error;
alternatively implement a transaction around the Update method (or add a
transactional variant) and call Create inside that transaction to guarantee
atomicity.

In `@mysql/db/create5ActonLogs.sql`:
- Around line 10-12: The foreign keys on action_logs (FOREIGN KEY (shift_id),
FOREIGN KEY (user_id), FOREIGN KEY (date_id)) lack an explicit ON DELETE policy
and will block deletions like DeleteShiftAdmin; update the CREATE statement for
the action_logs table to choose and add an explicit policy (e.g., ON DELETE
CASCADE or ON DELETE SET NULL) for each FK based on whether logs must survive
parent deletion, and if keeping logs after parent removal choose SET NULL plus
ensure diff_payload contains the snapshot data needed to render notifications.

In `@mysql/db/create6AddSlackUserIdToUsers.sql`:
- Line 1: Add a uniqueness constraint on users.slack_user_id at the schema level
by altering the migration to create a unique index/constraint (e.g., ALTER TABLE
users ADD CONSTRAINT uniq_users_slack_user_id UNIQUE (slack_user_id)); ensure
the migration handles existing duplicates first (clean or merge duplicate
slack_user_id values or fail with a clear message) before applying the
constraint so the migration doesn't error on pre-existing duplicate rows.

In `@mysql/db/update_action_log_for_shift_change_test.sql`:
- Around line 4-9: Replace the UPDATE that mutates id = 1 with an INSERT that
creates a fresh test row in action_logs: instead of running "UPDATE action_logs
... WHERE id = 1", INSERT a new row specifying action_type, diff_payload (cast
to jsonb), is_sent = false and supply safe test values for any
recipient/shift/audit columns (e.g. recipient_id, shift_id, created_at) so you
do not reuse or overwrite an existing real log; ensure the new INSERT returns or
sets a deterministic id for the smoke test if the test expects one.

---

Nitpick comments:
In `@api/lib/di/di.go`:
- Around line 48-53: The SlackService initialization (slack.NewSlackService) and
notification worker should expose explicit health signals: add a boolean "worker
enabled" metric/flag set false when slack.NewSlackService() fails (currently you
set slackService = nil) and set true when initialized, and emit/update a backlog
size metric from the notification worker when processing action_logs; update the
same pattern used in the later block (lines ~85-95) so both init failures and
per-tick delivery failures update the health flag and backlog gauge, and ensure
the health flag is accessible to your health-check/metrics system so alerts can
be raised when worker_enabled == false or backlog_size exceeds a threshold.

In `@mysql/db/create5ActonLogs.sql`:
- Around line 4-5: GetUnsentLogs and GetUnsentLogsByUserAndDate are causing
table scans because they filter on is_sent and order by created_at; add partial
indexes on action_logs to support these paths. Create an index on (created_at)
with WHERE is_sent = false (e.g., idx_action_logs_unsent_created_at) and a
composite partial index on (user_id, date_id, created_at) with WHERE is_sent =
false (e.g., idx_action_logs_unsent_user_date_created_at) so the 5-minute
notification worker can use index scans instead of full table scans.
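One caveat on this suggestion: the `WHERE is_sent = false` clauses describe PostgreSQL partial indexes, but these migrations sit under `mysql/db/` and MySQL does not support partial indexes. A plain composite index with `is_sent` as the leading column gives the worker the same access path; index names below follow the comment's suggestions.

```sql
-- MySQL has no partial indexes, so lead with is_sent in a composite index;
-- "WHERE is_sent = false ORDER BY created_at" then becomes an index range
-- scan instead of a full table scan.
CREATE INDEX idx_action_logs_unsent_created_at
    ON action_logs (is_sent, created_at);

CREATE INDEX idx_action_logs_unsent_user_date_created_at
    ON action_logs (is_sent, user_id, date_id, created_at);
```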
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4e298093-e3b5-456d-aba8-aea1a8fdd101

📥 Commits

Reviewing files that changed from the base of the PR and between 6859368 and 116a2ca.

⛔ Files ignored due to path filters (1)
  • api/go.sum is excluded by !**/*.sum
📒 Files selected for processing (15)
  • .gitignore
  • api/cmd/send-notifications/main.go
  • api/env/dev.env
  • api/go.mod
  • api/lib/di/di.go
  • api/lib/entity/action_log.go
  • api/lib/entity/user.go
  • api/lib/externals/slack/slack_service.go
  • api/lib/internals/repository/action_log_repository.go
  • api/lib/usecase/notification_usecase.go
  • api/lib/usecase/shift_usecase.go
  • mysql/db/create5ActonLogs.sql
  • mysql/db/create6AddSlackUserIdToUsers.sql
  • mysql/db/update_action_log_for_shift_change_test.sql
  • mysql/db/update_shift_one_row.sql
💤 Files with no reviewable changes (1)
  • .gitignore

Comment thread api/cmd/send-notifications/main.go Outdated
Comment thread api/env/dev.env
Comment thread api/lib/entity/user.go Outdated
Comment thread api/lib/externals/slack/slack_service.go Outdated
Comment on lines +59 to +63
rows, err := n.actionLogRepo.GetUnsentLogs(ctx)
if err != nil {
	return errors.Wrapf(err, "failed to get unsent logs")
}
defer rows.Close()

⚠️ Potential issue | 🔴 Critical

Claim logs before making the Slack side effect.

This is a read-unsent → send Slack → mark-sent flow. The PR also exposes a manual send-notifications command alongside the 5-minute worker, so overlapping runners or a post-send MarkAsSent(...) failure can deliver the same notification twice.

Use row claiming (FOR UPDATE SKIP LOCKED, a lease column, or UPDATE ... RETURNING inside a transaction) before the external call.

Also applies to: 115-127

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 59 - 63, The current
flow uses n.actionLogRepo.GetUnsentLogs(ctx) to read unsent logs, then sends
Slack notifications and later marks them sent, which risks duplicate delivery;
change to claim rows before performing the Slack side-effect by
selecting/updating rows inside a transaction (e.g. using FOR UPDATE SKIP LOCKED
or an UPDATE ... RETURNING pattern) so only claimed rows are returned for
sending; specifically replace or augment GetUnsentLogs usage with a new
repository method that atomically claims and returns rows (or perform the UPDATE
... RETURNING inside the use case) and then call MarkAsSent only after
successful send for those claimed IDs (update references to
n.actionLogRepo.GetUnsentLogs and any subsequent MarkAsSent logic).

Comment thread api/lib/usecase/shift_usecase.go
Comment thread api/lib/usecase/shift_usecase.go Outdated
Comment thread mysql/db/create5ActonLogs.sql Outdated
Comment thread mysql/db/create6AddSlackUserIdToUsers.sql Outdated
Comment on lines +4 to +9
UPDATE action_logs
SET
    action_type = 'UPDATE',
    diff_payload = '{"changes":[{"field":"task_name","old":"テストタスク","new":"変更後タスク"}]}'::jsonb,
    is_sent = false
WHERE id = 1;

⚠️ Potential issue | 🟠 Major

Insert a dedicated test log instead of rewriting id = 1.

This mutates an existing action_logs row in place, so whatever recipient/shift metadata is already attached to id = 1 is preserved while only the payload and is_sent flag change. On a non-empty DB, that can resend a real log to the wrong user and corrupt the audit trail. Please create a fresh test row for smoke tests instead of reusing a historical one.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@mysql/db/update_action_log_for_shift_change_test.sql` around lines 4 - 9,
Replace the UPDATE that mutates id = 1 with an INSERT that creates a fresh test
row in action_logs: instead of running "UPDATE action_logs ... WHERE id = 1",
INSERT a new row specifying action_type, diff_payload (cast to jsonb), is_sent =
false and supply safe test values for any recipient/shift/audit columns (e.g.
recipient_id, shift_id, created_at) so you do not reuse or overwrite an existing
real log; ensure the new INSERT returns or sets a deterministic id for the smoke
test if the test expects one.
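An `INSERT` along those lines could look like the sketch below. The column list follows the `action_logs` fields scanned elsewhere in this PR (`shift_id`, `user_id`, `date_id`, `action_type`, `diff_payload`, `is_sent`, `created_at`); the concrete values are throwaway test data, and the `::jsonb` cast from the original script is dropped since plain string literals work for a JSON column.

```sql
-- Create a fresh test row instead of rewriting id = 1, so no real log's
-- recipient/shift metadata is reused and the audit trail stays intact.
INSERT INTO action_logs (shift_id, user_id, date_id, action_type, diff_payload, is_sent, created_at)
VALUES (
    1,          -- test shift
    1,          -- test recipient
    1,          -- test date
    'UPDATE',
    '{"changes":[{"field":"task_name","old":"テストタスク","new":"変更後タスク"}]}',
    false,
    NOW()
);
```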

- Removed os.Exit(0) so deferred cleanup runs (send-notifications/main.go)
- Changed the SlackUserID json tag to "-" to exclude it from API responses (entity/user.go)
- Added defer timeRow.Close() in loadTimeMap to fix a DB connection leak (notification_usecase.go)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (3)
api/lib/usecase/notification_usecase.go (3)

209-219: ⚠️ Potential issue | 🟠 Major

Handle iterator errors in loadTimeMap(...) as well.

timeRow.Close() is now in place, but timeRow.Err() is still not checked. Add it after the loop and return wrapped error on failure.

Patch
  for timeRow.Next() {
    ...
  }
+ if err := timeRow.Err(); err != nil {
+   return nil, errors.Wrap(err, "failed to iterate times")
+ }
  return timeMap, nil
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 209 - 219, The iterator
error from timeRow is not checked in loadTimeMap; after the for timeRow.Next()
loop and before returning timeMap, call timeRow.Err() and if non-nil return a
wrapped error (e.g., using fmt.Errorf("loadTimeMap: iterating times: %w", err))
so failures during iteration are propagated; keep existing logging for Scan
errors and ensure timeRow.Close() is still called via defer.

59-63: ⚠️ Potential issue | 🔴 Critical

Claim unsent logs before the Slack side effect to prevent duplicates.

The current read → send → mark flow can double-send when runners overlap or when MarkAsSent fails after a successful send. Claim rows atomically first (e.g., UPDATE ... SET processing=true ... RETURNING ... or FOR UPDATE SKIP LOCKED), then send, then mark sent.

Suggested direction
- rows, err := n.actionLogRepo.GetUnsentLogs(ctx)
+ rows, err := n.actionLogRepo.ClaimUnsentLogs(ctx, workerID, claimTTL)

...
- if err := n.actionLogRepo.MarkAsSent(ctx, logIDs); err != nil {
+ if err := n.actionLogRepo.MarkAsSent(ctx, logIDs); err != nil {
    return errors.Wrapf(err, "failed to mark logs as sent")
  }

Also add a recovery path (ReleaseClaim / claim timeout) for failed groups.

Also applies to: 125-127

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 59 - 63, The current
flow using n.actionLogRepo.GetUnsentLogs -> send -> MarkAsSent can double-send;
change to an atomic "claim" step in the repository (e.g., implement
ActionLogRepo.ClaimUnsentLogs or a ClaimAndReturn method that uses UPDATE ...
RETURNING or SELECT ... FOR UPDATE SKIP LOCKED) so the usecase claims rows
before performing Slack side effects (replace calls to GetUnsentLogs with the
new Claim method), then perform sends and call MarkAsSent for claimed IDs;
additionally implement a ReleaseClaim (or claim timeout) code path in the repo
and call it from the usecase if sending fails to ensure claims are released for
retry.

441-443: ⚠️ Potential issue | 🟠 Major

Use diff_payload for CREATE message text, not current task row.

CREATE notifications should render the created task name from the action log payload. Using task.Task can be stale if the task changed after the log was recorded.

Patch
  case "CREATE":
-   changeText = fmt.Sprintf("%s(新規)", task.Task)
+   newTask := task.Task
+   if changes, ok := payload["changes"].([]interface{}); ok && len(changes) > 0 {
+     if change, ok := changes[0].(map[string]interface{}); ok {
+       if v, ok := change["new"].(string); ok && v != "" {
+         newTask = v
+       }
+     }
+   }
+   changeText = fmt.Sprintf("%s(新規)", newTask)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 441 - 443, The CREATE
branch currently sets changeText using the possibly-stale task.Task; instead,
parse the action log's diff_payload and extract the created task name for the
message. In the case "CREATE" branch, unmarshal diff_payload (into a small
struct or map) to get the task name field (e.g., payload["task"] or
payload.Name) and use that in fmt.Sprintf("%s(新規)", ...); if unmarshalling or
the field is missing, fall back to task.Task to preserve behavior. Ensure you
reference diff_payload, the case "CREATE" branch, and changeText when making
this change.
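The fallback logic that comment describes can be sketched as a runnable helper. `changeTextForCreate` is a hypothetical function name; the payload shape is taken from the test SQL in this PR (`{"changes":[{"field":"task_name","old":"...","new":"..."}]}`).

```go
package main

import (
	"encoding/json"
	"fmt"
)

// changeTextForCreate prefers the "new" value recorded in diff_payload at
// log time, and falls back to the current task name only if the payload is
// missing or malformed — so a task renamed after logging still notifies
// with the name it was created under.
func changeTextForCreate(diffPayload []byte, currentTaskName string) string {
	var payload struct {
		Changes []struct {
			New string `json:"new"`
		} `json:"changes"`
	}
	name := currentTaskName
	if err := json.Unmarshal(diffPayload, &payload); err == nil &&
		len(payload.Changes) > 0 && payload.Changes[0].New != "" {
		name = payload.Changes[0].New
	}
	return fmt.Sprintf("%s(新規)", name)
}

func main() {
	good := []byte(`{"changes":[{"field":"task_name","old":"テストタスク","new":"変更後タスク"}]}`)
	fmt.Println(changeTextForCreate(good, "現在のタスク"))        // payload value wins
	fmt.Println(changeTextForCreate([]byte(`{}`), "現在のタスク")) // fallback to current name
}
```

Using a typed struct instead of `map[string]interface{}` assertions keeps the parsing failure modes explicit: any decode problem simply leaves the fallback name in place.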
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@api/lib/usecase/notification_usecase.go`:
- Around line 239-242: The code sets user.SlackUserID from slackUserID.String
but does not guard against an empty value before calling SendMessage, causing
repeated failures; update the notification send path (where SendMessage is
invoked for the user in notification_usecase.go) to check that user.SlackUserID
is non-empty and non-whitespace (and that slackUserID.Valid is true) before
attempting to send, and if empty either skip sending with a clear processLogger
warning mentioning the user ID and notification ID or mark the notification/user
state as failed (consistent with existing failure handling); apply the same
guard to the other occurrence around the SlackUserID assignment/SendMessage (the
second block referenced at lines ~302-303).
- Around line 67-85: The rows.Next() loop that scans action logs (using
rows.Scan into actionLog and appending to logs) doesn't check for cursor/IO
errors; after the loop add a rows.Err() check and handle non-nil error (for
example logpkg.Printf("Failed reading action logs: %v", err) and return or
propagate the error as appropriate) to ensure cursor errors aren’t silently
ignored—specifically update the code that iterates over rows (uses rows.Next(),
rows.Scan, logs slice and logpkg.Printf) to call rows.Err() after the loop and
handle any returned error.

---

Duplicate comments:
In `@api/lib/usecase/notification_usecase.go`:
- Around line 209-219: The iterator error from timeRow is not checked in
loadTimeMap; after the for timeRow.Next() loop and before returning timeMap,
call timeRow.Err() and if non-nil return a wrapped error (e.g., using
fmt.Errorf("loadTimeMap: iterating times: %w", err)) so failures during
iteration are propagated; keep existing logging for Scan errors and ensure
timeRow.Close() is still called via defer.
- Around line 59-63: The current flow using n.actionLogRepo.GetUnsentLogs ->
send -> MarkAsSent can double-send; change to an atomic "claim" step in the
repository (e.g., implement ActionLogRepo.ClaimUnsentLogs or a ClaimAndReturn
method that uses UPDATE ... RETURNING or SELECT ... FOR UPDATE SKIP LOCKED) so
the usecase claims rows before performing Slack side effects (replace calls to
GetUnsentLogs with the new Claim method), then perform sends and call MarkAsSent
for claimed IDs; additionally implement a ReleaseClaim (or claim timeout) code
path in the repo and call it from the usecase if sending fails to ensure claims
are released for retry.
- Around line 441-443: The CREATE branch currently sets changeText using the
possibly-stale task.Task; instead, parse the action log's diff_payload and
extract the created task name for the message. In the case "CREATE" branch,
unmarshal diff_payload (into a small struct or map) to get the task name field
(e.g., payload["task"] or payload.Name) and use that in fmt.Sprintf("%s(新規)",
...); if unmarshalling or the field is missing, fall back to task.Task to
preserve behavior. Ensure you reference diff_payload, the case "CREATE" branch,
and changeText when making this change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bf2fe84c-183d-425a-8189-b9f788dfd255

📥 Commits

Reviewing files that changed from the base of the PR and between 90f07fb and 229c7e0.

📒 Files selected for processing (3)
  • api/cmd/send-notifications/main.go
  • api/lib/entity/user.go
  • api/lib/usecase/notification_usecase.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • api/lib/entity/user.go
  • api/cmd/send-notifications/main.go

Comment on lines +67 to +85
	for rows.Next() {
		var actionLog entity.ActionLog
		err := rows.Scan(
			&actionLog.ID,
			&actionLog.ShiftID,
			&actionLog.UserID,
			&actionLog.DateID,
			&actionLog.ActionType,
			&actionLog.DiffPayload,
			&actionLog.IsSent,
			&actionLog.CreatedAt,
		)
		if err != nil {
			logpkg.Printf("Failed to scan action log: %v", err)
			continue
		}
		logs = append(logs, actionLog)
	}


⚠️ Potential issue | 🟠 Major

Check rows.Err() after scanning unsent logs.

rows.Next() loop errors are currently ignored. Add a post-loop rows.Err() check so cursor/IO errors don’t silently truncate notification batches.

Patch
  for rows.Next() {
    ...
  }
+ if err := rows.Err(); err != nil {
+   return errors.Wrap(err, "failed while iterating unsent logs")
+ }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (the scan loop unchanged, with a rows.Err() check appended):
	for rows.Next() {
		var actionLog entity.ActionLog
		err := rows.Scan(
			&actionLog.ID,
			&actionLog.ShiftID,
			&actionLog.UserID,
			&actionLog.DateID,
			&actionLog.ActionType,
			&actionLog.DiffPayload,
			&actionLog.IsSent,
			&actionLog.CreatedAt,
		)
		if err != nil {
			logpkg.Printf("Failed to scan action log: %v", err)
			continue
		}
		logs = append(logs, actionLog)
	}
	if err := rows.Err(); err != nil {
		return errors.Wrap(err, "failed while iterating unsent logs")
	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 67 - 85, The
rows.Next() loop that scans action logs (using rows.Scan into actionLog and
appending to logs) doesn't check for cursor/IO errors; after the loop add a
rows.Err() check and handle non-nil error (for example logpkg.Printf("Failed
reading action logs: %v", err) and return or propagate the error as appropriate)
to ensure cursor errors aren’t silently ignored—specifically update the code
that iterates over rows (uses rows.Next(), rows.Scan, logs slice and
logpkg.Printf) to call rows.Err() after the loop and handle any returned error.

Comment on lines +239 to +242
	if slackUserID.Valid {
		user.SlackUserID = slackUserID.String
	}


⚠️ Potential issue | 🟠 Major

Guard against empty Slack user ID before sending.

If SlackUserID is missing, SendMessage will fail every cycle and the same logs will be retried forever. Validate early and decide explicit behavior (skip with clear log, or mark failed state).

Patch
  if slackUserID.Valid {
    user.SlackUserID = slackUserID.String
  }
+ if strings.TrimSpace(user.SlackUserID) == "" {
+   return errors.New("missing Slack user ID")
+ }

Also applies to: 302-303

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 239 - 242, The code
sets user.SlackUserID from slackUserID.String but does not guard against an
empty value before calling SendMessage, causing repeated failures; update the
notification send path (where SendMessage is invoked for the user in
notification_usecase.go) to check that user.SlackUserID is non-empty and
non-whitespace (and that slackUserID.Valid is true) before attempting to send,
and if empty either skip sending with a clear processLogger warning mentioning
the user ID and notification ID or mark the notification/user state as failed
(consistent with existing failure handling); apply the same guard to the other
occurrence around the SlackUserID assignment/SendMessage (the second block
referenced at lines ~302-303).


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (4)
api/lib/usecase/notification_usecase.go (4)

239-242: ⚠️ Potential issue | 🟠 Major

Guard against empty Slack user ID before sending.

If SlackUserID is missing or empty, SendMessage will fail every cycle and the same logs will be retried indefinitely. Validate early and handle explicitly (skip with a warning or mark as failed).

Suggested fix
 	if slackUserID.Valid {
 		user.SlackUserID = slackUserID.String
 	}
+	if strings.TrimSpace(user.SlackUserID) == "" {
+		logpkg.Printf("Skipping notification for user %d: missing Slack user ID", userID)
+		return nil // or return a specific error to mark logs differently
+	}


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 239 - 242, The code
only checks slackUserID.Valid before assigning user.SlackUserID but doesn't
guard against an empty string, which causes SendMessage to fail repeatedly;
update the logic around slackUserID and message sending (the assignment of
user.SlackUserID and the call to SendMessage) to verify both slackUserID.Valid
and slackUserID.String != "" (or user.SlackUserID != "") and if empty either
skip sending with a processLogger.Warn or mark the notification as failed in the
notification use case (the method that calls SendMessage), so you avoid retry
loops for empty Slack IDs.

209-219: ⚠️ Potential issue | 🟠 Major

Check timeRow.Err() after iterating.

The defer timeRow.Close() was added, but rows.Err() is still not checked after the loop. Iteration errors could silently result in an incomplete time map.

Suggested fix
 		timeMap[time.ID] = time
 	}
+	if err := timeRow.Err(); err != nil {
+		return nil, errors.Wrap(err, "failed to iterate times")
+	}
 	return timeMap, nil


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 209 - 219, The loop
over rows using timeRow.Next() doesn't check iteration errors — after the for
loop (and before returning timeMap) call timeRow.Err() and handle any non-nil
error (e.g., logpkg.Printf with the error and return the error alongside or
instead of the timeMap) so iteration errors are not silently ignored; ensure
this check occurs after the defer timeRow.Close() and before the existing return
in the function that builds timeMap.

67-84: ⚠️ Potential issue | 🟠 Major

Check rows.Err() after the scanning loop.

The loop iterates with rows.Next() but doesn't check rows.Err() afterward. Cursor or I/O errors could silently truncate the notification batch.

Suggested fix
 		logs = append(logs, actionLog)
 	}
+	if err := rows.Err(); err != nil {
+		return errors.Wrap(err, "failed while iterating unsent logs")
+	}
 
 	if len(logs) == 0 {


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 67 - 84, After
iterating with rows.Next() in the function that builds the action log slice (the
loop that scans into entity.ActionLog and appends to logs), check rows.Err()
immediately after the loop and handle any non-nil error (e.g., log the error
with logpkg.Printf and return or propagate the error from the surrounding
function). Ensure this check is added right after the for rows.Next() { ... }
block that uses rows.Scan so cursor/I/O errors are not silently ignored.

115-127: ⚠️ Potential issue | 🟠 Major

Race condition risk with concurrent notification workers.

The flow reads unsent logs, sends to Slack, then marks as sent. With both the 5-minute worker and manual send-notifications command, concurrent runs or a MarkAsSent failure after successful send can deliver duplicate notifications.

Consider using row claiming (FOR UPDATE SKIP LOCKED or UPDATE ... RETURNING inside a transaction) before the external Slack call.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 115 - 127, The current
flow (processGroup -> sending Slack -> actionLogRepo.MarkAsSent) risks duplicate
deliveries when concurrent workers or failures occur; change to a two-step
claiming pattern: add a repository method (e.g., ClaimUnsentLogs or
ClaimLogsForProcessing) that atomically selects and claims rows using a DB-side
mechanism (FOR UPDATE SKIP LOCKED or UPDATE ... RETURNING inside a transaction)
and returns the claimed log records/IDs, then have processGroup use those
claimed records to perform external Slack sends, and finally call MarkAsSent
only for those claimed IDs; ensure the claim happens before any external call
and that MarkAsSent only marks previously-claimed IDs so concurrent runs cannot
process the same rows.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@api/lib/usecase/notification_usecase.go`:
- Around line 402-413: The formatTimeRange function reads startTime from timeMap
without checking existence which can produce an empty start string; update
formatTimeRange to check timeMap[startTimeID] (like you do for endTimeID+1),
provide a sensible fallback Time (e.g. entity.Time{Time: "0:00"} or another
default) when the key is missing, and use those validated startTime and endTime
variables in the fmt.Sprintf; reference function formatTimeRange and the keys
startTimeID, endTimeID and the timeMap variable when making the change.

---

Duplicate comments:
In `@api/lib/usecase/notification_usecase.go`:
- Around line 239-242: The code only checks slackUserID.Valid before assigning
user.SlackUserID but doesn't guard against an empty string, which causes
SendMessage to fail repeatedly; update the logic around slackUserID and message
sending (the assignment of user.SlackUserID and the call to SendMessage) to
verify both slackUserID.Valid and slackUserID.String != "" (or user.SlackUserID
!= "") and if empty either skip sending with a processLogger.Warn or mark the
notification as failed in the notification use case (the method that calls
SendMessage), so you avoid retry loops for empty Slack IDs.
- Around line 209-219: The loop over rows using timeRow.Next() doesn't check
iteration errors — after the for loop (and before returning timeMap) call
timeRow.Err() and handle any non-nil error (e.g., logpkg.Printf with the error
and return the error alongside or instead of the timeMap) so iteration errors
are not silently ignored; ensure this check occurs after the defer
timeRow.Close() and before the existing return in the function that builds
timeMap.
- Around line 67-84: After iterating with rows.Next() in the function that
builds the action log slice (the loop that scans into entity.ActionLog and
appends to logs), check rows.Err() immediately after the loop and handle any
non-nil error (e.g., log the error with logpkg.Printf and return or propagate
the error from the surrounding function). Ensure this check is added right after
the for rows.Next() { ... } block that uses rows.Scan so cursor/I/O errors are
not silently ignored.
- Around line 115-127: The current flow (processGroup -> sending Slack ->
actionLogRepo.MarkAsSent) risks duplicate deliveries when concurrent workers or
failures occur; change to a two-step claiming pattern: add a repository method
(e.g., ClaimUnsentLogs or ClaimLogsForProcessing) that atomically selects and
claims rows using a DB-side mechanism (FOR UPDATE SKIP LOCKED or UPDATE ...
RETURNING inside a transaction) and returns the claimed log records/IDs, then
have processGroup use those claimed records to perform external Slack sends, and
finally call MarkAsSent only for those claimed IDs; ensure the claim happens
before any external call and that MarkAsSent only marks previously-claimed IDs
so concurrent runs cannot process the same rows.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: fb143c8e-9796-4b8b-bb87-acbd03a3ec25

📥 Commits

Reviewing files that changed from the base of the PR and between 229c7e0 and c9e2610.

📒 Files selected for processing (1)
  • api/lib/usecase/notification_usecase.go

Comment on lines +402 to +413
// formatTimeRange 時間範囲をフォーマット
func (n *notificationUseCase) formatTimeRange(timeMap map[int]entity.Time, startTimeID, endTimeID int) string {

	startTime := timeMap[startTimeID]

	endTime, ok := timeMap[endTimeID+1]
	if !ok {
		endTime = entity.Time{Time: "0:00"}
	}

	return fmt.Sprintf("%s 〜 %s", startTime.Time, endTime.Time)
}

⚠️ Potential issue | 🟡 Minor

Inconsistent map key existence check.

endTime access checks for key existence with a fallback, but startTime access on line 405 does not. If startTimeID is not in the map, startTime.Time will be an empty string, resulting in a malformed time range like " 〜 10:00".

Suggested fix
 func (n *notificationUseCase) formatTimeRange(timeMap map[int]entity.Time, startTimeID, endTimeID int) string {
-
-	startTime := timeMap[startTimeID]
+	startTime, ok := timeMap[startTimeID]
+	if !ok {
+		startTime = entity.Time{Time: "不明"}
+	}
 
 	endTime, ok := timeMap[endTimeID+1]
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@api/lib/usecase/notification_usecase.go` around lines 402 - 413, The
formatTimeRange function reads startTime from timeMap without checking existence
which can produce an empty start string; update formatTimeRange to check
timeMap[startTimeID] (like you do for endTimeID+1), provide a sensible fallback
Time (e.g. entity.Time{Time: "0:00"} or another default) when the key is
missing, and use those validated startTime and endTime variables in the
fmt.Sprintf; reference function formatTimeRange and the keys startTimeID,
endTimeID and the timeMap variable when making the change.
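The symmetric guard can be shown as a runnable sketch. `Time` here is a reduced stand-in for `entity.Time` (only the display string), and the "不明" fallback for a missing start key is one arbitrary choice; the "0:00" end sentinel mirrors the reviewed code.

```go
package main

import "fmt"

// Time is a reduced stand-in for entity.Time.
type Time struct{ Time string }

// formatTimeRange mirrors the reviewed function with both lookups guarded:
// a missing start key falls back to a placeholder instead of rendering an
// empty string, and a missing end key keeps the existing "0:00" sentinel.
func formatTimeRange(timeMap map[int]Time, startTimeID, endTimeID int) string {
	startTime, ok := timeMap[startTimeID]
	if !ok {
		startTime = Time{Time: "不明"}
	}
	endTime, ok := timeMap[endTimeID+1]
	if !ok {
		endTime = Time{Time: "0:00"}
	}
	return fmt.Sprintf("%s 〜 %s", startTime.Time, endTime.Time)
}

func main() {
	tm := map[int]Time{1: {"9:00"}, 2: {"10:00"}, 3: {"11:00"}}
	fmt.Println(formatTimeRange(tm, 1, 2)) // both keys present
	fmt.Println(formatTimeRange(tm, 9, 9)) // both keys missing: fallbacks
}
```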

@taminororo taminororo requested a review from Nagi-azi April 12, 2026 23:54

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants