diff --git a/CHANGELOG.md b/CHANGELOG.md index 140eb40..b23d347 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -18,6 +18,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0 ### Added +- Enriched desktop notifications (PRD-v2 P0.19, task 19): the `tauri-plugin-notification` bridge now reads the user's `notifications_enabled` flag on every event (immediate respect of the Settings toggle), enriches the `DownloadCompleted` body with `{file_name} · {size}` derived from the read repository's `DownloadDetailView`, and surfaces the failure reason on `DownloadFailed` as `{file_name} · Error: {error_message}` (capped at 200 chars including the ellipsis to fit the OS toast and avoid leaking long URL/credential payloads). Average speed and total duration are deliberately omitted: the read model only exposes `created_at` (queue admission), so any duration computed at notification time would inflate by the time the download spent queued or paused — the bridge will reintroduce both fields once the read model surfaces a transfer-start metric (e.g. via the `HistoryEntry` produced on completion). Bursts of completions are debounced through a new `domain::notification::NotificationGrouper`: a 5 s sliding window with threshold 3 emits a single aggregated "N downloads completed" notification on the third event and silently suppresses any further completions in the same window so the OS toast stack stays clean. The grouper also detects wall-clock backwards jumps (NTP correction, manual time change) and clears its window so stale "future" timestamps cannot bias subsequent decisions. Pure domain helpers `format_size`/`format_speed`/`format_duration` (base 1024, one decimal, rounding-aware unit promotion so values like `1024 * 1024 - 1` render as `1.0 MB` instead of `1024.0 KB`, zero leading duration components dropped) live under `domain/notification/format.rs` to keep the formatting policy testable without an adapter and without pulling in `humansize`. 
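The grouping rule described above (5 s window, threshold 3, one aggregated toast per burst) can be sketched as a standalone decision loop. This is an illustrative re-implementation with assumed names (`Grouper`, `Decision`), not the `NotificationGrouper` code shipped in this patch:

```rust
use std::collections::VecDeque;

// Minimal standalone sketch (assumed names) of the burst rule:
// >= `threshold` completions inside a sliding `window` collapse into
// one aggregated toast; later events in the same burst are suppressed.
#[derive(Debug, PartialEq)]
enum Decision {
    Single,
    Aggregated(usize),
    Suppress,
}

struct Grouper {
    window: u64,
    threshold: usize,
    ts: VecDeque<u64>,
    aggregated: bool,
}

impl Grouper {
    fn new(window: u64, threshold: usize) -> Self {
        Self { window, threshold, ts: VecDeque::new(), aggregated: false }
    }

    fn record(&mut self, now: u64) -> Decision {
        // Backwards clock jump: drop stale "future" entries.
        if self.ts.back().is_some_and(|&b| b > now) {
            self.ts.clear();
            self.aggregated = false;
        }
        // Prune entries that fell out of the sliding window.
        let cutoff = now.saturating_sub(self.window);
        while self.ts.front().is_some_and(|&f| f < cutoff) {
            self.ts.pop_front();
        }
        if self.ts.is_empty() {
            self.aggregated = false; // previous burst fully drained
        }
        self.ts.push_back(now);
        if self.aggregated {
            return Decision::Suppress;
        }
        if self.ts.len() >= self.threshold {
            self.aggregated = true;
            return Decision::Aggregated(self.ts.len());
        }
        Decision::Single
    }
}

fn main() {
    let mut g = Grouper::new(5, 3);
    assert_eq!(g.record(100), Decision::Single);
    assert_eq!(g.record(101), Decision::Single);
    assert_eq!(g.record(102), Decision::Aggregated(3)); // burst detected
    assert_eq!(g.record(103), Decision::Suppress);      // same burst
    assert_eq!(g.record(110), Decision::Single);        // window drained
    println!("ok");
}
```

The sketch mirrors the backwards-clock guard too, since that guard is part of the stated policy rather than an adapter detail.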
The bridge call site in `lib.rs` now threads `Arc<dyn ConfigStore>` and `Arc<dyn DownloadReadRepository>` so the gating + lookup share the same instances the IPC layer already mutates, no double-instantiation. Click-to-open and click-to-focus actions are blocked upstream — `tauri-plugin-notification` 2.3.3 desktop API consumes the `NotificationHandle` returned by `notify_rust` internally, so the click callback is unreachable; the limitation is documented in `notification_bridge.rs` and tracked for revisit when the plugin exposes `on_event` or when a direct `notify_rust` integration becomes worthwhile. (task 19, partial — click action deferred) - Animated tray icon while at least one download is active (PRD-v2 P0.18, task 18): the system tray now pulses an orange dot whenever the active-download set is non-empty and reverts to the default static icon as soon as the set goes back to zero. Backend ships a new `adapters/driven/tray/` sub-module split into a domain-pure `ActivityTracker` (a `HashSet<DownloadId>` consuming `DownloadStarted` / `Resumed` / `ResumedFromWait` to add and `Paused` / `Completed` / `CompletedPersisted` / `Failed` / `Cancelled` / `Removed` / `Waiting` to remove, returning `Activated` / `Deactivated` / `NoChange` transitions), a procedural `pulse_frames()` generator that renders eight 32×32 RGBA frames in pure Rust (triangular-wave radius pulse `MIN_RADIUS=3 → MAX_RADIUS=7 → MIN_RADIUS`, no binary PNG assets to commit, full unit-test coverage of shape/colors), an `IconSwapper` trait (`show_frame(usize)` / `show_static()`) so the loop is unit-testable without a Tauri runtime, an `AnimatorCore` state machine that wraps the tracker with a frame index and exposes `handle_event` (returning `StartAnimation` / `StopAnimation` / `NoOp`) and `tick`, and a `spawn_tray_animator` async wiring that subscribes to the `EventBus` (filtering out high-frequency `DownloadProgress` / segment events at the source so they never reach the channel), forwards relevant events through an mpsc to a `tokio::select!` loop that guards 
the interval arm with `if core.is_animating()` so a fully idle tray costs zero timer wake-ups, and calls `swapper.show_static()` once on shutdown. The Tauri-bound `TauriIconSwapper` owns the frames as `Image::new_owned` (so the underlying RGBA buffers outlive each `set_icon` call), guards on empty frame slices, and logs `set_icon` failures via `tracing::warn` instead of unwrapping. `setup_system_tray` now returns the `TrayIcon` handle so `lib.rs` can build the swapper and spawn the animator with the same `Arc`-shared dependencies the Tauri / notification bridges already use, with a `DEFAULT_FRAME_INTERVAL` of 200 ms. The implementation is platform-agnostic (no `cfg(target_os)` in the adapter) and relies only on the cross-platform Tauri 2 `TrayIcon::set_icon(Option<Image>)` API. (task 18) - Dynamic segment splitting (PRD-v2 P0.17, task 17): when a parallel segment finishes before its peers, the engine now re-evaluates the still-running segments, picks the slowest one whose remaining range exceeds `dynamic_split_min_remaining_mb` (default 4 MiB) and shrinks it in place — a fresh worker takes the upper half so the tail of the download accelerates instead of stalling on a single slow connection. 
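The "pick the slowest splittable segment" policy just described can be sketched as a pure selection function; `Slot` and its fields are illustration-only assumptions, not the engine's real types:

```rust
// Hypothetical sketch of the selection policy: among still-running
// segments, pick the slowest one whose remaining range exceeds the
// configured minimum.
struct Slot {
    remaining: u64,     // unfetched bytes left in this segment
    bytes_per_sec: u64, // recent throughput sample
    completed: bool,
}

fn pick_split_target(slots: &[Slot], min_remaining: u64) -> Option<usize> {
    slots
        .iter()
        .enumerate()
        .filter(|(_, s)| !s.completed && s.remaining > min_remaining)
        .min_by_key(|(_, s)| s.bytes_per_sec) // slowest candidate wins
        .map(|(i, _)| i)
}

fn main() {
    let min = 4 * 1024 * 1024; // 4 MiB, mirroring the default
    let slots = [
        Slot { remaining: 10 << 20, bytes_per_sec: 900, completed: false },
        Slot { remaining: 2 << 20, bytes_per_sec: 100, completed: false }, // too little left
        Slot { remaining: 8 << 20, bytes_per_sec: 300, completed: false }, // slowest eligible
    ];
    assert_eq!(pick_split_target(&slots, min), Some(2));
    println!("ok");
}
```

The strict `>` comparison matches the changelog wording ("remaining range exceeds `dynamic_split_min_remaining_mb`").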
Backend ships a domain-pure `Segment::split(at_byte, new_id)` validation method (state must be `Downloading`, split point strictly inside the unfetched range, caller-provided id must differ from the original — IDs are allocated by the engine's monotonic `next_segment_id` counter, never invented inside the domain), a new `DomainEvent::SegmentSplit { download_id, original_segment_id, new_segment_id, split_at }` forwarded as the `segment-split` Tauri event and logged in the per-download log store, two new `AppConfig` / `ConfigPatch` / `SettingsDto` fields `dynamic_split_enabled` (default `true`) and `dynamic_split_min_remaining_mb` (default `4`) wired through the toml config store, the Tauri IPC `SettingsDto`/`ConfigPatchDto` (so the frontend can both read and write them) and the new `application::services::engine_config_bridge` subscriber so live `settings_update` calls reconfigure already-running engines without a restart. `SegmentedDownloadEngine` stores `dynamic_split_enabled` / `dynamic_split_min_remaining_bytes` in `Arc`-shared state and exposes a `set_dynamic_split(enabled, min_remaining_mb)` setter consumed by the bridge. After a split, the engine updates the original slot's `initial_end` to `split_at` immediately on successful `end_tx.send`, so a subsequent `pick_split_target` evaluation cannot expand the worker's range past the shrunk boundary and `persist_split_meta` records the post-split topology rather than the stale one (closes coderabbit P1 + greptile P1 race). Each segment task now returns its `slot_idx` together with its `Result`; on success the engine flips a `completed: bool` flag on the slot — `pick_split_target` skips completed slots so they cannot be re-picked, and `persist_split_meta` keeps the entry with `completed: true` and a full-range `downloaded_bytes` so a crash right after a split never loses the record of byte ranges already on disk. 
`pick_split_target` also gates on a 500 ms / non-zero-progress sample window: a fresh split child cannot be picked again until it has actually produced a throughput sample, preventing cascading fragmentation of the newest range. The segment worker accepts the upper bound through a `tokio::sync::watch::Receiver<u64>` instead of a frozen `u64`, re-reads it before each chunk fetch and again after every successful network read so a mid-flight shrink clamps the next write to the new boundary; per-segment progress is exposed via an `Arc`-shared counter so the engine can pick the slowest candidate by throughput (`downloaded / elapsed`). After every split, the engine atomically rewrites `.vortex-meta` with the updated segment topology so resume after a crash mid-split sees a consistent state. (task 17, PR #111 review) - "Report broken plugin" action (PRD-v2 P0.16, task 16): plugins listed in *Plugins → Plugin Store* now expose a *Report broken plugin* item in their kebab menu. Clicking it opens the user's default browser at a pre-filled GitHub issue on the plugin's repository, with diagnostic metadata (plugin name + version, Vortex version, OS, optional URL under test, last 50 log lines) inlined into the issue body. Backend adds a `repository_url` field to `domain::model::plugin::PluginInfo` (parsed from the new `[plugin].repository` key in `plugin.toml`), a `domain::ports::driven::UrlOpener` port plus its platform-native `SystemUrlOpener` adapter (`xdg-open` / `open` / `cmd start`, `http(s)://` only by validation), the std-only `domain::model::plugin::build_report_broken_url` URL builder (RFC 3986 unreserved-set percent encoder, last 50 log lines, GitHub-only repository hosts, accepts `.git` suffix, rejects malformed URLs with `DomainError::ValidationError`), and a `ReportBrokenPluginCommand` handler that returns `AppError::Validation` when a manifest carries no `repository_url`. New Tauri IPC `plugin_report_broken(pluginName, logLines?, testedUrl?) 
→ string` returns the issue URL so the UI can fall back to clipboard copy if the launcher fails. i18n (en/fr): `plugins.action.reportBroken`, `plugins.toast.reportBrokenSuccess`, `plugins.toast.reportBrokenError`. (task 16) diff --git a/src-tauri/src/adapters/driven/notification/notification_bridge.rs b/src-tauri/src/adapters/driven/notification/notification_bridge.rs index dd9acae..4824619 100644 --- a/src-tauri/src/adapters/driven/notification/notification_bridge.rs +++ b/src-tauri/src/adapters/driven/notification/notification_bridge.rs @@ -1,41 +1,280 @@ +//! Bridges domain events to OS desktop notifications. +//! +//! Reads the current `AppConfig.notifications_enabled` flag on every +//! event so toggling the setting from the UI takes effect immediately, +//! enriches the body with file name and total size, and debounces +//! bursts of completions through `NotificationGrouper`. Average speed +//! and duration are deliberately omitted until the read model surfaces +//! a dedicated transfer-start metric (see `complete_body` for context). +//! +//! ## Click action limitation +//! +//! `tauri-plugin-notification` 2.3.3 desktop API delegates to +//! `notify_rust` and intentionally drops the `NotificationHandle` so +//! the closure returned by the OS for "user clicked the toast" is +//! unreachable. PRD §7.5 mentions click-to-open and click-to-focus as +//! desired UX; that requires either replacing the plugin with direct +//! `notify_rust` use (Linux/macOS) or upstream patch. Tracked: revisit +//! when tauri-plugin-notification exposes `on_event`. + +use std::sync::{Arc, Mutex}; +use std::time::{SystemTime, UNIX_EPOCH}; + use tauri::AppHandle; use tauri_plugin_notification::NotificationExt; use tracing::warn; +use crate::domain::error::DomainError; use crate::domain::event::DomainEvent; -use crate::domain::ports::driven::EventBus; - -/// Subscribes to the EventBus and sends desktop notifications for key events. 
-pub fn spawn_notification_bridge(app_handle: AppHandle, event_bus: &dyn EventBus) { - event_bus.subscribe(Box::new(move |event: &DomainEvent| match event { - DomainEvent::DownloadCompleted { id } => { - if let Err(e) = app_handle - .notification() - .builder() - .title("Download Complete") - .body(format!("Download #{} finished successfully", id.0)) - .show() - { - warn!("Failed to show completion notification: {e}"); - } +use crate::domain::model::views::DownloadDetailView; +use crate::domain::notification::{NotificationDecision, NotificationGrouper, format_size}; +use crate::domain::ports::driven::{ConfigStore, DownloadReadRepository, EventBus}; + +/// Cap error messages embedded in notification bodies. Long stack traces +/// or HTML bodies returned by hosters would otherwise overflow the OS +/// toast and may leak credentials embedded in the URL. +const MAX_ERROR_BODY_CHARS: usize = 200; + +/// Subscribe to the EventBus and surface key download lifecycle events +/// as desktop notifications. +/// +/// The bridge owns its grouper state through `Arc<Mutex<NotificationGrouper>>` because +/// `EventBus::subscribe` requires `Fn` (re-entrant per event); the +/// mutex is contended only on the notification path which is already +/// dominated by I/O latency. +pub fn spawn_notification_bridge( + app_handle: AppHandle, + event_bus: &dyn EventBus, + config_store: Arc<dyn ConfigStore>, + read_repo: Arc<dyn DownloadReadRepository>, +) { + let grouper = Arc::new(Mutex::new(NotificationGrouper::new())); + + event_bus.subscribe(Box::new(move |event: &DomainEvent| { + // Side-effects that must run regardless of the user's + // notification preference (logs, observability) live before + // the gate so disabling toasts never silences error reporting. + // We deliberately omit the raw error string from the structured + // tracing field: it can carry URLs, tokens or hoster response + // bodies and global tracing logs are persisted/aggregated. 
The + // per-download log bridge (`download_log_bridge`) already + // captures the full error against the download id for + // correlation. + if let DomainEvent::DownloadFailed { id, .. } = event { + tracing::error!(download_id = id.0, "download failed"); + } + + if !is_notifications_enabled(config_store.as_ref()) { + return; } - DomainEvent::DownloadFailed { id, error } => { - // Don't expose raw error details in desktop notifications - // (could contain paths, credentials). Full error is in app logs. - tracing::error!(download_id = id.0, error = %error, "download failed"); - if let Err(e) = app_handle - .notification() - .builder() - .title("Download Failed") - .body(format!( - "Download #{} failed (details available in app logs)", - id.0 - )) - .show() - { - warn!("Failed to show error notification: {e}"); + + match event { + DomainEvent::DownloadCompleted { id } => { + let now = epoch_secs(); + let decision = match grouper.lock() { + Ok(mut g) => g.record(now), + Err(poisoned) => { + warn!("notification grouper mutex poisoned; recovering"); + poisoned.into_inner().record(now) + } + }; + match decision { + NotificationDecision::Suppress => {} + NotificationDecision::ShowAggregated { count } => { + send(&app_handle, "Downloads complete", &aggregated_body(count)); + } + NotificationDecision::ShowSingle => { + let detail = lookup_detail(read_repo.as_ref(), id.0); + send( + &app_handle, + "Download complete", + &complete_body(id.0, detail.as_ref()), + ); + } + } + } + DomainEvent::DownloadFailed { id, error } => { + let detail = lookup_detail(read_repo.as_ref(), id.0); + send( + &app_handle, + "Download failed", + &failed_body(id.0, detail.as_ref(), error), + ); } + _ => {} } - _ => {} })); } + +fn is_notifications_enabled(config_store: &dyn ConfigStore) -> bool { + match config_store.get_config() { + Ok(c) => c.notifications_enabled, + Err(e) => { + warn!("failed to read notifications_enabled flag: {e}"); + // Default-allow on read error so a single corrupt config 
write + // does not silently disable user-visible feedback. + true + } + } +} + +fn lookup_detail(read_repo: &dyn DownloadReadRepository, id: u64) -> Option<DownloadDetailView> { + use crate::domain::model::download::DownloadId; + match read_repo.find_download_detail(DownloadId(id)) { + Ok(view) => view, + Err(DomainError::NotFound(_)) => None, + Err(e) => { + warn!(download_id = id, error = %e, "notification: detail lookup failed"); + None + } + } +} + +fn complete_body(id: u64, detail: Option<&DownloadDetailView>) -> String { + let Some(d) = detail else { + return format!("Download #{id} finished successfully"); + }; + let mut parts: Vec<String> = vec![d.file_name.clone()]; + if let Some(total) = d.total_bytes { + parts.push(format_size(total)); + } + // Speed + duration intentionally omitted: `DownloadDetailView` + // exposes `created_at` (queue admission) but no transfer-start + // marker, so any duration computed here would inflate by the time + // the download spent queued or paused. Re-introduce when the read + // model surfaces an active-transfer metric (e.g., via the history + // entry produced on completion). + parts.join(" · ") +} + +fn failed_body(id: u64, detail: Option<&DownloadDetailView>, error: &str) -> String { + let name = detail + .map(|d| d.file_name.clone()) + .unwrap_or_else(|| format!("#{id}")); + let truncated = truncate_error(error); + format!("{name} · Error: {truncated}") +} + +fn aggregated_body(count: usize) -> String { + format!("{count} downloads completed") +} + +fn truncate_error(error: &str) -> String { + if error.chars().count() <= MAX_ERROR_BODY_CHARS { + return error.to_string(); + } + // Reserve one slot for the ellipsis so the rendered string respects + // the configured cap exactly. `MAX_ERROR_BODY_CHARS` is a const ≥ 2 + // so the saturating subtraction is here for defence-in-depth. 
+ let payload_chars = MAX_ERROR_BODY_CHARS.saturating_sub(1); + let mut out: String = error.chars().take(payload_chars).collect(); + out.push('…'); + out +} + +fn send(app_handle: &AppHandle, title: &str, body: &str) { + if let Err(e) = app_handle + .notification() + .builder() + .title(title) + .body(body) + .show() + { + warn!("failed to show notification '{title}': {e}"); + } +} + +fn epoch_secs() -> u64 { + SystemTime::now() + .duration_since(UNIX_EPOCH) + .map(|d| d.as_secs()) + .unwrap_or(0) +} + +#[cfg(test)] +mod tests { + use super::*; + use crate::domain::model::download::{DownloadId, DownloadState}; + use crate::domain::model::views::DownloadDetailView; + + fn detail(file_name: &str, total_bytes: Option<u64>, created_at: u64) -> DownloadDetailView { + DownloadDetailView { + id: DownloadId(7), + file_name: file_name.to_string(), + url: "https://example.com/file".into(), + source_hostname: "example.com".into(), + state: DownloadState::Completed, + progress_percent: 100.0, + speed_bytes_per_sec: 0, + downloaded_bytes: total_bytes.unwrap_or(0), + total_bytes, + eta_seconds: None, + segments: vec![], + checksum_expected: None, + checksum_computed: None, + checksum_algorithm: None, + destination_path: "/tmp/file".into(), + module_name: None, + account_name: None, + resume_supported: true, + retry_count: 0, + max_retries: 5, + created_at, + updated_at: created_at + 60, + } + } + + #[test] + fn test_complete_body_falls_back_when_detail_missing() { + assert_eq!( + complete_body(42, None), + "Download #42 finished successfully" + ); + } + + #[test] + fn test_complete_body_combines_filename_and_size() { + let d = detail("video.mp4", Some(10 * 1024 * 1024), 1_000); + assert_eq!(complete_body(7, Some(&d)), "video.mp4 · 10.0 MB"); + } + + #[test] + fn test_complete_body_renders_filename_only_when_size_unknown() { + let d = detail("stream.ts", None, 100); + assert_eq!(complete_body(1, Some(&d)), "stream.ts"); + } + + #[test] + fn 
test_failed_body_includes_filename_and_error() { + let d = detail("archive.zip", Some(1024), 0); + assert_eq!( + failed_body(9, Some(&d), "connection reset"), + "archive.zip · Error: connection reset" + ); + } + + #[test] + fn test_failed_body_uses_id_when_detail_missing() { + assert_eq!(failed_body(99, None, "timeout"), "#99 · Error: timeout"); + } + + #[test] + fn test_truncate_error_keeps_short_strings_verbatim() { + let short = "x".repeat(MAX_ERROR_BODY_CHARS); + assert_eq!(truncate_error(&short), short); + } + + #[test] + fn test_truncate_error_caps_at_max_chars_including_ellipsis() { + let long = "x".repeat(MAX_ERROR_BODY_CHARS * 2); + let truncated = truncate_error(&long); + assert_eq!(truncated.chars().count(), MAX_ERROR_BODY_CHARS); + assert!(truncated.ends_with('…')); + } + + #[test] + fn test_aggregated_body_format() { + assert_eq!(aggregated_body(3), "3 downloads completed"); + assert_eq!(aggregated_body(7), "7 downloads completed"); + } +} diff --git a/src-tauri/src/domain/mod.rs b/src-tauri/src/domain/mod.rs index af75978..c52240a 100644 --- a/src-tauri/src/domain/mod.rs +++ b/src-tauri/src/domain/mod.rs @@ -6,6 +6,7 @@ pub mod error; pub mod event; pub mod model; +pub mod notification; pub mod ports; pub use error::DomainError; diff --git a/src-tauri/src/domain/notification/format.rs b/src-tauri/src/domain/notification/format.rs new file mode 100644 index 0000000..d0f9cca --- /dev/null +++ b/src-tauri/src/domain/notification/format.rs @@ -0,0 +1,160 @@ +//! Pure helpers for rendering download metadata in user-facing strings. +//! +//! Base 1024 for bytes (industry standard for download managers), one +//! decimal place from KiB upwards. Duration drops zero leading components +//! ("45s", "1m 5s", "2h 30m"). Domain-pure: std-only, no allocator-heavy +//! crates such as `humansize` (avoids dependency creep for trivial logic). + +/// Format a byte count as a short human-readable string. 
+/// +/// Uses base 1024 (KB = 1024 B) and one decimal once the unit is at +/// least KB. Bytes (`B`) stay integer. Always `≥ 0` (input is `u64`). +/// +/// Rounding-aware unit promotion: a value like `1024 * 1024 - 1` would +/// otherwise render as `1024.0 KB` (the rendered form rounds up across +/// the unit boundary); after picking `value` we re-check the rounded +/// representation and promote one more step if it crosses `1024.0`. +pub fn format_size(bytes: u64) -> String { + const UNITS: [&str; 5] = ["B", "KB", "MB", "GB", "TB"]; + if bytes < 1024 { + return format!("{bytes} B"); + } + let mut value = bytes as f64; + let mut idx = 0; + while value >= 1024.0 && idx < UNITS.len() - 1 { + value /= 1024.0; + idx += 1; + } + // After rounding to one decimal, the displayed value can land on + // 1024.0; promote to the next unit so we never render "1024.0 KB". + let rounded = (value * 10.0).round() / 10.0; + if rounded >= 1024.0 && idx < UNITS.len() - 1 { + value /= 1024.0; + idx += 1; + } + format!("{value:.1} {}", UNITS[idx]) +} + +/// Format a transfer rate (bytes per second) by appending "/s". +pub fn format_speed(bytes_per_sec: u64) -> String { + format!("{}/s", format_size(bytes_per_sec)) +} + +/// Format a duration as compact "Hh Mm Ss". +/// +/// Leading zero components are dropped: `45` → "45s", `60` → "1m", +/// `125` → "2m 5s", `3700` → "1h 1m 40s". Pure, std-only. 
+pub fn format_duration(secs: u64) -> String { + if secs == 0 { + return "0s".to_string(); + } + let hours = secs / 3600; + let minutes = (secs % 3600) / 60; + let seconds = secs % 60; + let mut parts: Vec = Vec::with_capacity(3); + if hours > 0 { + parts.push(format!("{hours}h")); + } + if minutes > 0 { + parts.push(format!("{minutes}m")); + } + if seconds > 0 { + parts.push(format!("{seconds}s")); + } + parts.join(" ") +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_format_size_zero_bytes_renders_as_b_unit() { + assert_eq!(format_size(0), "0 B"); + } + + #[test] + fn test_format_size_below_kib_keeps_byte_unit() { + assert_eq!(format_size(1), "1 B"); + assert_eq!(format_size(512), "512 B"); + assert_eq!(format_size(1023), "1023 B"); + } + + #[test] + fn test_format_size_exactly_one_kib_uses_kb_unit() { + assert_eq!(format_size(1024), "1.0 KB"); + } + + #[test] + fn test_format_size_megabytes_uses_one_decimal() { + assert_eq!(format_size(1024 * 1024), "1.0 MB"); + assert_eq!(format_size(2_500_000), "2.4 MB"); + } + + #[test] + fn test_format_size_gigabytes_threshold() { + assert_eq!(format_size(1_073_741_824), "1.0 GB"); + } + + #[test] + fn test_format_size_terabytes_caps_at_tb() { + // 5 TiB + let bytes = 5_u64 * 1024_u64.pow(4); + assert_eq!(format_size(bytes), "5.0 TB"); + } + + #[test] + fn test_format_size_huge_value_does_not_overflow_unit_index() { + // Many petabytes — must stay capped at TB, not panic. + assert!(format_size(u64::MAX).ends_with(" TB")); + } + + #[test] + fn test_format_size_promotes_unit_when_rounding_crosses_boundary() { + // 1 MiB - 1 byte rounds to "1024.0 KB"; should display as "1.0 MB". + assert_eq!(format_size(1024 * 1024 - 1), "1.0 MB"); + // 1 GiB - 1 byte rounds across the MB→GB boundary. 
+ assert_eq!(format_size(1024_u64.pow(3) - 1), "1.0 GB"); + } + + #[test] + fn test_format_speed_appends_per_second_suffix() { + assert_eq!(format_speed(0), "0 B/s"); + assert_eq!(format_speed(1024), "1.0 KB/s"); + assert_eq!(format_speed(1_500_000), "1.4 MB/s"); + } + + #[test] + fn test_format_duration_zero_returns_zero_seconds() { + assert_eq!(format_duration(0), "0s"); + } + + #[test] + fn test_format_duration_seconds_only() { + assert_eq!(format_duration(1), "1s"); + assert_eq!(format_duration(45), "45s"); + assert_eq!(format_duration(59), "59s"); + } + + #[test] + fn test_format_duration_drops_zero_seconds() { + assert_eq!(format_duration(60), "1m"); + assert_eq!(format_duration(3600), "1h"); + } + + #[test] + fn test_format_duration_combines_minutes_and_seconds() { + assert_eq!(format_duration(125), "2m 5s"); + } + + #[test] + fn test_format_duration_combines_hours_minutes_seconds() { + assert_eq!(format_duration(3700), "1h 1m 40s"); + } + + #[test] + fn test_format_duration_skips_middle_zero_minute_component() { + // 1h plus 5 seconds — minutes part is 0 and must be omitted. + assert_eq!(format_duration(3605), "1h 5s"); + } +} diff --git a/src-tauri/src/domain/notification/grouper.rs b/src-tauri/src/domain/notification/grouper.rs new file mode 100644 index 0000000..296f73c --- /dev/null +++ b/src-tauri/src/domain/notification/grouper.rs @@ -0,0 +1,251 @@ +//! Burst aggregation for desktop completion notifications. +//! +//! Rule (PRD §7.5 / task 19): if **≥3** completion notifications fire within +//! a **5 s** sliding window, the third one becomes an aggregated +//! "N downloads completed" summary and any further completions in the +//! same burst are suppressed until the window drains. +//! +//! Pure: works off externally supplied epoch seconds so tests are +//! deterministic and the bridge can be driven by either real time or a +//! fake clock. + +use std::collections::VecDeque; + +/// Sliding window (seconds) used to detect bursts of completions. 
+pub const GROUPING_WINDOW_SECS: u64 = 5; + +/// Number of completions in the window that triggers aggregation. +pub const GROUPING_THRESHOLD: usize = 3; + +/// What the bridge should do with a completion event. +#[derive(Debug, Clone, Copy, PartialEq, Eq)] +pub enum NotificationDecision { + /// Show a normal per-download notification. + ShowSingle, + /// Show an aggregated "N downloads completed" notification. + /// `count` is the total number of completions in the current burst, + /// including the one that just fired. + ShowAggregated { count: usize }, + /// Drop this completion silently — the burst is already represented + /// by an aggregated notification within the same window. + Suppress, +} + +/// Stateful debouncer for completion notifications. +/// +/// Keeps a fixed-capacity history of the recent completion timestamps +/// and decides per call whether to render single, aggregated, or +/// nothing. Reset is implicit: once `now` advances past the oldest +/// timestamp + `window`, that entry drops from the queue and the burst +/// effectively ends. +#[derive(Debug)] +pub struct NotificationGrouper { + window_secs: u64, + threshold: usize, + /// Timestamps of completions still relevant to the current window. + /// Kept ordered ascending; pruned on every `record`. + timestamps: VecDeque, + /// Set to `true` after we emit an aggregated notification for the + /// current burst so subsequent completions in the same window do + /// not trigger another aggregated notification (would spam the OS). + burst_aggregated: bool, +} + +impl NotificationGrouper { + /// Create a grouper with the PRD defaults (5 s window, threshold 3). + pub fn new() -> Self { + Self::with_params(GROUPING_WINDOW_SECS, GROUPING_THRESHOLD) + } + + /// Create a grouper with custom window and threshold (used by tests). 
+ pub fn with_params(window_secs: u64, threshold: usize) -> Self { + Self { + window_secs, + threshold, + timestamps: VecDeque::new(), + burst_aggregated: false, + } + } + + /// Record a completion that occurred at `now_secs` (Unix epoch + /// seconds) and return what the bridge should do. + /// + /// Pure with respect to the supplied clock — the same input + /// sequence yields the same decisions. + pub fn record(&mut self, now_secs: u64) -> NotificationDecision { + // Wall-clock backwards jump (NTP step, manual time change): + // entries timestamped in the "future" would never be pruned by + // the new clock and would silently bias every burst decision. + // Drop them and start a fresh window from `now_secs`. + if self.timestamps.back().is_some_and(|&back| back > now_secs) { + self.timestamps.clear(); + self.burst_aggregated = false; + } + self.prune(now_secs); + // Window completely drained → previous burst ends, reset flag. + if self.timestamps.is_empty() { + self.burst_aggregated = false; + } + self.timestamps.push_back(now_secs); + + if self.burst_aggregated { + return NotificationDecision::Suppress; + } + if self.timestamps.len() >= self.threshold { + self.burst_aggregated = true; + return NotificationDecision::ShowAggregated { + count: self.timestamps.len(), + }; + } + NotificationDecision::ShowSingle + } + + /// Drop entries strictly older than `now - window`. `saturating_sub` + guards against clock skew where `now < window_secs`. Entries at + the exact window edge stay, so an event fired exactly `window` + seconds after the first still counts toward the same burst. 
+ fn prune(&mut self, now_secs: u64) { + let cutoff = now_secs.saturating_sub(self.window_secs); + while let Some(&front) = self.timestamps.front() { + if front < cutoff { + self.timestamps.pop_front(); + } else { + break; + } + } + } +} + +impl Default for NotificationGrouper { + fn default() -> Self { + Self::new() + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_single_event_returns_show_single() { + let mut g = NotificationGrouper::new(); + assert_eq!(g.record(100), NotificationDecision::ShowSingle); + } + + #[test] + fn test_two_events_within_window_both_show_single() { + let mut g = NotificationGrouper::new(); + assert_eq!(g.record(100), NotificationDecision::ShowSingle); + assert_eq!(g.record(102), NotificationDecision::ShowSingle); + } + + #[test] + fn test_third_event_within_window_returns_aggregated_with_count_three() { + let mut g = NotificationGrouper::new(); + g.record(100); + g.record(101); + assert_eq!( + g.record(102), + NotificationDecision::ShowAggregated { count: 3 } + ); + } + + #[test] + fn test_fourth_and_subsequent_in_burst_are_suppressed() { + let mut g = NotificationGrouper::new(); + g.record(100); + g.record(101); + g.record(102); // aggregated + assert_eq!(g.record(103), NotificationDecision::Suppress); + assert_eq!(g.record(104), NotificationDecision::Suppress); + } + + #[test] + fn test_burst_resets_after_window_drains() { + let mut g = NotificationGrouper::new(); + g.record(100); + g.record(101); + g.record(102); // aggregated + // 6 s later — window of 5 s elapsed (cutoff = 108 - 5 = 103), + // entries at 100/101/102 are all strictly below the cutoff and pruned. 
+ assert_eq!(g.record(108), NotificationDecision::ShowSingle); + } + + #[test] + fn test_two_consecutive_bursts_each_get_one_aggregated() { + let mut g = NotificationGrouper::new(); + // Burst 1 + g.record(100); + g.record(101); + let agg1 = g.record(102); + // Window drained + // Burst 2 + let s1 = g.record(200); + let s2 = g.record(201); + let agg2 = g.record(202); + assert_eq!(agg1, NotificationDecision::ShowAggregated { count: 3 }); + assert_eq!(s1, NotificationDecision::ShowSingle); + assert_eq!(s2, NotificationDecision::ShowSingle); + assert_eq!(agg2, NotificationDecision::ShowAggregated { count: 3 }); + } + + #[test] + fn test_count_grows_when_multiple_events_share_window_before_aggregation() { + // With a higher threshold, the aggregated count must reflect + // the actual number of pending entries, not a fixed value. + let mut g = NotificationGrouper::with_params(10, 4); + g.record(0); + g.record(1); + g.record(2); + assert_eq!( + g.record(3), + NotificationDecision::ShowAggregated { count: 4 } + ); + } + + #[test] + fn test_event_just_outside_window_does_not_count_toward_burst() { + let mut g = NotificationGrouper::new(); + g.record(100); + g.record(101); + // 6 s after first → first event has dropped (cutoff = 106-5 = 101, + // the entry at 100 is strictly below the cutoff), only the entry at 101 remains. The + // new entry brings count to 2 → still ShowSingle. + assert_eq!(g.record(106), NotificationDecision::ShowSingle); + } + + #[test] + fn test_clock_smaller_than_window_does_not_panic() { + let mut g = NotificationGrouper::new(); + // saturating_sub guards against `now < window`. 
+ assert_eq!(g.record(0), NotificationDecision::ShowSingle); + assert_eq!(g.record(1), NotificationDecision::ShowSingle); + } + + #[test] + fn test_default_constructor_matches_prd_constants() { + let g = NotificationGrouper::new(); + assert_eq!(g.window_secs, GROUPING_WINDOW_SECS); + assert_eq!(g.threshold, GROUPING_THRESHOLD); + } + + #[test] + fn test_backwards_clock_jump_resets_window_and_returns_show_single() { + let mut g = NotificationGrouper::new(); + // Build up a burst at t=1000 — third event would be aggregated. + g.record(1000); + g.record(1001); + let agg = g.record(1002); + assert_eq!(agg, NotificationDecision::ShowAggregated { count: 3 }); + // Clock steps backwards (NTP correction). Stale "future" entries + // must be dropped so the next event starts a fresh window. + let after_jump = g.record(500); + assert_eq!(after_jump, NotificationDecision::ShowSingle); + // And the burst flag must reset so a new burst can aggregate. + g.record(501); + assert_eq!( + g.record(502), + NotificationDecision::ShowAggregated { count: 3 } + ); + } +} diff --git a/src-tauri/src/domain/notification/mod.rs b/src-tauri/src/domain/notification/mod.rs new file mode 100644 index 0000000..177ad62 --- /dev/null +++ b/src-tauri/src/domain/notification/mod.rs @@ -0,0 +1,17 @@ +//! Desktop notification domain helpers. +//! +//! Pure formatting + aggregation logic shared by adapters that surface +//! download lifecycle events to the user (`tauri-plugin-notification`, +//! REST WebSocket toast, future tray balloons). +//! +//! Lives in the domain layer because the rules ("aggregate ≥3 within 5s", +//! "format bytes with 1024 base") are policy decisions, not adapter +//! implementation details. 
+ +pub mod format; +pub mod grouper; + +pub use format::{format_duration, format_size, format_speed}; +pub use grouper::{ + GROUPING_THRESHOLD, GROUPING_WINDOW_SECS, NotificationDecision, NotificationGrouper, +}; diff --git a/src-tauri/src/lib.rs b/src-tauri/src/lib.rs index a1c03b9..e37e92d 100644 --- a/src-tauri/src/lib.rs +++ b/src-tauri/src/lib.rs @@ -332,7 +332,7 @@ pub fn run() { file_storage, http_client, plugin_loader.clone(), - config_store, + config_store.clone(), credential_store, clipboard_observer, archive_extractor.clone(), @@ -347,7 +347,7 @@ pub fn run() { let query_bus = Arc::new( QueryBus::new( - download_read_repo, + download_read_repo.clone(), history_repo, stats_repo, plugin_read_repo, @@ -396,7 +396,12 @@ pub fn run() { // ── Event bridges (domain events → frontend + desktop) ── spawn_tauri_event_bridge(app_handle.clone(), event_bus.as_ref()); - spawn_notification_bridge(app_handle, event_bus.as_ref()); + spawn_notification_bridge( + app_handle, + event_bus.as_ref(), + config_store.clone(), + download_read_repo.clone(), + ); spawn_download_log_bridge(event_bus.as_ref(), download_log_store); spawn_sqlite_progress_bridge(event_bus.as_ref(), db);
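The mid-flight bound shrink described in the dynamic-splitting entry (task 17) reduces to re-reading a shared upper bound before writing each chunk. A std-only sketch of that clamping idea with hypothetical names, using an atomic where the engine uses a `tokio::sync::watch` channel:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicU64, Ordering};

// Std-only sketch (hypothetical names): the worker re-reads its upper
// bound before every chunk so a mid-flight shrink clamps the next
// write to the new boundary. The engine itself uses a
// `tokio::sync::watch::Receiver<u64>` rather than an atomic.
fn write_chunks(end: &AtomicU64, start: u64, chunk: u64) -> u64 {
    let mut pos = start;
    loop {
        let bound = end.load(Ordering::Acquire); // re-read per chunk
        if pos >= bound {
            return pos; // segment boundary reached
        }
        let n = chunk.min(bound - pos); // clamp to the (possibly shrunk) bound
        pos += n; // stand-in for "write n bytes to disk"
    }
}

fn main() {
    let end = Arc::new(AtomicU64::new(100));
    // A split shrinks this segment's upper bound from 100 to 40;
    // the worker stops exactly at the new boundary.
    end.store(40, Ordering::Release);
    assert_eq!(write_chunks(&end, 0, 16), 40);
    println!("ok");
}
```

A watch channel adds what the atomic lacks: the worker can also be woken when the bound changes, instead of only noticing on its next read.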