CHANGELOG.md (1 addition, 0 deletions)

### Added

- Enriched desktop notifications (PRD-v2 P0.19, task 19):
  - The `tauri-plugin-notification` bridge now reads the user's `notifications_enabled` flag on every event, so the Settings toggle is respected immediately.
  - The `DownloadCompleted` body is enriched with `(unknown) · {size}` derived from the read repository's `DownloadDetailView`, and `DownloadFailed` surfaces the failure reason as `(unknown) · Error: {error_message}`, capped at 200 chars (including the ellipsis) to fit the OS toast and avoid leaking long URL/credential payloads.
  - Average speed and total duration are deliberately omitted: the read model only exposes `created_at` (queue admission), so any duration computed at notification time would be inflated by the time the download spent queued or paused. The bridge will reintroduce both fields once the read model surfaces a transfer-start metric (e.g. via the `HistoryEntry` produced on completion).
  - Bursts of completions are debounced through a new `domain::notification::NotificationGrouper`: a 5 s sliding window with threshold 3 emits a single aggregated "N downloads completed" notification on the third event and silently suppresses any further completions in the same window, so the OS toast stack stays clean. The grouper also detects backwards wall-clock jumps (NTP correction, manual time change) and clears its window so stale "future" timestamps cannot bias subsequent decisions.
  - Pure domain helpers `format_size`/`format_speed`/`format_duration` (base 1024, one decimal, rounding-aware unit promotion so values like `1024 * 1024 - 1` render as `1.0 MB` instead of `1024.0 KB`, leading zero components dropped) live under `domain/notification/format.rs` to keep the formatting policy testable without an adapter and without pulling in `humansize`.
  - The bridge call site in `lib.rs` now threads `Arc<dyn ConfigStore>` and `Arc<dyn DownloadReadRepository>` so the gating and lookup share the same instances the IPC layer already mutates, with no double instantiation.
  - Click-to-open and click-to-focus actions are blocked upstream: the `tauri-plugin-notification` 2.3.3 desktop API consumes the `NotificationHandle` returned by `notify_rust` internally, so the click callback is unreachable. The limitation is documented in `notification_bridge.rs` and tracked for revisit when the plugin exposes `on_event` or when a direct `notify_rust` integration becomes worthwhile. (task 19, partial: click action deferred)
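The rounding-aware unit promotion and the sliding-window debounce described above can be sketched as pure, timestamp-injected helpers. This is a minimal sketch, not the project's code: the real `format_size` and `NotificationGrouper` APIs in `domain/notification/` may differ in names, signatures, and edge-case policy.

```rust
use std::time::Duration;

/// Rounding-aware human size formatter (base 1024, one decimal).
fn format_size(bytes: u64) -> String {
    const UNITS: [&str; 5] = ["B", "KB", "MB", "GB", "TB"];
    let mut value = bytes as f64;
    let mut unit = 0;
    while unit < UNITS.len() - 1 && value >= 1024.0 {
        value /= 1024.0;
        unit += 1;
    }
    // `1024 * 1024 - 1` is ~1023.999 KB and would print as "1024.0 KB";
    // promote it to the next unit first so it renders as "1.0 MB".
    if unit < UNITS.len() - 1 && (value * 10.0).round() / 10.0 >= 1024.0 {
        value /= 1024.0;
        unit += 1;
    }
    if unit == 0 { format!("{bytes} B") } else { format!("{value:.1} {}", UNITS[unit]) }
}

/// What the bridge should do with one completion event.
#[derive(Debug, PartialEq)]
enum Decision { EmitSingle, EmitAggregate(usize), Suppress }

/// Sliding-window debouncer sketch (5 s window, threshold 3). Timestamps
/// are passed in explicitly so the logic stays pure and testable.
struct NotificationGrouper { events: Vec<Duration>, aggregated: bool }

impl NotificationGrouper {
    const WINDOW: Duration = Duration::from_secs(5);
    const THRESHOLD: usize = 3;

    fn new() -> Self { Self { events: Vec::new(), aggregated: false } }

    fn on_completed(&mut self, now: Duration) -> Decision {
        // Backwards wall-clock jump (NTP fix, manual change): clear the
        // window so stale "future" timestamps cannot bias later decisions.
        if self.events.last().is_some_and(|&last| now < last) {
            self.events.clear();
            self.aggregated = false;
        }
        // Drop completions that slid out of the 5 s window.
        self.events.retain(|&t| now - t < Self::WINDOW);
        if self.events.is_empty() {
            self.aggregated = false;
        }
        self.events.push(now);
        if self.aggregated {
            Decision::Suppress
        } else if self.events.len() >= Self::THRESHOLD {
            self.aggregated = true;
            Decision::EmitAggregate(self.events.len())
        } else {
            Decision::EmitSingle
        }
    }
}
```

The first two completions in a window still produce individual toasts; the third collapses into one aggregate, and everything after that is suppressed until the window drains.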
- Animated tray icon while at least one download is active (PRD-v2 P0.18, task 18): the system tray now pulses an orange dot whenever the active-download set is non-empty and reverts to the default static icon as soon as the set goes back to zero. The backend ships a new `adapters/driven/tray/` sub-module split into:
  - a domain-pure `ActivityTracker`: a `HashSet<DownloadId>` consuming `DownloadStarted` / `Resumed` / `ResumedFromWait` to add and `Paused` / `Completed` / `CompletedPersisted` / `Failed` / `Cancelled` / `Removed` / `Waiting` to remove, returning `Activated` / `Deactivated` / `NoChange` transitions;
  - a procedural `pulse_frames()` generator that renders eight 32×32 RGBA frames in pure Rust (triangular-wave radius pulse `MIN_RADIUS=3 → MAX_RADIUS=7 → MIN_RADIUS`, no binary PNG assets to commit, full unit-test coverage of shape/colors);
  - an `IconSwapper` trait (`show_frame(usize)` / `show_static()`) so the animation loop is unit-testable without a Tauri runtime;
  - an `AnimatorCore` state machine that wraps the tracker with a frame index and exposes `handle_event` (returning `StartAnimation` / `StopAnimation` / `NoOp`) and `tick`;
  - a `spawn_tray_animator` async wiring that subscribes to the `EventBus` (filtering out high-frequency `DownloadProgress` / segment events at the source so they never reach the channel), forwards relevant events through an mpsc channel to a `tokio::select!` loop that idles the interval arm with `if core.is_animating()` so a fully idle tray costs zero timer wake-ups, and calls `swapper.show_static()` once on shutdown.

  The Tauri-bound `TauriIconSwapper` owns the frames as `Image::new_owned` (so the underlying RGBA buffers outlive each `set_icon` call), guards against empty frame slices, and logs `set_icon` failures via `tracing::warn` instead of unwrapping. `setup_system_tray` now returns the `TrayIcon` handle so `lib.rs` can build the swapper and spawn the animator with the same `Arc<dyn EventBus>` the Tauri and notification bridges already share, with a `DEFAULT_FRAME_INTERVAL` of 200 ms. The implementation is platform-agnostic (no `cfg(target_os)` in the adapter) and relies only on the cross-platform Tauri 2 `TrayIcon::set_icon(Option<Image>)` API. (task 18)
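The `Activated` / `Deactivated` / `NoChange` transitions and the triangular-wave pulse described above can be sketched as pure logic. This is a hedged sketch: the real `ActivityTracker` consumes typed domain events rather than raw add/remove calls, and the `u64` id and function names here are assumptions.

```rust
use std::collections::HashSet;

/// Transition reported to the animator after each consumed event.
#[derive(Debug, PartialEq)]
enum Transition { Activated, Deactivated, NoChange }

/// Domain-pure activity tracker over an assumed `u64` download id.
struct ActivityTracker { active: HashSet<u64> }

impl ActivityTracker {
    fn new() -> Self { Self { active: HashSet::new() } }

    /// Adding events (`DownloadStarted` / `Resumed` / `ResumedFromWait`).
    fn add(&mut self, id: u64) -> Transition {
        let was_empty = self.active.is_empty();
        self.active.insert(id);
        if was_empty { Transition::Activated } else { Transition::NoChange }
    }

    /// Removing events (`Paused` / `Completed` / `Failed` / `Cancelled` / …).
    fn remove(&mut self, id: u64) -> Transition {
        if self.active.remove(&id) && self.active.is_empty() {
            Transition::Deactivated
        } else {
            Transition::NoChange
        }
    }
}

/// Dot radius for frame `frame` of the eight-frame pulse: a triangular
/// wave 3 → 7 → 3 with period 2 * (MAX_RADIUS - MIN_RADIUS) = 8 frames.
fn pulse_radius(frame: usize) -> u32 {
    const MIN_RADIUS: u32 = 3;
    const MAX_RADIUS: u32 = 7;
    let span = MAX_RADIUS - MIN_RADIUS;
    let phase = (frame as u32) % (2 * span);
    if phase <= span { MIN_RADIUS + phase } else { MIN_RADIUS + 2 * span - phase }
}
```

Only the empty-to-non-empty and non-empty-to-empty edges trigger animation changes, which is what lets the select! loop skip timer wake-ups while the tray is idle.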
- Dynamic segment splitting (PRD-v2 P0.17, task 17): when a parallel segment finishes before its peers, the engine now re-evaluates the still-running segments, picks the slowest one whose remaining range exceeds `dynamic_split_min_remaining_mb` (default 4 MiB) and shrinks it in place; a fresh worker takes the upper half, so the tail of the download accelerates instead of stalling on a single slow connection. The backend ships:
  - a domain-pure `Segment::split(at_byte, new_id)` validation method (state must be `Downloading`, split point strictly inside the unfetched range, caller-provided id must differ from the original; IDs are allocated by the engine's monotonic `next_segment_id` counter, never invented inside the domain);
  - a new `DomainEvent::SegmentSplit { download_id, original_segment_id, new_segment_id, split_at }`, forwarded as the `segment-split` Tauri event and logged in the per-download log store;
  - two new `AppConfig` / `ConfigPatch` / `SettingsDto` fields, `dynamic_split_enabled` (default `true`) and `dynamic_split_min_remaining_mb` (default `4`), wired through the toml config store and the Tauri IPC `SettingsDto`/`ConfigPatchDto` so the frontend can both read and write them;
  - the new `application::services::engine_config_bridge` subscriber, so live `settings_update` calls reconfigure already-running engines without a restart.

  `SegmentedDownloadEngine` stores `dynamic_split_enabled` / `dynamic_split_min_remaining_bytes` in `Arc<AtomicBool>` / `Arc<AtomicU64>` and exposes a `set_dynamic_split(enabled, min_remaining_mb)` setter consumed by the bridge. After a split, the engine updates the original slot's `initial_end` to `split_at` immediately on a successful `end_tx.send`, so a subsequent `pick_split_target` evaluation cannot expand the worker's range past the shrunk boundary and `persist_split_meta` records the post-split topology rather than the stale one (closes the coderabbit P1 + greptile P1 race).

  Each segment task now returns `(slot_idx, Result<u64>)`; on success the engine flips a `completed: bool` flag on the slot. `pick_split_target` skips completed slots so they cannot be re-picked, and `persist_split_meta` keeps the entry with `completed: true` and a full-range `downloaded_bytes`, so a crash right after a split never loses the record of byte ranges already on disk. `pick_split_target` also gates on a 500 ms / non-zero-progress sample window: a fresh split child cannot be picked again until it has actually produced a throughput sample, preventing cascading fragmentation of the newest range. The segment worker accepts its upper bound through a `tokio::sync::watch::Receiver<u64>` instead of a frozen `u64`, re-reading it before each chunk fetch and again after every successful network read, so a mid-flight shrink clamps the next write to the new boundary; per-segment progress is exposed via an `Arc<AtomicU64>` so the engine can pick the slowest candidate by throughput (`downloaded / elapsed`). After every split, the engine atomically rewrites `.vortex-meta` with the updated segment topology so a resume after a crash mid-split sees a consistent state. (task 17, PR #111 review)
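The `Segment::split(at_byte, new_id)` validation rules above can be illustrated with a minimal sketch; the field names, id type, and error enum here are assumptions, not the project's actual domain model.

```rust
/// Lifecycle states relevant to splitting (simplified).
#[derive(Debug, PartialEq)]
enum SegmentState { Pending, Downloading }

#[derive(Debug, PartialEq)]
enum SplitError { NotDownloading, DuplicateId, OutOfRange }

/// Byte-range segment sketch with assumed field names.
#[derive(Debug, PartialEq)]
struct Segment {
    id: u32,
    state: SegmentState,
    start: u64,      // first byte owned by this segment
    end: u64,        // exclusive upper bound
    downloaded: u64, // bytes already written from `start`
}

impl Segment {
    /// Shrink this segment at `at_byte` and return the new upper-half
    /// segment. `new_id` comes from the engine's monotonic counter;
    /// the domain only validates it, it never invents IDs.
    fn split(&mut self, at_byte: u64, new_id: u32) -> Result<Segment, SplitError> {
        if self.state != SegmentState::Downloading {
            return Err(SplitError::NotDownloading);
        }
        if new_id == self.id {
            return Err(SplitError::DuplicateId);
        }
        // The split point must be strictly inside the unfetched range.
        let unfetched_start = self.start + self.downloaded;
        if at_byte <= unfetched_start || at_byte >= self.end {
            return Err(SplitError::OutOfRange);
        }
        let upper = Segment {
            id: new_id,
            state: SegmentState::Pending,
            start: at_byte,
            end: self.end,
            downloaded: 0,
        };
        self.end = at_byte; // the original keeps the lower half
        Ok(upper)
    }
}
```

Keeping the validation in the domain type means the engine's concurrency machinery (the `watch` channel, atomics, and meta rewrite) only ever operates on splits that are already known to be well-formed.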
- "Report broken plugin" action (PRD-v2 P0.16, task 16): plugins listed in *Plugins → Plugin Store* now expose a *Report broken plugin* item in their kebab menu. Clicking it opens the user's default browser at a pre-filled GitHub issue on the plugin's repository, with diagnostic metadata (plugin name + version, Vortex version, OS, optional URL under test, last 50 log lines) inlined into the issue body. Backend adds:
  - a `repository_url` field on `domain::model::plugin::PluginInfo`, parsed from the new `[plugin].repository` key in `plugin.toml`;
  - a `domain::ports::driven::UrlOpener` port plus its platform-native `SystemUrlOpener` adapter (`xdg-open` / `open` / `cmd start`, restricted to `http(s)://` by validation);
  - the std-only `domain::model::plugin::build_report_broken_url` URL builder (RFC 3986 unreserved-set percent encoder, last 50 log lines, GitHub-only repository hosts, accepts a `.git` suffix, rejects malformed URLs with `DomainError::ValidationError`);
  - a `ReportBrokenPluginCommand` handler that returns `AppError::Validation` when a manifest carries no `repository_url`.

  A new Tauri IPC command, `plugin_report_broken(pluginName, logLines?, testedUrl?) → string`, returns the issue URL so the UI can fall back to a clipboard copy if the launcher fails. i18n (en/fr): `plugins.action.reportBroken`, `plugins.toast.reportBrokenSuccess`, `plugins.toast.reportBrokenError`. (task 16)
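A std-only percent encoder over the RFC 3986 unreserved set, plus a simplified issue-URL builder, could look like the sketch below. The query parameter names, error type, and function signature are assumptions for illustration, not the project's actual `build_report_broken_url` API.

```rust
/// Percent-encode everything outside the RFC 3986 unreserved set:
/// ALPHA / DIGIT / "-" / "." / "_" / "~". Encoding is byte-wise (UTF-8).
fn percent_encode(input: &str) -> String {
    let mut out = String::new();
    for b in input.bytes() {
        match b {
            b'A'..=b'Z' | b'a'..=b'z' | b'0'..=b'9' | b'-' | b'.' | b'_' | b'~' => {
                out.push(b as char)
            }
            _ => out.push_str(&format!("%{:02X}", b)),
        }
    }
    out
}

/// Build a pre-filled GitHub "new issue" URL. GitHub-only hosts, with an
/// optional `.git` suffix on the repository URL; anything else is rejected.
fn build_report_broken_url(repo_url: &str, title: &str, body: &str) -> Result<String, String> {
    if !repo_url.starts_with("https://github.com/") {
        return Err("only GitHub repositories are supported".into());
    }
    let repo = repo_url.trim_end_matches('/').trim_end_matches(".git");
    Ok(format!(
        "{repo}/issues/new?title={}&body={}",
        percent_encode(title),
        percent_encode(body)
    ))
}
```

Encoding with the unreserved set (rather than the looser query-component set) keeps `·`, spaces, and log-line punctuation safe inside both the `title` and `body` parameters.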