fix(benchmark): repair 3 pre-existing script/download bugs #534

Merged — Alex-Wengg merged 1 commit into main from fix/benchmark-script-bugs, Apr 21, 2026
Conversation

@Alex-Wengg (Member) commented Apr 21, 2026

Summary

Three unrelated pre-existing bugs surfaced while validating PR #515. All of them block Scripts/parakeet_subset_benchmark.sh --download from succeeding, but none are related to the v3 script-filtering work. Consolidating into one PR since each fix is ~1–3 lines.

1. Japanese TDT folder-name mismatch

Scripts/parakeet_subset_benchmark.sh verifies the Japanese TDT model at $MODELS_DIR/parakeet-tdt-ja/, but the folder was renamed to parakeet-ja in 4ef33f0 (Repo.parakeetJa.folderName = "parakeet-ja"). Result: verify_assets() always reported missing assets even on a fully provisioned machine. One-line rename to match.
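The mismatch is easy to reproduce in isolation. A minimal sketch using a throwaway directory (the folder names come from the PR description; everything else is illustrative):

```shell
# Reproduce the verify_assets mismatch in a throwaway directory.
MODELS_DIR="$(mktemp -d)"
mkdir -p "$MODELS_DIR/parakeet-ja"   # folder name actually created since 4ef33f0

# Old check (broken): looks for the pre-rename folder name.
[ -d "$MODELS_DIR/parakeet-tdt-ja" ] || echo "old check: reported missing"
# Fixed check: matches Repo.parakeetJa.folderName.
[ -d "$MODELS_DIR/parakeet-ja" ] && echo "fixed check: found"

rm -rf "$MODELS_DIR"
```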

2. EOU streaming CLI writes to wrong path

ParakeetEouCommand had a default / --use-cache split where the default branch produced $CWD/Models/<chunk>/<chunk>/ (double-nested, relative to CWD) as the load path, while downloadModels() called deletingLastPathComponent().deletingLastPathComponent() then DownloadUtils.downloadRepo(repo, to:) which appended folderName = "parakeet-eou-streaming/<chunk>". Net effect: files landed at $CWD/Models/parakeet-eou-streaming/<chunk>/ while loadModels() looked at $CWD/Models/<chunk>/<chunk>/ — model load failed silently.

Unified on Application Support (matches every other CoreML model in FluidAudio). --use-cache retained as a no-op flag for backward compatibility.
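The legacy split can be illustrated with plain paths (a sketch: only the path shapes come from the description above; the chunk name is taken from the test plan):

```shell
# Illustrative comparison of the two legacy paths (relative to CWD).
CHUNK="320ms"
legacy_load="Models/$CHUNK/$CHUNK"                  # default loadModels() path (double-nested)
legacy_dl="Models/parakeet-eou-streaming/$CHUNK"    # where DownloadUtils actually wrote
[ "$legacy_load" != "$legacy_dl" ] && echo "mismatch: downloads never seen by loader"

# Unified fix: one canonical Application Support location for both sides.
unified="$HOME/Library/Application Support/FluidAudio/Models/parakeet-eou-streaming/$CHUNK"
echo "unified path suffix: ${unified##*/FluidAudio/}"
```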

3. earnings22-kws dataset 404

HuggingFace consolidated argmaxinc/earnings22-kws-golden into argmaxinc/contextual-earnings22. The old id now returns 404 from the Datasets-Server REST API (no redirect follow). The new dataset has the same feature schema (audio, file_id, text, dictionary, ...), so swapping the id is sufficient — no downstream consumer changes needed.
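For sanity-checking a dataset id against the Datasets-Server REST API, the public `is-valid` endpoint can be queried; a hedged sketch (URL construction only — the actual request needs network access, so the curl call is left commented out):

```shell
# Build the Datasets-Server validity-check URL for the old vs new dataset ids.
OLD_ID="argmaxinc/earnings22-kws-golden"    # returns 404 since the consolidation
NEW_ID="argmaxinc/contextual-earnings22"    # replacement with the same feature schema

is_valid_url() {
  echo "https://datasets-server.huggingface.co/is-valid?dataset=$1"
}

is_valid_url "$OLD_ID"
is_valid_url "$NEW_ID"
# To check for real (requires network):
#   curl -s "$(is_valid_url "$NEW_ID")"
```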

Test plan

Ran Scripts/parakeet_subset_benchmark.sh --download end-to-end:

  • verify_assets correctly resolves parakeet-ja/ (all 5 expected files present)
  • EOU warmup: Models downloaded to ~/Library/Application Support/FluidAudio/Models/parakeet-eou-streaming/320ms, 0.00% WER on warmup file
  • earnings22-kws: 1140+ files downloaded (was 0 before), no 404
  • swift build passes

Out of scope but observed (pre-existing, unrelated):

  • ctc-earnings-benchmark --auto-download does not actually auto-download CTC-110m model
  • THCHS-30 dataset hit HF IP rate limit (429) — transient


1. Scripts/parakeet_subset_benchmark.sh: update Japanese TDT folder
   reference from parakeet-tdt-ja to parakeet-ja to match
   Repo.parakeetJa.folderName (renamed in 4ef33f0).

2. ParakeetEouCommand: default to Application Support cache directory.
   The legacy default wrote to $CWD/Models/<chunk>/<chunk>/ (double-nested
   and relative to CWD) while DownloadUtils wrote to $CWD/Models/parakeet-
   eou-streaming/<chunk>/, causing a load-path mismatch and silent failure.
   --use-cache kept as a no-op for backward compatibility.

3. DatasetDownloader.downloadEarnings22KWS: switch dataset id from the
   discontinued argmaxinc/earnings22-kws-golden to argmaxinc/
   contextual-earnings22. HF consolidated the dataset; the old id returns
   404 from the Datasets-Server API. New dataset has identical feature
   schema (audio, file_id, text, dictionary).

Validated end-to-end via Scripts/parakeet_subset_benchmark.sh --download:
all 3 previously broken paths now complete without error.
@devin-ai-integration (bot) left a comment

Devin Review found 1 potential issue.

// Determine models path. Default: Application Support cache directory
// (matches how every other CoreML model in FluidAudio is stored).
// `--use-cache` is retained as a no-op for backward compatibility.
_ = useCache

🟡 --use-cache not actually a no-op: still triggers unnecessary download path

The comment on line 101 says --use-cache is retained as a no-op, and _ = useCache on line 102 is written as if to suppress an unused-variable warning. However, useCache is still referenced in the download condition on line 113: if download || useCache || !FileManager.default.fileExists(atPath: modelsUrl.path). This means passing --use-cache still unconditionally enters the download block (logging "Downloading models to: ..." and calling downloadModels), even when models already exist at the target path. While downloadModels (ParakeetEouCommand.swift:184-192) has its own file-existence check and will return early, the behavior contradicts the stated intent of making --use-cache a no-op. The _ = useCache is misleading — it signals the value is intentionally discarded, but it's actually still used.

Prompt for agents
The intent of this PR is to make --use-cache a no-op for backward compatibility (since the default path is now always Application Support). However, useCache is still referenced on line 113 in the download condition: `if download || useCache || !FileManager.default.fileExists(atPath: modelsUrl.path)`. This means --use-cache still has an effect: it unconditionally enters the download code path.

To fix: remove `useCache` from the condition on line 113 so it reads `if download || !FileManager.default.fileExists(atPath: modelsUrl.path)`. Keep the `_ = useCache` on line 102 to suppress the unused variable warning, since the variable is still set by the argument parser on line 68.

File: Sources/FluidAudioCLI/Commands/ASR/Parakeet/Streaming/ParakeetEouCommand.swift
Line 113: change `if download || useCache || !FileManager.default.fileExists(atPath: modelsUrl.path)` to `if download || !FileManager.default.fileExists(atPath: modelsUrl.path)`
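The gate the finding describes can be sketched as a shell analogue (variable and function names here are hypothetical, not from the Swift source):

```shell
# Broken gate: use_cache still forces the download branch.
should_download_broken() {  # args: download use_cache models_exist (1/0 each)
  [ "$1" = "1" ] || [ "$2" = "1" ] || [ "$3" = "0" ]
}
# Fixed gate: use_cache no longer participates.
should_download_fixed() {   # args: download models_exist
  [ "$1" = "1" ] || [ "$2" = "0" ]
}

# --use-cache with models already present: broken gate downloads, fixed gate skips.
should_download_broken 0 1 1 && echo "broken: downloads"
should_download_fixed 0 1 || echo "fixed: skips"
```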

@github-actions

Kokoro TTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (634.8 KB) |

Runtime: 0m31s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.

@github-actions

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
| --- | --- | --- |
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 12.34x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 38.9s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
| --- | --- | --- |
| Avg Chunk Time | 0.039s | Average chunk processing time |
| Max Chunk Time | 0.078s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m58s • 04/21/2026, 03:50 AM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@github-actions

PocketTTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (221.3 KB) |

Runtime: 0m47s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality and performance may differ from Apple Silicon.

@github-actions

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
| --- | --- | --- | --- | --- | --- | --- |
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 744.5x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 694.3x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

@github-actions

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 15.1% | <30% | | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | | Jaccard Error Rate |
| RTFx | 16.88x | >1.0x | | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 11.544 | 18.6 | Fetching diarization models |
| Model Compile | 4.947 | 8.0 | CoreML compilation |
| Audio Load | 0.093 | 0.1 | Loading audio file |
| Segmentation | 18.634 | 30.0 | Detecting speech regions |
| Embedding | 31.056 | 50.0 | Extracting speaker voices |
| Clustering | 12.422 | 20.0 | Grouping same speakers |
| Total | 62.157 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
| --- | --- | --- |
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x real-time (RTFx 150)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 62.1s diarization time • Test runtime: 3m 21s • 04/21/2026, 03:59 AM EST

@github-actions

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
| --- | --- | --- | --- |
| DER | 33.4% | <35% | |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 7.5x | >1.0x | |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 4m 11s • 2026-04-21T08:01:24.683Z

@github-actions

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | |
| Model download | |
| Model load | |
| Transcription pipeline | |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
| --- | --- | --- |
| Median RTFx | 0.04x | ~2.5x |
| Overall RTFx | 0.04x | ~2.5x |

Runtime: 5m39s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

@github-actions

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 10.4% | <20% | | Diarization Error Rate (lower is better) |
| RTFx | 8.33x | >1.0x | | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 16.125 | 12.8 | Fetching diarization models |
| Model Compile | 6.911 | 5.5 | CoreML compilation |
| Audio Load | 0.164 | 0.1 | Loading audio file |
| Segmentation | 33.190 | 26.3 | VAD + speech detection |
| Embedding | 125.623 | 99.7 | Speaker embedding extraction |
| Clustering (VBx) | 0.144 | 0.1 | Hungarian algorithm + VBx clustering |
| Total | 125.959 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
| --- | --- | --- | --- |
| FluidAudio (Offline) | 10.4% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 159.0s processing • Test runtime: 2m 36s • 04/21/2026, 04:03 AM EST

@github-actions

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.57% | 0.00% | 4.92x | |
| test-other | 1.56% | 0.00% | 3.34x | |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.80% | 0.00% | 5.43x | |
| test-other | 1.22% | 0.00% | 3.24x | |

Streaming (v3)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.57x | Streaming real-time factor |
| Avg Chunk Time | 1.567s | Average time to process each chunk |
| Max Chunk Time | 2.094s | Maximum chunk processing time |
| First Token | 1.750s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.58x | Streaming real-time factor |
| Avg Chunk Time | 1.536s | Average time to process each chunk |
| Max Chunk Time | 1.866s | Maximum chunk processing time |
| First Token | 1.560s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 7m8s • 04/21/2026, 04:04 AM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@Alex-Wengg Alex-Wengg merged commit 7c9be31 into main Apr 21, 2026
12 checks passed
@Alex-Wengg Alex-Wengg deleted the fix/benchmark-script-bugs branch April 21, 2026 08:22
