# Whisper-RIR-Mega: Paired Clean↔Reverberant Speech Robustness Benchmark

## Dataset Summary
Whisper-RIR-Mega is a benchmark dataset of paired clean and reverberant speech for evaluating ASR robustness to room acoustics. Each sample consists of:
- audio_clean: Clean speech (LibriSpeech test-clean, 16 kHz)
- audio_reverb: Same utterance convolved with one RIR from RIR-Mega (v2)
- text_ref: Ground-truth transcript
- RIR metadata: `rir_id`, RT60, DRR, C50, etc., when available
- Technical paper: Whisper-RIR-Mega
Splits are stratified by RT60 (or DRR) when metadata exists, so the benchmark is balanced across acoustic conditions.
Use this dataset to:
- Benchmark Whisper (or any ASR) on clean vs. reverberant speech and report reverb penalty (Δ WER)
- Evaluate robustness across RT60/DRR bins
- Reproduce the official Whisper-RIR-Mega leaderboard
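Per-condition evaluation means grouping samples by their acoustic metadata before scoring. The sketch below shows one way to bin rows by RT60 in plain Python; the bin edges and the `rir_RT60_T30_s` key are illustrative assumptions, not the benchmark's official configuration:

```python
# Group benchmark rows into RT60 bins for per-condition WER reporting.
# Bin edges and the "rir_RT60_T30_s" key are illustrative assumptions.
from collections import defaultdict

RT60_BINS = [("short", 0.0, 0.3), ("medium", 0.3, 0.6), ("long", 0.6, float("inf"))]

def rt60_label(rt60):
    """Return the bin label containing rt60, or None when metadata is missing."""
    if rt60 is None:
        return None
    for label, lo, hi in RT60_BINS:
        if lo <= rt60 < hi:
            return label
    return None

def group_by_rt60(rows):
    """Map bin label -> list of rows, skipping samples without RT60 metadata."""
    groups = defaultdict(list)
    for row in rows:
        label = rt60_label(row.get("rir_RT60_T30_s"))
        if label is not None:
            groups[label].append(row)
    return dict(groups)
```

Rows without RT60 metadata are simply skipped here, mirroring the dataset's note that some RIR-Mega samples lack acoustic descriptors.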
## 30-Second Quickstart

```python
from huggingface_hub import snapshot_download
from datasets import load_from_disk
import whisper
import jiwer

# Download and load (use load_dataset instead if your version of `datasets` supports it)
path = snapshot_download("mandipgoswami/whisper-rirmega-bench", repo_type="dataset")
ds = load_from_disk(path + "/hf_dataset")["test"]
model = whisper.load_model("base")

# Score one sample on clean vs. reverberant audio
row = ds[0]
clean_wer = jiwer.wer(row["text_ref"], model.transcribe(row["audio_clean"]["path"], language="en")["text"])
reverb_wer = jiwer.wer(row["text_ref"], model.transcribe(row["audio_reverb"]["path"], language="en")["text"])
print(f"Clean WER: {clean_wer:.4f}  Reverb WER: {reverb_wer:.4f}")
```
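The benchmark scores with jiwer, but for intuition about the metric here is a minimal pure-Python word-level WER (edit distance over word sequences) and the corresponding reverb penalty (Δ WER). This is a sketch for illustration, not the benchmark's scoring code:

```python
# Minimal word-level WER via Levenshtein distance, plus the reverb penalty (Δ WER).
# Illustrative re-implementation only; the benchmark itself uses jiwer.

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[-1][-1] / max(len(ref), 1)

def reverb_penalty(clean_wers, reverb_wers):
    """Δ WER: mean reverb WER minus mean clean WER over the same samples."""
    mean_clean = sum(clean_wers) / len(clean_wers)
    mean_reverb = sum(reverb_wers) / len(reverb_wers)
    return mean_reverb - mean_clean
```

In practice you would accumulate per-sample WERs over the whole `test` split and report the two means and their difference, which is the Δ WER column in the leaderboard below.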
## Dataset Structure
| Column | Type | Description |
|---|---|---|
| sample_id | string | Unique ID (from LibriSpeech + RIR) |
| audio_clean | Audio | Clean 16 kHz audio |
| audio_reverb | Audio | Reverberant 16 kHz audio |
| text_ref | string | Reference transcript |
| rir_id | string | RIR-Mega sample ID |
| split | string | train / validation / test |
| rir_* | mixed | RIR metadata (RT60_T30_s, DRR_dB, …) |
Splits: validation and test for benchmarking; train optional (default config uses test + validation only).
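Deterministic split assignment (described under "How It's Built") can be done by hashing the sample ID with a stable hash. The sketch below is a plausible implementation under that assumption, not the pipeline's actual code; the 20% validation fraction is the example ratio given in that section:

```python
# Deterministic validation/test assignment by hashing the sample ID.
# Illustrative sketch; not the benchmark pipeline's actual code.
import hashlib

def assign_split(sample_id: str, val_fraction: float = 0.2) -> str:
    """Assign a sample to 'validation' or 'test' reproducibly.

    A stable hash (not Python's randomized built-in hash()) keeps
    assignments identical across runs and machines.
    """
    digest = hashlib.sha256(sample_id.encode("utf-8")).digest()
    # Map the first 4 digest bytes to a uniform float in [0, 1)
    u = int.from_bytes(digest[:4], "big") / 2**32
    return "validation" if u < val_fraction else "test"
```

Because the assignment depends only on the ID, adding or removing other samples never reshuffles existing ones.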
## How It’s Built
- Speech: LibriSpeech test-clean (CC BY 4.0), streamed from Hugging Face.
- RIRs: mandipgoswami/rirmega (v2.0.0), with metadata (RT60, DRR, C50, etc.).
- Pipeline: For each utterance we sample one RIR (stratified by RT60), convolve at 16 kHz, normalize RIR energy and peak-normalize output. No added noise by default.
- Splits: Deterministic assignment to validation/test (e.g. 20% / 80%) with optional stratification by acoustic bins.
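The convolve-and-normalize step in the pipeline above can be sketched with NumPy as follows; `apply_rir` is an illustrative helper under the stated assumptions (unit-energy RIR, peak normalization of the output), not the pipeline's actual function:

```python
# Sketch of the reverberation step: convolve speech with an RIR at a shared
# sample rate (16 kHz here), after normalizing the RIR to unit energy, then
# peak-normalize the result. Illustrative only.
import numpy as np

def apply_rir(speech: np.ndarray, rir: np.ndarray, peak: float = 0.99) -> np.ndarray:
    # Unit-energy RIR keeps loudness comparable across rooms
    rir = rir / (np.sqrt(np.sum(rir ** 2)) + 1e-12)
    # Full convolution: output length = len(speech) + len(rir) - 1
    wet = np.convolve(speech, rir)
    # Peak-normalize to avoid clipping when saving as 16-bit PCM
    m = np.max(np.abs(wet))
    if m > 0:
        wet = wet * (peak / m)
    return wet
```

No noise is added, matching the default configuration; a noisy variant would mix in a noise signal after convolution.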
Full reproducibility: see the GitHub repo and run:

```bash
python -m bench.build_and_publish --config configs/default.yaml
```
## Leaderboard
The leaderboard is generated by the same pipeline and updated on each release. Example (your run may vary):
| model_id | clean | reverb | Δ WER |
|---|---|---|---|
| openai/whisper-tiny | … | … | … |
| openai/whisper-base | … | … | … |
| openai/whisper-small | … | … | … |
| openai/whisper-medium | … | … | … |
| openai/whisper-large-v3 | … | … | … |
See the Space for interactive charts (WER vs RT60/DRR) and the latest leaderboard.
## Limitations
- English only (LibriSpeech).
- Single RIR per utterance in the default setup; multi-RIR variants can be built by changing `k_rirs_per_utt` in the config.
- RIR metadata (RT60, DRR) may be missing for some RIR-Mega samples; the pipeline stores whatever is available.
## License & Citation
- Speech: LibriSpeech (CC BY 4.0).
- RIRs: RIR-Mega license (see mandipgoswami/rirmega).
- Benchmark curation: MIT (this repo).
Citation (BibTeX):

```bibtex
@misc{whisper-rirmega-bench,
  title     = {Whisper-RIR-Mega: Paired Clean-Reverberant Speech Robustness Benchmark},
  author    = {Goswami, Mandip},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/mandipgoswami/whisper-rirmega-bench},
  note      = {Dataset built with LibriSpeech and RIR-Mega.}
}
```

RIR-Mega citation:

```bibtex
@misc{goswami2025rirmega,
  title         = {RIR-Mega: A Large-Scale Room Impulse Response Corpus with Benchmarks},
  author        = {Goswami, Mandip},
  year          = {2025},
  eprint        = {2510.18917},
  archivePrefix = {arXiv},
  primaryClass  = {cs.SD},
  url           = {https://arxiv.org/abs/2510.18917}
}
```
## How to Reproduce

- Clone the repo and install: `pip install -e .`
- Set `HF_TOKEN` (and optionally reduce `n_utterances` in `configs/default.yaml` for a quick run).
- Run: `python -m bench.build_and_publish --config configs/default.yaml`
- This builds the dataset, runs Whisper baselines, generates reports, and can push the dataset and Space to the Hub (if `HF_TOKEN` is set).

For a <5 minute smoke test: `python scripts/sanity_check.py`