# R4D-Bench

## Summary

R4D-Bench targets spatio-temporal referring segmentation in dynamic (4D) scenes: natural-language queries paired with pixel-accurate instance masks over time (COCO-style polygons or RLE, plus optional PNG unions), with an emphasis on motion, temporal relations (before / while / after), multi-target phrases, and distractor queries. The release uses three query tiers: 36 main dense-GT queries, 89 extended queries (the full per-scene set), and 246 supplementary English-only candidates without dense mask alignment. Mask-IoU and track-ID evaluation live under `evaluation/`; full multi-view RGB, cameras, COLMAP, and Segment-then-Splat artifacts are not required for the shipped scripts. Scene imagery, where needed, should be obtained from the Neu3D / HyperNeRF (or compatible) releases under their original licenses; the minimal annotation bundle does not include Segment-then-Splat pipeline outputs.
## Statistics (at a glance)

| Item | Location | Count / note |
|---|---|---|
| Dense mask GT — main release | `scripts/new_predictions_ground_truth_final.json` | 36 queries (3 per scene × 12 scenes); per-frame segmentation and/or PNGs under `data/scenes/<scene>/query_masks/` |
| Dense mask GT — optional extension | e.g. `scripts/new_predictions_ground_truth_all_queries.json` | 89 queries (sum of all entries in the 12 per-scene `*_queries.json` files) |
| Supplementary language candidates (no dense GT) | `data/queries/supplementary-queries/*.json` | 246 English strings (`queries[].text` across files; auto-generated candidates; no dense `target_track_ids` alignment) |
| Evaluation metadata | `evaluation/R4D-Bench_queries.json` | 89 entries — same query IDs as Tier B / `new_predictions_ground_truth_all_queries.json` (merged from per-scene `*_queries.json`; order follows that dense GT file). `evaluation/R4D-Bench_predictions.json` maps each `query_id` → `target_track_ids` for track-ID evaluation. |
## Query tiers (release semantics)

| Tier | Description | Source | Dense mask GT? |
|---|---|---|---|
| A — Main | 36 curated queries (3 per scene) | `scripts/predictions_ground_truth.py` without `--all-queries` → `scripts/new_predictions_ground_truth_final.json` | Yes |
| B — Extended | 89 queries (every query in each scene's `*_queries.json`) | `scripts/predictions_ground_truth.py --all-queries` → e.g. `new_predictions_ground_truth_all_queries.json` | Yes |
| C — Supplementary | 246 English phrases | `data/queries/supplementary-queries/` | No |

The console shows "89 queries" only when running `predictions_ground_truth.py` with `--all-queries`, which sums all `query_id` entries across the 12 per-scene `*_queries.json` files (per-scene counts vary, e.g. americano 10, keyboard 6).
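The per-scene tally described above can be sketched in a few lines of Python. The file-layout assumptions here are hypothetical (a bare list of entries, or a `{"queries": [...]}` wrapper) and should be checked against the actual per-scene `*_queries.json` schema before relying on the counts.

```python
import json
from pathlib import Path

def count_scene_queries(scene_dirs):
    """Tally query entries across per-scene *_queries.json files.

    Assumes each file holds either a bare list of query entries or a
    {"queries": [...]} wrapper (hypothetical layouts; adjust to the
    real per-scene schema).
    """
    per_scene = {}
    for scene_dir in map(Path, scene_dirs):
        n = 0
        for qfile in sorted(scene_dir.glob("*_queries.json")):
            data = json.loads(qfile.read_text(encoding="utf-8"))
            # Accept either top-level shape.
            entries = data["queries"] if isinstance(data, dict) else data
            n += len(entries)
        per_scene[scene_dir.name] = n
    return sum(per_scene.values()), per_scene
```

Run over all 12 directories under `data/scenes/`, the total should match the 89 reported by `--all-queries` if the assumed layout is correct.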
## Reporting "200+" queries in a paper

If the paper claims "200+" by combining the extended dense GT (89) with part of Tier C, a common breakdown is to count only the per-scene `generated_queries_*.json` files with 15 texts each (roughly 12 × 15 = 180), excluding the longer Neu3D-prefixed files (e.g. 24 or 30 texts). That subset is not the same as the 246 total `queries[].text` strings in the repository; state exactly which files or tiers you count in the appendix or a footnote.
## Scenes (on-disk release)

There are 12 scene directories under `data/scenes/`:

`americano`, `coffee_martini`, `cook_spinach`, `cut_lemon`, `cut_roasted_beef`, `espresso`, `flame_salmon`, `flame_steak`, `keyboard`, `sear_steak`, `split_cookie`, `torchchocolate`

**Naming conventions:** Folder names use underscores. Some JSON fields and `query_id` prefixes use hyphens (e.g. `cook-spinach`, `split-cookie`). The scene `cook_spinach` ships files such as `cook-spinach_queries.json` and `cook-spinach.json` inside that folder.
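Because of the hyphen/underscore mismatch, tooling that joins `query_id` prefixes to on-disk folders usually needs a tiny normalization step. A minimal sketch (the helper name is my own, not part of the repository):

```python
def canonical_scene(name):
    """Map hyphenated scene spellings used in some JSON fields and
    query_id prefixes (e.g. 'cook-spinach', 'split-cookie') to the
    on-disk folder names, which use underscores."""
    return name.replace("-", "_")
```

For example, `canonical_scene("cook-spinach")` yields the folder name `cook_spinach`, while names that already use underscores pass through unchanged.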
## Repository layout

```
R4D-Bench/
├── DATASET_LAYOUT.md                 # Path checklist: core vs optional vs offline
├── README.md
├── evaluation/
│   ├── evaluate_mask_temporal.py     # mIoU, mAcc, temporal Acc, vIoU, …
│   ├── evaluate_r4dgs.py             # Track-id precision / recall / F1
│   ├── R4D-Bench_queries.json        # Unified query list (89) for eval + --queries-meta
│   └── R4D-Bench_predictions.json    # query_id → target_track_ids (GT for track-id eval)
├── scripts/
│   ├── coco_scene_paths.py
│   ├── generate_instance_masks.py
│   ├── predictions_ground_truth.py
│   ├── enrich_ground_truth_with_mask_images.py
│   ├── new_predictions_ground_truth_final.json
│   ├── new_predictions_ground_truth_all_queries.json  # present if generated
│   └── *_with_paths.json             # optional enriched variants
├── data/
│   ├── scenes/<scene>/               # images, COCO JSON, tracks, *_queries.json, query_masks/
│   ├── queries/
│   │   └── supplementary-queries/    # Tier C: extra English-only candidates (no dense GT)
│   ├── all_instance_masks/           # optional regeneratable snapshot
│   └── track_metadata.csv            # optional track_id → category mapping (reference)
└── tools/                            # optional local utilities
```

If present, `FINAL_REPORT.md` / `benchmark.md` are maintainer notes and are not required to use the benchmark.

**Offline archive:** Material that used to live under `dataset_archive/` has been removed from this tree; nothing here depends on that path. Benchmarks use `data/scenes/`, `scripts/new_predictions_ground_truth_*.json`, and `evaluation/`. A minimal distribution may omit `data/all_instance_masks/` (regenerate with `scripts/generate_instance_masks.py`). See `DATASET_LAYOUT.md` for the full path map.
## Query-related files (what to use when)

| File | Role |
|---|---|
| `data/scenes/<scene>/*_queries.json` | Canonical source for each scene's natural-language queries and `target_track_ids`. Used by `scripts/predictions_ground_truth.py` to build dense GT. |
| `scripts/new_predictions_ground_truth_final.json` | Dense mask GT for Tier A (36 queries). |
| `scripts/new_predictions_ground_truth_all_queries.json` | Dense mask GT for Tier B (89 queries) — full union of the per-scene query lists. |
| `evaluation/R4D-Bench_queries.json` | Merged copy of all 89 queries (same IDs as Tier B, order aligned with `new_predictions_ground_truth_all_queries.json`). Pass to `--queries-meta` for a per-`query_type` breakdown in mask metrics. |
| `evaluation/R4D-Bench_predictions.json` | Ground-truth `query_id` → `target_track_ids` for `evaluate_r4dgs.py` (replace with your model's track-ID predictions when benchmarking). |
| `data/queries/supplementary-queries/*.json` | Tier C: auto-generated English phrases (`queries[].text`) only — 246 strings total; no dense masks or guaranteed track alignment. |
| `data/track_metadata.csv` | Optional spreadsheet mapping `track_id` → category and upstream dataset (HyperNeRF / Neu3D); not read by the shipped evaluation scripts — documentation / filtering only. |

Legacy split lists under `data/queries/` were removed to avoid stale duplicates; use the per-scene `*_queries.json` files and `evaluation/R4D-Bench_queries.json` instead.
## Temporal and spatial alignment

- The canonical time index is the integer `frame_id` in `ground_truth.frames[]` and `existence_frames`, and in directory names `frame_XXXXXX` under `data/scenes/<scene>/query_masks/`.
- The per-scene frame list comes from each scene's COCO JSON (`images[].file_name`). Filenames may look like `frame_000040.png` or use Roboflow-style names; scripts resolve resolution and ordering from COCO.
- We do not enforce a single official Neu3D camera ID (e.g. `cam00`) or subsampling recipe for every scene. Users who pair this benchmark with the original Neu3D / HyperNeRF downloads should align by visual / temporal correspondence; mask-only evaluation here depends only on the provided `frame_id` and mask geometry.
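Given the `frame_XXXXXX` naming convention above, recovering the integer `frame_id` from a `query_masks/` subdirectory is straightforward. A minimal sketch (helper names are my own):

```python
import re

def frame_id_from_dirname(name):
    """Extract the integer frame_id from a directory name like 'frame_000040'."""
    m = re.fullmatch(r"frame_(\d+)", name)
    if m is None:
        raise ValueError(f"not a frame directory: {name!r}")
    return int(m.group(1))

def sorted_frame_ids(dirnames):
    """Return frame ids in temporal order from query_masks/ subdirectory names."""
    return sorted(frame_id_from_dirname(n) for n in dirnames)
```

For example, `frame_id_from_dirname("frame_000040")` yields `40`, so zero-padded directory names and integer `frame_id` values in the GT JSON can be matched directly.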
## Mask-level evaluation

From the repository root:

```shell
# Sanity check: predictions = GT → metrics should be perfect
python evaluation/evaluate_mask_temporal.py --self-check \
    --queries-meta evaluation/R4D-Bench_queries.json

# Evaluate your prediction JSON (per-query, per-frame mask paths)
python evaluation/evaluate_mask_temporal.py \
    --predictions path/to/predictions.json \
    --output evaluation/mask_eval_report.json \
    --queries-meta evaluation/R4D-Bench_queries.json
```

Prediction JSON shape (examples): `{ "query_id": { "frames": { "1": "relative/or/abs/path.png", ... } } }` or a list of `{ "frame_id", "mask_path" }` objects. Paths resolve relative to `--repo-root` unless absolute.

The default `--ground-truth` is `scripts/new_predictions_ground_truth_final.json` (36 queries). To score all 89 extended queries, add `--ground-truth scripts/new_predictions_ground_truth_all_queries.json` (the predictions JSON must then cover the same `query_id` set you care about).

Useful options: `--only-query-prefix <scene>`, `--only-queries`, `--debug-query <id>`, `--iou-threshold 0.5`. See the docstring in `evaluation/evaluate_mask_temporal.py`.
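The mapping-style prediction format described above can be assembled with a short helper. This is a sketch of my own, not a shipped utility; it only arranges paths into the expected shape and does not validate that the mask files exist.

```python
import json

def build_prediction_json(per_query_masks, out_path=None):
    """Assemble the mapping-style prediction format:
    { "query_id": { "frames": { "<frame_id>": "path/to/mask.png", ... } } }

    per_query_masks: dict of query_id -> iterable of (frame_id, mask_path).
    Paths may be relative (resolved against --repo-root) or absolute.
    """
    payload = {
        qid: {"frames": {str(fid): path for fid, path in frames}}
        for qid, frames in per_query_masks.items()
    }
    if out_path is not None:
        with open(out_path, "w", encoding="utf-8") as f:
            json.dump(payload, f, indent=2)
    return payload
```

Note that frame ids become string keys (`"1"`, `"40"`), matching the example shape shown for `--predictions`.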
## Track-ID evaluation

`evaluation/evaluate_r4dgs.py` computes set precision / recall / F1 on track IDs (no pixels involved):

```shell
python evaluation/evaluate_r4dgs.py \
    --queries evaluation/R4D-Bench_queries.json \
    --predictions evaluation/R4D-Bench_predictions.json \
    --output evaluation/track_id_eval_report.json
```

Replace `--predictions` with your model's track-ID output using the same `query_id` keys as in `R4D-Bench_predictions.json`.
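The per-query set metric can be written down in a few lines. This sketch shows the standard definition of set precision / recall / F1 over track-id sets; the shipped script's exact aggregation across queries and edge-case handling may differ.

```python
def track_id_prf1(gt_ids, pred_ids):
    """Set precision / recall / F1 between ground-truth and predicted
    track-id sets for a single query (standard definitions; a sketch,
    not the shipped evaluate_r4dgs.py implementation)."""
    gt, pred = set(gt_ids), set(pred_ids)
    tp = len(gt & pred)                              # correctly predicted tracks
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1
```

For instance, predicting `{t2, t3}` against ground truth `{t1, t2}` gives precision 0.5, recall 0.5, F1 0.5.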
## Generating or refreshing masks and PNGs

`scripts/generate_instance_masks.py` reads each scene's COCO JSON and writes one binary PNG per annotation instance under `data/scenes/<scene>/instance_masks/<image_stem>/`. That is per-instance rasterization — not the same as the query-level union masks in `new_predictions_ground_truth_final.json`.

Required for a minimal release? No, if you ship queries + GT (`segmentation` and/or `query_masks/`) + evaluation code. Keep the script if you need to regenerate instance masks after editing the COCO annotations.

```shell
python scripts/generate_instance_masks.py --overwrite
python scripts/generate_instance_masks.py --scene americano --overwrite
python scripts/enrich_ground_truth_with_mask_images.py
python scripts/enrich_ground_truth_with_mask_images.py --only-query-prefix cut_lemon
```

Dependencies: Python 3.10+ recommended; `numpy`, `Pillow`. Optional: `matplotlib`, `scikit-learn`, `wordcloud` for auxiliary scripts.
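The per-instance rasterization step can be illustrated with the stated numpy/Pillow dependencies. This is a minimal sketch of rasterizing one COCO-style polygon `segmentation` into a binary mask, not the shipped `generate_instance_masks.py` implementation (which also handles COCO RLE, file naming, and output layout).

```python
import numpy as np
from PIL import Image, ImageDraw

def rasterize_polygon(segmentation, height, width):
    """Rasterize a COCO-style polygon segmentation into a binary mask.

    segmentation: list of flat coordinate lists [x1, y1, x2, y2, ...]
    (one list per polygon ring, as in COCO annotations).
    Returns a uint8 array of shape (height, width), 255 = instance.
    """
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for poly in segmentation:
        # Pair up the flat [x, y, x, y, ...] coordinates.
        xy = list(zip(poly[0::2], poly[1::2]))
        draw.polygon(xy, fill=255)
    return np.array(mask, dtype=np.uint8)
```

A query-level union mask would then be the pixelwise maximum of the per-instance masks for all `target_track_ids` of that query.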
## Relationship to Segment-then-Splat (StS) and upstream data

- Segment-then-Splat is a separate public pipeline (COLMAP, multi-view masks, Gaussian training, etc.). Typical StS directories (`images/`, `sparse/`, `multiview_masks_*_merged/`, PLY exports, …) are not part of the minimal R4D-Bench annotation release.
- For StS-style training, follow their repository and obtain HyperNeRF / Neu3D (or compatible) imagery under the original licenses.
- Roboflow provenance and URLs are under each scene's `README.dataset.txt` / `README.roboflow.txt` (often CC BY 4.0 where stated — verify per scene).
## Query schema (unified JSON)

Each entry in `evaluation/R4D-Bench_queries.json` includes, among others:

- `query_id`, `scene`, `text_en`, `text_zh`
- `target_track_ids`, `target_count`
- `query_type` (A/B/C), `requires_motion_understanding`
- Optional: `temporal_relation`, `expected_temporal_anchor`, `distractor`, `note`

Items in `scripts/new_predictions_ground_truth_final.json` add a `ground_truth` object with `target_tracks`, `existence_frames`, and `frames[].masks[]` (`segmentation`, optional `mask_image`). Some question strings may be non-English depending on the export version; the canonical geometry is in `segmentation` / PNGs.
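Entries with this schema are easy to slice for analysis. A small sketch (the helper name is my own) tallying queries per `query_type` and per `scene`, the same breakdown that `--queries-meta` enables in the mask metrics:

```python
from collections import Counter

def query_breakdown(queries):
    """Tally queries per query_type and per scene from entries shaped
    like those in evaluation/R4D-Bench_queries.json (each entry carries
    at least query_id, scene, and query_type)."""
    by_type = Counter(q["query_type"] for q in queries)
    by_scene = Counter(q["scene"] for q in queries)
    return by_type, by_scene
```

Loading `evaluation/R4D-Bench_queries.json` with `json.load` and passing the resulting list through this helper should reproduce the per-scene counts mentioned earlier (e.g. americano 10, keyboard 6), assuming the field names match.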
## Citation

If you use R4D-Bench, cite this dataset repository and the original scene datasets (HyperNeRF, Neu3D, and Roboflow sources as applicable). A BibTeX entry can be added here after publication if desired.
## License and third-party data

- Annotations and code in this repository are released under the terms of the top-level `LICENSE` file when present; until then, treat usage as license-other and contact the maintainers if unsure.
- Scene imagery and upstream assets remain under their original terms (Neu3D, HyperNeRF, Roboflow, etc.). Do not redistribute raw imagery unless the upstream license allows it.
- Per-scene Roboflow metadata: `data/scenes/<scene>/README.dataset.txt`.
## Contact

For questions about the benchmark definition, evaluation protocol, or file formats, open an issue in the project repository or contact the maintainers.