
Dataset Card for SafeMVDrive

Dataset Description

Overview

The SafeMVDrive dataset is a collection of realistic, multi-view, safety-critical driving scenarios generated by the SafeMVDrive framework. Each scenario starts from a sample randomly selected from the nuScenes validation set; an adversarial vehicle then performs aggressive maneuvers to crash into the ego vehicle, and the ego vehicle reacts in time to avoid the collision. The dataset can be used to train and evaluate end-to-end autonomous driving models on their ability to handle safety-critical situations.

  • Curated by: Jiawei Zhou, Linye Lyu, Zhuotao Tian, Cheng Zhuo, Yu Li
  • Affiliation: Harbin Institute of Technology, Shenzhen; Zhejiang University
  • License: CC-BY-SA-4.0
  • Project Page: SafeMVDrive

Dataset Structure

The dataset consists of:

  • 41 scenes, each 9 seconds long
  • Multi-view video data from 6 camera perspectives
  • 3D bounding box annotations for the vehicles in each scene
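In the nuScenes format, each keyframe is a sample record that links the six camera channels to their image files. A minimal sketch of what such a record might look like; the field names follow the nuScenes convention, while the tokens and paths are invented for illustration:

```python
# Hypothetical sketch of a nuScenes-style multi-view sample record;
# tokens and file paths are made up for illustration.
CAMERA_CHANNELS = [
    "CAM_FRONT", "CAM_FRONT_LEFT", "CAM_FRONT_RIGHT",
    "CAM_BACK", "CAM_BACK_LEFT", "CAM_BACK_RIGHT",
]

def make_sample_record(scene_token: str, timestamp_us: int) -> dict:
    """Build a minimal multi-view sample record (illustrative only)."""
    return {
        "scene_token": scene_token,
        "timestamp": timestamp_us,
        # One image path per camera channel, as in nuScenes samples.
        "data": {
            ch: f"samples/{ch}/{scene_token}_{timestamp_us}.jpg"
            for ch in CAMERA_CHANNELS
        },
    }

record = make_sample_record("scene-0001", 1533151603512404)
print(len(record["data"]))  # 6 camera views
```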

Usage

The SafeMVDrive dataset is provided in nuScenes format. To evaluate the end-to-end driving model UniAD on SafeMVDrive, please follow the instructions in Eval UniAD.
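Because the data follows the nuScenes layout, its annotation tables are plain JSON files (e.g. `sample.json`) that can be inspected with the standard library alone. A minimal, self-contained sketch that writes a tiny synthetic table in the nuScenes style and reads it back; the record contents are invented for illustration:

```python
import json
import tempfile
from pathlib import Path

# Synthetic stand-in for a nuScenes-style sample table (contents invented).
samples = [
    {"token": "a1", "scene_token": "scene-0001",
     "timestamp": 1533151603512404, "prev": "", "next": "a2"},
    {"token": "a2", "scene_token": "scene-0001",
     "timestamp": 1533151604012404, "prev": "a1", "next": ""},
]

# nuScenes keeps its tables under a versioned folder in the dataroot.
root = Path(tempfile.mkdtemp()) / "v1.0-trainval"
root.mkdir(parents=True)
(root / "sample.json").write_text(json.dumps(samples))

# Read the table back and group samples by scene, as a loader
# (or the nuscenes-devkit) would.
loaded = json.loads((root / "sample.json").read_text())
by_scene = {}
for rec in loaded:
    by_scene.setdefault(rec["scene_token"], []).append(rec)
print(sorted(by_scene), len(by_scene["scene-0001"]))
```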

Creation Process

Source Data

  • Built upon the nuScenes validation set (250 randomly selected samples)
  • Uses nuScenes' original sensor data and annotations as foundation

Generation Pipeline

  1. Adversarial Vehicle Selection: Selects the most threatening vehicle based on multi-view visual input using a GRPO-finetuned Vision-Language Model (VLM).

  2. Two-Stage Trajectory Generation:

    • Collision Simulation: Creates aggressive trajectories that cause the adversarial vehicle to collide with the ego vehicle.
    • Evasion Refinement: Converts collision trajectories into realistic evasive maneuvers that avoid collision while retaining safety-critical properties.
  3. Video Synthesis: Produces high-fidelity, long-horizon, multi-view driving videos with the UniMLVG diffusion video generator.
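The three pipeline stages above can be sketched as one orchestration function. Every helper here (`select_adversary`, `generate_collision_trajectory`, and so on) is a hypothetical stub standing in for the actual VLM, trajectory, and diffusion-video models:

```python
# Hypothetical stubs standing in for the real models; each returns a
# placeholder value purely to show the data flow between stages.
def select_adversary(multi_view_frames):
    """Stage 1: pick the most threatening vehicle (stub for the VLM)."""
    return "adversary-token"

def generate_collision_trajectory(scene, adversary):
    """Stage 2a: aggressive trajectory toward the ego vehicle (stub)."""
    return ["collision-waypoints"]

def refine_to_evasion(collision_traj):
    """Stage 2b: convert into a realistic evasive maneuver (stub)."""
    return ["evasive-waypoints"]

def synthesize_video(scene, trajectory):
    """Stage 3: render multi-view video (stub for the diffusion model)."""
    return {"views": 6, "trajectory": trajectory}

def generate_scenario(scene, multi_view_frames):
    adversary = select_adversary(multi_view_frames)
    collision = generate_collision_trajectory(scene, adversary)
    evasive = refine_to_evasion(collision)
    return synthesize_video(scene, evasive)

result = generate_scenario("scene-0001", ["frame"] * 6)
print(result["views"])  # 6
```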

Filtering

Scenarios are filtered to ensure:

  • During the collision stage, the adversarial vehicle collides with the ego vehicle, without entering non-drivable areas or colliding with any other vehicles beforehand
  • During the evasion stage, the ego vehicle successfully avoids the adversarial vehicle, without colliding with any other vehicles or entering non-drivable areas
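The two filtering criteria can be expressed as a simple predicate over per-stage simulation outcomes. A sketch under assumed field names; `collides_with_ego`, `hits_other_vehicle`, and `leaves_drivable_area` are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class StageOutcome:
    """Hypothetical summary of one simulation stage (field names invented)."""
    collides_with_ego: bool     # adversarial vehicle hits the ego vehicle
    hits_other_vehicle: bool    # any collision with a non-ego vehicle
    leaves_drivable_area: bool  # any vehicle enters a non-drivable area

def keep_scenario(collision: StageOutcome, evasion: StageOutcome) -> bool:
    # Collision stage: the adversary must reach the ego vehicle cleanly.
    collision_ok = (collision.collides_with_ego
                    and not collision.hits_other_vehicle
                    and not collision.leaves_drivable_area)
    # Evasion stage: the ego vehicle must escape cleanly.
    evasion_ok = (not evasion.collides_with_ego
                  and not evasion.hits_other_vehicle
                  and not evasion.leaves_drivable_area)
    return collision_ok and evasion_ok

print(keep_scenario(StageOutcome(True, False, False),
                    StageOutcome(False, False, False)))  # True
```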

Intended Use

  • Robustness evaluation of autonomous driving systems
  • Stress-testing end-to-end AD models (e.g., UniAD, VAD, SparseDrive, DiffusionDrive)
  • Training end-to-end driving models to learn evasive behaviors in safety-critical scenarios

Limitations

  • Although guidance signals are used to generate annotations, the framework lacks a mechanism to discard outdated or irrelevant ones—for example, vehicles that have already exited the ego vehicle’s field of view.
  • Open-loop evaluation (no reactive ego agent)
  • Rendering artifacts compared to real sensor data

Privacy

  • Based on nuScenes data which has already undergone anonymization
  • No additional privacy concerns introduced by generation process