---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: turncase
      dtype: string
    - name: num_turns
      dtype: int32
    - name: prompt_category
      dtype: string
    - name: eval_focus
      dtype: string
    - name: prompt
      dtype: string
    - name: golden_answer
      dtype: string
    - name: image
      dtype: image
    - name: images
      sequence:
        dtype: image
    - name: num_images
      dtype: int32
    - name: tool_trajectory
      dtype: string
    - name: rubrics
      dtype: string
  splits:
    - name: train
      num_bytes: 5981789093
      num_examples: 1204
  download_size: 5981789093
  dataset_size: 5981789093
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/*.parquet
---

VisualToolBench is a challenging benchmark for assessing tool-enabled visual perception, transformation, and reasoning in multimodal LLMs. It evaluates whether models can not only think *about* images but also think *with* them, by actively manipulating visuals (e.g., cropping, editing, enhancing) and integrating general-purpose tools to solve complex tasks. The dataset contains single-turn and multi-turn tasks across diverse domains, each accompanied by detailed rubrics for systematic evaluation. Parquet files under `data/` are auto-indexed by the Hub and power the Dataset Viewer.
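Since the Hub auto-indexes the Parquet files, the dataset can be loaded with the Hugging Face `datasets` library. Below is a minimal sketch; the repo id `your-org/VisualToolBench` is a placeholder (substitute this dataset's actual Hub path), and streaming is used so the full ~6 GB download is deferred until examples are consumed.

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual Hub path.
ds = load_dataset("your-org/VisualToolBench", split="train", streaming=True)

# Inspect one example's scalar fields.
example = next(iter(ds))
print(example["id"], example["num_turns"], example["prompt_category"])
print(example["prompt"][:200])

# num_turns separates single-turn from multi-turn tasks.
multi_turn = (ex for ex in ds if ex["num_turns"] > 1)
```

Note that `tool_trajectory` and `rubrics` are stored as plain strings, so any structured content they hold must be parsed by the consumer.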

Paper: Beyond Seeing: Evaluating Multimodal LLMs on Tool-enabled Image Perception, Transformation, and Reasoning