Update corrected data + README metadata

Files changed:
- README.md (+44, -72)
- data/vistoolbench_all.parquet (+2, -2)

README.md
CHANGED
````diff
@@ -1,72 +1,44 @@
-[old README lines 1-44: content not captured in this view]
-ds = load_dataset("path/to/dataset")
-
-# Access a sample
-sample = ds['train'][0]
-print(sample['prompt'])
-print(sample['image'])  # PIL Image
-
-# Parse rubrics
-import json
-rubrics = json.loads(sample['rubrics'])
-for rubric_id, rubric in rubrics.items():
-    print(f"{rubric['description']} (weight: {rubric['weight']})")
-```
-
-## Splits
-
-- `train`: Full dataset (1204 samples)
-
-## Citation
-
-```
-@article{guo2025beyond,
-  title={Beyond seeing: Evaluating multimodal llms on tool-enabled image perception, transformation, and reasoning},
-  author={Guo, Xingang and Tyagi, Utkarsh and Gosai, Advait and Vergara, Paula and Park, Jayeon and Montoya, Ernesto Gabriel Hern{\'a}ndez and Zhang, Chen Bo Calvin and Hu, Bin and He, Yunzhong and Liu, Bing and others},
-  journal={arXiv preprint arXiv:2510.12712},
-  year={2025}
-}
-```
+---
+dataset_info:
+  features:
+  - name: id
+    dtype: string
+  - name: turncase
+    dtype: string
+  - name: num_turns
+    dtype: int32
+  - name: prompt_category
+    dtype: string
+  - name: eval_focus
+    dtype: string
+  - name: prompt
+    dtype: string
+  - name: golden_answer
+    dtype: string
+  - name: image
+    dtype: image
+  - name: images
+    sequence:
+      dtype: image
+  - name: num_images
+    dtype: int32
+  - name: tool_trajectory
+    dtype: string
+  - name: rubrics
+    dtype: string
+  splits:
+  - name: train
+    num_bytes: 5981789093
+    num_examples: 1204
+  download_size: 5981789093
+  dataset_size: 5981789093
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: data/*.parquet
+---
+
+VisuAlToolBench is a challenging benchmark for assessing tool-enabled visual perception, transformation, and reasoning in multimodal LLMs. It evaluates whether models can not only think about images but also think with images by actively manipulating visuals (e.g., crop, edit, enhance) and integrating general-purpose tools to solve complex tasks. The dataset contains single-turn and multi-turn tasks across diverse domains, each accompanied by detailed rubrics for systematic evaluation. Parquet files under `data/` are auto-indexed by the Hub and power the Dataset Viewer.
+
+Paper: BEYOND SEEING: Evaluating Multimodal LLMs on Tool-enabled Image Perception, Transformation, and Reasoning
````
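The usage snippet removed above still maps cleanly onto the new schema. A minimal sketch, assuming a placeholder repo id (`ORG/VisuAlToolBench` is not the dataset's real Hub path) and the feature names from the YAML header:

```python
import json

from datasets import load_dataset

# Placeholder repo id -- substitute the actual Hub path of this dataset.
ds = load_dataset("ORG/VisuAlToolBench")

sample = ds["train"][0]
print(sample["prompt"])
print(sample["image"])  # decoded as a PIL Image
print(sample["num_images"], len(sample["images"]))

# `rubrics` is stored as a JSON string (a dict of rubric id ->
# description/weight, per the old usage snippet above).
rubrics = json.loads(sample["rubrics"])
for rubric_id, rubric in rubrics.items():
    print(f"{rubric['description']} (weight: {rubric['weight']})")
```

Because the `configs` block routes the default config's `train` split to `data/*.parquet`, the same rows can also be loaded without the card via `load_dataset("parquet", data_files="data/vistoolbench_all.parquet")`.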
data/vistoolbench_all.parquet
CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:a7717b54aeaf237de5c26a334e7049e94aba00988bf0e3d4bf989e22fe91cb87
+size 5981789093
```
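The parquet file itself lives in Git LFS, so the repository stores only this pointer: `oid` is the SHA-256 of the file content and `size` its length in bytes. A minimal integrity check for a local copy, assuming it was pulled to `data/vistoolbench_all.parquet`:

```python
import hashlib

# Values taken from the new LFS pointer above.
EXPECTED_OID = "a7717b54aeaf237de5c26a334e7049e94aba00988bf0e3d4bf989e22fe91cb87"
EXPECTED_SIZE = 5_981_789_093

sha256 = hashlib.sha256()
size = 0
with open("data/vistoolbench_all.parquet", "rb") as f:
    # Stream in 1 MiB chunks; the file is ~5.6 GiB and should not be read whole.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
        size += len(chunk)

assert size == EXPECTED_SIZE, f"size mismatch: got {size}"
assert sha256.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("local parquet matches the LFS pointer")
```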