yunzhong-scale committed on
Commit acb511b · verified · 1 Parent(s): 8fed9dd

Update corrected data + README metadata

Files changed (2):
  1. README.md +44 -72
  2. data/vistoolbench_all.parquet +2 -2
README.md CHANGED
@@ -1,72 +1,44 @@
- # VisToolBench Dataset
-
- A benchmark dataset for evaluating vision-language models on tool-use tasks.
-
- ## Dataset Statistics
-
- - **Total samples**: 1204
- - **Single-turn**: 603
- - **Multi-turn**: 601
-
- ## Schema
-
- | Column | Type | Description |
- |--------|------|-------------|
- | `id` | string | Unique task identifier |
- | `turncase` | string | Either "single-turn" or "multi-turn" |
- | `num_turns` | int | Number of conversation turns (1 for single-turn) |
- | `prompt_category` | string | Task category (e.g., "medical", "scientific", "general") |
- | `eval_focus` | string | What aspect is being evaluated (e.g., "visual_reasoning", "tool_use") |
- | `prompt` | string | The user prompt/question. For multi-turn, turns are prefixed with `[Turn N]` |
- | `golden_answer` | string | The reference/ground-truth answer |
- | `image` | Image | Primary image for the task (displayed in HF viewer) |
- | `images` | List[Image] | All images associated with the task |
- | `num_images` | int | Total number of images |
- | `tool_trajectory` | string | JSON string of tool calls made (if applicable) |
- | `rubrics` | string | JSON string of evaluation rubrics with weights and metadata |
-
- ## Rubrics Format
-
- Each rubric entry contains:
- - `description`: What the rubric evaluates
- - `weight`: Importance weight (1-5)
- - `objective/subjective`: Whether evaluation is objective or subjective
- - `explicit/implicit`: Whether the answer is explicit or implicit in the image
- - `category`: List of categories (e.g., "instruction following", "truthfulness")
- - `critical`: Whether this is a critical rubric ("yes"/"no")
- - `final_answer`: Whether this relates to the final answer ("yes"/"no")
-
- ## Usage
-
- ```python
- from datasets import load_dataset
-
- # Load the dataset
- ds = load_dataset("path/to/dataset")
-
- # Access a sample
- sample = ds['train'][0]
- print(sample['prompt'])
- print(sample['image'])  # PIL Image
-
- # Parse rubrics
- import json
- rubrics = json.loads(sample['rubrics'])
- for rubric_id, rubric in rubrics.items():
-     print(f"{rubric['description']} (weight: {rubric['weight']})")
- ```
-
- ## Splits
-
- - `train`: Full dataset (1204 samples)
-
- ## Citation
-
- ```
- @article{guo2025beyond,
-   title={Beyond seeing: Evaluating multimodal llms on tool-enabled image perception, transformation, and reasoning},
-   author={Guo, Xingang and Tyagi, Utkarsh and Gosai, Advait and Vergara, Paula and Park, Jayeon and Montoya, Ernesto Gabriel Hern{\'a}ndez and Zhang, Chen Bo Calvin and Hu, Bin and He, Yunzhong and Liu, Bing and others},
-   journal={arXiv preprint arXiv:2510.12712},
-   year={2025}
- }
- ```
 
+ ---
+ dataset_info:
+   features:
+   - name: id
+     dtype: string
+   - name: turncase
+     dtype: string
+   - name: num_turns
+     dtype: int32
+   - name: prompt_category
+     dtype: string
+   - name: eval_focus
+     dtype: string
+   - name: prompt
+     dtype: string
+   - name: golden_answer
+     dtype: string
+   - name: image
+     dtype: image
+   - name: images
+     sequence:
+       dtype: image
+   - name: num_images
+     dtype: int32
+   - name: tool_trajectory
+     dtype: string
+   - name: rubrics
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 5981789093
+     num_examples: 1204
+   download_size: 5981789093
+   dataset_size: 5981789093
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/*.parquet
+ ---
+
+ VisuAlToolBench is a challenging benchmark for assessing tool-enabled visual perception, transformation, and reasoning in multimodal LLMs. It evaluates whether models can not only think about images but also think with images by actively manipulating visuals (e.g., crop, edit, enhance) and integrating general-purpose tools to solve complex tasks. The dataset contains single-turn and multi-turn tasks across diverse domains, each accompanied by detailed rubrics for systematic evaluation. Parquet files under `data/` are auto-indexed by the Hub and power the Dataset Viewer.
+
+ Paper: Beyond Seeing: Evaluating Multimodal LLMs on Tool-enabled Image Perception, Transformation, and Reasoning
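
Since the new README drops the old usage section, a minimal loading sketch consistent with the schema declared in the YAML header may help; `your-org/vistoolbench` is a placeholder repo id, not something confirmed by this commit, and the rubric layout follows the removed schema docs above.

```python
import json

from datasets import load_dataset

# Placeholder repo id -- substitute the dataset's actual Hub path.
ds = load_dataset("your-org/vistoolbench", split="train")

sample = ds[0]
print(sample["turncase"], sample["prompt_category"], sample["num_images"])

# `rubrics` is stored as a JSON string keyed by rubric id (per the old schema docs).
rubrics = json.loads(sample["rubrics"])
for rubric_id, rubric in rubrics.items():
    print(f"{rubric['description']} (weight: {rubric['weight']})")
```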
data/vistoolbench_all.parquet CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:80f38b0a3862c95298a9f2b28588fe7b1d6a072119ef90562240f9a561684a31
- size 3506927656
+ oid sha256:a7717b54aeaf237de5c26a334e7049e94aba00988bf0e3d4bf989e22fe91cb87
+ size 5981789093
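
The pointer swap replaces the ~3.5 GB parquet with a ~6.0 GB one whose byte size matches `num_bytes` in the new YAML header. A quick local sanity check, assuming a clone with the LFS object fetched (`git lfs pull`):

```python
import os

import pyarrow.parquet as pq

path = "data/vistoolbench_all.parquet"

# File size should match the new LFS pointer (5981789093 bytes).
print(os.path.getsize(path))

# Row count should match num_examples (1204) declared in the README metadata.
meta = pq.read_metadata(path)
print(meta.num_rows, meta.num_columns)
```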