Upload folder using huggingface_hub

Files changed:
- README.md +70 -35
- manifest.json +24 -0
- vistoolbench_1204.parquet +3 -0
README.md
CHANGED
@@ -1,37 +1,72 @@

Removed (previous dataset card):

```yaml
---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: turncase
    dtype: string
  - name: prompt_category
    dtype: string
  - name: eval_focus
    dtype: string
  - name: prompt
    dtype: string
  - name: images_by_turn
    sequence:
      sequence:
        dtype: image
  - name: rubrics
    sequence: string
  splits:
  - name: train
    num_bytes: 3506927656
    num_examples: 1191
  download_size: 3506927656
  dataset_size: 3506927656
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.parquet
---
```

VisuAlToolBench is a challenging benchmark to assess tool-enabled visual perception, transformation, and reasoning in multimodal LLMs. It evaluates whether models can not only think about images but also think with images by actively manipulating visuals (e.g., crop, edit, enhance) and integrating general-purpose tools to solve complex tasks. The dataset contains single-turn and multi-turn tasks across diverse domains, each accompanied by detailed rubrics for systematic evaluation. Parquet files under `data/` are auto-indexed by the Hub and power the Dataset Viewer.

Paper: [BEYOND SEEING: Evaluating Multimodal LLMs on Tool-enabled Image Perception, Transformation, and Reasoning](https://static.scale.com/uploads/654197dc94d34f66c0f5184e/vtb_paper.pdf)

Added (new dataset card):

# VisToolBench Dataset

A benchmark dataset for evaluating vision-language models on tool-use tasks.

## Dataset Statistics

- **Total samples**: 1204
- **Single-turn**: 603
- **Multi-turn**: 601

## Schema

| Column | Type | Description |
|--------|------|-------------|
| `id` | string | Unique task identifier |
| `turncase` | string | Either "single-turn" or "multi-turn" |
| `num_turns` | int | Number of conversation turns (1 for single-turn) |
| `prompt_category` | string | Task category (e.g., "medical", "scientific", "general") |
| `eval_focus` | string | What aspect is being evaluated (e.g., "visual_reasoning", "tool_use") |
| `prompt` | string | The user prompt/question. For multi-turn tasks, turns are prefixed with `[Turn N]` (see the helper sketch below the table) |
| `golden_answer` | string | The reference/ground-truth answer |
| `image` | Image | Primary image for the task (displayed in the HF viewer) |
| `images` | List[Image] | All images associated with the task |
| `num_images` | int | Total number of images |
| `tool_trajectory` | string | JSON string of tool calls made (if applicable) |
| `rubrics` | string | JSON string of evaluation rubrics with weights and metadata |
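
Because a multi-turn prompt arrives as a single string with `[Turn N]` markers, evaluation code typically needs to split it back into turns. A minimal sketch, assuming the marker format stated above; the helper name `split_turns` is ours, not part of the dataset:

```python
import re

def split_turns(prompt: str) -> list[str]:
    """Split a prompt on [Turn N] markers; single-turn prompts pass through."""
    parts = re.split(r"\[Turn (\d+)\]", prompt)
    if len(parts) == 1:
        return [prompt.strip()]
    # With one capture group, re.split returns [prefix, n1, text1, n2, text2, ...],
    # so the turn texts sit at every second position starting from index 2.
    return [text.strip() for text in parts[2::2]]
```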

## Rubrics Format

Each rubric entry contains the following fields (an illustrative entry is sketched after the list):

- `description`: What the rubric evaluates
- `weight`: Importance weight (1-5)
- `objective/subjective`: Whether evaluation is objective or subjective
- `explicit/implicit`: Whether the answer is explicit or implicit in the image
- `category`: List of categories (e.g., "instruction following", "truthfulness")
- `critical`: Whether this is a critical rubric ("yes"/"no")
- `final_answer`: Whether this relates to the final answer ("yes"/"no")
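
For concreteness, here is what one entry of the parsed `rubrics` mapping might look like. Only the field names come from the list above; the rubric ID and every value are invented for illustration:

```python
# Hypothetical parsed rubric entry; values are invented for illustration.
example_rubrics = {
    "rubric_1": {
        "description": "Identifies the street sign text in the cropped region",
        "weight": 5,
        "objective/subjective": "objective",
        "explicit/implicit": "explicit",
        "category": ["truthfulness"],
        "critical": "yes",
        "final_answer": "yes",
    }
}
```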

## Usage

```python
import json

from datasets import load_dataset

# Load the dataset
ds = load_dataset("path/to/dataset")

# Access a sample
sample = ds['train'][0]
print(sample['prompt'])
print(sample['image'])  # PIL Image

# Parse rubrics
rubrics = json.loads(sample['rubrics'])
for rubric_id, rubric in rubrics.items():
    print(f"{rubric['description']} (weight: {rubric['weight']})")
```
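
Since `turncase` takes exactly the two values listed in the schema, slicing the split by task type is a one-liner with the standard `datasets` filter API; a minimal sketch, continuing from the `ds` loaded above:

```python
# Keep only the multi-turn tasks; expect 601 rows per the dataset statistics.
multi_turn = ds["train"].filter(lambda ex: ex["turncase"] == "multi-turn")
print(len(multi_turn))
```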

## Splits

- `train`: Full dataset (1204 samples)

## Citation

```
@article{guo2025beyond,
  title={Beyond Seeing: Evaluating Multimodal {LLM}s on Tool-enabled Image Perception, Transformation, and Reasoning},
  author={Guo, Xingang and Tyagi, Utkarsh and Gosai, Advait and Vergara, Paula and Park, Jayeon and Montoya, Ernesto Gabriel Hern{\'a}ndez and Zhang, Chen Bo Calvin and Hu, Bin and He, Yunzhong and Liu, Bing and others},
  journal={arXiv preprint arXiv:2510.12712},
  year={2025}
}
```
manifest.json
ADDED

@@ -0,0 +1,24 @@

```json
{
  "single_json": "single_turn_data_corrected_with_rubrics_weights.json",
  "multi_json": "multi_turn_data_corrected_with_rubrics_weights.json",
  "counts": {
    "single": 603,
    "multi": 601,
    "total": 1204
  },
  "columns": [
    "id",
    "turncase",
    "num_turns",
    "prompt_category",
    "eval_focus",
    "prompt",
    "golden_answer",
    "image",
    "images",
    "num_images",
    "tool_trajectory",
    "rubrics"
  ],
  "out_parquet": "hf_upload_final_corrected/vistoolbench_1204.parquet"
}
```
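
The manifest records the expected row counts, so it can double as a sanity check on the uploaded parquet. A sketch of such a check, assuming the manifest and parquet sit in the working directory and that `pandas` with a parquet engine is installed (neither is part of this upload):

```python
import json

import pandas as pd

# Load the manifest and verify the row counts against the parquet file.
with open("manifest.json") as f:
    manifest = json.load(f)

# Read only the turncase column to avoid materializing the image bytes.
df = pd.read_parquet("vistoolbench_1204.parquet", columns=["turncase"])
assert len(df) == manifest["counts"]["total"]
assert (df["turncase"] == "single-turn").sum() == manifest["counts"]["single"]
assert (df["turncase"] == "multi-turn").sum() == manifest["counts"]["multi"]
print("manifest counts match:", manifest["counts"])
```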
vistoolbench_1204.parquet
ADDED

@@ -0,0 +1,3 @@

```
version https://git-lfs.github.com/spec/v1
oid sha256:a7717b54aeaf237de5c26a334e7049e94aba00988bf0e3d4bf989e22fe91cb87
size 5981789093
```

(Git LFS pointer file; the parquet itself is about 5.98 GB and is fetched via LFS.)