---
license: mit
language:
  - en
pretty_name: common-o
dataset_info:
  features:
    - name: image_1
      dtype: image
    - name: image_2
      dtype: image
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: objects_1
      dtype: string
    - name: objects_2
      dtype: string
    - name: num_objects_image_1
      dtype: int64
    - name: num_objects_image_2
      dtype: int64
    - name: question_template
      dtype: string
    - name: answer_type
      dtype: string
    - name: choices
      dtype: string
    - name: num_choices
      dtype: int64
    - name: num_ground_truth_objects
      dtype: int64
    - name: real_or_synthetic
      dtype: string
    - name: ground_truth_objects
      dtype: string
  splits:
    - name: main
      num_bytes: 5408696753
      num_examples: 10426
    - name: challenge
      num_bytes: 594218345
      num_examples: 12600
  download_size: 1102814055
  dataset_size: 6002915098
configs:
  - config_name: default
    data_files:
      - split: main
        path: data/main-*
      - split: challenge
        path: data/challenge-*
---

# Common-O

*measuring multimodal reasoning across scenes*

Common-O, inspired by cognitive tests for humans, probes multimodal LLMs' ability to reason across scenes by asking "what’s in common?"


Common-O comprises household objects.


We have two subsets: Common-O (3-8 objects) and Common-O Complex (8-16 objects).
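A minimal sketch of loading and inspecting both subsets, assuming (per the split config in the metadata above) that the `main` split holds Common-O and the `challenge` split holds Common-O Complex:

```python
import datasets

# Load every split; "main" is Common-O and "challenge" is
# Common-O Complex (assumption based on the split config above)
dataset = datasets.load_dataset("facebook/Common-O")
print(dataset["main"].num_rows, dataset["challenge"].num_rows)

# Inspect the fields of a single example
sample = dataset["main"][0]
print(sample["question"])
print(sample["answer_type"], sample["choices"])
print(sample["num_objects_image_1"], sample["num_objects_image_2"])
```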

## Multimodal LLMs excel at single-image perception but struggle with multi-scene reasoning

*(Figure: single-image vs. multi-image performance)*

## Evaluating a Multimodal LLM on Common-O

```python
import datasets

# Get a sample; the "main" split is Common-O,
# the "challenge" split is Common-O Complex
common_o = datasets.load_dataset("facebook/Common-O")["main"]
# common_o_complex = datasets.load_dataset("facebook/Common-O")["challenge"]
x = common_o[3]

# Query your multimodal LLM with both images and the question
output: str = model(x["image_1"], x["image_2"], x["question"])

check_answer(output, x["answer"])
```

To check the answer, we use an exact-match criterion:

```python
import re
from typing import List


def check_answer(generation: str, ground_truth: List[str]) -> bool:
    # The answer is expected on the final line of the generation
    preds = generation.split("\n")[-1]
    preds = re.sub("Answer:", "", preds)
    # Split the comma-separated objects and normalize whitespace
    preds = [p.strip() for p in preds.split(",")]
    # Exact match: the predicted objects must equal the ground truth,
    # ignoring order
    return sorted(preds) == sorted(ground_truth)
```