---
license: apache-2.0
task_categories:
  - visual-question-answering
  - object-detection
language:
  - en
tags:
  - vision
  - spatial-reasoning
  - bounding-box
  - vstar-bench
size_categories:
  - n<1K
dataset_info:
  features:
    - name: image
      dtype: image
    - name: text
      dtype: string
    - name: category
      dtype: string
    - name: question_id
      dtype: string
    - name: label
      dtype: string
    - name: bbox_target
      list:
        list: int64
    - name: bbox_category
      dtype: string
    - name: target_object
      dtype: 'null'
    - name: bbox_source
      dtype: string
    - name: original_image_size
      list: int64
  splits:
    - name: test
      num_bytes: 122037795
      num_examples: 191
  download_size: 121916164
  dataset_size: 122037795
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# vstar-bench with Bounding Box Annotations

## Dataset Description

This dataset extends lmms-lab/vstar-bench by adding bounding box annotations for target objects. The bounding box information was extracted from craigwu/vstar_bench and mapped to the lmms-lab version.

### Key Features

- Visual Spatial Reasoning: Tests understanding of spatial relationships in images
- Bounding Box Annotations: Each sample includes target object bounding boxes
- Multiple Choice QA: 4-way multiple-choice questions about spatial relationships
- High Coverage: 100.0% of samples have bounding box annotations

### Dataset Statistics

- Total Samples: 191
- Samples with Bbox: 191
- Coverage: 100.0% (see the quick check below)
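
The coverage figure can be reproduced with a quick pass over the `bbox_target` column; this is a minimal sketch that uses only the fields documented in the next section:

```python
from datasets import load_dataset

dataset = load_dataset("jae-minkim/vstar-bench-with-bbox")
test = dataset["test"]

# Reading a single column avoids decoding the images
with_bbox = sum(1 for boxes in test["bbox_target"] if boxes)
print(f"{with_bbox}/{len(test)} samples have bounding box annotations")
```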

## Dataset Structure

Each sample contains:

- image: PIL Image object
- text: Question text (includes a mention of the target object)
- bbox_target: List of target bounding boxes, each as [x, y, width, height]
- target_object: Name of the target object (may be null)
- label: Correct answer (A/B/C/D)
- question_id: Unique question identifier
- category: Question category (e.g., "relative_position")
- bbox_category: Category label associated with the bounding box
- bbox_source: Source of the bounding box annotation
- original_image_size: Size of the original image

### Example

```python
from datasets import load_dataset

dataset = load_dataset("jae-minkim/vstar-bench-with-bbox")
sample = dataset['test'][0]

print(f"Question: {sample['text']}")
print(f"Target Object: {sample['target_object']}")
print(f"Bounding Box: {sample['bbox_target']}")
print(f"Answer: {sample['label']}")
```

## Usage

### Load Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("jae-minkim/vstar-bench-with-bbox")

# Access test split
test_data = dataset['test']

# Iterate over samples
for sample in test_data:
    image = sample['image']
    bbox = sample['bbox_target']  # list of [x, y, width, height] boxes
    question = sample['text']
    answer = sample['label']
```

### Visualize Bounding Boxes

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from datasets import load_dataset

dataset = load_dataset("jae-minkim/vstar-bench-with-bbox")
sample = dataset['test'][0]

fig, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.imshow(sample['image'])

if sample['bbox_target']:
    # bbox_target is a list of boxes; draw the first one
    x, y, w, h = sample['bbox_target'][0]
    rect = patches.Rectangle(
        (x, y), w, h,
        linewidth=3, edgecolor='red', facecolor='none'
    )
    ax.add_patch(rect)
    # Fall back to bbox_category when target_object is null
    label_text = sample['target_object'] or sample['bbox_category']
    ax.text(x, y - 10, label_text,
            color='red', fontsize=12, fontweight='bold')

ax.set_title(sample['text'])
ax.axis('off')
plt.show()
```

## Applications

This dataset is particularly useful for:

1. Vision-Language Model Evaluation: Testing spatial reasoning capabilities
2. Attention Visualization: Analyzing whether models attend to the correct regions (see the sketch after this list)
3. Token Shifting Research: Redirecting high-norm tokens to target regions
4. Grounded QA: Question answering with explicit spatial grounding
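
For the attention-visualization use case, a minimal sketch is to measure how much of a model's attention mass falls inside the annotated boxes. The `attn` array below is a random stand-in for whatever patch-level map your model produces, and the 24x24 grid size is an arbitrary assumption:

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("jae-minkim/vstar-bench-with-bbox")
sample = dataset['test'][0]
img_w, img_h = sample['image'].size

# Hypothetical patch-level attention map, normalized to sum to 1.
# Replace with the map produced by your own model.
attn = np.random.rand(24, 24)
attn /= attn.sum()

# Pixel-level mask covering the annotated boxes
mask = np.zeros((img_h, img_w), dtype=bool)
for x, y, w, h in sample['bbox_target']:
    mask[y:y + h, x:x + w] = True

# Fraction of each attention cell that overlaps the target region
cell_h, cell_w = img_h / attn.shape[0], img_w / attn.shape[1]
grid_mask = np.zeros(attn.shape)
for i in range(attn.shape[0]):
    for j in range(attn.shape[1]):
        cell = mask[int(i * cell_h):int((i + 1) * cell_h),
                    int(j * cell_w):int((j + 1) * cell_w)]
        grid_mask[i, j] = cell.mean() if cell.size else 0.0

# Share of attention mass that lands on the target object
coverage = float((attn * grid_mask).sum())
print(f"Attention mass inside the target bbox: {coverage:.2%}")
```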

## Source Datasets

- lmms-lab/vstar-bench: Base dataset with questions and images
- craigwu/vstar_bench: Source of bounding box annotations

## Citation

If you use this dataset, please cite the original vstar-bench paper:

```bibtex
@article{wu2023vstar,
  title={V*: Guided Visual Search as a Core Mechanism in Multimodal LLMs},
  author={Wu, Penghao and Xie, Saining},
  journal={arXiv preprint arXiv:2312.14135},
  year={2023}
}
```

## License

Apache 2.0 (following the original vstar-bench license)

## Acknowledgments