---
license: mit
task_categories:
- image-classification
- image-to-text
- zero-shot-image-classification
language:
- en
pretty_name: COLA
size_categories:
- 10K<n<100K
tags:
- compositionality
- vision-language
- visual-genome
- clevr
- paco
configs:
- config_name: multiobjects
  data_files:
  - split: val
    path: data/multiobjects.parquet
- config_name: singleobjects_gqa
  data_files:
  - split: val
    path: data/singleobjects_gqa.parquet
- config_name: singleobjects_clevr
  data_files:
  - split: val
    path: data/singleobjects_clevr.parquet
- config_name: singleobjects_paco
  data_files:
  - split: val
    path: data/singleobjects_paco.parquet
---

# COLA: Compose Objects Localized with Attributes
Self-contained Hugging Face port of the COLA benchmark from the paper "How to adapt vision-language models to Compose Objects Localized with Attributes?".
- 📄 Paper: https://arxiv.org/abs/2305.03689
- 🌐 Project page: https://cs-people.bu.edu/array/research/cola/
- 💻 Original code & data: https://github.com/ArijitRay1993/COLA
This repository bundles the benchmark annotations as Parquet files and the referenced
images as regular files under images/, so the dataset is fully self-contained.
## Dataset Structure
```
.
├── data/
│   ├── multiobjects.parquet
│   ├── singleobjects_gqa.parquet
│   ├── singleobjects_clevr.parquet
│   ├── singleobjects_paco.parquet
│   ├── singleobjects_gqa_labels.json
│   ├── singleobjects_clevr_labels.json
│   └── singleobjects_paco_labels.json
└── images/
    ├── vg/<vg_id>.jpg        # Visual Genome images (multiobjects + GQA)
    ├── clevr/valA/*.png      # CLEVR-CoGenT valA
    ├── clevr/valB/*.png      # CLEVR-CoGenT valB
    ├── coco/val2017/*.jpg    # COCO val2017 (PACO)
    └── coco/train2017/*.jpg  # COCO train2017 (PACO)
```
Image paths stored in the parquet files are relative to the repository root, e.g.
`images/vg/2390970.jpg`. Load them by joining with the local clone / snapshot path.
## Configs / Splits
### `multiobjects` (210 pairs)

A hard image–caption matching task. Each row contains two images and two captions whose objects/attributes are swapped: caption 1 applies to image 1 (not image 2), and vice versa.
| Field | Type | Description |
|---|---|---|
| `image1` | string | Relative path to image 1 |
| `caption1` | string | Caption describing image 1 |
| `image2` | string | Relative path to image 2 |
| `caption2` | string | Caption describing image 2 |
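One natural way to score this paired structure (a sketch only, not necessarily the paper's exact protocol — see the original repo for the reference evaluation) is to count a pair as solved only when each caption scores highest on its own image. Below, `score` stands in for any image-text similarity function (e.g. a CLIP score); the `toy` scorer is purely illustrative.

```python
# Sketch of a pair-level accuracy check for the `multiobjects` config.
# `score(image_path, caption)` can be any image-text similarity function;
# the toy stand-in below just lets the snippet run on its own.
def pair_correct(score, ex):
    """A pair counts as solved only if each caption scores highest
    on its own image (both directions must be right)."""
    return (
        score(ex["image1"], ex["caption1"]) > score(ex["image2"], ex["caption1"])
        and score(ex["image2"], ex["caption2"]) > score(ex["image1"], ex["caption2"])
    )

# Toy scorer: "matches" when the filename and caption share a first letter.
toy = lambda img, cap: 1.0 if img.split("/")[-1][0] == cap[0] else 0.0

ex = {"image1": "images/vg/a.jpg", "caption1": "a red chair",
      "image2": "images/vg/b.jpg", "caption2": "blue sofa"}
print(pair_correct(toy, ex))  # True: the toy scorer resolves this pair
```

Averaging `pair_correct` over all 210 pairs gives a single accuracy number; a model with no compositional understanding scores near chance on this bidirectional criterion.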
### `singleobjects_gqa` (2,589 rows), `singleobjects_clevr` (30,000 rows), `singleobjects_paco` (7,921 rows)
Multi-label classification across fixed vocabularies of multi-attribute object classes (320 for GQA, 96 for CLEVR, 400 for PACO). The label lists live at `data/singleobjects_<subset>_labels.json`.
| Field | Type | Description |
|---|---|---|
| `image` | string | Relative path to the image |
| `objects_attributes` | string (JSON) | Objects + attributes annotation (GQA and CLEVR only) |
| `label` | list[int] | Binary indicator per class (length matches the labels vocabulary) |
| `hard_list` | list[int] | Indicator of whether each class is "hard" for this image |
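Mapping a row's binary vectors back to class names is a simple zip against the vocabulary. This sketch assumes the labels file is a JSON list of class-name strings (check the file in your snapshot); a tiny inline stand-in is used here so the snippet is self-contained.

```python
import json

# Stand-in for: vocab = json.load(open("data/singleobjects_gqa_labels.json"))
# (assumed format: a JSON list of class-name strings)
vocab = ["red chair", "blue chair", "red table"]

# A toy row in the shape described in the table above.
row = {"label": [1, 0, 1], "hard_list": [1, 0, 0]}

# Positive classes for this image, and the subset marked "hard".
positives = [c for c, y in zip(vocab, row["label"]) if y == 1]
hard = [c for c, h in zip(vocab, row["hard_list"]) if h == 1]
print(positives)  # ['red chair', 'red table']
print(hard)       # ['red chair']
```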
For a given class, the paper's mAP metric is computed on images where `hard_list == 1` for that class. See `scripts/eval.py` in the original repo for the exact metric.
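The restriction described above can be sketched as follows. This is a toy re-implementation of the stated behaviour (per-class average precision over only the "hard" rows), not the reference code — defer to `scripts/eval.py` for the exact metric.

```python
def average_precision(scores, labels):
    """AP over a ranked list: mean of precision at each positive hit."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, prec_sum = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            hits += 1
            prec_sum += hits / rank
    return prec_sum / max(hits, 1)

def hard_map(scores, labels, hard, n_classes):
    """mAP where class c is evaluated only on rows with hard[r][c] == 1."""
    aps = []
    for c in range(n_classes):
        rows = [r for r in range(len(scores)) if hard[r][c] == 1]
        if not rows:
            continue  # class never "hard": skip it
        aps.append(average_precision([scores[r][c] for r in rows],
                                     [labels[r][c] for r in rows]))
    return sum(aps) / len(aps)

# Tiny single-class example: three rows, all marked hard.
scores = [[0.9], [0.2], [0.5]]
labels = [[1], [0], [1]]
hard   = [[1], [1], [1]]
print(hard_map(scores, labels, hard, n_classes=1))  # 1.0
```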
## Loading
```python
from datasets import load_dataset

mo   = load_dataset("array/cola", "multiobjects", split="val")
gqa  = load_dataset("array/cola", "singleobjects_gqa", split="val")
clv  = load_dataset("array/cola", "singleobjects_clevr", split="val")
paco = load_dataset("array/cola", "singleobjects_paco", split="val")
```
To open an image, resolve it against the local snapshot root:
```python
import os

from huggingface_hub import snapshot_download
from PIL import Image

root = snapshot_download("array/cola", repo_type="dataset")
ex = mo[0]
img1 = Image.open(os.path.join(root, ex["image1"]))
img2 = Image.open(os.path.join(root, ex["image2"]))
```
Or, if you've cloned the repo with `git lfs`, just open the paths directly:

```python
Image.open(f"{REPO_DIR}/{ex['image1']}")
```
## Licensing / Source notes
- Visual Genome, CLEVR-CoGenT, and COCO images are redistributed here under their
respective original licenses. Please refer to the upstream datasets:
- Visual Genome (CC BY 4.0)
- CLEVR-CoGenT (CC BY 4.0)
- COCO 2017 (CC BY 4.0 for annotations; Flickr terms for images)
- The COLA annotations (parquet files and label lists) are released under the MIT license, matching the original COLA repo.
## Citation
```bibtex
@article{ray2023cola,
  title   = {COLA: How to adapt vision-language models to Compose Objects Localized with Attributes?},
  author  = {Ray, Arijit and Radenovic, Filip and Dubey, Abhimanyu and Plummer, Bryan A. and Krishna, Ranjay and Saenko, Kate},
  journal = {arXiv preprint arXiv:2305.03689},
  year    = {2023}
}
```