# MMNeedle
MMNeedle is a stress test for long-context multimodal reasoning. Each example contains a sequence of haystack images created by stitching MS COCO sub-images into 1×1, 2×2, 4×4, or 8×8 grids. Given textual needle descriptions (derived from MS COCO captions), models must predict which haystack image and which sub-image cell matches the caption—or report that the needle is absent.
This dataset card accompanies the official Hugging Face release so researchers no longer need to download from Google Drive or regenerate the benchmark from MS COCO.
## Dataset structure

- Sequences (`sequence_length`): either a single stitched image or a set of 10 stitched images.
- Grid sizes (`grid_rows`, `grid_cols`): {1, 2, 4, 8} with square layouts.
- Needles per query (`needles_per_query`): {1, 2, 5}. Each query provides that many captions.
- Examples per configuration: 10,000. Half contain the needle(s); half are negatives.
- Total examples: 210,000 (21 configurations × 10k samples).
Every example stores the full list of haystack image paths, the ground-truth
needle locations (`image_index`, `row`, `col`), the MS COCO image IDs for the
needles, the natural-language captions, and a `has_needle` boolean.
## Usage

```python
from datasets import load_dataset

ds = load_dataset("Wang-ML-Lab/MMNeedle", split="test")
example = ds[0]
print(example.keys())
# dict_keys(['id', 'sequence_length', 'grid_rows', 'grid_cols', 'needles_per_query',
#            'haystack_images', 'needle_locations', 'needle_image_ids',
#            'needle_captions', 'has_needle'])
```
Each entry in `haystack_images` is a PIL-compatible image object. `needle_captions`
contains one string per requested needle (even for negative examples, where the
corresponding location is `(-1, -1, -1)`).
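As a quick sanity check on this layout, positives and negatives can be separated on the `has_needle` flag. The sketch below uses mock records that mirror the schema above; the field names come from this card, but the `id` values and locations are illustrative:

```python
# Mock records mirroring the MMNeedle schema; values are illustrative.
examples = [
    {"id": "pos-0", "has_needle": True,
     "needle_locations": [{"image_index": 0, "row": 1, "col": 3}]},
    {"id": "neg-0", "has_needle": False,
     "needle_locations": [{"image_index": -1, "row": -1, "col": -1}]},
]

# Split the samples on the `has_needle` flag.
positives = [ex for ex in examples if ex["has_needle"]]
negatives = [ex for ex in examples if not ex["has_needle"]]

# Negative examples keep a placeholder location of (-1, -1, -1) per caption.
assert all(loc["image_index"] == -1
           for ex in negatives for loc in ex["needle_locations"])
```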
## Data fields

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique identifier combining configuration and sample id. |
| `sequence_length` | int | Number of stitched haystack images shown to the model. |
| `grid_rows`, `grid_cols` | int | Dimensions of the stitched grid (each cell is 256×256 px). |
| `needles_per_query` | int | Number of captions provided for the sample (1, 2, or 5). |
| `haystack_images` | list of Image | Ordered haystack images for the sequence. |
| `needle_locations` | list of dict | One dict per caption with `image_index`, `row`, and `col` (−1 when absent). |
| `needle_image_ids` | list of string | MS COCO filenames that generated each caption. |
| `needle_captions` | list of string | MS COCO captions used as the needle descriptions. |
| `has_needle` | bool | True if at least one caption corresponds to a haystack cell. |
## Recommended evaluation protocol

- Feed the ordered haystack images (preserving grid layout) plus the instruction template from the MMNeedle paper to your multimodal model.
- Parse the model output into `(image_index, row, col)` triples.
- Compare against `needle_locations` to compute accuracy for positives and the false-positive rate for negatives.

See the repository's `needle.py` for a reference implementation.
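The parse-and-compare steps can be sketched as follows. This is a minimal illustration, not the paper's official scorer: the reply format `"image i, row r, col c"` and both helper names are assumptions, so adapt the parsing to your own prompt template.

```python
import re

def parse_prediction(text):
    """Parse a reply such as 'image 3, row 1, col 2' into an
    (image_index, row, col) triple. The reply format is an assumption;
    adapt the regex to your own prompt template."""
    nums = re.findall(r"-?\d+", text)
    if len(nums) < 3:
        return (-1, -1, -1)  # unparseable reply -> treat as "needle absent"
    return tuple(int(n) for n in nums[:3])

def exact_accuracy(predictions, gold_locations):
    """Fraction of samples whose predicted triple matches the gold triple."""
    hits = sum(p == g for p, g in zip(predictions, gold_locations))
    return hits / len(gold_locations)

# Example: one correct positive, one correct negative ("needle absent").
preds = [parse_prediction("image 3, row 1, col 2"),
         parse_prediction("the needle is not present")]
golds = [(3, 1, 2), (-1, -1, -1)]
print(exact_accuracy(preds, golds))  # -> 1.0
```

Scoring negatives with the same `(-1, -1, -1)` placeholder keeps a single accuracy metric; a false positive on a negative sample simply fails the exact match.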
## Source data
- Images & Captions: MS COCO 2014 validation split (CC BY 4.0).
- Needle Metadata: Automatically generated by the MMNeedle authors; included here as JSON files.
## Licensing

All stitched haystack images inherit the Creative Commons Attribution 4.0 License from MS COCO. At a minimum, attribution should cite both MMNeedle and MS COCO.
## Citations

```bibtex
@article{wang2024mmneedle,
  title={Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models},
  author={Wang, Hengyi and Shi, Haizhou and Tan, Shiwei and Qin, Weiyi and Wang, Wenyuan and Zhang, Tunyu and Nambi, Akshay and Ganu, Tanuja and Wang, Hao},
  journal={arXiv preprint arXiv:2406.11230},
  year={2024}
}

@inproceedings{lin2014microsoft,
  title={Microsoft COCO: Common Objects in Context},
  author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C. Lawrence},
  booktitle={ECCV},
  year={2014}
}
```