
Evaluating Vision-Language Models on Misleading Data Visualizations (Dataset)

Overview

This dataset accompanies the paper:

“When Visuals Aren’t the Problem: Evaluating Vision-Language Models on Misleading Data Visualizations.”

The dataset is designed to evaluate whether Vision-Language Models (VLMs) can detect misleading information in visualization-caption pairs, and whether they can correctly attribute the source of the misleadingness to the appropriate error types: caption-level reasoning errors and visualization design errors.

Unlike prior benchmarks that primarily focus on chart understanding or visual distortions, this dataset enables fine-grained analysis of misleadingness arising from both textual reasoning and visualization design choices.


Dataset Structure

(Figure: 2 × 2 misleadingness grid)

The dataset follows the 2 × 2 misleadingness decomposition shown above.

2 × 2 mapping:

  • Misleading_Caption_Non_Misleading_Vis → caption-level reasoning errors; the visualization is not misleading
  • Non_Misleading_Caption_Misleading_Vis → visualization design errors; the caption is not misleading
  • Misleading_Caption_Misleading_Vis → both the caption and the visualization are misleading
  • Non_Misleading_Caption_Non_Misleading_Vis → neither the caption nor the visualization is misleading (control)

The exact top-level keys in data.json are:

  • Misleading_Caption_Non_Misleading_Vis
  • Non_Misleading_Caption_Misleading_Vis
  • Misleading_Caption_Misleading_Vis
  • Non_Misleading_Caption_Non_Misleading_Vis

Dataset Statistics

Subset                                       Count
Misleading_Caption_Non_Misleading_Vis          793
Non_Misleading_Caption_Misleading_Vis         1110
Misleading_Caption_Misleading_Vis              501
Non_Misleading_Caption_Non_Misleading_Vis      611
Total                                         3015

Data Sources

Subset                                       Source
Misleading_Caption_Non_Misleading_Vis        X/Twitter
Non_Misleading_Caption_Misleading_Vis        X/Twitter and subreddit DataIsUgly
Misleading_Caption_Misleading_Vis            X
Non_Misleading_Caption_Non_Misleading_Vis    subreddit DataIsBeautiful

Notes:

  • For all samples sourced from X, we use the sample IDs from Lisnic et al. [1].
  • In Non_Misleading_Caption_Misleading_Vis, the first 601 samples are from X and the remaining samples are from Reddit.

Dataset File

The dataset is provided as a single JSON file:

data.json

Structure:

{
  "data_type_name": {
    "sample_id": {
      "reasoning_error_names": [...],
      "visualization_error_names": [...],
      "text": "... (only present for Misleading_Caption_Misleading_Vis samples)"
    }
  }
}

Example:

{
  "Misleading_Caption_Non_Misleading_Vis": {
    "example_id1": {
      "reasoning_error_names": ["Cherry-picking", "Causal inference"],
      "visualization_error_names": null
    }
  },
  "Misleading_Caption_Misleading_Vis": {
    "example_id2": {
      "reasoning_error_names": ["Cherry-picking"],
      "visualization_error_names": ["Dual axis"],
      "text": "Example caption written by the authors that introduces reasoning errors."
    }
  }
}
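Given this layout, the per-subset counts reported under Dataset Statistics can be recomputed directly from the top-level keys. A minimal sketch (`subset_counts` is an illustrative helper, not shipped with the dataset):

```python
def subset_counts(data: dict) -> dict:
    """Map each top-level key of data.json to its number of samples."""
    return {subset: len(samples) for subset, samples in data.items()}

# Toy dict mirroring the data.json layout (not real dataset content)
toy = {
    "Misleading_Caption_Non_Misleading_Vis": {
        "id1": {"reasoning_error_names": ["Cherry-picking"],
                "visualization_error_names": None},
        "id2": {"reasoning_error_names": ["Causal inference"],
                "visualization_error_names": None},
    },
    "Non_Misleading_Caption_Non_Misleading_Vis": {
        "id3": {"reasoning_error_names": None,
                "visualization_error_names": None},
    },
}
print(subset_counts(toy))
# {'Misleading_Caption_Non_Misleading_Vis': 2, 'Non_Misleading_Caption_Non_Misleading_Vis': 1}
```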

Dataset Fields

Field                        Description
sample_id                    Identifier corresponding to the original post (tweet or Reddit post)
reasoning_error_names        List of caption-level reasoning errors present in the example
visualization_error_names    List of visualization design errors present in the chart
text                         Caption text (only provided for Misleading_Caption_Misleading_Vis samples)

Important Note on the text Field

The text field is only provided for Misleading_Caption_Misleading_Vis samples. For these samples:

  • The captions were written by the authors
  • The goal is to introduce specific reasoning errors
  • The visualization is reused while the caption introduces the misleading reasoning

For the other three subsets (Misleading_Caption_Non_Misleading_Vis, Non_Misleading_Caption_Misleading_Vis, and Non_Misleading_Caption_Non_Misleading_Vis), the dataset does not include the caption text, so the text field is not present in those entries.
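Code that iterates over every entry should therefore read text defensively with dict.get. A minimal sketch under the layout described above (author_written_captions is a hypothetical helper, not part of the dataset):

```python
def author_written_captions(data: dict) -> dict:
    """Collect {sample_id: caption} for entries that carry a "text" field.

    Per the notes above, only Misleading_Caption_Misleading_Vis entries do.
    """
    captions = {}
    for samples in data.values():
        for sample_id, sample in samples.items():
            text = sample.get("text")  # absent in the other three subsets
            if text is not None:
                captions[sample_id] = text
    return captions

# Toy example mirroring the documented structure
toy = {
    "Misleading_Caption_Misleading_Vis": {
        "ex2": {"reasoning_error_names": ["Cherry-picking"],
                "visualization_error_names": ["Dual axis"],
                "text": "Author-written caption."},
    },
    "Misleading_Caption_Non_Misleading_Vis": {
        "ex1": {"reasoning_error_names": ["Causal inference"],
                "visualization_error_names": None},
    },
}
print(author_written_captions(toy))
# {'ex2': 'Author-written caption.'}
```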

Usage

The raw data.json file can be downloaded with the huggingface_hub library and loaded with the standard json module.

from huggingface_hub import hf_hub_download
import json
# Download the raw JSON file from the dataset repo
json_path = hf_hub_download(
    repo_id="MaybeMessi/MisVisBench",
    repo_type="dataset",
    filename="data.json"
)
# Load the JSON
with open(json_path, "r", encoding="utf-8") as f:
    data = json.load(f)
# Iterate through the dataset
for category_name, samples in data.items():
    for sample_id, sample in samples.items():
        reasoning_errors = sample["reasoning_error_names"]
        visualization_errors = sample["visualization_error_names"]
        print("Category:", category_name)
        print("Sample ID:", sample_id)
        print("Reasoning Errors:", reasoning_errors)
        print("Visualization Errors:", visualization_errors)
        print()

Error Taxonomy

Caption-Level Reasoning Errors

  • Cherry-picking
  • Causal inference
  • Setting an arbitrary threshold
  • Failure to account for statistical nuance
  • Incorrect reading of chart
  • Issues with data validity
  • Misrepresentation of scientific studies

Visualization Design Errors

  • Truncated axis
  • Dual axis
  • Value encoded as area or volume
  • Inverted axis
  • Uneven binning
  • Unclear encoding
  • Inappropriate encoding
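Once data.json is loaded, the frequency of each taxonomy label can be tallied with collections.Counter. A sketch assuming the field layout documented above (error_frequencies is an illustrative helper, not part of the dataset):

```python
from collections import Counter

def error_frequencies(data: dict, field: str) -> Counter:
    """Tally labels across all subsets for one field, either
    "reasoning_error_names" or "visualization_error_names"."""
    counts = Counter()
    for samples in data.values():
        for sample in samples.values():
            counts.update(sample.get(field) or [])  # field may be null/absent
    return counts

# Toy example mirroring the documented structure
toy = {
    "Misleading_Caption_Non_Misleading_Vis": {
        "a": {"reasoning_error_names": ["Cherry-picking", "Causal inference"],
              "visualization_error_names": None},
        "b": {"reasoning_error_names": ["Cherry-picking"],
              "visualization_error_names": None},
    },
}
print(error_frequencies(toy, "reasoning_error_names"))
# Counter({'Cherry-picking': 2, 'Causal inference': 1})
```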

Examples: Caption-Level Reasoning Errors

The accompanying chart images are not reproduced in this card; each row pairs a caption with its labeled reasoning error.

  • "Reminder: Just because we've hit a peak does not mean we've hit THE peak." → Cherry-picking
  • "The positive impact of the UK's vaccination efforts in one graph" → Causal inference
  • "This in a country of 56 million. Lift lockdown now, the virus is just gone." → Setting an arbitrary threshold
  • "The numbers absolutely speak for themselves. Get vaccinated!" → Failure to account for statistical nuance
  • "The flu is 10 times less deadly - particularly for elderly - than Covid!" → Incorrect reading of chart
  • "This is a test of our humanity" → Issues with data validity
  • "SARS-CoV-2 positivity rates associated with circulating 25-hydroxyvitamin D levels (https://tinyurl.com/5n9xm536)" → Misrepresentation of scientific studies

Examples: Visualization Design Errors

The accompanying chart images are not reproduced in this card; each row pairs a caption with its labeled visualization error.

  • "Respiratory deaths at 10 year low!" → Truncated axis
  • "May 17 Update: US COVID-19 Test Results: Test-and-Trace Success for Smallpox" → Dual axis
  • "Corona Virus Interactive Map." → Value encoded as area or volume
  • "Propaganda: RECORD NUMBER OF COVID POSITIVE CASES. Reality:" → Inverted axis
  • "Interesting colour coding from the BBC" → Uneven binning
  • "The Navajo Nation crushed the Covid curve. Success is possible." → Unclear encoding
  • "The worst pandemic of the most contagious disease we have seen for 100 years." → Inappropriate encoding

Dataset Purpose

This dataset enables evaluation of whether models can:

  1. Detect misleading chart-caption pairs
  2. Determine whether misleadingness arises from the caption, visualization, or both
  3. Attribute misleadingness to specific error categories

This allows researchers to analyze how well VLMs handle reasoning-based misinformation versus visualization design distortions.
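For task 1, ground-truth binary labels follow directly from the subset names: only the control subset is non-misleading. The sketch below assumes model predictions arrive as a {sample_id: bool} dict; detection_accuracy and that format are illustrative assumptions, not an official evaluation script:

```python
CONTROL = "Non_Misleading_Caption_Non_Misleading_Vis"

def detection_accuracy(data: dict, predictions: dict) -> float:
    """Score binary 'is this chart-caption pair misleading?' predictions.

    predictions maps sample_id -> bool (True = model flags it as misleading).
    """
    correct, total = 0, 0
    for subset, samples in data.items():
        truth = subset != CONTROL  # misleading unless in the control subset
        for sample_id in samples:
            if sample_id in predictions:
                total += 1
                correct += int(predictions[sample_id] == truth)
    return correct / total if total else 0.0

# Toy example: two misleading samples, one control sample
toy = {
    "Misleading_Caption_Misleading_Vis": {"m1": {}, "m2": {}},
    "Non_Misleading_Caption_Non_Misleading_Vis": {"c1": {}},
}
preds = {"m1": True, "m2": False, "c1": False}
print(detection_accuracy(toy, preds))  # 2 of 3 correct
```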


References

[1] Lisnic, Maxim, Cole Polychronis, Alexander Lex, and Marina Kogan. "Misleading beyond visual tricks: How people actually lie with charts." In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1-21. 2023.


License

The dataset is released under the CC BY-NC-SA 4.0 license.


Contact

For any issues related to the dataset, feel free to reach out to lalaiharsh26@gmail.com.


Citation

@article{lalai2026misleadingvlm,
  title={When Visuals Aren’t the Problem: Evaluating Vision-Language Models on Misleading Data Visualizations},
  author={Lalai, Harsh Nishant and Shah, Raj Sanjay and Pfister, Hanspeter and Varma, Sashank and Guo, Grace},
  year={2026}
}