CausalVLBench: Benchmarking Visual Causal Reasoning in Large Vision-Language Models

This is the source code for "CausalVLBench: Benchmarking Visual Causal Reasoning in Large Vision-Language Models" (EMNLP 2025 Main Conference).

Large language models (LLMs) have shown remarkable ability in various language tasks, especially with their emergent in-context learning capability. Extending LLMs to incorporate visual inputs, large vision-language models (LVLMs) have shown impressive performance in tasks such as recognition and visual question answering (VQA). Despite increasing interest in the utility of LLMs in causal reasoning tasks such as causal discovery and counterfactual reasoning, there has been relatively little work showcasing the abilities of LVLMs on visual causal reasoning tasks. We take this opportunity to formally introduce a comprehensive causal reasoning benchmark for multi-modal in-context learning from LVLMs. Our CausalVLBench encompasses three representative tasks: causal structure inference, intervention target prediction, and counterfactual prediction. We evaluate the ability of state-of-the-art open-source LVLMs on our causal reasoning tasks across three causal representation learning datasets and demonstrate their fundamental strengths and weaknesses. We hope that our benchmark elucidates the drawbacks of existing vision-language models and motivates new directions and paradigms in improving the visual causal reasoning abilities of LVLMs.
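To make the three tasks concrete, the sketch below shows what a single multiple-choice evaluation example per task might look like. This is purely illustrative: every field name, file name, and question wording here is an assumption, and the actual evaluation records are produced by eval_dataset.py (see Usage below).

    # Hypothetical illustration of one evaluation record per benchmark task.
    # Field names and contents are assumptions; the real schema is produced by eval_dataset.py.
    example_records = {
        "causal_structure_inference": {
            "images": ["pendulum/sample_000.png"],  # observed scene(s)
            "question": "Which causal graph best explains the variables in the image?",
            "choices": ["light_angle -> shadow_length", "shadow_length -> light_angle"],
            "answer": "light_angle -> shadow_length",
        },
        "intervention_target_prediction": {
            "images": ["pendulum/before.png", "pendulum/after.png"],  # pre/post intervention
            "question": "Which variable was intervened on between the two images?",
            "choices": ["pendulum_angle", "light_position", "shadow_length"],
            "answer": "light_position",
        },
        "counterfactual_prediction": {
            "images": ["pendulum/factual.png"],
            "question": "If the light position were changed, which option shows the correct counterfactual scene?",
            "choices": ["option_a.png", "option_b.png", "option_c.png"],
            "answer": "option_b.png",
        },
    }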

Usage

Training and evaluating

  1. Clone the repository

    git clone https://github.com/Akomand/CausalVLBench.git
    
  2. Create environment and install dependencies

    conda env create -f requirements/requirements.txt
    
  3. Generate synthetic datasets using scripts in data/data_generation/

  4. Create JSON files

    python eval_dataset.py
    
  5. Run inference script

     ./scripts/[task]/run_[model].sh
    
  6. Run evaluation script to obtain performance for all models

    python get_results.py
    
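As an illustration of step 6, the sketch below shows one way per-task, per-model accuracy could be aggregated from prediction files. The results/[task]/[model].json layout and the "prediction"/"answer" field names are assumptions made only for this sketch; get_results.py defines the actual format used in this repository.

    import json
    from pathlib import Path

    # Minimal aggregation sketch, assuming one JSON file per (task, model) that
    # contains a list of {"prediction": ..., "answer": ...} records. The real
    # layout and field names used by get_results.py may differ.
    def aggregate_accuracy(results_dir="results"):
        scores = {}
        for path in Path(results_dir).glob("*/*.json"):
            task, model = path.parent.name, path.stem
            records = json.loads(path.read_text())
            correct = sum(r["prediction"] == r["answer"] for r in records)
            scores[(task, model)] = correct / max(len(records), 1)
        return scores

    if __name__ == "__main__":
        for (task, model), acc in sorted(aggregate_accuracy().items()):
            print(f"{task:35s} {model:20s} {acc:.3f}")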

Data acknowledgements

Experiments are run using adapted versions of the following datasets to evaluate LVLMs on our benchmark tasks:

Datasets

  - Pendulum Dataset (link to dataset)
  - Water Flow Dataset (link to dataset)
  - Causal Circuit Dataset (link to dataset)

Citation

If you find our code and benchmark relevant to your work, we encourage you to cite our paper:

@inproceedings{komanduri2025causalvlbench,
  title={Causal{VLB}ench: Benchmarking Visual Causal Reasoning in Large Vision-Language Models},
  author={Aneesh Komanduri and Karuna Bhaila and Xintao Wu},
  booktitle={The 2025 Conference on Empirical Methods in Natural Language Processing},
  year={2025}
}
