Yuan Zhang1,3* , Chun-Kai Fan1*, Junpeng Ma2*, Wenzhao Zheng3✉️, Tao Huang4, Kuan Cheng1,
Denis Gudovskiy5, Tomoyuki Okuno5, Yohei Nakata5, Kurt Keutzer3, Shanghang Zhang1✉️
1School of Computer Science, Peking University, 2Fudan University,
3UC Berkeley, 4The University of Sydney, 5Panasonic Holdings Corporation
🔥 [2024/10/15] We released SparseVLM and its Project Page! The Code is now open-source!
In vision-language models (VLMs), visual tokens usually account for a significant amount of the computational overhead despite carrying lower information density than text tokens. To address this, existing methods extract more compact image representations by modifying the image encoder or projector. While some recent works further sparsify visual tokens during decoding, they still ignore guidance from the language tokens, which contradicts the multimodal paradigm. We argue that visual tokens should be sparsified adaptively based on the question prompt, since the model may focus on different parts of the image (e.g., foreground or background) depending on the question, as shown in the figure below. Unlike previous methods that perform text-agnostic visual sparsification (c), e.g., the recent FastV, our SparseVLM (b) is guided by the question prompt to select relevant visual patches.
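The core idea, text-guided selection of visual tokens, can be illustrated with a minimal sketch. The snippet below is not the SparseVLM implementation; it is a hedged toy example (the function and tensor names are invented for illustration) showing how cross-attention-style scores from the prompt tokens could be used to rank visual tokens and keep only the most relevant ones.

```python
import torch

def prune_visual_tokens(visual_tokens, text_tokens, keep: int):
    """Toy sketch of text-guided visual token pruning (not the official SparseVLM code).

    visual_tokens: (num_visual, dim) visual token embeddings
    text_tokens:   (num_text, dim) question/prompt token embeddings
    keep:          number of visual tokens to retain
    """
    # Score each visual token by its similarity to the prompt tokens
    # (a simple dot-product attention stand-in for the real relevance estimate).
    scores = (text_tokens @ visual_tokens.T).softmax(dim=-1)  # (num_text, num_visual)
    relevance = scores.mean(dim=0)                            # (num_visual,)

    # Keep only the top-k most text-relevant visual tokens, preserving their order.
    topk = relevance.topk(keep).indices.sort().values
    return visual_tokens[topk], topk

# Example: 576 image patches reduced to 192 under a 10-token prompt.
vis = torch.randn(576, 1024)
txt = torch.randn(10, 1024)
kept, idx = prune_visual_tokens(vis, txt, keep=192)
print(kept.shape)  # torch.Size([192, 1024])
```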
- Clone this repository and navigate to the SparseVLMs folder
git clone https://github.com/Gumpest/SparseVLMs.git
cd SparseVLMs
- Install the necessary packages
conda create -n SparseVLMs python=3.10 -y
conda activate SparseVLMs
pip install -e .
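After installation, a quick sanity check can confirm that the package and GPU stack are visible to Python. This is only a hedged check, assuming the package installs under the LLaVA-style `llava` name used by the codebase it builds on; adjust the import if the name differs.

```python
# Minimal post-install sanity check (assumes the package is importable as `llava`,
# following the LLaVA codebase that SparseVLMs builds on).
import torch
import llava

print("CUDA available:", torch.cuda.is_available())
print("llava package loaded from:", llava.__file__)
```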
- Download Multimodal Benchmarks
Please follow the detailed instructions in LLaVA-Evaluation.
Specifically, `--sparse` in a script indicates whether to perform sparsification, while `--scale` and `--bias` control the degree of token sparsity (see the illustrative sketch after the examples below).
- Example for evaluating MME results (192 tokens, scale = 13.5, bias = 0.0):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/mme.sh
- Example for evaluating POPE results (128 tokens, scale = 9, bias = 6):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/pope.sh
- Example for evaluating TextVQA results (64 tokens, scale = 0.8, bias = 0.0):
CUDA_VISIBLE_DEVICES=0 bash scripts/v1_5/eval/textvqa.sh
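For orientation, the sparsification flags above might be consumed by an evaluation entry point roughly as in the following sketch. This is an illustrative, hedged example: the argument names come from the scripts, but the parsing code and defaults are assumptions, not the repository's actual implementation.

```python
import argparse

# Hedged sketch of how the sparsification flags could be parsed by an
# evaluation entry point; the real scripts may wire them up differently.
parser = argparse.ArgumentParser()
parser.add_argument("--sparse", action="store_true",
                    help="enable text-guided visual token sparsification")
parser.add_argument("--scale", type=float, default=13.5,
                    help="controls how aggressively visual tokens are pruned")
parser.add_argument("--bias", type=float, default=0.0,
                    help="offset applied when deciding how many tokens to keep")
args = parser.parse_args()

if args.sparse:
    print(f"Sparsification on: scale={args.scale}, bias={args.bias}")
```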
This project is released under the Apache 2.0 license.
If you use SparseVLM in your research, please cite our work by using the following BibTeX entry:
@article{zhang2024sparsevlm,
  title={SparseVLM: Visual Token Sparsification for Efficient Vision-Language Model Inference},
  author={Zhang, Yuan and Fan, Chun-Kai and Ma, Junpeng and Zheng, Wenzhao and Huang, Tao and Cheng, Kuan and Gudovskiy, Denis and Okuno, Tomoyuki and Nakata, Yohei and Keutzer, Kurt and others},
  journal={arXiv preprint arXiv:2410.04417},
  year={2024}
}
We extend our gratitude to the open-source efforts of TCFormer, LLaVA, MiniGemini, and VideoLLaVA.