
CaCoVID: Contribution-aware Token Compression for Video Understanding via Reinforcement Learning

💡 About CaCoVID

Video large language models (VideoLLMs) have demonstrated remarkable capabilities in video understanding tasks. However, the redundancy of video tokens introduces significant computational overhead during inference, limiting practical deployment. Many compression algorithms have been proposed that prioritize retaining the tokens with the highest attention scores, so as to minimize perturbations of the attention computation. However, the correlation between attention scores and the actual contribution of tokens to correct answers remains ambiguous. To address this limitation, we propose a novel Contribution-aware token Compression algorithm for VIDeo understanding (CaCoVID) that explicitly optimizes the token selection policy based on the contribution of tokens to correct predictions. First, we introduce a reinforcement learning-based framework that optimizes a policy network to select the video token combinations with the greatest contribution to correct predictions. This paradigm shifts the focus from passive token preservation to active discovery of optimal compressed token combinations. Second, we propose a combinatorial policy optimization algorithm with online combination-space sampling, which dramatically reduces the exploration space for video token combinations and accelerates the convergence of policy optimization. Code is available at https://github.com/LivingFutureLab/CaCoVID.

📌 Highlights

  1. Aligned with Accuracy. Compared to attention-based pruning strategies, CaCoVID directly optimizes the policy network with feedback from large model predictions, avoiding the potential misalignment between token attention scores and their actual contributions to correct answers.
  2. Question-aware Pruning. CaCoVID achieves question-aware token pruning by establishing interaction between video tokens and questions.
  3. Simple and Efficient. The policy network of CaCoVID is simple and efficient, achieving lower pruning latency compared to previous algorithms.
  4. Strong Performance. CaCoVID can identify the video tokens most critical to answering questions, thereby achieving higher performance at the same compression ratio.

🚀 Performance and Efficiency

CaCoVID directly estimates the contribution of each token to the correct answer using feedback from large models, retaining the video tokens most critical to answering the question accurately. Its simple and efficient policy network, paired with an effective optimization algorithm for estimating token contribution scores, achieves higher average performance at lower pruning latency.

📦 Citation

If our work is useful for your research, please consider citing as follows:

```bibtex
@misc{ma2026contributionawaretokencompressionefficient,
      title={Contribution-aware Token Compression for Efficient Video Understanding via Reinforcement Learning},
      author={Yinchao Ma and Qiang Zhou and Zhibin Wang and Xianing Chen and Hanqing Yang and Jun Song and Bo Zheng},
      year={2026},
      eprint={2602.01649},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.01649},
}
```