Organization Card

We bet on a future where video reasoning is the next fundamental intelligence paradigm.

🌐 Website · 💻 GitHub

About Us

Video-Reason is a research initiative dedicated to advancing video reasoning as the next foundational intelligence paradigm: video captures spatiotemporal, embodied world experience more naturally than text alone. We build large-scale datasets, benchmarks, and models to systematically study and scale video reasoning capabilities.

VBVR: A Very Big Video Reasoning Suite

Our flagship project, VBVR (Very Big Video Reasoning), introduces an unprecedentedly large-scale resource for video reasoning research:

  • 200 curated reasoning tasks across 5 domains: Perception, Abstraction, Spatiality, Transformation, and Knowledge
  • 1,000,000+ video clips — approximately three orders of magnitude larger than existing datasets
  • Verifiable evaluation via rule-based, human-aligned scorers (no model-based judging)
  • Early signs of emergent generalization to unseen reasoning tasks through large-scale scaling studies
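To illustrate what "verifiable evaluation via rule-based, human-aligned scorers (no model-based judging)" can mean in practice, here is a minimal sketch of two deterministic scorers. The task types, answer formats, and parsing rules below are illustrative assumptions, not VBVR's actual specification:

```python
import re

# Sketch of rule-based scorers in the spirit of VBVR's evaluation:
# deterministic string parsing plus exact comparison, with no LLM judge.
# The task formats below (counting, multiple choice) are hypothetical.

def score_counting_task(prediction: str, ground_truth: int) -> float:
    """Score a hypothetical object-counting task: extract the first
    integer from the model's free-form answer and compare exactly."""
    match = re.search(r"-?\d+", prediction)
    if match is None:
        return 0.0  # no parseable number -> zero credit
    return 1.0 if int(match.group()) == ground_truth else 0.0

def score_multiple_choice(prediction: str, correct_option: str) -> float:
    """Score a hypothetical A/B/C/D task by matching the first
    standalone option letter in the (upper-cased) answer."""
    match = re.search(r"\b([A-D])\b", prediction.strip().upper())
    if match is None:
        return 0.0
    return 1.0 if match.group(1) == correct_option.upper() else 0.0
```

Because the scorers are pure functions of (prediction, ground truth), every score is reproducible and auditable, which is the key property a rule-based protocol buys over model-based judging.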

Releases

| Resource | Description | Link |
| --- | --- | --- |
| VBVR-Wan2.2 | Strong baseline model fine-tuned from Wan2.2-I2V-A14B on the VBVR Dataset | Model |
| VBVR-Dataset | 1M video reasoning training samples across 100 curated task generators (~310 GB) | Dataset |
| VBVR-Bench-Data | Official benchmark test set (500 samples across in-domain and out-of-domain splits) | Dataset |
| VBVR-Bench-Leaderboard | Public leaderboard for standardized model comparison | Space |

Citation

If you use VBVR in your research, please cite:

@article{vbvr2026,
  title   = {A Very Big Video Reasoning Suite},
  author  = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
             Wiedemer, Thadd{\"a}us and Gao, Qingying and Luo, Dezhi and
             Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
             Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
             Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
             Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
             Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
             Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
             Xu, Yile and Xu, Hua and Blacutt, Kenton and Nguyen, Tin and
             Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
             Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
             Milli{\`e}re, Rapha{\"e}l and Shi, Freda and Vasconcelos, Nuno and
             Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
             Lin, Dahua and Liu, Ziwei and Kumar, Vikash and Li, Yijiang and
             Yang, Lei and Cai, Zhongang and Deng, Hokin},
  journal = {arXiv preprint arXiv:2602.20159},
  year    = {2026},
  url     = {https://arxiv.org/abs/2602.20159}
}