Add link to paper and video classification task category (#2)
- Add link to paper and video classification task category (842e45b3a5e579b29479efb94ec71d4c68ebdc8b)
Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>
README.md
CHANGED
@@ -30,11 +30,13 @@ configs:
   data_files:
   - split: test
     path: data/test-*
+task_categories:
+- video-classification
 ---
 
 # VideoEval-Pro
 
-VideoEval-Pro is a robust and realistic long video understanding benchmark containing open-ended, short-answer QA problems. The dataset is constructed by reformatting questions from four existing long video understanding MCQ benchmarks: Video-MME, MLVU, LVBench, and LongVideoBench into free-form questions.
+VideoEval-Pro is a robust and realistic long video understanding benchmark containing open-ended, short-answer QA problems. The dataset is constructed by reformatting questions from four existing long video understanding MCQ benchmarks: Video-MME, MLVU, LVBench, and LongVideoBench into free-form questions. The paper can be found [here](https://huggingface.co/papers/2505.14640).
 
 The evaluation code and scripts are available at: [TIGER-AI-Lab/VideoEval-Pro](https://github.com/TIGER-AI-Lab/VideoEval-Pro)
 
@@ -140,4 +142,4 @@ Each example in the dataset contains:
     --num_frames 32 \
     --max_retries 10 \
     --num_threads 1
-```
+```
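
For context, here is a minimal sketch of loading the dataset after this change and inspecting one open-ended QA example. The Hub repository id and the printed fields are assumptions for illustration; only the `test` split is confirmed by the metadata in the diff above.

```python
# Minimal sketch, under stated assumptions:
# - the Hub repo id "TIGER-AI-Lab/VideoEval-Pro" is assumed, not taken from this diff
# - only the "test" split is confirmed by the dataset metadata above
from datasets import load_dataset

ds = load_dataset("TIGER-AI-Lab/VideoEval-Pro", split="test")  # repo id assumed

print(ds)      # dataset summary: features and number of rows
print(ds[0])   # one open-ended, short-answer QA example
```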