Update README.md

README.md CHANGED
@@ -29,7 +29,7 @@ dataset_info:
   download_size: 218600714
   dataset_size: 1021748919.5
 configs:
-- config_name:
+- config_name: β
   data_files:
   - split: en
     path: data/en-*
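
For reference, the `configs` block above is what the `datasets` library reads to map each split to its data files. A minimal loading sketch, assuming a placeholder repo id (substitute the dataset's real path on the Hub):

```python
from datasets import load_dataset

# "your-org/RxnBench" is a placeholder, not the dataset's actual repo id.
# "en" matches the split declared under data_files in the YAML above;
# the RxnBench-Zh column below implies a matching "zh" split.
ds_en = load_dataset("your-org/RxnBench", split="en")
print(ds_en)
```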
@@ -107,29 +107,29 @@ We evaluated several of the latest popular MLLMs, including both closed-source a

 | Model | Think | Weights | Update Time | RxnBench-En | RxnBench-Zh | Mean Score |
 | ---- |:----:|:----:|:----:|:----:|:----:|:----:|
-| GPT-5 |
-| Gemini-2.5-Pro |
-| GPT-5-mini |
-| Seed1.5-VL-Think |
-| GPT o3 |
-| GPT o4 mini |
-| InternVL3.5-241B-A28B |
-| Intern-S1 |
-| Seed1.5-VL |
-| InternVL3-78b |
-| Intern-S1-mini |
-| GLM-4.1V-9B-Thinking |
-| Qwen2.5-VL-72B |
-| Qwen2.5-VL-Max |
-| GPT-5-nano |
-| Qwen2.5-VL-32B |
-| Gemini-2.5-Flash |
-| GPT-4o |
-| Qwen2.5-VL-7b |
-| Qwen2.5-VL-3b |
-| GPT-4o mini |
+| GPT-5(high) | ✓ | Proprietary | 20250807 | **0.9279** | 0.9246 | **0.9263** |
+| Gemini-2.5-Pro | ✓ | Proprietary | 20250617 | 0.9095 | **0.9423** | 0.9259 |
+| GPT-5-mini | ✓ | Proprietary | 20250807 | 0.9108 | 0.9128 | 0.9118 |
+| Seed1.5-VL-Think | ✓ | Proprietary | 20250428 | 0.9056 | 0.9161 | 0.9109 |
+| GPT o3 | ✓ | Proprietary | 20250416 | 0.9056 | 0.9115 | 0.9086 |
+| GPT o4 mini | ✓ | Proprietary | 20250416 | 0.9062 | 0.9075 | 0.9069 |
+| InternVL3.5-241B-A28B | ✓ | Open | - | 0.9003 | 0.9062 | 0.9033 |
+| Intern-S1 | ✓ | Open | - | 0.8938 | 0.8944 | 0.8941 |
+| Seed1.5-VL | ✗ | Proprietary | 20250328 | 0.8518 | 0.8669 | 0.8594 |
+| InternVL3-78b | ✗ | Open | - | 0.8531 | 0.8308 | 0.8420 |
+| Intern-S1-mini | ✓ | Open | - | 0.8521 | 0.8282 | 0.8402 |
+| GLM-4.1V-9B-Thinking | ✓ | Open | - | 0.8392 | 0.8341 | 0.8367 |
+| Qwen2.5-VL-72B | ✗ | Open | - | 0.8341 | 0.8308 | 0.8325 |
+| Qwen2.5-VL-Max | ✗ | Proprietary | 20250813 | 0.8192 | 0.8262 | 0.8227 |
+| GPT-5-nano | ✓ | Proprietary | 20250807 | 0.7980 | 0.7941 | 0.7961 |
+| Qwen2.5-VL-32B | ✗ | Open | - | 0.7980 | 0.7908 | 0.7944 |
+| Gemini-2.5-Flash | ✓ | Proprietary | 20250617 | 0.6925 | 0.8557 | 0.7741 |
+| GPT-4o | ✗ | Proprietary | 20240806 | 0.7462 | 0.7436 | 0.7449 |
+| Qwen2.5-VL-7b | ✗ | Open | - | 0.7082 | 0.7233 | 0.7158 |
+| Qwen2.5-VL-3b | ✗ | Open | - | 0.6748 | 0.6643 | 0.6696 |
+| GPT-4o mini | ✗ | Proprietary | 20240718 | 0.6636 | 0.6066 | 0.6351 |
 | *Choice longest answer* | - | - | - | 0.4262 | 0.4525 | 0.4394 |
-| Deepseek-VL2 |
+| Deepseek-VL2 | ✗ | Open | - | 0.4426 | 0.4216 | 0.4321 |
 | *Random* | - | - | - | 0.2500 | 0.2500 | 0.2500 |
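
The Mean Score column in the updated table is consistent with an unweighted average of the RxnBench-En and RxnBench-Zh scores; a quick check against two of the reported rows:

```python
# Unweighted average of the En and Zh scores, compared with the table.
rows = {
    "Gemini-2.5-Pro": (0.9095, 0.9423),  # table reports 0.9259
    "GPT-4o mini": (0.6636, 0.6066),     # table reports 0.6351
}
for model, (en, zh) in rows.items():
    print(f"{model}: {(en + zh) / 2:.4f}")
# Gemini-2.5-Pro: 0.9259
# GPT-4o mini: 0.6351
```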