---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: choices
    list: string
  - name: answer
    dtype: int32
  - name: meta_info
    struct:
    - name: title
      dtype: string
    - name: journal
      dtype: string
    - name: doi
      dtype: string
    - name: url
      dtype: string
  - name: question_type
    dtype: string
  splits:
  - name: en
    num_bytes: 546653187.125
    num_examples: 1525
  - name: zh
    num_bytes: 546319847.125
    num_examples: 1525
  download_size: 218606009
  dataset_size: 1092973034.25
configs:
- config_name: RxnBench-VQA
  data_files:
  - split: en
    path: data/en-*
  - split: zh
    path: data/zh-*
license: cc-by-nc-sa-4.0
task_categories:
- visual-question-answering
language:
- en
- zh
tags:
- chemistry
---

# RxnBench: A Benchmark for Chemical Reaction Figure Understanding


## 📘 Benchmark Summary

RxnBench (SF-QA) is a visual question answering (VQA) benchmark comprising 1,525 PhD-level multiple-choice questions (MCQs) on organic chemistry reaction understanding.

The benchmark is built from 305 scientific figures drawn from high-impact open-access journals.
For each figure, domain experts carefully designed five multiple-choice VQA questions targeting the interpretation of organic reaction diagrams.
These questions were further refined through multiple rounds of rigorous review and revision to ensure both clarity and scientific accuracy. 
The questions cover a variety of types, including the description of chemical reaction images, extraction of reaction content, recognition of molecules or Markush structures, and determination of mechanisms.
This benchmark challenges vision-language models on their foundational knowledge of organic chemistry, multimodal contextual reasoning, and chemical reasoning skills.


The benchmark is released in both English and Chinese versions. 



## 📑 Task Types

We categorize chemical reaction visual question answering tasks into six types:

- **Type 0 – Fact Extraction**: Direct retrieval of textual or numerical information from reaction schemes.
- **Type 1 – Reagent Roles and Functions Identification**: Identification of reagents and their functional roles, requiring chemical knowledge and reaction-type awareness.
- **Type 2 – Reaction Mechanism and Process Understanding**: Interpretation of reaction progression, including intermediates, catalytic cycles, and mechanistic steps.
- **Type 3 – Comparative Analysis and Reasoning**: Comparative evaluation, causal explanation, or outcome prediction under varying conditions.
- **Type 4 – Multi-step Synthesis and Global Understanding**: Comprehension of multi-step pathways, step-to-step coherence, and overall synthetic design.
- **Type 5 – Chemical Structure Recognition**: Extraction and reasoning-based parsing of chemical structures in SMILES or E-SMILES (as defined in the [MolParser](https://arxiv.org/abs/2411.11098) paper).

![output3](https://cdn-uploads.huggingface.co/production/uploads/65f7f16fb6941db5c2e7c4bf/oTOMcZE7oz-Pv4fUUpi0J.png)
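
Each example records its task type in the `question_type` column, so a single type can be scored in isolation, as in the minimal sketch below. The exact label strings are not spelled out on this card, so the sketch prints the unique values first; the substring check used as the filter is only an illustrative assumption.

```python
from collections import Counter
from datasets import load_dataset

en = load_dataset("UniParser/RxnBench", "RxnBench-VQA", split="en")

# Inspect the label strings actually stored in question_type before filtering;
# the substring check below ("5") is only an illustrative assumption.
print(Counter(en["question_type"]))

type5 = en.filter(lambda ex: "5" in str(ex["question_type"]))
print(f"Structure-recognition questions: {len(type5)}")
```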


## 🎯 Benchmark Evaluation

This benchmark evaluates model performance on multiple-choice question answering (MCQ) tasks.

We provide two versions of the prompt template, depending on the language setting.

**English Prompt**

```
Question: {question}
Choices:
A. {choice_A}
B. {choice_B}
C. {choice_C}
D. {choice_D}
Based on the image and the question, choose the most appropriate answer.
**Only output a single letter (A, B, C, or D)**. Do NOT output any other text or explanation.
```

**Chinese Prompt**

```
问题: {question}
选项:
A. {choice_A}
B. {choice_B}
C. {choice_C}
D. {choice_D}

请根据图像和问题，从以上四个选项中选择最合适的答案。
只输出单个字母 (A, B, C 或 D)，不要输出选项内容，也不要输出任何解释。
```

**Evaluation Protocol**

If the model's output is not one of A, B, C, or D, we use GPT-4o to map the output onto A–D based on the option content.
The final evaluation reports absolute accuracy on both the English and Chinese versions of the benchmark.

Code Example: https://github.com/uni-parser/RxnBench
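
For a quick local run, the following is a minimal sketch of the scoring loop under the protocol above, not the official implementation: `query_model` is a hypothetical placeholder for the MLLM under test, the GPT-4o mapping fallback is omitted, and `answer` is assumed to be a 0-based index into `choices`.

```python
import re
from datasets import load_dataset

PROMPT = (
    "Question: {question}\n"
    "Choices:\n"
    "A. {a}\nB. {b}\nC. {c}\nD. {d}\n"
    "Based on the image and the question, choose the most appropriate answer.\n"
    "**Only output a single letter (A, B, C, or D)**. "
    "Do NOT output any other text or explanation."
)

def query_model(image, prompt: str) -> str:
    """Hypothetical placeholder: call the MLLM under test and return its raw text output."""
    raise NotImplementedError

def to_letter(raw: str):
    # Accept outputs that begin with a bare letter; anything else would be mapped
    # onto A-D with GPT-4o under the protocol above (omitted in this sketch).
    m = re.match(r"\s*([ABCD])\b", raw)
    return m.group(1) if m else None

en = load_dataset("UniParser/RxnBench", "RxnBench-VQA", split="en")
correct = 0
for ex in en:
    a, b, c, d = ex["choices"]
    prompt = PROMPT.format(question=ex["question"], a=a, b=b, c=c, d=d)
    pred = to_letter(query_model(ex["image"], prompt))
    correct += int(pred == "ABCD"[ex["answer"]])  # answer assumed to be a 0-based index

print(f"RxnBench-En accuracy: {correct / len(en):.4f}")
```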

## 📊 Benchmark Leaderboard

We evaluated several of the latest popular MLLMs, including both closed-source and open-source models.

| Model | Think | Weight | API-Version | RxnBench-En | RxnBench-Zh | Mean Score |
|  ---- |:----:|:----:|:----:|:----:|:----:|:----:|
| Gemini-3-Flash-preview | √ | Proprietary | 20251217 | **0.9593** | **0.9652** | **0.9623** |
| Seed1.8-Think | √ | Proprietary | 20251218 | 0.9325 | 0.9403 | 0.9364 |
| Gemini-3-Pro-preview | √ | Proprietary | 20251119 | 0.9318 | 0.9403 | 0.9361 |
| GPT-5(high) | √ | Proprietary | 20250807 | 0.9279 | 0.9246 | 0.9263 |
| Gemini-2.5-Pro | √ | Proprietary | 20250617 | 0.9095 | 0.9423 | 0.9259 |
| GPT-5.1(high) | √ | Proprietary | 20251113 | 0.9213 | 0.9220 | 0.9216 |
| GPT-5(medium) | √ | Proprietary | 20250807 | 0.9207 | 0.9226 | 0.9216 |
| Qwen3-VL-235BA22B-Think | √ | Open | - | 0.9220 | 0.9134 | 0.9177 |
| Qwen3-VL-32B-Think | √ | Open | - | 0.9128 | 0.9161 | 0.9144 |
| GPT-5.1(medium) | √ | Proprietary | 20251113 | 0.9108 | 0.9141 | 0.9125 |
| GPT-5-mini | √ | Proprietary | 20250807 | 0.9108 | 0.9128 | 0.9118 |
| Seed1.5-VL-Think | √ | Proprietary | 20250428 | 0.9056 | 0.9161 | 0.9109 |
| GPT o3 | √ | Proprietary | 20250416 | 0.9056 | 0.9115 | 0.9086 |
| GPT o4 mini | √ | Proprietary | 20250416 | 0.9062 | 0.9075 | 0.9069 |
| InternVL3.5-241B-A28B | √ | Open | - | 0.9003 | 0.9062 | 0.9033 |
| Intern-S1 | √ | Open | - | 0.8938 | 0.8944 | 0.8941 |
| Qwen3-VL-30BA3B-Think | √ | Open | - | 0.8689 | 0.8590 | 0.8689 |
| Qwen3-VL-Plus | × | Proprietary | 20250923 | 0.8551 | 0.8656 | 0.8604 |
| Qwen3-VL-8B-Think | √ | Open | - | 0.8636 | 0.8564 | 0.8600 |
| Seed1.5-VL | × | Proprietary | 20250328 | 0.8518 | 0.8669 | 0.8594 |
| Qwen3-VL-235BA22B-Instruct | × | Open | - | 0.8492 | 0.8675 | 0.8584 |
| InternVL3-78b | × | Open | - | 0.8531 | 0.8308 | 0.8420 |
| Qwen3-VL-4B-Think | √ | Open | - | 0.8577 | 0.8256 | 0.8416 |
| Intern-S1-mini | √ | Open | - | 0.8521 | 0.8282 | 0.8402 |
| GLM-4.1V-9B-Thinking | √ | Open | - | 0.8392 | 0.8341 | 0.8367 |
| Qwen3-VL-32B-Instruct | × | Open | - | 0.8315 | 0.8407 | 0.8361 |
| Qwen2.5-VL-72B | × | Open | - | 0.8341 | 0.8308 | 0.8325 |
| Qwen2.5-VL-Max | × | Proprietary | 20250813 | 0.8192 | 0.8262 | 0.8227 |
| GPT-5-nano | √ | Proprietary | 20250807 | 0.7980 | 0.7941 | 0.7961 |
| Qwen2.5-VL-32B | × | Open | - | 0.7980 | 0.7908 | 0.7944 |
| Gemini-2.5-Flash | √ | Proprietary | 20250617 | 0.6925 | 0.8557 | 0.7741 |
| Qwen3-VL-8B-Instruct | × | Open | - | 0.7548 | 0.7495 | 0.7521 |
| Qwen3-VL-30BA3B-Instruct | × | Open | - | 0.7456 | 0.7436 | 0.7456 |
| GPT-4o | × | Proprietary | 20240806 | 0.7462 | 0.7436 | 0.7449 |
| Qwen2.5-VL-7B | × | Open | - | 0.7082 | 0.7233 | 0.7158 |
| Qwen3-VL-4B-Instruct | × | Open | - | 0.7023 | 0.7023 | 0.7023 |
| Qwen3-VL-2B-Think | √ | Open | - | 0.6780 | 0.6708 | 0.6744 |
| Qwen2.5-VL-3B | × | Open | - | 0.6748 | 0.6643 | 0.6696 |
| GPT-4o mini | × | Proprietary | 20240718 | 0.6636 | 0.6066 | 0.6351 |
| Qwen3-VL-2B-Instruct | × | Open | - | 0.5711 | 0.5928 | 0.5820 |
| *Choice longest answer* | - | - | - | 0.4262 | 0.4525 | 0.4394 |
| Deepseek-VL2 | × | Open | - | 0.4426 | 0.4216 | 0.4321 |
| *Random* | - | - | - | 0.2500 | 0.2500 | 0.2500 |


We also conducted separate evaluations for each task type (on RxnBench-En).

| Model | Think | Weight | API-Version | Type0 | Type1 | Type2 | Type3 | Type4 | Type5 |
|  ---- |:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Gemini-3-Flash-preview | √ | Proprietary | 20251217 | 0.9613 | **0.9643** | **0.9764** | **0.9630** | 0.9492 | **0.9030** |
| Seed1.8-Think | √ | Proprietary | 20251218 | 0.9331 | 0.9484 | 0.9527 | 0.9444 | 0.9492 | 0.8284 |
| Gemini-3-Pro-preview | √ | Proprietary | 20251119 | **0.9648** | 0.9246 | 0.9527 | 0.9398 | 0.9322 | 0.7463 |
| GPT-5(high) | √ | Proprietary | 20250807 | 0.9313 | 0.9444 | 0.9527 | 0.9167 | **0.9661** | 0.8358 |
| Gemini-2.5-Pro | √ | Proprietary | 20250617 | 0.9331 | 0.9246 | 0.9459 | 0.9491 | 0.9322 | 0.6343 |
| GPT-5.1(high) | √ | Proprietary | 20251113 | 0.9243 | 0.9524 | 0.9426 | 0.9167 | 0.9661 | 0.7910 |
| GPT-5(medium) | √ | Proprietary | 20250807 | 0.9349 | 0.9325 | 0.9493 | 0.9167 | 0.9492 | 0.7761 |
| Qwen3-VL-235BA22B-Think | √ | Open | - | 0.9190 | 0.9405 | 0.9459 | 0.9213 | 0.9322 | 0.8433 |
| Qwen3-VL-32B-Think | √ | Open | - | 0.9296 | 0.9405 | 0.9426 | 0.9259 | 0.9153 | 0.7015 |
| GPT-5.1(medium) | √ | Proprietary | 20251113 | 0.9243 | 0.9365 | 0.9426 | 0.9167 | 0.9492 | 0.7090 |
| GPT-5-mini | √ | Proprietary | 20250807 | 0.9225 | 0.9325 | 0.9257 | 0.9259 | 0.9831 | 0.7388 |
| Seed1.5-VL-Think | √ | Proprietary | 20250428 | 0.8996 | 0.9365 | 0.9358 | 0.9074 | 0.9153 | 0.8060 |
| GPT o3 | √ | Proprietary | 20250416 | 0.9313 | 0.9325 | 0.9223 | 0.8981 | 0.9492 | 0.7090 |
| GPT o4 mini | √ | Proprietary | 20250416 | 0.6391 | 0.7302 | 0.7500 | 0.6667 | 0.6271 | 0.4627 |
| InternVL3.5-241B-A28B | √ | Open | - | 0.8944 | 0.9127 | 0.9291 | 0.9167 | 0.9153 | 0.8134 |
| Intern-S1 | √ | Open | - | 0.9014 | 0.9127 | 0.9223 | 0.9028 | 0.8814 | 0.7463 |
| Qwen3-VL-30BA3B-Think | √ | Open | - | 0.8732 | 0.8810 | 0.9054 | 0.8843 | 0.9322 | 0.6940 |
| Qwen3-VL-Plus | × | Proprietary | 20250923 | 0.8275 | 0.8968 | 0.8986 | 0.8565 | 0.9153 | 0.7687 |
| Qwen3-VL-8B-Think | √ | Open | - | 0.8768 | 0.8730 | 0.8885 | 0.9028 | 0.8983 | 0.6567 |
| Seed1.5-VL | × | Proprietary | 20250328 | 0.9327 | 0.9127 | 0.9122 | 0.8472 | 0.8305 | 0.7015 |
| Qwen3-VL-235BA22B-Instruct | × | Open | - | 0.8204 | 0.8929 | 0.8986 | 0.8426 | 0.8814 | 0.7761 |
| InternVL3-78b | × | Open | - | 0.8556 | 0.8730 | 0.8885 | 0.8981 | 0.9153 | 0.6194 |
| Qwen3-VL-4B-Think | √ | Open | - | 0.8838 | 0.8770 | 0.8615 | 0.9074 | 0.8983 | 0.6045 |
| Intern-S1-mini | √ | Open | - | 0.8239 | 0.8690 | 0.8547 | 0.8611 | 0.8475 | 0.6791 |
| GLM-4.1V-9B-Thinking | √ | Open | - | 0.8433 | 0.8690 | 0.8649 | 0.8657 | 0.8814 | 0.6493 |
| Qwen3-VL-32B-Instruct | × | Open | - | 0.8169 | 0.8571 | 0.8885 | 0.8519 | 0.8305 | 0.6866 |
| Qwen2.5-VL-72B | × | Open | - | 0.8063 | 0.8770 | 0.9088 | 0.8102 | 0.9322 | 0.7090 |
| Qwen2.5-VL-Max | × | Proprietary | 20250813 | 0.7958 | 0.8571 | 0.8885 | 0.8194 | 0.8983 | 0.6642 |
| GPT-5-nano | √ | Proprietary | 20250807 | 0.8063 | 0.8452 | 0.8311 | 0.8241 | 0.7797 | 0.5672 |
| Qwen2.5-VL-32B | × | Open | - | 0.7729 | 0.8413 | 0.8750 | 0.8009 | 0.8305 | 0.6418 |
| Gemini-2.5-Flash | √ | Proprietary | 20250617 | 0.7799 | 0.6111 | 0.6757 | 0.6620 | 0.7627 | 0.5373 |
| Qwen3-VL-8B-Instruct | × | Open | - | 0.7113 | 0.8175 | 0.8446 | 0.8241 | 0.7627 | 0.5075 |
| Qwen3-VL-30BA3B-Instruct | × | Open | - | 0.7042 | 0.7937 | 0.8311 | 0.7824 | 0.7119 | 0.5970 |
| GPT-4o | × | Proprietary | 20240806 | 0.7359 | 0.8175 | 0.7973 | 0.7500 | 0.7627 | 0.5224 |
| Qwen2.5-VL-7B | × | Open | - | 0.6678 | 0.7659 | 0.8041 | 0.7130 | 0.6441 | 0.5373 |
| Qwen3-VL-4B-Instruct | × | Open | - | 0.6708 | 0.7302 | 0.7804 | 0.7222 | 0.6610 | 0.5970 |
| Qwen3-VL-2B-Think | √ | Open | - | 0.7342 | 0.6706 | 0.7128 | 0.7083 | 0.6102 | 0.3657 |
| Qwen2.5-VL-3B | × | Open | - | 0.6426 | 0.7381 | 0.7635 | 0.6898 | 0.6610 | 0.4776 |
| GPT-4o mini | × | Proprietary | 20240718 | 0.6391 | 0.7302 | 0.7500 | 0.6667 | 0.6271 | 0.4627 |
| Qwen3-VL-2B-Instruct | × | Open | - | 0.5405 | 0.6190 | 0.6318 | 0.6250 | 0.6102 | 0.3731 |
| Deepseek-VL2 | × | Open | - | 0.4120 | 0.5040 | 0.4899 | 0.4907 | 0.3729 | 0.3060 |


## 🆕 RxnBench-Doc

A single reaction image often lacks the information needed for full interpretation, requiring contextual text from the literature. Therefore, we also provide a benchmark for chemical reaction literature understanding.

https://huggingface.co/datasets/UniParser/RxnBench-Doc

## 📖 Citation

Our paper is coming soon ...