Datasets · Modalities: Image, Text · Formats: parquet · Libraries: Datasets, pandas
AI4Industry committed
Commit bcaadc5 · verified · 1 parent: b59e7a7

Update README.md

Files changed (1): README.md (+4 −3)
README.md CHANGED

@@ -50,15 +50,16 @@ tags:
 
 ## 📘 Benchmark Summary
 
-RxnBench is a visual question answering (VQA) benchmark comprising 1,525 PhD-level multiple-choice questions (MCQs) on organic chemistry reaction understanding. All data are annotated by experts in the organic chemistry domain and subsequently validated through two rounds of expert review.
+RxnBench is a visual question answering (VQA) benchmark comprising 1,525 PhD-level multiple-choice questions (MCQs) on organic chemistry reaction understanding.
 
 The benchmark is built from 305 scientific figures drawn from high-impact open-access journals.
 For each figure, domain experts carefully designed five multiple-choice VQA questions targeting the interpretation of organic reaction diagrams.
 These questions were further refined through multiple rounds of rigorous review and revision to ensure both clarity and scientific accuracy.
 The questions cover a variety of types, including the description of chemical reaction images, extraction of reaction content, recognition of molecules or Markush structures, and determination of mechanisms.
-This benchmark challenges vision-language models on their foundational knowledge of organic chemistry, multimodal contextual reasoning, and chemical reasoning skills.
+This benchmark challenges vision-language models on their foundational knowledge of organic chemistry, multimodal contextual reasoning, and chemical reasoning skills.
 
-The benchmark is released in both English and Chinese versions.
+
+The benchmark is released in both English and Chinese versions. All data are annotated by experts in the organic chemistry domain and subsequently validated through two rounds of expert review.
 
 
 ## 🎯 Benchmark Evaluation