Update README.md
## Benchmark Results

We compare Ministral 3 to similarly sized models. Underlined values mark the best score in each column within each model-size group.

### Reasoning

| Model                 | AIME25       | AIME24       | GPQA Diamond | LiveCodeBench |
|-----------------------|--------------|--------------|--------------|---------------|
| **Ministral 3 14B**   | <u>0.850</u> | <u>0.898</u> | <u>0.712</u> | <u>0.646</u>  |
| Qwen3-14B (Thinking)  | 0.737        | 0.837        | 0.663        | 0.593         |
|                       |              |              |              |               |
| **Ministral 3 8B**    | 0.787        | <u>0.860</u> | 0.668        | <u>0.616</u>  |
| Qwen3-VL-8B-Thinking  | <u>0.798</u> | <u>0.860</u> | <u>0.671</u> | 0.580         |
|                       |              |              |              |               |
| **Ministral 3 3B**    | <u>0.721</u> | <u>0.775</u> | 0.534        | <u>0.548</u>  |
| Qwen3-VL-4B-Thinking  | 0.697        | 0.729        | <u>0.601</u> | 0.513         |

### Instruct

| Model                    | Arena Hard   | WildBench   | MATH Maj@1   | MM MTBench     |
|--------------------------|--------------|-------------|--------------|----------------|
| **Ministral 3 14B**      | <u>0.551</u> | <u>68.5</u> | <u>0.904</u> | <u>8.49</u>    |
| Qwen3 14B (Non-Thinking) | 0.427        | 65.1        | 0.870        | Not multimodal |
| Gemma3-12B-Instruct      | 0.436        | 63.2        | 0.854        | 6.70           |
|                          |              |             |              |                |
| **Ministral 3 8B**       | 0.509        | <u>66.8</u> | 0.876        | <u>8.08</u>    |
| Qwen3-VL-8B-Instruct     | <u>0.528</u> | 66.3        | <u>0.946</u> | 8.00           |
|                          |              |             |              |                |
| **Ministral 3 3B**       | 0.305        | <u>56.8</u> | 0.830        | 7.83           |
| Qwen3-VL-4B-Instruct     | <u>0.438</u> | <u>56.8</u> | <u>0.900</u> | <u>8.01</u>    |
| Qwen3-VL-2B-Instruct     | 0.163        | 42.2        | 0.786        | 6.36           |
| Gemma3-4B-Instruct       | 0.318        | 49.1        | 0.759        | 5.23           |

### Base

| Model               | Multilingual MMLU | MATH CoT 2-shot | AGIEval 5-shot | MMLU Redux 5-shot | MMLU 5-shot  | TriviaQA 5-shot |
|---------------------|-------------------|-----------------|----------------|-------------------|--------------|-----------------|
| **Ministral 3 14B** | 0.742             | <u>0.676</u>    | 0.648          | 0.820             | 0.794        | 0.749           |
| Qwen3 14B Base      | <u>0.754</u>      | 0.620           | <u>0.661</u>   | <u>0.837</u>      | <u>0.804</u> | 0.703           |
| Gemma 3 12B Base    | 0.690             | 0.487           | 0.587          | 0.766             | 0.745        | <u>0.788</u>    |
|                     |                   |                 |                |                   |              |                 |
| **Ministral 3 8B**  | <u>0.706</u>      | <u>0.626</u>    | 0.591          | 0.793             | <u>0.761</u> | <u>0.681</u>    |
| Qwen 3 8B Base      | 0.700             | 0.576           | <u>0.596</u>   | <u>0.794</u>      | 0.760        | 0.639           |
|                     |                   |                 |                |                   |              |                 |
| **Ministral 3 3B**  | 0.652             | <u>0.601</u>    | 0.511          | 0.735             | 0.707        | 0.592           |
| Qwen 3 4B Base      | <u>0.677</u>      | 0.405           | <u>0.570</u>   | <u>0.759</u>      | <u>0.713</u> | 0.530           |
| Gemma 3 4B Base     | 0.516             | 0.294           | 0.430          | 0.626             | 0.589        | <u>0.640</u>    |

## Usage
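As a starting point, the snippet below is a minimal text-generation sketch using the standard Hugging Face `transformers` API, not the official quickstart: the checkpoint ID `mistralai/Ministral-3-8B` is a hypothetical placeholder for wherever the weights are actually published, and a dedicated serving stack such as vLLM may be the recommended path instead.

```python
# Minimal chat-completion sketch via Hugging Face transformers.
# NOTE: the checkpoint ID below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Ministral-3-8B"  # hypothetical repository ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Summarize the Pythagorean theorem in one sentence."}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding keeps the example deterministic; enable sampling as needed.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```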