Add model-index metadata for benchmark results
#7 by SeasonalFall84 · opened
This PR adds machine-readable evaluation metadata to the model card using the model-index format.
What This Adds
Structured YAML metadata for 7 benchmarks from the README:
- MATH-500: 98.1
- AIME24: 90.8
- AIME25: 88.0
- LCB: 69.3
- GPQA: 74.4
- HLE: 14.6
- MMLU-Pro: 81.9
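For reference, a sketch of what the added frontmatter looks like in model-index format, shown here for the first benchmark only. The `task.type` and metric `type`/`name` fields below are illustrative assumptions, not necessarily the exact values in this PR's diff:

```yaml
# Sketch of the YAML frontmatter addition (task and metric types are assumed)
model-index:
- name: INTELLECT-3
  results:
  - task:
      type: text-generation
    dataset:
      name: MATH-500
      type: MATH-500
    metrics:
    - type: accuracy
      value: 98.1
```

The remaining six benchmarks follow the same `results` entry structure.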
Why This Helps
Adding structured benchmark metadata enables:
- Automatic Leaderboard Inclusion - Model appears on Hugging Face leaderboards and Papers with Code
- Better Discoverability - Users can search/filter models by benchmark scores
- Machine-Readable Data - Tools and APIs can query model performance programmatically
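As a toy illustration of the machine-readable point above, here is a short Python sketch that queries the benchmark scores once they are available as structured data. The metadata is inlined as a plain dict here for self-containment; in practice it would come from the Hub API or a YAML parser:

```python
# Toy sketch: querying model-index-style metadata programmatically.
# The dict below mirrors the scores listed in this PR; in a real tool
# it would be loaded from the model card's YAML frontmatter.
model_index = {
    "name": "INTELLECT-3",
    "results": [
        {"dataset": "MATH-500", "value": 98.1},
        {"dataset": "AIME24", "value": 90.8},
        {"dataset": "AIME25", "value": 88.0},
        {"dataset": "LCB", "value": 69.3},
        {"dataset": "GPQA", "value": 74.4},
        {"dataset": "HLE", "value": 14.6},
        {"dataset": "MMLU-Pro", "value": 81.9},
    ],
}

def score(benchmark: str) -> float:
    """Look up a single benchmark score by dataset name."""
    for result in model_index["results"]:
        if result["dataset"] == benchmark:
            return result["value"]
    raise KeyError(benchmark)

# Filter benchmarks by score threshold, e.g. for leaderboard-style views.
strong = [r["dataset"] for r in model_index["results"] if r["value"] >= 80]

print(score("GPQA"))   # 74.4
print(strong)          # ['MATH-500', 'AIME24', 'AIME25', 'MMLU-Pro']
```

This is exactly the kind of filtering and lookup that leaderboards and search tooling perform over the structured metadata.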
What Doesn't Change
- ✅ Existing README content stays the same
- ✅ Markdown benchmark tables remain unchanged
- ✅ Only adds metadata to the YAML frontmatter
Thank you for open-sourcing INTELLECT-3! This contribution helps the community compare and discover your work.