Add model-index metadata for benchmark results

#7

This PR adds machine-readable evaluation metadata to the model card using the model-index format.

What This Adds

Structured YAML metadata for the 7 benchmarks reported in the README:

  • MATH-500: 98.1
  • AIME24: 90.8
  • AIME25: 88.0
  • LCB: 69.3
  • GPQA: 74.4
  • HLE: 14.6
  • MMLU-Pro: 81.9
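For reference, the added frontmatter follows the Hub's model-index schema. A minimal sketch of one entry is shown below; the task type, dataset `type` identifier, and metric name are illustrative assumptions, not necessarily what this PR uses verbatim:

```yaml
model-index:
- name: INTELLECT-3
  results:
  - task:
      type: text-generation     # assumed task type
    dataset:
      name: MATH-500
      type: math-500            # assumed dataset identifier
    metrics:
    - type: accuracy            # assumed metric name
      value: 98.1
```

Each benchmark from the list above becomes one `results` entry of this shape inside the card's YAML frontmatter.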

Why This Helps

Adding structured benchmark metadata enables:

  1. Automatic Leaderboard Inclusion - the model becomes eligible to appear on Hugging Face leaderboards and Papers with Code
  2. Better Discoverability - users can search and filter models by benchmark scores
  3. Machine-Readable Data - tools and APIs can query model performance programmatically
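To illustrate point 3: once the metadata is merged, tooling can read the scores without scraping the markdown tables. A minimal sketch, assuming the model-index frontmatter has already been parsed into a Python structure (e.g. via `huggingface_hub.ModelCard.load(...).data`); the structure below mirrors the schema, with only two of the seven benchmarks shown:

```python
# Model-index frontmatter as a parsed Python structure
# (values taken from this PR's benchmark list; task/metric
# types are illustrative assumptions).
model_index = [
    {
        "name": "INTELLECT-3",
        "results": [
            {
                "task": {"type": "text-generation"},
                "dataset": {"name": "MATH-500", "type": "math-500"},
                "metrics": [{"type": "accuracy", "value": 98.1}],
            },
            {
                "task": {"type": "text-generation"},
                "dataset": {"name": "AIME24", "type": "aime24"},
                "metrics": [{"type": "accuracy", "value": 90.8}],
            },
        ],
    }
]


def benchmark_scores(index):
    """Flatten model-index entries into a {dataset name: score} mapping."""
    scores = {}
    for model in index:
        for result in model["results"]:
            name = result["dataset"]["name"]
            for metric in result["metrics"]:
                scores[name] = metric["value"]
    return scores


print(benchmark_scores(model_index))
```

This is the kind of query that leaderboards and comparison tools run across many model cards at once.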

What Doesn't Change

  • βœ… Existing README content stays the same
  • βœ… Markdown benchmark tables remain unchanged
  • βœ… Only adds metadata to the YAML frontmatter

Thank you for open-sourcing INTELLECT-3! This contribution helps the community compare and discover your work.
