# Training Data Scale Registry
A systematic registry of AI model training data size estimates with evidence profiles.
## Dataset Description

This dataset contains one structured record per AI model. Each record includes (an illustrative sketch of a single record follows this list):
- Token count estimates (min/max/mid)
- Evidence types (E1-E5) and strength (S-High/Medium/Low)
- Uncertainty sources (U1-U5)
- Model metadata (parameters, FLOPs, architecture)
- Raw evidence snippets
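For orientation, a single record might look like the sketch below. All field names and values are illustrative assumptions, not the dataset's actual schema; the dataset page shows the authoritative columns.

```python
# Illustrative sketch of one record; all field names and values are assumptions.
example_record = {
    "model": "ExampleModel-7B",                  # model identifier (hypothetical)
    "tokens_min": 1.0e12,                        # lower-bound token estimate
    "tokens_mid": 1.4e12,                        # midpoint token estimate
    "tokens_max": 2.0e12,                        # upper-bound token estimate
    "evidence_types": ["E1", "E4"],              # how the estimate was derived
    "evidence_strength": "S-Medium",             # confidence in the estimate
    "uncertainty_sources": ["U2", "U5"],         # what information is missing
    "parameters": 7.0e9,                         # reported parameter count
    "training_flops": 5.9e22,                    # estimated training compute
    "architecture": "decoder-only transformer",  # model metadata
    "evidence_snippet": "Trained on roughly 1.4T tokens of web and code data.",  # raw source excerpt (hypothetical)
}
```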
## Data Collection
Data is collected from:
- Epoch AI datasets
- Hugging Face model cards
- Technical reports and system cards
- Third-party analyses
## Inference Methods

Token estimates are derived using one or more of the following methods (a rough worked sketch of the first two appears after this list):
- Chinchilla scaling law
- Hardware back-calculation
- Parameter ratio heuristics
- Textual token clues
- Third-party analyses
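As a rough illustration of the first two methods, the sketch below applies the Chinchilla heuristic of roughly 20 training tokens per parameter and the dense-transformer compute identity C ≈ 6·N·D to made-up numbers. It is a minimal sketch under those assumptions, not the estimation code behind the registry.

```python
def chinchilla_estimate(params: float, tokens_per_param: float = 20.0) -> float:
    """Chinchilla heuristic: compute-optimal training uses ~20 tokens per parameter."""
    return params * tokens_per_param


def hardware_back_calculation(training_flops: float, params: float) -> float:
    """Back-calculate tokens from reported compute via C ≈ 6 * N * D."""
    return training_flops / (6.0 * params)


# Illustrative numbers only; not taken from the registry.
params = 70e9            # a 70B-parameter dense model
training_flops = 8.4e24  # a reported training compute budget

print(f"Chinchilla estimate:       {chinchilla_estimate(params):.2e} tokens")                        # ~1.40e+12
print(f"Hardware back-calculation: {hardware_back_calculation(training_flops, params):.2e} tokens")  # ~2.00e+13
```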
## Evidence Profiles
Each model includes an evidence profile indicating:
- Evidence Types: How the estimate was derived
- Evidence Strength: Confidence in the estimate
- Uncertainty Sources: What information is missing
## Usage

```python
from datasets import load_dataset

dataset = load_dataset("midah/odl-training-data")
```
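Records can also be filtered by their evidence profile. The split and column names below (`train`, `evidence_strength`, `model`, `tokens_mid`) are assumptions for illustration; adjust them to the actual schema shown on the dataset page.

```python
from datasets import load_dataset

# Split and column names are assumptions; adjust to the actual schema.
dataset = load_dataset("midah/odl-training-data", split="train")

# Keep only estimates backed by strong evidence.
high_confidence = dataset.filter(lambda row: row["evidence_strength"] == "S-High")

for row in high_confidence.select(range(min(5, len(high_confidence)))):
    print(row["model"], row["tokens_mid"])
```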
## Citation

If you use this dataset, please cite:

ODL Research. "Training Data Scale Registry." 2025.