GPT 5.2
Collection
Distilled models and datasets for GPT 5.2.
This model was trained on 250 examples generated by GPT 5.2 with high reasoning effort.
Note: this distill fixes the formatting issues found in the previous GPT 5 distills; the other 5.2 distills will be updated the same way.
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library. An Ollama Modelfile is included for easy deployment.
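With the included Modelfile, deployment through Ollama takes two commands. A minimal sketch, assuming Ollama is installed and you are in the repo directory (the model name `gpt52-distill` here is a hypothetical choice, not a name shipped with the repo):

```
ollama create gpt52-distill -f Modelfile   # register the model with Ollama
ollama run gpt52-distill                   # start an interactive chat session
```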
Available quantizations: 2-bit, 3-bit, 4-bit, 8-bit, 16-bit
Base model: Qwen/Qwen3-4B-Thinking-2507