---
license: apache-2.0
---
# TinyJSON
Trained on [my `json-training` dataset](https://huggingface.co/datasets/ChristianAzinn/json-training),
these are finetunes of the smallest state-of-the-art LLMs to output structured JSON.
Where their base/instruct versions have so little clue how to output JSON that forcing it
with techniques like grammars simply hangs forever, these little guys (mostly) work like a charm.
(SmolLM 135M still sometimes babbles on; set a maximum token limit, as in the sketch below.)
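If you want that babble guard in code, here's a minimal inference sketch with `transformers`. The repo id is hypothetical — swap in whichever TinyJSON checkpoint you actually want:

```python
# Minimal inference sketch; the repo id below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ChristianAzinn/tinyjson-smollm-135m-rev2"  # hypothetical name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Return the name and age as JSON: Alice is 30 years old."
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens is the hard stop recommended above: the 135M model can
# keep babbling past the closing brace, so always cap generation length.
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```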
Training was done with Unsloth at 4-bit (lmao), LoRA rank=8, alpha=8, for 3 epochs each.
`rev1` models were trained on the first revision (11.6k rows) of `json-training`,
while `rev2` models were trained on the second (20.6k rows).
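For reference, a rough sketch of that recipe with Unsloth + `trl`. The base model name, target modules, sequence length, and dataset column are assumptions; only the 4-bit loading, rank/alpha of 8, 3 epochs, and dataset id come from above:

```python
# Rough training sketch, assuming a trl version with these SFTTrainer args.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="HuggingFaceTB/SmolLM-135M-Instruct",  # assumed base; swap per model
    load_in_4bit=True,  # 4-bit loading, as stated above
)

# Attach LoRA adapters at the rank/alpha stated above.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed set
)

dataset = load_dataset("ChristianAzinn/json-training", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name; check the dataset schema
    max_seq_length=2048,        # assumed
    args=TrainingArguments(
        num_train_epochs=3,  # 3 epochs, as stated above
        per_device_train_batch_size=2,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```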