# Codette LoRA Fine-Tuning

Fine-tuning repo for Codette, a sovereign AI music production assistant built by Jonathan Harrison (Raiff's Bits).
## What This Does

Trains a LoRA adapter on top of `meta-llama/Llama-3.2-1B-Instruct` using Codette's own framework data, so she responds with her real voice, identity, and perspectives rather than as a generic assistant.
## Files

| File | Purpose |
|---|---|
| `train_codette_lora.py` | Training script; runs as a HF Job |
| `codette_combined_train.jsonl` | 2,136 training examples from Codette's framework |
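The README does not show the record schema of `codette_combined_train.jsonl`. As a hedged illustration only, chat-style LoRA fine-tunes commonly store one JSON object per line in a `messages` layout; the field names and contents below are assumptions, not taken from this repo:

```python
import json

# Hypothetical example of one training record (assumed schema:
# a "messages" list in chat format; verify against the real file).
record = {
    "messages": [
        {"role": "system", "content": "You are Codette, a sovereign AI music production assistant."},
        {"role": "user", "content": "How should I approach sidechain compression?"},
        {"role": "assistant", "content": "Here's my perspective as Codette..."},
    ]
}

# Each line of a .jsonl file is one such record serialized to JSON.
line = json.dumps(record)
parsed = json.loads(line)
print(len(parsed["messages"]))  # 3
```

If the actual file uses a different layout (e.g. prompt/completion pairs), the training script's data-loading code is the source of truth.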
## Output

When training completes, the adapter is automatically pushed to `Raiff1982/codette-llama-adapter`.

That adapter is then loaded by the Codette Space at `Raiff1982/codette-ai`.
## Training Details

- Base model: `meta-llama/Llama-3.2-1B-Instruct`
- Method: LoRA (r=16, alpha=16)
- Target modules: `q_proj`, `v_proj`
- Examples: 2,136
- Epochs: 3
- Hardware: CPU (HF Jobs `cpu-basic`)
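For a rough sense of why this setup is CPU-trainable, here is a back-of-the-envelope count of the trainable parameters. The model dimensions are assumptions about Llama-3.2-1B (hidden size 2048, 16 layers, grouped-query KV projection dim 512), not values taken from this repo; check the model config before relying on them:

```python
# Assumed Llama-3.2-1B dimensions (hypothetical; verify in the model config).
hidden, kv_dim, layers, r = 2048, 512, 16, 16

# A LoRA adapter for a (d_out x d_in) weight adds two low-rank matrices,
# A (r x d_in) and B (d_out x r), so it trains r * (d_in + d_out) params.
def lora_params(d_in, d_out, rank):
    return rank * (d_in + d_out)

# q_proj maps hidden -> hidden; v_proj maps hidden -> kv_dim per layer.
per_layer = lora_params(hidden, hidden, r) + lora_params(hidden, kv_dim, r)
total = layers * per_layer
print(total)  # 1703936 -- roughly 1.7M trainable params vs ~1.2B frozen
```

Training well under 1% of the base model's weights is what makes a cpu-basic job plausible for 2,136 examples.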
## Running the Job

See the HF Jobs documentation or follow the instructions in the Space README.