Andy-4-tiny

Andy-4-tiny is a 360 million-parameter specialist model tuned for Minecraft gameplay via the Mindcraft framework.
The current version of Andy-4-tiny is Andy-4-tiny-0522.
⚠️ Certification:
Andy-4 is not yet certified by the Mindcraft developers. Use in production at your own discretion.
Model Specifications
Training Regimen
Andy-4-base-1 dataset
- Epochs: 2
- Learning Rate: 5e-5
- Dataset Size: 47.4k
Andy-4-base-2 dataset
- Epochs: 2
- Learning Rate: 7e-5
- Dataset Size: 49.2k
Fine-tune (FT) dataset
- Epochs: 2.5
- Learning Rate: 2e-5
- Dataset Size: 4.12k
- Optimizer: AdamW_8bit with cosine decay
- Quantization: 4-bit (bnb-4bit) for inference
- Warm-up steps: 0.1% of each dataset
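As a rough illustration, the "0.1% of each dataset" warm-up figure can be turned into concrete step counts. This is only a sketch under the assumption of one optimizer step per training example; the card does not state a batch size, so actual warm-up step counts would be correspondingly smaller.

```python
# Warm-up steps implied by "0.1% of each dataset", assuming one
# optimizer step per example (batch size is not stated on the card).
datasets = {
    "Andy-4-base-1": 47_400,  # 47.4k examples
    "Andy-4-base-2": 49_200,  # 49.2k examples
    "FT": 4_120,              # 4.12k examples
}
warmup = {name: round(size * 0.001) for name, size in datasets.items()}
print(warmup)  # {'Andy-4-base-1': 47, 'Andy-4-base-2': 49, 'FT': 4}
```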
Installation
Andy-4-tiny is an edge model, built to run on the CPU with minimal RAM. The requirements below are for running the model on its own, not for running it while Minecraft is also running.
| Quantization | RAM Required |
|--------------|--------------|
| F16          | 2 GB (CPU)   |
| Q8_0         | 1 GB (CPU)   |
| Q4_K_M       | 0.8 GB (CPU) |
1. Installation directly on Ollama
- Visit Andy-4 on Ollama
- Copy the command after choosing model type / quantization
- Run the command in the terminal
- Set the profile's model to what you installed, e.g. `ollama/sweaterdog/andy-4:tiny-q8_0`
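For reference, a Mindcraft profile is a JSON file whose `model` field points at the installed Ollama model. The sketch below shows only that field plus a name; the exact set of other fields is an assumption here, so check the Mindcraft example profiles for the authoritative schema.

```json
{
  "name": "andy",
  "model": "ollama/sweaterdog/andy-4:tiny-q8_0"
}
```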
2. Manual Download & Modelfile
Download
- From the HF Files tab, grab your chosen `.gguf` quant weights (e.g. `Andy-4-tiny.Q4_K_M.gguf`).
- Download the provided `Modelfile`.
Edit
Change `FROM YOUR/PATH/HERE` to `FROM /path/to/Andy-4-tiny.Q4_K_M.gguf`
Optional: increase the `num_ctx` parameter to a higher value for longer conversations if you:
A. Have extra VRAM
B. Quantized the context window
C. Can use a smaller model
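Put together, a minimal edited Modelfile might look like the following sketch. The path and the `num_ctx` value are placeholders for illustration, not values recommended by the model card.

```
FROM /path/to/Andy-4-tiny.Q4_K_M.gguf
# Optional: raise the context window for longer conversations
PARAMETER num_ctx 8192
```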
- Create the model:
`ollama create andy-4-tiny -f Modelfile`
This registers the Andy-4-tiny model locally.
Acknowledgments
License
See Andy 1.0 License.
This work uses data and models created by @Sweaterdog.