I just published Ellora - 6 production-ready LoRA recipes for enhancing LLMs with specific capabilities. Each recipe costs under $100 to run and includes complete training code, data generation, and evaluation.
The 6 Recipes:
Recipe 1: Accuracy Recovery - recover 75% of quantization losses with self-distillation
Recipe 2: Reasoning LoRA - add structured thinking with GRPO (0% to 60% adoption, 75% quality boost)
Recipe 3: Tool Calling - real execution on actual codebases
Recipe 4: Context Extension - scale from 32K to 2M tokens (61x increase)
Recipe 5: Secure Code Generation - 97% vulnerability reduction using automated Semgrep analysis
Recipe 6: Execution-Aware World Models - teach models runtime behavior
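To give a concrete sense of what a recipe trains, here is a minimal sketch of attaching a LoRA adapter with PEFT. The base model name, rank, and target modules are illustrative assumptions, not values taken from Ellora:

```python
# Minimal sketch (not Ellora's exact code): attach a LoRA adapter with PEFT.
# Model name, rank, and target modules are illustrative assumptions.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")  # assumed base model
lora_config = LoraConfig(
    r=16,                       # adapter rank (assumption; each recipe picks its own)
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```

The resulting adapter is what each recipe produces; it can be merged into the base model or served alongside it with tools like LoRAX or vLLM.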
Why Recipes? Ellora provides methodologies, not frameworks. Use them with your existing tools (PEFT, LoRAX, vLLM, Unsloth, HuggingFace). Each recipe uses self-supervised data generation (Magpie approach) - no expensive human labeling required.
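For context, here is a minimal sketch of Magpie-style self-supervised data generation. The model choice and pre-query template below are assumptions for illustration, not Ellora's exact pipeline:

```python
# Magpie-style idea: feed an aligned chat model only the pre-query part of its
# chat template so it generates an instruction by itself, then have the same
# model answer that instruction - a (prompt, response) pair with no human labels.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-7B-Instruct"  # assumption: any aligned chat model works
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Step 1: generate an instruction from just the user-turn prefix.
pre_query = "<|im_start|>user\n"  # illustrative prefix for Qwen-style templates
inputs = tok(pre_query, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=1.0)
instruction = tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

# Step 2: have the same model answer its own instruction.
messages = [{"role": "user", "content": instruction}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
response = tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

print({"instruction": instruction, "response": response})
```

Filtering and task-specific prompting differ per recipe, but the core idea is the same: the model generates its own training data.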
All recipes include Jupyter notebooks you can run immediately with clear success metrics.