Language Decoded
QLoRA adapters fine-tuned on multilingual code conditions for the Language Decoded project (part of Cohere's Tiny Aya Expedition).
Does fine-tuning on non-English code improve multilingual reasoning, and is the benefit language-dependent or structure-dependent?
All adapters are trained on CohereLabs/tiny-aya-base (3.35B parameters).
This repo is the canonical hub for all Language Decoded LoRA adapters, organized by experimental condition:
| Subdirectory | Condition | Training Data |
|---|---|---|
| `condition-1-en-32k/` | Condition 1 | English Python from The Stack Dedup (full 32k corpus) |
| `condition-1-en-5k/` | Condition 1 | English Python from The Stack Dedup (5k subset) |
| `condition-2-zh-5k/` | Condition 2 | Chinese keyword-swapped Python (Legesher-transpiled) |
| `condition-2-es-5k/` | Condition 2 | Spanish keyword-swapped Python (Legesher-transpiled) |
| `condition-2-ur-5k/` | Condition 2 | Urdu keyword-swapped Python (Legesher-transpiled) |
| `condition-3-zh-5k/` | Condition 3 | Transpiled + native Chinese code (blended) |
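To make the Condition 2 data concrete, here is a minimal sketch of what keyword-swapped Python looks like. Legesher's actual keyword tables are not reproduced here; the Chinese mappings and the `swap_keywords` helper below are hypothetical illustrations of the idea (program structure stays identical, only the surface keywords change language):

```python
import re

# Hypothetical English-to-Chinese keyword table (illustrative only;
# not Legesher's actual mapping).
EN_TO_ZH = {
    "def": "定义",
    "return": "返回",
    "if": "如果",
    "else": "否则",
}

def swap_keywords(src: str, table: dict) -> str:
    """Replace whole-word occurrences of Python keywords with translations."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, table)) + r")\b")
    return pattern.sub(lambda m: table[m.group(1)], src)

english = "def add(a, b):\n    return a + b"
swapped = swap_keywords(english, EN_TO_ZH)
print(swapped)  # keywords translated, identifiers and structure unchanged
```

This is why Condition 2 isolates language from structure: the transpiled code carries non-English tokens while the syntax tree is unchanged.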
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained("CohereLabs/tiny-aya-base")
tokenizer = AutoTokenizer.from_pretrained("CohereLabs/tiny-aya-base")

# Load a LoRA adapter (e.g., Condition 1: English code)
model = PeftModel.from_pretrained(base_model, "legesher/language-decoded-lora", subfolder="condition-1-en-5k")

# Or load a language-specific adapter (e.g., Condition 2: Chinese keyword-swapped)
model = PeftModel.from_pretrained(base_model, "legesher/language-decoded-lora", subfolder="condition-2-zh-5k")
```
| Parameter | Value |
|---|---|
| Base model | CohereLabs/tiny-aya-base (3.35B params) |
| Method | QLoRA 4-bit (NF4), ~5.4GB VRAM |
| Hardware | Kaggle T4 (16GB) |
| Tokenizer | CohereLabs/tiny-aya-base |
| Transpilation tool | Legesher v0.7.3 |
| Training data | legesher/language-decoded-data |
| Parameter | Value |
|---|---|
| LoRA rank (r) | 16 |
| LoRA alpha | 32 |
| LoRA dropout | 0.0 |
| Target modules | q_proj, k_proj, v_proj, o_proj, up_proj, down_proj, gate_proj |
| Bias | none |
| Task type | CAUSAL_LM |
| PEFT version | 0.18.1 |
| Quantization | NF4 (4-bit) via Unsloth |
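For a rough sense of scale, each LoRA-adapted weight matrix W (d_out × d_in) adds two low-rank factors A (r × d_in) and B (d_out × r), i.e. r · (d_in + d_out) trainable parameters. The sketch below plugs in the r = 16 and target modules from the table; the layer dimensions and layer count are assumptions for illustration (tiny-aya-base's true shapes may differ, e.g. smaller k/v projections under grouped-query attention):

```python
# Back-of-the-envelope LoRA trainable-parameter count.
r = 16  # LoRA rank from the hyperparameter table

# Hypothetical layer shapes -- NOT confirmed for tiny-aya-base.
d_model = 2048
d_ffn = 8192
n_layers = 24

per_layer = (
    4 * r * (d_model + d_model)   # q_proj, k_proj, v_proj, o_proj
    + 2 * r * (d_model + d_ffn)   # up_proj, gate_proj
    + r * (d_ffn + d_model)       # down_proj
)
total = per_layer * n_layers
print(f"~{total / 1e6:.1f}M trainable LoRA parameters")
```

Under these assumed shapes this comes to roughly 18M trainable parameters, a small fraction of the 3.35B-parameter base, which is what makes the ~5.4GB-VRAM QLoRA setup feasible on a Kaggle T4.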
Models are evaluated on multilingual reasoning benchmarks with dual prompts (English + language-specific):
| Benchmark | What it measures | Examples per language |
|---|---|---|
| MGSM | Math reasoning | 250 (full set) |
| X-CSQA | Commonsense reasoning | ~1,000 (full set) |
| XNLI | Natural language inference | ~5,000 (full set) |
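The dual-prompt protocol above (each item scored once with an English prompt and once with a language-specific prompt) can be sketched as a simple per-cell accuracy aggregation. The records below are synthetic illustrations, not real results:

```python
from collections import defaultdict

def dual_prompt_accuracy(records):
    """records: iterable of (lang, prompt_lang, correct) tuples.

    Returns accuracy keyed by (lang, prompt_lang), so English-prompt and
    native-prompt performance can be compared per language.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for lang, prompt_lang, correct in records:
        key = (lang, prompt_lang)
        totals[key] += 1
        hits[key] += int(correct)
    return {key: hits[key] / totals[key] for key in totals}

# Synthetic example records (not actual benchmark outputs).
records = [
    ("zh", "en", True), ("zh", "en", False),
    ("zh", "zh", True), ("zh", "zh", True),
]
print(dual_prompt_accuracy(records))
```

Comparing the two prompt-language cells per target language is what separates a genuine reasoning gain from a prompt-language artifact.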
Results will be added as evaluation completes.
```bibtex
@misc{language-decoded-2026,
  title={Language Decoded: Investigating Language-Dependent vs. Structure-Dependent Reasoning Benefits of Code},
  author={Madison Edgar and Saad Ahmed Bazaz and Tom Sherborne and Rashik Shahjahan and Khojasteh Mirza and Sarah Jawaid and Rafay Mustafa and Sohaib Ahmed Bazaz},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/legesher/language-decoded-lora}
}
```
License: Apache 2.0