See axolotl config

axolotl version: `0.13.0.dev0`

```yaml
base_model: meta-llama/Llama-3.1-8B
tokenizer_type: AutoTokenizer
trust_remote_code: true
strict: false
is_llama_derived_model: true

chat_template: chatml

plugins:
  - axolotl.integrations.liger.LigerPlugin

special_tokens:
  pad_token: "<|eot_id|>"

datasets:
  - path: nvidia/Llama-Nemotron-Post-Training-Dataset
    name: SFT
    split: chat
    type: chat_template
    field_messages: input
    message_property_mappings:
      role: role
      content: content
    field_output: output

train_on_inputs: false

sequence_len: 8192
eval_sequence_len: 8192
pad_to_sequence_len: true
sample_packing: true
sample_packing_group_size: 100000
sample_packing_bin_size: 200
group_by_length: true
flash_attn: true

micro_batch_size: 1
gradient_accumulation_steps: 8
num_epochs: 5

learning_rate: 2.0e-5
optimizer: adamw_torch_fused
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1.0e-8
lr_scheduler: cosine
warmup_steps: 100
weight_decay: 0.0

bf16: true
tf32: true
gradient_checkpointing: true
activation_offloading: false

val_set_size: 0.01
eval_strategy: steps
eval_steps: 100
save_strategy: steps
save_steps: 100
save_total_limit: 3
save_only_model: false
save_safetensors: true
load_best_model_at_end: true
metric_for_best_model: eval_loss
greater_is_better: false

logging_steps: 10
output_dir: ./outputs/Llama-3.1-8B-nemotron-5epochs/
seed: 42

use_wandb: true
wandb_project: "llama31_base_nemotron"
wandb_name: "llama31-8b-base-nemotron"
```
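For orientation, the sketch below shows, under stated assumptions rather than axolotl's internals, how the `datasets` block above is interpreted: each record's `input` field holds a list of `{role, content}` messages, `type: chat_template` renders them with the chatml format, and `train_on_inputs: false` restricts the loss to response tokens. The `render_chatml` helper and the example record are illustrative only.

```python
# Illustrative sketch of the chatml rendering implied by the config above.
# `render_chatml` is a hypothetical helper, not part of axolotl; the record
# below is made up and only mirrors the field_messages / message_property_mappings layout.

def render_chatml(messages):
    """Render a list of {role, content} dicts in chatml form."""
    turns = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    return "\n".join(turns) + "\n"

record = {
    "input": [  # field_messages: input
        {"role": "user", "content": "Explain sample packing in one sentence."},
        {"role": "assistant", "content": "It concatenates several short examples into one fixed-length sequence."},
    ]
}

print(render_chatml(record["input"]))
```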
# outputs/Llama-3.1-8B-nemotron-5epochs/
This model is a fine-tuned version of meta-llama/Llama-3.1-8B on the nvidia/Llama-Nemotron-Post-Training-Dataset dataset. It achieves the following results on the evaluation set:
- Loss: 0.1268
- Max active memory (GiB): 61.65
- Max allocated memory (GiB): 61.65
- Reserved device memory (GiB): 88.96
## Model description
More information needed
## Intended uses & limitations
More information needed
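Pending a fuller description, here is a minimal inference sketch. It assumes the fine-tuned checkpoint is available at the config's `output_dir` (or pushed to the Hub under some repo id) and that the chatml chat template was saved with the tokenizer; adjust the path and generation settings as needed.

```python
# Minimal inference sketch; the checkpoint path is an assumption based on output_dir.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt = "./outputs/Llama-3.1-8B-nemotron-5epochs/"  # or a Hub repo id, if the model was pushed

tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForCausalLM.from_pretrained(ckpt, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Give a one-sentence summary of gradient checkpointing."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```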
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (an equivalent optimizer/scheduler setup is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1425
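As a cross-check of the numbers above: the effective batch size is micro_batch_size × gradient_accumulation_steps = 1 × 8 = 8 sequences per optimizer step, and 1425 steps over 5 epochs works out to 285 steps per epoch. The snippet below is a hedged sketch of the equivalent optimizer and LR schedule; the actual run went through axolotl's Trainer, and `model` here is only a stand-in module.

```python
# Sketch of the optimizer/schedule implied by the listed hyperparameters;
# not the axolotl/Trainer internals. `model` is a placeholder module.
import torch
from transformers import get_cosine_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in for the Llama-3.1-8B model

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
    fused=torch.cuda.is_available(),  # adamw_torch_fused; fused kernels need a supported device
)
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=100,
    num_training_steps=1425,  # 5 epochs * 285 optimizer steps per epoch
)
```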
### Training results
| Training Loss | Epoch | Step | Validation Loss | Max Active Memory (GiB) | Max Allocated Memory (GiB) | Reserved Memory (GiB) |
|---|---|---|---|---|---|---|
| No log | 0 | 0 | 3.7258 | 27.81 | 27.81 | 28.15 |
| 1.2815 | 0.3498 | 100 | 1.2255 | 61.65 | 61.65 | 87.65 |
| 0.9525 | 0.6996 | 200 | 0.9414 | 61.65 | 61.65 | 88.96 |
| 0.4994 | 1.0490 | 300 | 0.7800 | 61.65 | 61.65 | 88.27 |
| 0.395 | 1.3988 | 400 | 0.5504 | 61.65 | 61.65 | 88.96 |
| 0.2815 | 1.7486 | 500 | 0.4127 | 61.65 | 61.65 | 88.96 |
| 0.1236 | 2.0979 | 600 | 0.3046 | 61.65 | 61.65 | 88.27 |
| 0.1034 | 2.4477 | 700 | 0.2522 | 61.65 | 61.65 | 88.27 |
| 0.0795 | 2.7976 | 800 | 0.2028 | 61.65 | 61.65 | 88.27 |
| 0.0423 | 3.1469 | 900 | 0.1590 | 61.65 | 61.65 | 88.27 |
| 0.0376 | 3.4967 | 1000 | 0.1414 | 61.65 | 61.65 | 88.96 |
| 0.0322 | 3.8465 | 1100 | 0.1316 | 61.65 | 61.65 | 88.96 |
| 0.0287 | 4.1959 | 1200 | 0.1281 | 61.65 | 61.65 | 88.96 |
| 0.0272 | 4.5457 | 1300 | 0.1270 | 61.65 | 61.65 | 88.96 |
| 0.0277 | 4.8955 | 1400 | 0.1268 | 61.65 | 61.65 | 88.96 |
### Framework versions
- Transformers 4.57.1
- PyTorch 2.9.0+cu130
- Datasets 4.3.0
- Tokenizers 0.22.1