ranjan56cse committed on
Commit
477e349
·
verified ·
1 Parent(s): 299f8bc

Upload logs/training_log_step1000.log with huggingface_hub

Files changed (1)
  1. logs/training_log_step1000.log +73 -0
logs/training_log_step1000.log ADDED
@@ -0,0 +1,73 @@
+ 2025-11-09 19:05:41,604 - INFO -
+ ╔══════════════════════════════════════════════════════════╗
+ ║                T5 TRAINING CONFIGURATION                 ║
+ ╚══════════════════════════════════════════════════════════╝
+ Mode: FULL
+ Platform: vast
+ Repository: ranjan56cse/t5-base-xsum-lora
+ Epochs: 3
+ Samples: ALL (204k)
+ Batch size: 16
+ Gradient accum: 2
+ Effective batch: 32
+ Save every: 1000 steps
+ Expected time: ~8-10 hours
+
+ 2025-11-09 19:05:41,604 - INFO - Creating repository: ranjan56cse/t5-base-xsum-lora
+ 2025-11-09 19:05:41,807 - INFO - ✅ Repo: https://huggingface.co/ranjan56cse/t5-base-xsum-lora
+ 2025-11-09 19:05:41,807 - INFO - Loading google-t5/t5-base...
+ 2025-11-09 19:05:52,938 - INFO - ✅ Gradient checkpointing enabled
+ 2025-11-09 19:05:52,938 - INFO - Applying LoRA...
+ 2025-11-09 19:05:52,976 - INFO - Loading XSum dataset...
+ 2025-11-09 19:05:56,588 - INFO - ✅ Dataset: 204045 train, 11332 val
+ 2025-11-09 19:05:56,588 - INFO - Tokenizing...
+ 2025-11-09 19:08:02,802 - INFO - ✅ Tokenization complete
+ 2025-11-09 19:08:03,857 - INFO - ============================================================
+ 2025-11-09 19:08:03,858 - INFO - 🚀 STARTING TRAINING (~8-10 hours)
+ 2025-11-09 19:08:03,859 - INFO - Effective batch size: 32
+ 2025-11-09 19:08:03,859 - INFO - GPU: 0.84GB allocated, 0.92GB reserved
+ 2025-11-09 19:08:03,859 - INFO - System: 4.1% used (17.4GB / 503.7GB)
+ 2025-11-09 19:08:03,859 - INFO - ============================================================
+ 2025-11-09 19:08:03,990 - INFO - ============================================================
+ 2025-11-09 19:08:03,990 - INFO - 🚀 Training started
+ 2025-11-09 19:08:03,990 - INFO - Total steps: 19128
+ 2025-11-09 19:08:03,990 - INFO - GPU: NVIDIA GeForce RTX 3090
+ 2025-11-09 19:08:03,990 - INFO - GPU Memory: 0.84GB allocated, 0.92GB reserved
+ 2025-11-09 19:08:03,990 - INFO - System Memory: 4.1% used (17.4GB / 503.7GB)
+ 2025-11-09 19:08:03,991 - INFO - ============================================================
+ 2025-11-09 19:08:51,266 - INFO - Step 50/19128 | Loss: 12.5022 | LR: 2.88e-05 | GPU: 0.87GB
+ 2025-11-09 19:09:38,077 - INFO - Step 100/19128 | Loss: 10.3469 | LR: 5.82e-05 | GPU: 0.87GB
+ 2025-11-09 19:10:24,938 - INFO - Step 150/19128 | Loss: 4.0200 | LR: 8.82e-05 | GPU: 0.87GB
+ 2025-11-09 19:11:11,674 - INFO - Step 200/19128 | Loss: 0.9201 | LR: 1.18e-04 | GPU: 0.87GB
+ 2025-11-09 19:11:58,405 - INFO - Step 250/19128 | Loss: 0.7357 | LR: 1.48e-04 | GPU: 0.87GB
+ 2025-11-09 19:12:45,152 - INFO - Step 300/19128 | Loss: 0.6602 | LR: 1.77e-04 | GPU: 0.87GB
+ 2025-11-09 19:13:31,815 - INFO - Step 350/19128 | Loss: 0.6121 | LR: 2.07e-04 | GPU: 0.87GB
+ 2025-11-09 19:14:18,499 - INFO - Step 400/19128 | Loss: 0.5817 | LR: 2.37e-04 | GPU: 0.87GB
+ 2025-11-09 19:15:05,185 - INFO - Step 450/19128 | Loss: 0.5916 | LR: 2.67e-04 | GPU: 0.87GB
+ 2025-11-09 19:15:51,879 - INFO - Step 500/19128 | Loss: 0.5675 | LR: 2.97e-04 | GPU: 0.87GB
+ 2025-11-09 19:16:38,691 - INFO - Step 550/19128 | Loss: 0.5700 | LR: 2.99e-04 | GPU: 0.87GB
+ 2025-11-09 19:17:25,546 - INFO - Step 600/19128 | Loss: 0.5610 | LR: 2.98e-04 | GPU: 0.87GB
+ 2025-11-09 19:18:12,459 - INFO - Step 650/19128 | Loss: 0.5669 | LR: 2.98e-04 | GPU: 0.87GB
+ 2025-11-09 19:18:59,163 - INFO - Step 700/19128 | Loss: 0.5659 | LR: 2.97e-04 | GPU: 0.87GB
+ 2025-11-09 19:19:45,942 - INFO - Step 750/19128 | Loss: 0.5673 | LR: 2.96e-04 | GPU: 0.87GB
+ 2025-11-09 19:20:32,786 - INFO - Step 800/19128 | Loss: 0.5619 | LR: 2.95e-04 | GPU: 0.87GB
+ 2025-11-09 19:21:19,739 - INFO - Step 850/19128 | Loss: 0.5719 | LR: 2.94e-04 | GPU: 0.87GB
+ 2025-11-09 19:22:06,708 - INFO - Step 900/19128 | Loss: 0.5576 | LR: 2.94e-04 | GPU: 0.87GB
+ 2025-11-09 19:22:53,641 - INFO - Step 950/19128 | Loss: 0.5567 | LR: 2.93e-04 | GPU: 0.87GB
+ 2025-11-09 19:23:40,445 - INFO - Step 1000/19128 | Loss: 0.5597 | LR: 2.92e-04 | GPU: 0.87GB
+ 2025-11-09 19:25:15,772 - INFO - Step 1000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
+ 2025-11-09 19:25:15,772 - INFO - ============================================================
+ 2025-11-09 19:25:15,772 - INFO - 📊 EVALUATION at step 1000
+ 2025-11-09 19:25:15,772 - INFO - eval_loss: 0.5003
+ 2025-11-09 19:25:15,772 - INFO - eval_runtime: 95.3235
+ 2025-11-09 19:25:15,772 - INFO - eval_samples_per_second: 118.8790
+ 2025-11-09 19:25:15,772 - INFO - eval_steps_per_second: 7.4380
+ 2025-11-09 19:25:15,772 - INFO - epoch: 0.1600
+ 2025-11-09 19:25:15,773 - INFO - gpu_memory_gb: 0.8662
+ 2025-11-09 19:25:15,773 - INFO - system_memory_percent: 6.9000
+ 2025-11-09 19:25:15,773 - INFO - ============================================================
+ 2025-11-09 19:25:15,773 - INFO - 🏆 New best eval loss: 0.5003
+ 2025-11-09 19:25:16,038 - INFO - ============================================================
+ 2025-11-09 19:25:16,038 - INFO - 💾 Checkpoint 1: step 1000
+ 2025-11-09 19:25:16,038 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
+ 2025-11-09 19:25:16,038 - INFO - 📤 Uploading checkpoint-1000 to Hub...
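
As a sanity check, the schedule numbers reported in the log are internally consistent. The sketch below (a hypothetical helper `training_schedule`, not part of the training script) derives the effective batch size and total optimizer steps from the logged configuration, assuming the dataloader drops the final incomplete batch — the only assumption under which 204045 samples, batch size 16, gradient accumulation 2, and 3 epochs yield exactly the 19128 total steps the log reports:

```python
def training_schedule(num_samples: int, per_device_batch: int,
                      grad_accum: int, epochs: int) -> dict:
    """Derive effective batch size and total optimizer steps.

    Assumes the final incomplete micro-batch of each epoch is dropped
    (drop_last=True), which is what makes the numbers match this log.
    """
    effective_batch = per_device_batch * grad_accum
    micro_batches_per_epoch = num_samples // per_device_batch
    optim_steps_per_epoch = micro_batches_per_epoch // grad_accum
    return {
        "effective_batch": effective_batch,
        "total_steps": optim_steps_per_epoch * epochs,
    }

# Values taken from the configuration block in the log above.
sched = training_schedule(204045, 16, 2, 3)
print(sched)  # {'effective_batch': 32, 'total_steps': 19128}
```

This matches the log's "Effective batch: 32" and "Total steps: 19128" lines.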