ranjan56cse committed on
Commit 9f209ca · verified · 1 Parent(s): 44cbdcf

Upload logs/training_log_step19000.log with huggingface_hub
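The commit message indicates the log was pushed with the huggingface_hub client. Below is a minimal sketch of that kind of upload, assuming a write token is already configured (for example via huggingface-cli login); the repo id and paths are taken from the commit metadata, and the actual script used for this commit is not part of the upload.

    from huggingface_hub import HfApi

    # Assumes a write token is already stored (e.g. via `huggingface-cli login`).
    api = HfApi()
    api.upload_file(
        path_or_fileobj="logs/training_log_step19000.log",  # local log produced by the training run
        path_in_repo="logs/training_log_step19000.log",     # destination path inside the repo
        repo_id="ranjan56cse/t5-base-xsum-lora",            # repository named in the training log
        commit_message="Upload logs/training_log_step19000.log with huggingface_hub",
    )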

Files changed (1)
  1. logs/training_log_step19000.log +757 -0
logs/training_log_step19000.log ADDED
@@ -0,0 +1,757 @@
1
+ 2025-11-09 19:05:41,604 - INFO -
2
+ ╔══════════════════════════════════════════════════════════╗
3
+ ║ T5 TRAINING CONFIGURATION ║
4
+ ╚══════════════════════════════════════════════════════════╝
5
+ Mode: FULL
6
+ Platform: vast
7
+ Repository: ranjan56cse/t5-base-xsum-lora
8
+ Epochs: 3
9
+ Samples: ALL (204k)
10
+ Batch size: 16
11
+ Gradient accum: 2
12
+ Effective batch: 32
13
+ Save every: 1000 steps
14
+ Expected time: ~8-10 hours
15
+
16
+ 2025-11-09 19:05:41,604 - INFO - Creating repository: ranjan56cse/t5-base-xsum-lora
17
+ 2025-11-09 19:05:41,807 - INFO - βœ… Repo: https://huggingface.co/ranjan56cse/t5-base-xsum-lora
18
+ 2025-11-09 19:05:41,807 - INFO - Loading google-t5/t5-base...
19
+ 2025-11-09 19:05:52,938 - INFO - βœ… Gradient checkpointing enabled
20
+ 2025-11-09 19:05:52,938 - INFO - Applying LoRA...
21
+ 2025-11-09 19:05:52,976 - INFO - Loading XSum dataset...
22
+ 2025-11-09 19:05:56,588 - INFO - βœ… Dataset: 204045 train, 11332 val
23
+ 2025-11-09 19:05:56,588 - INFO - Tokenizing...
24
+ 2025-11-09 19:08:02,802 - INFO - βœ… Tokenization complete
25
+ 2025-11-09 19:08:03,857 - INFO - ============================================================
26
+ 2025-11-09 19:08:03,858 - INFO - πŸš€ STARTING TRAINING (~8-10 hours)
27
+ 2025-11-09 19:08:03,859 - INFO - Effective batch size: 32
28
+ 2025-11-09 19:08:03,859 - INFO - GPU: 0.84GB allocated, 0.92GB reserved
29
+ 2025-11-09 19:08:03,859 - INFO - System: 4.1% used (17.4GB / 503.7GB)
30
+ 2025-11-09 19:08:03,859 - INFO - ============================================================
31
+ 2025-11-09 19:08:03,990 - INFO - ============================================================
32
+ 2025-11-09 19:08:03,990 - INFO - πŸš€ Training started
33
+ 2025-11-09 19:08:03,990 - INFO - Total steps: 19128
34
+ 2025-11-09 19:08:03,990 - INFO - GPU: NVIDIA GeForce RTX 3090
35
+ 2025-11-09 19:08:03,990 - INFO - GPU Memory: 0.84GB allocated, 0.92GB reserved
36
+ 2025-11-09 19:08:03,990 - INFO - System Memory: 4.1% used (17.4GB / 503.7GB)
37
+ 2025-11-09 19:08:03,991 - INFO - ============================================================
38
+ 2025-11-09 19:08:51,266 - INFO - Step 50/19128 | Loss: 12.5022 | LR: 2.88e-05 | GPU: 0.87GB
39
+ 2025-11-09 19:09:38,077 - INFO - Step 100/19128 | Loss: 10.3469 | LR: 5.82e-05 | GPU: 0.87GB
40
+ 2025-11-09 19:10:24,938 - INFO - Step 150/19128 | Loss: 4.0200 | LR: 8.82e-05 | GPU: 0.87GB
41
+ 2025-11-09 19:11:11,674 - INFO - Step 200/19128 | Loss: 0.9201 | LR: 1.18e-04 | GPU: 0.87GB
42
+ 2025-11-09 19:11:58,405 - INFO - Step 250/19128 | Loss: 0.7357 | LR: 1.48e-04 | GPU: 0.87GB
43
+ 2025-11-09 19:12:45,152 - INFO - Step 300/19128 | Loss: 0.6602 | LR: 1.77e-04 | GPU: 0.87GB
44
+ 2025-11-09 19:13:31,815 - INFO - Step 350/19128 | Loss: 0.6121 | LR: 2.07e-04 | GPU: 0.87GB
45
+ 2025-11-09 19:14:18,499 - INFO - Step 400/19128 | Loss: 0.5817 | LR: 2.37e-04 | GPU: 0.87GB
46
+ 2025-11-09 19:15:05,185 - INFO - Step 450/19128 | Loss: 0.5916 | LR: 2.67e-04 | GPU: 0.87GB
47
+ 2025-11-09 19:15:51,879 - INFO - Step 500/19128 | Loss: 0.5675 | LR: 2.97e-04 | GPU: 0.87GB
48
+ 2025-11-09 19:16:38,691 - INFO - Step 550/19128 | Loss: 0.5700 | LR: 2.99e-04 | GPU: 0.87GB
49
+ 2025-11-09 19:17:25,546 - INFO - Step 600/19128 | Loss: 0.5610 | LR: 2.98e-04 | GPU: 0.87GB
50
+ 2025-11-09 19:18:12,459 - INFO - Step 650/19128 | Loss: 0.5669 | LR: 2.98e-04 | GPU: 0.87GB
51
+ 2025-11-09 19:18:59,163 - INFO - Step 700/19128 | Loss: 0.5659 | LR: 2.97e-04 | GPU: 0.87GB
52
+ 2025-11-09 19:19:45,942 - INFO - Step 750/19128 | Loss: 0.5673 | LR: 2.96e-04 | GPU: 0.87GB
53
+ 2025-11-09 19:20:32,786 - INFO - Step 800/19128 | Loss: 0.5619 | LR: 2.95e-04 | GPU: 0.87GB
54
+ 2025-11-09 19:21:19,739 - INFO - Step 850/19128 | Loss: 0.5719 | LR: 2.94e-04 | GPU: 0.87GB
55
+ 2025-11-09 19:22:06,708 - INFO - Step 900/19128 | Loss: 0.5576 | LR: 2.94e-04 | GPU: 0.87GB
56
+ 2025-11-09 19:22:53,641 - INFO - Step 950/19128 | Loss: 0.5567 | LR: 2.93e-04 | GPU: 0.87GB
57
+ 2025-11-09 19:23:40,445 - INFO - Step 1000/19128 | Loss: 0.5597 | LR: 2.92e-04 | GPU: 0.87GB
58
+ 2025-11-09 19:25:15,772 - INFO - Step 1000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
59
+ 2025-11-09 19:25:15,772 - INFO - ============================================================
60
+ 2025-11-09 19:25:15,772 - INFO - πŸ“Š EVALUATION at step 1000
61
+ 2025-11-09 19:25:15,772 - INFO - eval_loss: 0.5003
62
+ 2025-11-09 19:25:15,772 - INFO - eval_runtime: 95.3235
63
+ 2025-11-09 19:25:15,772 - INFO - eval_samples_per_second: 118.8790
64
+ 2025-11-09 19:25:15,772 - INFO - eval_steps_per_second: 7.4380
65
+ 2025-11-09 19:25:15,772 - INFO - epoch: 0.1600
66
+ 2025-11-09 19:25:15,773 - INFO - gpu_memory_gb: 0.8662
67
+ 2025-11-09 19:25:15,773 - INFO - system_memory_percent: 6.9000
68
+ 2025-11-09 19:25:15,773 - INFO - ============================================================
69
+ 2025-11-09 19:25:15,773 - INFO - πŸ† New best eval loss: 0.5003
70
+ 2025-11-09 19:25:16,038 - INFO - ============================================================
71
+ 2025-11-09 19:25:16,038 - INFO - πŸ’Ύ Checkpoint 1: step 1000
72
+ 2025-11-09 19:25:16,038 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
73
+ 2025-11-09 19:25:16,038 - INFO - πŸ“€ Uploading checkpoint-1000 to Hub...
74
+ 2025-11-09 19:25:20,110 - INFO - βœ… Checkpoint 1000 uploaded!
75
+ 2025-11-09 19:25:20,110 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
76
+ 2025-11-09 19:25:20,110 - INFO - ============================================================
77
+ 2025-11-09 19:26:07,015 - INFO - Step 1050/19128 | Loss: 0.5565 | LR: 2.91e-04 | GPU: 0.87GB
78
+ 2025-11-09 19:26:53,807 - INFO - Step 1100/19128 | Loss: 0.5767 | LR: 2.91e-04 | GPU: 0.87GB
79
+ 2025-11-09 19:27:40,531 - INFO - Step 1150/19128 | Loss: 0.5620 | LR: 2.90e-04 | GPU: 0.87GB
80
+ 2025-11-09 19:28:27,359 - INFO - Step 1200/19128 | Loss: 0.5864 | LR: 2.89e-04 | GPU: 0.87GB
81
+ 2025-11-09 19:29:14,182 - INFO - Step 1250/19128 | Loss: 0.6260 | LR: 2.88e-04 | GPU: 0.87GB
82
+ 2025-11-09 19:30:01,074 - INFO - Step 1300/19128 | Loss: 0.7742 | LR: 2.87e-04 | GPU: 0.87GB
83
+ 2025-11-09 19:30:48,073 - INFO - Step 1350/19128 | Loss: 1.1101 | LR: 2.87e-04 | GPU: 0.87GB
84
+ 2025-11-09 19:31:34,986 - INFO - Step 1400/19128 | Loss: 1.3211 | LR: 2.86e-04 | GPU: 0.87GB
85
+ 2025-11-09 19:32:21,930 - INFO - Step 1450/19128 | Loss: 1.4130 | LR: 2.85e-04 | GPU: 0.87GB
86
+ 2025-11-09 19:33:08,830 - INFO - Step 1500/19128 | Loss: 1.4265 | LR: 2.84e-04 | GPU: 0.87GB
87
+ 2025-11-09 19:33:55,803 - INFO - Step 1550/19128 | Loss: 1.4700 | LR: 2.83e-04 | GPU: 0.87GB
88
+ 2025-11-09 19:34:42,910 - INFO - Step 1600/19128 | Loss: 1.4561 | LR: 2.83e-04 | GPU: 0.87GB
89
+ 2025-11-09 19:35:29,939 - INFO - Step 1650/19128 | Loss: 1.4693 | LR: 2.82e-04 | GPU: 0.87GB
90
+ 2025-11-09 19:36:16,685 - INFO - Step 1700/19128 | Loss: 1.4729 | LR: 2.81e-04 | GPU: 0.87GB
91
+ 2025-11-09 19:37:03,396 - INFO - Step 1750/19128 | Loss: 1.4599 | LR: 2.80e-04 | GPU: 0.87GB
92
+ 2025-11-09 19:37:50,039 - INFO - Step 1800/19128 | Loss: 1.4725 | LR: 2.79e-04 | GPU: 0.87GB
93
+ 2025-11-09 19:38:36,721 - INFO - Step 1850/19128 | Loss: 1.4503 | LR: 2.79e-04 | GPU: 0.87GB
94
+ 2025-11-09 19:39:23,367 - INFO - Step 1900/19128 | Loss: 1.4812 | LR: 2.78e-04 | GPU: 0.87GB
95
+ 2025-11-09 19:40:10,030 - INFO - Step 1950/19128 | Loss: 1.4761 | LR: 2.77e-04 | GPU: 0.87GB
96
+ 2025-11-09 19:40:56,713 - INFO - Step 2000/19128 | Loss: 1.4960 | LR: 2.76e-04 | GPU: 0.87GB
97
+ 2025-11-09 19:42:31,551 - INFO - Step 2000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
98
+ 2025-11-09 19:42:31,551 - INFO - ============================================================
99
+ 2025-11-09 19:42:31,551 - INFO - πŸ“Š EVALUATION at step 2000
100
+ 2025-11-09 19:42:31,551 - INFO - eval_loss: 1.2512
101
+ 2025-11-09 19:42:31,551 - INFO - eval_runtime: 94.8348
102
+ 2025-11-09 19:42:31,551 - INFO - eval_samples_per_second: 119.4920
103
+ 2025-11-09 19:42:31,551 - INFO - eval_steps_per_second: 7.4760
104
+ 2025-11-09 19:42:31,551 - INFO - epoch: 0.3100
105
+ 2025-11-09 19:42:31,551 - INFO - gpu_memory_gb: 0.8662
106
+ 2025-11-09 19:42:31,551 - INFO - system_memory_percent: 13.2000
107
+ 2025-11-09 19:42:31,551 - INFO - ============================================================
108
+ 2025-11-09 19:42:31,768 - INFO - ============================================================
109
+ 2025-11-09 19:42:31,768 - INFO - πŸ’Ύ Checkpoint 2: step 2000
110
+ 2025-11-09 19:42:31,769 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
111
+ 2025-11-09 19:42:31,769 - INFO - πŸ“€ Uploading checkpoint-2000 to Hub...
112
+ 2025-11-09 19:42:36,341 - INFO - βœ… Checkpoint 2000 uploaded!
113
+ 2025-11-09 19:42:36,342 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
114
+ 2025-11-09 19:42:36,342 - INFO - ============================================================
115
+ 2025-11-09 19:43:23,118 - INFO - Step 2050/19128 | Loss: 1.4488 | LR: 2.75e-04 | GPU: 0.87GB
116
+ 2025-11-09 19:44:09,811 - INFO - Step 2100/19128 | Loss: 1.4550 | LR: 2.75e-04 | GPU: 0.87GB
117
+ 2025-11-09 19:44:56,495 - INFO - Step 2150/19128 | Loss: 1.4353 | LR: 2.74e-04 | GPU: 0.87GB
118
+ 2025-11-09 19:45:43,252 - INFO - Step 2200/19128 | Loss: 1.4524 | LR: 2.73e-04 | GPU: 0.87GB
119
+ 2025-11-09 19:46:30,038 - INFO - Step 2250/19128 | Loss: 1.4701 | LR: 2.72e-04 | GPU: 0.87GB
120
+ 2025-11-09 19:47:16,729 - INFO - Step 2300/19128 | Loss: 1.4734 | LR: 2.71e-04 | GPU: 0.87GB
121
+ 2025-11-09 19:48:03,415 - INFO - Step 2350/19128 | Loss: 1.5035 | LR: 2.71e-04 | GPU: 0.87GB
122
+ 2025-11-09 19:48:50,056 - INFO - Step 2400/19128 | Loss: 1.4513 | LR: 2.70e-04 | GPU: 0.87GB
123
+ 2025-11-09 19:49:36,603 - INFO - Step 2450/19128 | Loss: 1.4641 | LR: 2.69e-04 | GPU: 0.87GB
124
+ 2025-11-09 19:50:23,155 - INFO - Step 2500/19128 | Loss: 1.4585 | LR: 2.68e-04 | GPU: 0.87GB
125
+ 2025-11-09 19:51:09,800 - INFO - Step 2550/19128 | Loss: 1.4673 | LR: 2.67e-04 | GPU: 0.87GB
126
+ 2025-11-09 19:51:56,482 - INFO - Step 2600/19128 | Loss: 1.4671 | LR: 2.67e-04 | GPU: 0.87GB
127
+ 2025-11-09 19:52:43,089 - INFO - Step 2650/19128 | Loss: 1.4702 | LR: 2.66e-04 | GPU: 0.87GB
128
+ 2025-11-09 19:53:29,716 - INFO - Step 2700/19128 | Loss: 1.4612 | LR: 2.65e-04 | GPU: 0.87GB
129
+ 2025-11-09 19:54:16,277 - INFO - Step 2750/19128 | Loss: 1.4713 | LR: 2.64e-04 | GPU: 0.87GB
130
+ 2025-11-09 19:55:02,907 - INFO - Step 2800/19128 | Loss: 1.4573 | LR: 2.64e-04 | GPU: 0.87GB
131
+ 2025-11-09 19:55:49,565 - INFO - Step 2850/19128 | Loss: 1.4586 | LR: 2.63e-04 | GPU: 0.87GB
132
+ 2025-11-09 19:56:36,226 - INFO - Step 2900/19128 | Loss: 1.4674 | LR: 2.62e-04 | GPU: 0.87GB
133
+ 2025-11-09 19:57:22,928 - INFO - Step 2950/19128 | Loss: 1.4466 | LR: 2.61e-04 | GPU: 0.87GB
134
+ 2025-11-09 19:58:09,596 - INFO - Step 3000/19128 | Loss: 1.4897 | LR: 2.60e-04 | GPU: 0.87GB
135
+ 2025-11-09 19:59:44,409 - INFO - Step 3000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
136
+ 2025-11-09 19:59:44,409 - INFO - ============================================================
137
+ 2025-11-09 19:59:44,409 - INFO - πŸ“Š EVALUATION at step 3000
138
+ 2025-11-09 19:59:44,410 - INFO - eval_loss: 1.2418
139
+ 2025-11-09 19:59:44,410 - INFO - eval_runtime: 94.8105
140
+ 2025-11-09 19:59:44,410 - INFO - eval_samples_per_second: 119.5230
141
+ 2025-11-09 19:59:44,410 - INFO - eval_steps_per_second: 7.4780
142
+ 2025-11-09 19:59:44,410 - INFO - epoch: 0.4700
143
+ 2025-11-09 19:59:44,410 - INFO - gpu_memory_gb: 0.8662
144
+ 2025-11-09 19:59:44,410 - INFO - system_memory_percent: 6.7000
145
+ 2025-11-09 19:59:44,410 - INFO - ============================================================
146
+ 2025-11-09 19:59:44,634 - INFO - ============================================================
147
+ 2025-11-09 19:59:44,634 - INFO - πŸ’Ύ Checkpoint 3: step 3000
148
+ 2025-11-09 19:59:44,635 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
149
+ 2025-11-09 19:59:44,635 - INFO - πŸ“€ Uploading checkpoint-3000 to Hub...
150
+ 2025-11-09 19:59:48,888 - INFO - βœ… Checkpoint 3000 uploaded!
151
+ 2025-11-09 19:59:48,888 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
152
+ 2025-11-09 19:59:48,888 - INFO - ============================================================
153
+ 2025-11-09 20:00:35,640 - INFO - Step 3050/19128 | Loss: 1.4621 | LR: 2.60e-04 | GPU: 0.87GB
154
+ 2025-11-09 20:01:22,207 - INFO - Step 3100/19128 | Loss: 1.4443 | LR: 2.59e-04 | GPU: 0.87GB
155
+ 2025-11-09 20:02:08,745 - INFO - Step 3150/19128 | Loss: 1.4314 | LR: 2.58e-04 | GPU: 0.87GB
156
+ 2025-11-09 20:02:55,306 - INFO - Step 3200/19128 | Loss: 1.4172 | LR: 2.57e-04 | GPU: 0.87GB
157
+ 2025-11-09 20:03:41,847 - INFO - Step 3250/19128 | Loss: 1.4878 | LR: 2.56e-04 | GPU: 0.87GB
158
+ 2025-11-09 20:04:28,392 - INFO - Step 3300/19128 | Loss: 1.4344 | LR: 2.56e-04 | GPU: 0.87GB
159
+ 2025-11-09 20:05:14,921 - INFO - Step 3350/19128 | Loss: 1.4634 | LR: 2.55e-04 | GPU: 0.87GB
160
+ 2025-11-09 20:06:01,450 - INFO - Step 3400/19128 | Loss: 1.4679 | LR: 2.54e-04 | GPU: 0.87GB
161
+ 2025-11-09 20:06:48,065 - INFO - Step 3450/19128 | Loss: 1.4641 | LR: 2.53e-04 | GPU: 0.87GB
162
+ 2025-11-09 20:07:34,593 - INFO - Step 3500/19128 | Loss: 1.4396 | LR: 2.52e-04 | GPU: 0.87GB
163
+ 2025-11-09 20:08:21,159 - INFO - Step 3550/19128 | Loss: 1.4850 | LR: 2.52e-04 | GPU: 0.87GB
164
+ 2025-11-09 20:09:07,759 - INFO - Step 3600/19128 | Loss: 1.4355 | LR: 2.51e-04 | GPU: 0.87GB
165
+ 2025-11-09 20:09:54,480 - INFO - Step 3650/19128 | Loss: 1.4419 | LR: 2.50e-04 | GPU: 0.87GB
166
+ 2025-11-09 20:10:41,194 - INFO - Step 3700/19128 | Loss: 1.4224 | LR: 2.49e-04 | GPU: 0.87GB
167
+ 2025-11-09 20:11:27,870 - INFO - Step 3750/19128 | Loss: 1.4473 | LR: 2.48e-04 | GPU: 0.87GB
168
+ 2025-11-09 20:12:14,633 - INFO - Step 3800/19128 | Loss: 1.4341 | LR: 2.48e-04 | GPU: 0.87GB
169
+ 2025-11-09 20:13:01,358 - INFO - Step 3850/19128 | Loss: 1.4463 | LR: 2.47e-04 | GPU: 0.87GB
170
+ 2025-11-09 20:13:47,961 - INFO - Step 3900/19128 | Loss: 1.4348 | LR: 2.46e-04 | GPU: 0.87GB
171
+ 2025-11-09 20:14:34,584 - INFO - Step 3950/19128 | Loss: 1.4326 | LR: 2.45e-04 | GPU: 0.87GB
172
+ 2025-11-09 20:15:21,213 - INFO - Step 4000/19128 | Loss: 1.4586 | LR: 2.44e-04 | GPU: 0.87GB
173
+ 2025-11-09 20:16:56,031 - INFO - Step 4000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
174
+ 2025-11-09 20:16:56,032 - INFO - ============================================================
175
+ 2025-11-09 20:16:56,032 - INFO - πŸ“Š EVALUATION at step 4000
176
+ 2025-11-09 20:16:56,032 - INFO - eval_loss: 1.2330
177
+ 2025-11-09 20:16:56,032 - INFO - eval_runtime: 94.8153
178
+ 2025-11-09 20:16:56,032 - INFO - eval_samples_per_second: 119.5170
179
+ 2025-11-09 20:16:56,032 - INFO - eval_steps_per_second: 7.4780
180
+ 2025-11-09 20:16:56,032 - INFO - epoch: 0.6300
181
+ 2025-11-09 20:16:56,032 - INFO - gpu_memory_gb: 0.8662
182
+ 2025-11-09 20:16:56,032 - INFO - system_memory_percent: 6.9000
183
+ 2025-11-09 20:16:56,032 - INFO - ============================================================
184
+ 2025-11-09 20:16:56,240 - INFO - ============================================================
185
+ 2025-11-09 20:16:56,241 - INFO - πŸ’Ύ Checkpoint 4: step 4000
186
+ 2025-11-09 20:16:56,241 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
187
+ 2025-11-09 20:16:56,241 - INFO - πŸ“€ Uploading checkpoint-4000 to Hub...
188
+ 2025-11-09 20:17:00,190 - INFO - βœ… Checkpoint 4000 uploaded!
189
+ 2025-11-09 20:17:00,190 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
190
+ 2025-11-09 20:17:00,190 - INFO - ============================================================
191
+ 2025-11-09 20:17:47,036 - INFO - Step 4050/19128 | Loss: 1.4624 | LR: 2.44e-04 | GPU: 0.87GB
192
+ 2025-11-09 20:18:33,726 - INFO - Step 4100/19128 | Loss: 1.4550 | LR: 2.43e-04 | GPU: 0.87GB
193
+ 2025-11-09 20:19:20,355 - INFO - Step 4150/19128 | Loss: 1.4294 | LR: 2.42e-04 | GPU: 0.87GB
194
+ 2025-11-09 20:20:06,989 - INFO - Step 4200/19128 | Loss: 1.4675 | LR: 2.41e-04 | GPU: 0.87GB
195
+ 2025-11-09 20:20:53,597 - INFO - Step 4250/19128 | Loss: 1.4320 | LR: 2.40e-04 | GPU: 0.87GB
196
+ 2025-11-09 20:21:40,182 - INFO - Step 4300/19128 | Loss: 1.4357 | LR: 2.40e-04 | GPU: 0.87GB
197
+ 2025-11-09 20:22:26,684 - INFO - Step 4350/19128 | Loss: 1.4419 | LR: 2.39e-04 | GPU: 0.87GB
198
+ 2025-11-09 20:23:13,218 - INFO - Step 4400/19128 | Loss: 1.4272 | LR: 2.38e-04 | GPU: 0.87GB
199
+ 2025-11-09 20:23:59,888 - INFO - Step 4450/19128 | Loss: 1.4133 | LR: 2.37e-04 | GPU: 0.87GB
200
+ 2025-11-09 20:24:46,653 - INFO - Step 4500/19128 | Loss: 1.4340 | LR: 2.36e-04 | GPU: 0.87GB
201
+ 2025-11-09 20:25:33,287 - INFO - Step 4550/19128 | Loss: 1.4218 | LR: 2.36e-04 | GPU: 0.87GB
202
+ 2025-11-09 20:26:19,993 - INFO - Step 4600/19128 | Loss: 1.4682 | LR: 2.35e-04 | GPU: 0.87GB
203
+ 2025-11-09 20:27:06,680 - INFO - Step 4650/19128 | Loss: 1.4333 | LR: 2.34e-04 | GPU: 0.87GB
204
+ 2025-11-09 20:27:53,348 - INFO - Step 4700/19128 | Loss: 1.4359 | LR: 2.33e-04 | GPU: 0.87GB
205
+ 2025-11-09 20:28:39,968 - INFO - Step 4750/19128 | Loss: 1.4054 | LR: 2.32e-04 | GPU: 0.87GB
206
+ 2025-11-09 20:29:26,496 - INFO - Step 4800/19128 | Loss: 1.4215 | LR: 2.32e-04 | GPU: 0.87GB
207
+ 2025-11-09 20:30:13,206 - INFO - Step 4850/19128 | Loss: 1.4471 | LR: 2.31e-04 | GPU: 0.87GB
208
+ 2025-11-09 20:30:59,857 - INFO - Step 4900/19128 | Loss: 1.4238 | LR: 2.30e-04 | GPU: 0.87GB
209
+ 2025-11-09 20:31:46,547 - INFO - Step 4950/19128 | Loss: 1.4218 | LR: 2.29e-04 | GPU: 0.87GB
210
+ 2025-11-09 20:32:33,138 - INFO - Step 5000/19128 | Loss: 1.4419 | LR: 2.28e-04 | GPU: 0.87GB
211
+ 2025-11-09 20:34:08,183 - INFO - Step 5000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
212
+ 2025-11-09 20:34:08,183 - INFO - ============================================================
213
+ 2025-11-09 20:34:08,183 - INFO - πŸ“Š EVALUATION at step 5000
214
+ 2025-11-09 20:34:08,183 - INFO - eval_loss: 1.2248
215
+ 2025-11-09 20:34:08,183 - INFO - eval_runtime: 95.0420
216
+ 2025-11-09 20:34:08,183 - INFO - eval_samples_per_second: 119.2310
217
+ 2025-11-09 20:34:08,183 - INFO - eval_steps_per_second: 7.4600
218
+ 2025-11-09 20:34:08,183 - INFO - epoch: 0.7800
219
+ 2025-11-09 20:34:08,183 - INFO - gpu_memory_gb: 0.8662
220
+ 2025-11-09 20:34:08,183 - INFO - system_memory_percent: 6.8000
221
+ 2025-11-09 20:34:08,183 - INFO - ============================================================
222
+ 2025-11-09 20:34:08,403 - INFO - ============================================================
223
+ 2025-11-09 20:34:08,403 - INFO - πŸ’Ύ Checkpoint 5: step 5000
224
+ 2025-11-09 20:34:08,403 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
225
+ 2025-11-09 20:34:08,403 - INFO - πŸ“€ Uploading checkpoint-5000 to Hub...
226
+ 2025-11-09 20:34:12,224 - INFO - βœ… Checkpoint 5000 uploaded!
227
+ 2025-11-09 20:34:12,224 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
228
+ 2025-11-09 20:34:12,224 - INFO - ============================================================
229
+ 2025-11-09 20:34:58,972 - INFO - Step 5050/19128 | Loss: 1.4405 | LR: 2.28e-04 | GPU: 0.87GB
230
+ 2025-11-09 20:35:45,646 - INFO - Step 5100/19128 | Loss: 1.4490 | LR: 2.27e-04 | GPU: 0.87GB
231
+ 2025-11-09 20:36:32,304 - INFO - Step 5150/19128 | Loss: 1.4233 | LR: 2.26e-04 | GPU: 0.87GB
232
+ 2025-11-09 20:37:19,019 - INFO - Step 5200/19128 | Loss: 1.4230 | LR: 2.25e-04 | GPU: 0.87GB
233
+ 2025-11-09 20:38:05,553 - INFO - Step 5250/19128 | Loss: 1.4315 | LR: 2.24e-04 | GPU: 0.87GB
234
+ 2025-11-09 20:38:52,072 - INFO - Step 5300/19128 | Loss: 1.4180 | LR: 2.24e-04 | GPU: 0.87GB
235
+ 2025-11-09 20:39:38,580 - INFO - Step 5350/19128 | Loss: 1.4056 | LR: 2.23e-04 | GPU: 0.87GB
236
+ 2025-11-09 20:40:25,413 - INFO - Step 5400/19128 | Loss: 1.4351 | LR: 2.22e-04 | GPU: 0.87GB
237
+ 2025-11-09 20:41:12,273 - INFO - Step 5450/19128 | Loss: 1.4377 | LR: 2.21e-04 | GPU: 0.87GB
238
+ 2025-11-09 20:41:59,185 - INFO - Step 5500/19128 | Loss: 1.4065 | LR: 2.20e-04 | GPU: 0.87GB
239
+ 2025-11-09 20:42:46,056 - INFO - Step 5550/19128 | Loss: 1.4246 | LR: 2.20e-04 | GPU: 0.87GB
240
+ 2025-11-09 20:43:32,914 - INFO - Step 5600/19128 | Loss: 1.4607 | LR: 2.19e-04 | GPU: 0.87GB
241
+ 2025-11-09 20:44:19,762 - INFO - Step 5650/19128 | Loss: 1.4211 | LR: 2.18e-04 | GPU: 0.87GB
242
+ 2025-11-09 20:45:06,645 - INFO - Step 5700/19128 | Loss: 1.4475 | LR: 2.17e-04 | GPU: 0.87GB
243
+ 2025-11-09 20:45:53,490 - INFO - Step 5750/19128 | Loss: 1.3977 | LR: 2.16e-04 | GPU: 0.87GB
244
+ 2025-11-09 20:46:40,332 - INFO - Step 5800/19128 | Loss: 1.4034 | LR: 2.16e-04 | GPU: 0.87GB
245
+ 2025-11-09 20:47:27,248 - INFO - Step 5850/19128 | Loss: 1.4237 | LR: 2.15e-04 | GPU: 0.87GB
246
+ 2025-11-09 20:48:14,096 - INFO - Step 5900/19128 | Loss: 1.4371 | LR: 2.14e-04 | GPU: 0.87GB
247
+ 2025-11-09 20:49:00,911 - INFO - Step 5950/19128 | Loss: 1.4416 | LR: 2.13e-04 | GPU: 0.87GB
248
+ 2025-11-09 20:49:47,743 - INFO - Step 6000/19128 | Loss: 1.4164 | LR: 2.12e-04 | GPU: 0.87GB
249
+ 2025-11-09 20:51:22,757 - INFO - Step 6000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
250
+ 2025-11-09 20:51:22,757 - INFO - ============================================================
251
+ 2025-11-09 20:51:22,757 - INFO - πŸ“Š EVALUATION at step 6000
252
+ 2025-11-09 20:51:22,757 - INFO - eval_loss: 1.2173
253
+ 2025-11-09 20:51:22,757 - INFO - eval_runtime: 95.0111
254
+ 2025-11-09 20:51:22,757 - INFO - eval_samples_per_second: 119.2700
255
+ 2025-11-09 20:51:22,757 - INFO - eval_steps_per_second: 7.4620
256
+ 2025-11-09 20:51:22,757 - INFO - epoch: 0.9400
257
+ 2025-11-09 20:51:22,757 - INFO - gpu_memory_gb: 0.8662
258
+ 2025-11-09 20:51:22,757 - INFO - system_memory_percent: 6.9000
259
+ 2025-11-09 20:51:22,757 - INFO - ============================================================
260
+ 2025-11-09 20:51:22,967 - INFO - ============================================================
261
+ 2025-11-09 20:51:22,967 - INFO - πŸ’Ύ Checkpoint 6: step 6000
262
+ 2025-11-09 20:51:22,967 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
263
+ 2025-11-09 20:51:22,967 - INFO - πŸ“€ Uploading checkpoint-6000 to Hub...
264
+ 2025-11-09 20:51:26,803 - INFO - βœ… Checkpoint 6000 uploaded!
265
+ 2025-11-09 20:51:26,804 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
266
+ 2025-11-09 20:51:26,804 - INFO - ============================================================
267
+ 2025-11-09 20:52:13,740 - INFO - Step 6050/19128 | Loss: 1.3970 | LR: 2.12e-04 | GPU: 0.87GB
268
+ 2025-11-09 20:53:00,601 - INFO - Step 6100/19128 | Loss: 1.4268 | LR: 2.11e-04 | GPU: 0.87GB
269
+ 2025-11-09 20:53:47,421 - INFO - Step 6150/19128 | Loss: 1.4388 | LR: 2.10e-04 | GPU: 0.87GB
270
+ 2025-11-09 20:54:34,236 - INFO - Step 6200/19128 | Loss: 1.4208 | LR: 2.09e-04 | GPU: 0.87GB
271
+ 2025-11-09 20:55:21,070 - INFO - Step 6250/19128 | Loss: 1.4352 | LR: 2.09e-04 | GPU: 0.87GB
272
+ 2025-11-09 20:56:07,893 - INFO - Step 6300/19128 | Loss: 1.4053 | LR: 2.08e-04 | GPU: 0.87GB
273
+ 2025-11-09 20:56:54,729 - INFO - Step 6350/19128 | Loss: 1.4242 | LR: 2.07e-04 | GPU: 0.87GB
274
+ 2025-11-09 20:57:42,159 - INFO - Step 6400/19128 | Loss: 1.4197 | LR: 2.06e-04 | GPU: 0.87GB
275
+ 2025-11-09 20:58:28,931 - INFO - Step 6450/19128 | Loss: 1.4225 | LR: 2.05e-04 | GPU: 0.87GB
276
+ 2025-11-09 20:59:15,830 - INFO - Step 6500/19128 | Loss: 1.4188 | LR: 2.05e-04 | GPU: 0.87GB
277
+ 2025-11-09 21:00:02,658 - INFO - Step 6550/19128 | Loss: 1.4347 | LR: 2.04e-04 | GPU: 0.87GB
278
+ 2025-11-09 21:00:49,521 - INFO - Step 6600/19128 | Loss: 1.4371 | LR: 2.03e-04 | GPU: 0.87GB
279
+ 2025-11-09 21:01:36,328 - INFO - Step 6650/19128 | Loss: 1.4228 | LR: 2.02e-04 | GPU: 0.87GB
280
+ 2025-11-09 21:02:23,188 - INFO - Step 6700/19128 | Loss: 1.4289 | LR: 2.01e-04 | GPU: 0.87GB
281
+ 2025-11-09 21:03:10,045 - INFO - Step 6750/19128 | Loss: 1.4224 | LR: 2.01e-04 | GPU: 0.87GB
282
+ 2025-11-09 21:03:56,874 - INFO - Step 6800/19128 | Loss: 1.4783 | LR: 2.00e-04 | GPU: 0.87GB
283
+ 2025-11-09 21:04:43,802 - INFO - Step 6850/19128 | Loss: 1.4469 | LR: 1.99e-04 | GPU: 0.87GB
284
+ 2025-11-09 21:05:30,629 - INFO - Step 6900/19128 | Loss: 1.4335 | LR: 1.98e-04 | GPU: 0.87GB
285
+ 2025-11-09 21:06:17,469 - INFO - Step 6950/19128 | Loss: 1.3973 | LR: 1.97e-04 | GPU: 0.87GB
286
+ 2025-11-09 21:07:04,306 - INFO - Step 7000/19128 | Loss: 1.4493 | LR: 1.97e-04 | GPU: 0.87GB
287
+ 2025-11-09 21:08:39,314 - INFO - Step 7000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
288
+ 2025-11-09 21:08:39,314 - INFO - ============================================================
289
+ 2025-11-09 21:08:39,314 - INFO - πŸ“Š EVALUATION at step 7000
290
+ 2025-11-09 21:08:39,314 - INFO - eval_loss: 1.2105
291
+ 2025-11-09 21:08:39,314 - INFO - eval_runtime: 95.0044
292
+ 2025-11-09 21:08:39,314 - INFO - eval_samples_per_second: 119.2790
293
+ 2025-11-09 21:08:39,314 - INFO - eval_steps_per_second: 7.4630
294
+ 2025-11-09 21:08:39,314 - INFO - epoch: 1.1000
295
+ 2025-11-09 21:08:39,314 - INFO - gpu_memory_gb: 0.8662
296
+ 2025-11-09 21:08:39,314 - INFO - system_memory_percent: 6.9000
297
+ 2025-11-09 21:08:39,314 - INFO - ============================================================
298
+ 2025-11-09 21:08:39,534 - INFO - ============================================================
299
+ 2025-11-09 21:08:39,534 - INFO - πŸ’Ύ Checkpoint 7: step 7000
300
+ 2025-11-09 21:08:39,535 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
301
+ 2025-11-09 21:08:39,535 - INFO - πŸ“€ Uploading checkpoint-7000 to Hub...
302
+ 2025-11-09 21:08:43,730 - INFO - βœ… Checkpoint 7000 uploaded!
303
+ 2025-11-09 21:08:43,730 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
304
+ 2025-11-09 21:08:43,730 - INFO - ============================================================
305
+ 2025-11-09 21:09:30,675 - INFO - Step 7050/19128 | Loss: 1.4059 | LR: 1.96e-04 | GPU: 0.87GB
306
+ 2025-11-09 21:10:17,507 - INFO - Step 7100/19128 | Loss: 1.3989 | LR: 1.95e-04 | GPU: 0.87GB
307
+ 2025-11-09 21:11:04,358 - INFO - Step 7150/19128 | Loss: 1.3932 | LR: 1.94e-04 | GPU: 0.87GB
308
+ 2025-11-09 21:11:51,196 - INFO - Step 7200/19128 | Loss: 1.4181 | LR: 1.93e-04 | GPU: 0.87GB
309
+ 2025-11-09 21:12:37,995 - INFO - Step 7250/19128 | Loss: 1.4567 | LR: 1.93e-04 | GPU: 0.87GB
310
+ 2025-11-09 21:13:24,824 - INFO - Step 7300/19128 | Loss: 1.4207 | LR: 1.92e-04 | GPU: 0.87GB
311
+ 2025-11-09 21:14:11,635 - INFO - Step 7350/19128 | Loss: 1.4020 | LR: 1.91e-04 | GPU: 0.87GB
312
+ 2025-11-09 21:14:58,467 - INFO - Step 7400/19128 | Loss: 1.4161 | LR: 1.90e-04 | GPU: 0.87GB
313
+ 2025-11-09 21:15:45,268 - INFO - Step 7450/19128 | Loss: 1.4214 | LR: 1.89e-04 | GPU: 0.87GB
314
+ 2025-11-09 21:16:32,091 - INFO - Step 7500/19128 | Loss: 1.4018 | LR: 1.89e-04 | GPU: 0.87GB
315
+ 2025-11-09 21:17:18,897 - INFO - Step 7550/19128 | Loss: 1.3888 | LR: 1.88e-04 | GPU: 0.87GB
316
+ 2025-11-09 21:18:05,711 - INFO - Step 7600/19128 | Loss: 1.4376 | LR: 1.87e-04 | GPU: 0.87GB
317
+ 2025-11-09 21:18:52,582 - INFO - Step 7650/19128 | Loss: 1.4172 | LR: 1.86e-04 | GPU: 0.87GB
318
+ 2025-11-09 21:19:39,428 - INFO - Step 7700/19128 | Loss: 1.4116 | LR: 1.85e-04 | GPU: 0.87GB
319
+ 2025-11-09 21:20:26,289 - INFO - Step 7750/19128 | Loss: 1.4148 | LR: 1.85e-04 | GPU: 0.87GB
320
+ 2025-11-09 21:21:13,120 - INFO - Step 7800/19128 | Loss: 1.4197 | LR: 1.84e-04 | GPU: 0.87GB
321
+ 2025-11-09 21:22:00,036 - INFO - Step 7850/19128 | Loss: 1.4202 | LR: 1.83e-04 | GPU: 0.87GB
322
+ 2025-11-09 21:22:46,828 - INFO - Step 7900/19128 | Loss: 1.4046 | LR: 1.82e-04 | GPU: 0.87GB
323
+ 2025-11-09 21:23:33,694 - INFO - Step 7950/19128 | Loss: 1.3885 | LR: 1.82e-04 | GPU: 0.87GB
324
+ 2025-11-09 21:24:20,616 - INFO - Step 8000/19128 | Loss: 1.4116 | LR: 1.81e-04 | GPU: 0.87GB
325
+ 2025-11-09 21:25:55,818 - INFO - Step 8000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
326
+ 2025-11-09 21:25:55,819 - INFO - ============================================================
327
+ 2025-11-09 21:25:55,819 - INFO - πŸ“Š EVALUATION at step 8000
328
+ 2025-11-09 21:25:55,819 - INFO - eval_loss: 1.2042
329
+ 2025-11-09 21:25:55,819 - INFO - eval_runtime: 95.1992
330
+ 2025-11-09 21:25:55,819 - INFO - eval_samples_per_second: 119.0350
331
+ 2025-11-09 21:25:55,819 - INFO - eval_steps_per_second: 7.4480
332
+ 2025-11-09 21:25:55,819 - INFO - epoch: 1.2500
333
+ 2025-11-09 21:25:55,819 - INFO - gpu_memory_gb: 0.8662
334
+ 2025-11-09 21:25:55,819 - INFO - system_memory_percent: 7.0000
335
+ 2025-11-09 21:25:55,819 - INFO - ============================================================
336
+ 2025-11-09 21:25:56,041 - INFO - ============================================================
337
+ 2025-11-09 21:25:56,041 - INFO - πŸ’Ύ Checkpoint 8: step 8000
338
+ 2025-11-09 21:25:56,041 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
339
+ 2025-11-09 21:25:56,041 - INFO - πŸ“€ Uploading checkpoint-8000 to Hub...
340
+ 2025-11-09 21:26:00,738 - INFO - βœ… Checkpoint 8000 uploaded!
341
+ 2025-11-09 21:26:00,738 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
342
+ 2025-11-09 21:26:00,738 - INFO - ============================================================
343
+ 2025-11-09 21:26:47,780 - INFO - Step 8050/19128 | Loss: 1.4231 | LR: 1.80e-04 | GPU: 0.87GB
344
+ 2025-11-09 21:27:34,641 - INFO - Step 8100/19128 | Loss: 1.3973 | LR: 1.79e-04 | GPU: 0.87GB
345
+ 2025-11-09 21:28:21,484 - INFO - Step 8150/19128 | Loss: 1.4082 | LR: 1.78e-04 | GPU: 0.87GB
346
+ 2025-11-09 21:29:08,354 - INFO - Step 8200/19128 | Loss: 1.4026 | LR: 1.78e-04 | GPU: 0.87GB
347
+ 2025-11-09 21:29:55,181 - INFO - Step 8250/19128 | Loss: 1.4261 | LR: 1.77e-04 | GPU: 0.87GB
348
+ 2025-11-09 21:30:41,985 - INFO - Step 8300/19128 | Loss: 1.4162 | LR: 1.76e-04 | GPU: 0.87GB
349
+ 2025-11-09 21:31:28,809 - INFO - Step 8350/19128 | Loss: 1.4007 | LR: 1.75e-04 | GPU: 0.87GB
350
+ 2025-11-09 21:32:15,629 - INFO - Step 8400/19128 | Loss: 1.4075 | LR: 1.74e-04 | GPU: 0.87GB
351
+ 2025-11-09 21:33:02,411 - INFO - Step 8450/19128 | Loss: 1.3911 | LR: 1.74e-04 | GPU: 0.87GB
352
+ 2025-11-09 21:33:49,222 - INFO - Step 8500/19128 | Loss: 1.3977 | LR: 1.73e-04 | GPU: 0.87GB
353
+ 2025-11-09 21:34:36,010 - INFO - Step 8550/19128 | Loss: 1.4008 | LR: 1.72e-04 | GPU: 0.87GB
354
+ 2025-11-09 21:35:22,826 - INFO - Step 8600/19128 | Loss: 1.3947 | LR: 1.71e-04 | GPU: 0.87GB
355
+ 2025-11-09 21:36:09,612 - INFO - Step 8650/19128 | Loss: 1.3825 | LR: 1.70e-04 | GPU: 0.87GB
356
+ 2025-11-09 21:36:56,293 - INFO - Step 8700/19128 | Loss: 1.4022 | LR: 1.70e-04 | GPU: 0.87GB
357
+ 2025-11-09 21:37:42,785 - INFO - Step 8750/19128 | Loss: 1.3865 | LR: 1.69e-04 | GPU: 0.87GB
358
+ 2025-11-09 21:38:29,318 - INFO - Step 8800/19128 | Loss: 1.4350 | LR: 1.68e-04 | GPU: 0.87GB
359
+ 2025-11-09 21:39:16,016 - INFO - Step 8850/19128 | Loss: 1.4100 | LR: 1.67e-04 | GPU: 0.87GB
360
+ 2025-11-09 21:40:02,580 - INFO - Step 8900/19128 | Loss: 1.3888 | LR: 1.66e-04 | GPU: 0.87GB
361
+ 2025-11-09 21:40:49,082 - INFO - Step 8950/19128 | Loss: 1.4151 | LR: 1.66e-04 | GPU: 0.87GB
362
+ 2025-11-09 21:41:35,633 - INFO - Step 9000/19128 | Loss: 1.3786 | LR: 1.65e-04 | GPU: 0.87GB
363
+ 2025-11-09 21:43:10,410 - INFO - Step 9000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
364
+ 2025-11-09 21:43:10,410 - INFO - ============================================================
365
+ 2025-11-09 21:43:10,410 - INFO - πŸ“Š EVALUATION at step 9000
366
+ 2025-11-09 21:43:10,410 - INFO - eval_loss: 1.1985
367
+ 2025-11-09 21:43:10,410 - INFO - eval_runtime: 94.7740
368
+ 2025-11-09 21:43:10,410 - INFO - eval_samples_per_second: 119.5690
369
+ 2025-11-09 21:43:10,410 - INFO - eval_steps_per_second: 7.4810
370
+ 2025-11-09 21:43:10,410 - INFO - epoch: 1.4100
371
+ 2025-11-09 21:43:10,410 - INFO - gpu_memory_gb: 0.8662
372
+ 2025-11-09 21:43:10,410 - INFO - system_memory_percent: 7.0000
373
+ 2025-11-09 21:43:10,410 - INFO - ============================================================
374
+ 2025-11-09 21:43:10,628 - INFO - ============================================================
375
+ 2025-11-09 21:43:10,628 - INFO - πŸ’Ύ Checkpoint 9: step 9000
376
+ 2025-11-09 21:43:10,629 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
377
+ 2025-11-09 21:43:10,629 - INFO - πŸ“€ Uploading checkpoint-9000 to Hub...
378
+ 2025-11-09 21:43:14,790 - INFO - βœ… Checkpoint 9000 uploaded!
379
+ 2025-11-09 21:43:14,790 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
380
+ 2025-11-09 21:43:14,790 - INFO - ============================================================
381
+ 2025-11-09 21:44:01,513 - INFO - Step 9050/19128 | Loss: 1.3709 | LR: 1.64e-04 | GPU: 0.87GB
382
+ 2025-11-09 21:44:48,323 - INFO - Step 9100/19128 | Loss: 1.4021 | LR: 1.63e-04 | GPU: 0.87GB
383
+ 2025-11-09 21:45:35,034 - INFO - Step 9150/19128 | Loss: 1.4054 | LR: 1.62e-04 | GPU: 0.87GB
384
+ 2025-11-09 21:46:21,722 - INFO - Step 9200/19128 | Loss: 1.3909 | LR: 1.62e-04 | GPU: 0.87GB
385
+ 2025-11-09 21:47:08,425 - INFO - Step 9250/19128 | Loss: 1.3949 | LR: 1.61e-04 | GPU: 0.87GB
386
+ 2025-11-09 21:47:55,127 - INFO - Step 9300/19128 | Loss: 1.3927 | LR: 1.60e-04 | GPU: 0.87GB
387
+ 2025-11-09 21:48:41,911 - INFO - Step 9350/19128 | Loss: 1.4053 | LR: 1.59e-04 | GPU: 0.87GB
388
+ 2025-11-09 21:49:28,598 - INFO - Step 9400/19128 | Loss: 1.4217 | LR: 1.58e-04 | GPU: 0.87GB
389
+ 2025-11-09 21:50:15,238 - INFO - Step 9450/19128 | Loss: 1.4174 | LR: 1.58e-04 | GPU: 0.87GB
390
+ 2025-11-09 21:51:01,924 - INFO - Step 9500/19128 | Loss: 1.4396 | LR: 1.57e-04 | GPU: 0.87GB
391
+ 2025-11-09 21:51:48,569 - INFO - Step 9550/19128 | Loss: 1.3969 | LR: 1.56e-04 | GPU: 0.87GB
392
+ 2025-11-09 21:52:35,302 - INFO - Step 9600/19128 | Loss: 1.3905 | LR: 1.55e-04 | GPU: 0.87GB
393
+ 2025-11-09 21:53:22,061 - INFO - Step 9650/19128 | Loss: 1.3712 | LR: 1.54e-04 | GPU: 0.87GB
394
+ 2025-11-09 21:54:08,755 - INFO - Step 9700/19128 | Loss: 1.3787 | LR: 1.54e-04 | GPU: 0.87GB
395
+ 2025-11-09 21:54:55,348 - INFO - Step 9750/19128 | Loss: 1.3795 | LR: 1.53e-04 | GPU: 0.87GB
396
+ 2025-11-09 21:55:41,952 - INFO - Step 9800/19128 | Loss: 1.4115 | LR: 1.52e-04 | GPU: 0.87GB
397
+ 2025-11-09 21:56:28,677 - INFO - Step 9850/19128 | Loss: 1.3968 | LR: 1.51e-04 | GPU: 0.87GB
398
+ 2025-11-09 21:57:15,360 - INFO - Step 9900/19128 | Loss: 1.4296 | LR: 1.51e-04 | GPU: 0.87GB
399
+ 2025-11-09 21:58:02,048 - INFO - Step 9950/19128 | Loss: 1.3983 | LR: 1.50e-04 | GPU: 0.87GB
400
+ 2025-11-09 21:58:48,785 - INFO - Step 10000/19128 | Loss: 1.3813 | LR: 1.49e-04 | GPU: 0.87GB
401
+ 2025-11-09 22:00:23,984 - INFO - Step 10000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
402
+ 2025-11-09 22:00:23,984 - INFO - ============================================================
403
+ 2025-11-09 22:00:23,984 - INFO - πŸ“Š EVALUATION at step 10000
404
+ 2025-11-09 22:00:23,984 - INFO - eval_loss: 1.1934
405
+ 2025-11-09 22:00:23,985 - INFO - eval_runtime: 95.1956
406
+ 2025-11-09 22:00:23,985 - INFO - eval_samples_per_second: 119.0390
407
+ 2025-11-09 22:00:23,985 - INFO - eval_steps_per_second: 7.4480
408
+ 2025-11-09 22:00:23,985 - INFO - epoch: 1.5700
409
+ 2025-11-09 22:00:23,985 - INFO - gpu_memory_gb: 0.8662
410
+ 2025-11-09 22:00:23,985 - INFO - system_memory_percent: 6.9000
411
+ 2025-11-09 22:00:23,985 - INFO - ============================================================
412
+ 2025-11-09 22:00:24,209 - INFO - ============================================================
413
+ 2025-11-09 22:00:24,210 - INFO - πŸ’Ύ Checkpoint 10: step 10000
414
+ 2025-11-09 22:00:24,210 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
415
+ 2025-11-09 22:00:24,210 - INFO - πŸ“€ Uploading checkpoint-10000 to Hub...
416
+ 2025-11-09 22:00:28,612 - INFO - βœ… Checkpoint 10000 uploaded!
417
+ 2025-11-09 22:00:28,612 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
418
+ 2025-11-09 22:00:28,612 - INFO - ============================================================
419
+ 2025-11-09 22:01:15,428 - INFO - Step 10050/19128 | Loss: 1.3868 | LR: 1.48e-04 | GPU: 0.87GB
420
+ 2025-11-09 22:02:02,217 - INFO - Step 10100/19128 | Loss: 1.4145 | LR: 1.47e-04 | GPU: 0.87GB
421
+ 2025-11-09 22:02:49,097 - INFO - Step 10150/19128 | Loss: 1.3916 | LR: 1.47e-04 | GPU: 0.87GB
422
+ 2025-11-09 22:03:35,906 - INFO - Step 10200/19128 | Loss: 1.3796 | LR: 1.46e-04 | GPU: 0.87GB
423
+ 2025-11-09 22:04:22,598 - INFO - Step 10250/19128 | Loss: 1.4049 | LR: 1.45e-04 | GPU: 0.87GB
424
+ 2025-11-09 22:05:09,296 - INFO - Step 10300/19128 | Loss: 1.3931 | LR: 1.44e-04 | GPU: 0.87GB
425
+ 2025-11-09 22:05:55,990 - INFO - Step 10350/19128 | Loss: 1.3685 | LR: 1.43e-04 | GPU: 0.87GB
426
+ 2025-11-09 22:06:42,775 - INFO - Step 10400/19128 | Loss: 1.3856 | LR: 1.43e-04 | GPU: 0.87GB
427
+ 2025-11-09 22:07:29,629 - INFO - Step 10450/19128 | Loss: 1.3871 | LR: 1.42e-04 | GPU: 0.87GB
428
+ 2025-11-09 22:08:16,457 - INFO - Step 10500/19128 | Loss: 1.3822 | LR: 1.41e-04 | GPU: 0.87GB
429
+ 2025-11-09 22:09:03,144 - INFO - Step 10550/19128 | Loss: 1.3909 | LR: 1.40e-04 | GPU: 0.87GB
430
+ 2025-11-09 22:09:49,938 - INFO - Step 10600/19128 | Loss: 1.3876 | LR: 1.39e-04 | GPU: 0.87GB
431
+ 2025-11-09 22:10:36,673 - INFO - Step 10650/19128 | Loss: 1.3611 | LR: 1.39e-04 | GPU: 0.87GB
432
+ 2025-11-09 22:11:23,326 - INFO - Step 10700/19128 | Loss: 1.3871 | LR: 1.38e-04 | GPU: 0.87GB
433
+ 2025-11-09 22:12:09,970 - INFO - Step 10750/19128 | Loss: 1.3808 | LR: 1.37e-04 | GPU: 0.87GB
434
+ 2025-11-09 22:12:56,625 - INFO - Step 10800/19128 | Loss: 1.3733 | LR: 1.36e-04 | GPU: 0.87GB
435
+ 2025-11-09 22:13:43,366 - INFO - Step 10850/19128 | Loss: 1.3835 | LR: 1.35e-04 | GPU: 0.87GB
436
+ 2025-11-09 22:14:30,081 - INFO - Step 10900/19128 | Loss: 1.3768 | LR: 1.35e-04 | GPU: 0.87GB
437
+ 2025-11-09 22:15:16,779 - INFO - Step 10950/19128 | Loss: 1.3826 | LR: 1.34e-04 | GPU: 0.87GB
438
+ 2025-11-09 22:16:03,476 - INFO - Step 11000/19128 | Loss: 1.3699 | LR: 1.33e-04 | GPU: 0.87GB
439
+ 2025-11-09 22:17:38,447 - INFO - Step 11000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
440
+ 2025-11-09 22:17:38,447 - INFO - ============================================================
441
+ 2025-11-09 22:17:38,447 - INFO - πŸ“Š EVALUATION at step 11000
442
+ 2025-11-09 22:17:38,447 - INFO - eval_loss: 1.1889
443
+ 2025-11-09 22:17:38,447 - INFO - eval_runtime: 94.9678
444
+ 2025-11-09 22:17:38,447 - INFO - eval_samples_per_second: 119.3250
445
+ 2025-11-09 22:17:38,447 - INFO - eval_steps_per_second: 7.4660
446
+ 2025-11-09 22:17:38,447 - INFO - epoch: 1.7300
447
+ 2025-11-09 22:17:38,447 - INFO - gpu_memory_gb: 0.8662
448
+ 2025-11-09 22:17:38,447 - INFO - system_memory_percent: 6.9000
449
+ 2025-11-09 22:17:38,447 - INFO - ============================================================
450
+ 2025-11-09 22:17:38,673 - INFO - ============================================================
451
+ 2025-11-09 22:17:38,673 - INFO - πŸ’Ύ Checkpoint 11: step 11000
452
+ 2025-11-09 22:17:38,673 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
453
+ 2025-11-09 22:17:38,673 - INFO - πŸ“€ Uploading checkpoint-11000 to Hub...
454
+ 2025-11-09 22:17:43,042 - INFO - βœ… Checkpoint 11000 uploaded!
455
+ 2025-11-09 22:17:43,042 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
456
+ 2025-11-09 22:17:43,043 - INFO - ============================================================
457
+ 2025-11-09 22:18:29,752 - INFO - Step 11050/19128 | Loss: 1.3872 | LR: 1.32e-04 | GPU: 0.87GB
458
+ 2025-11-09 22:19:16,312 - INFO - Step 11100/19128 | Loss: 1.3747 | LR: 1.31e-04 | GPU: 0.87GB
459
+ 2025-11-09 22:20:02,785 - INFO - Step 11150/19128 | Loss: 1.3827 | LR: 1.31e-04 | GPU: 0.87GB
460
+ 2025-11-09 22:20:49,333 - INFO - Step 11200/19128 | Loss: 1.4229 | LR: 1.30e-04 | GPU: 0.87GB
461
+ 2025-11-09 22:21:35,883 - INFO - Step 11250/19128 | Loss: 1.3915 | LR: 1.29e-04 | GPU: 0.87GB
462
+ 2025-11-09 22:22:22,469 - INFO - Step 11300/19128 | Loss: 1.3880 | LR: 1.28e-04 | GPU: 0.87GB
463
+ 2025-11-09 22:23:08,998 - INFO - Step 11350/19128 | Loss: 1.3952 | LR: 1.27e-04 | GPU: 0.87GB
464
+ 2025-11-09 22:23:55,526 - INFO - Step 11400/19128 | Loss: 1.3712 | LR: 1.27e-04 | GPU: 0.87GB
465
+ 2025-11-09 22:24:42,016 - INFO - Step 11450/19128 | Loss: 1.3949 | LR: 1.26e-04 | GPU: 0.87GB
466
+ 2025-11-09 22:25:28,546 - INFO - Step 11500/19128 | Loss: 1.3744 | LR: 1.25e-04 | GPU: 0.87GB
467
+ 2025-11-09 22:26:15,071 - INFO - Step 11550/19128 | Loss: 1.3609 | LR: 1.24e-04 | GPU: 0.87GB
468
+ 2025-11-09 22:27:01,631 - INFO - Step 11600/19128 | Loss: 1.3655 | LR: 1.23e-04 | GPU: 0.87GB
469
+ 2025-11-09 22:27:48,189 - INFO - Step 11650/19128 | Loss: 1.3702 | LR: 1.23e-04 | GPU: 0.87GB
470
+ 2025-11-09 22:28:34,787 - INFO - Step 11700/19128 | Loss: 1.3929 | LR: 1.22e-04 | GPU: 0.87GB
471
+ 2025-11-09 22:29:21,408 - INFO - Step 11750/19128 | Loss: 1.3611 | LR: 1.21e-04 | GPU: 0.87GB
472
+ 2025-11-09 22:30:08,007 - INFO - Step 11800/19128 | Loss: 1.3700 | LR: 1.20e-04 | GPU: 0.87GB
473
+ 2025-11-09 22:30:54,691 - INFO - Step 11850/19128 | Loss: 1.4018 | LR: 1.19e-04 | GPU: 0.87GB
474
+ 2025-11-09 22:31:41,246 - INFO - Step 11900/19128 | Loss: 1.3757 | LR: 1.19e-04 | GPU: 0.87GB
475
+ 2025-11-09 22:32:27,708 - INFO - Step 11950/19128 | Loss: 1.3949 | LR: 1.18e-04 | GPU: 0.87GB
476
+ 2025-11-09 22:33:14,224 - INFO - Step 12000/19128 | Loss: 1.3671 | LR: 1.17e-04 | GPU: 0.87GB
477
+ 2025-11-09 22:34:48,964 - INFO - Step 12000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
478
+ 2025-11-09 22:34:48,965 - INFO - ============================================================
479
+ 2025-11-09 22:34:48,965 - INFO - πŸ“Š EVALUATION at step 12000
480
+ 2025-11-09 22:34:48,965 - INFO - eval_loss: 1.1849
481
+ 2025-11-09 22:34:48,965 - INFO - eval_runtime: 94.7376
482
+ 2025-11-09 22:34:48,965 - INFO - eval_samples_per_second: 119.6150
483
+ 2025-11-09 22:34:48,965 - INFO - eval_steps_per_second: 7.4840
484
+ 2025-11-09 22:34:48,965 - INFO - epoch: 1.8800
485
+ 2025-11-09 22:34:48,965 - INFO - gpu_memory_gb: 0.8662
486
+ 2025-11-09 22:34:48,965 - INFO - system_memory_percent: 6.9000
487
+ 2025-11-09 22:34:48,965 - INFO - ============================================================
488
+ 2025-11-09 22:34:49,177 - INFO - ============================================================
489
+ 2025-11-09 22:34:49,177 - INFO - πŸ’Ύ Checkpoint 12: step 12000
490
+ 2025-11-09 22:34:49,177 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
491
+ 2025-11-09 22:34:49,178 - INFO - πŸ“€ Uploading checkpoint-12000 to Hub...
492
+ 2025-11-09 22:34:53,733 - INFO - βœ… Checkpoint 12000 uploaded!
493
+ 2025-11-09 22:34:53,734 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
494
+ 2025-11-09 22:34:53,734 - INFO - ============================================================
495
+ 2025-11-09 22:35:40,414 - INFO - Step 12050/19128 | Loss: 1.3686 | LR: 1.16e-04 | GPU: 0.87GB
496
+ 2025-11-09 22:36:26,905 - INFO - Step 12100/19128 | Loss: 1.3721 | LR: 1.15e-04 | GPU: 0.87GB
497
+ 2025-11-09 22:37:13,356 - INFO - Step 12150/19128 | Loss: 1.3638 | LR: 1.15e-04 | GPU: 0.87GB
498
+ 2025-11-09 22:37:59,819 - INFO - Step 12200/19128 | Loss: 1.3750 | LR: 1.14e-04 | GPU: 0.87GB
499
+ 2025-11-09 22:38:46,442 - INFO - Step 12250/19128 | Loss: 1.3774 | LR: 1.13e-04 | GPU: 0.87GB
500
+ 2025-11-09 22:39:33,109 - INFO - Step 12300/19128 | Loss: 1.3897 | LR: 1.12e-04 | GPU: 0.87GB
501
+ 2025-11-09 22:40:19,769 - INFO - Step 12350/19128 | Loss: 1.3690 | LR: 1.11e-04 | GPU: 0.87GB
502
+ 2025-11-09 22:41:06,229 - INFO - Step 12400/19128 | Loss: 1.8621 | LR: 1.11e-04 | GPU: 0.87GB
503
+ 2025-11-09 22:41:52,297 - INFO - Step 12450/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
504
+ 2025-11-09 22:42:38,429 - INFO - Step 12500/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
505
+ 2025-11-09 22:43:24,413 - INFO - Step 12550/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
506
+ 2025-11-09 22:44:10,502 - INFO - Step 12600/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
507
+ 2025-11-09 22:44:56,519 - INFO - Step 12650/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
508
+ 2025-11-09 22:45:42,600 - INFO - Step 12700/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
509
+ 2025-11-09 22:46:28,739 - INFO - Step 12750/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
510
+ 2025-11-09 22:47:15,174 - INFO - Step 12800/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
511
+ 2025-11-09 22:48:01,279 - INFO - Step 12850/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
512
+ 2025-11-09 22:48:47,320 - INFO - Step 12900/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
513
+ 2025-11-09 22:49:33,465 - INFO - Step 12950/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
514
+ 2025-11-09 22:50:19,542 - INFO - Step 13000/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
515
+ 2025-11-09 22:51:53,018 - INFO - Step 13000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
516
+ 2025-11-09 22:51:53,018 - INFO - ============================================================
517
+ 2025-11-09 22:51:53,018 - INFO - πŸ“Š EVALUATION at step 13000
518
+ 2025-11-09 22:51:53,018 - INFO - eval_loss: nan
519
+ 2025-11-09 22:51:53,018 - INFO - eval_runtime: 93.4731
520
+ 2025-11-09 22:51:53,018 - INFO - eval_samples_per_second: 121.2330
521
+ 2025-11-09 22:51:53,018 - INFO - eval_steps_per_second: 7.5850
522
+ 2025-11-09 22:51:53,018 - INFO - epoch: 2.0400
523
+ 2025-11-09 22:51:53,018 - INFO - gpu_memory_gb: 0.8662
524
+ 2025-11-09 22:51:53,018 - INFO - system_memory_percent: 7.0000
525
+ 2025-11-09 22:51:53,018 - INFO - ============================================================
526
+ 2025-11-09 22:51:53,241 - INFO - ============================================================
527
+ 2025-11-09 22:51:53,241 - INFO - πŸ’Ύ Checkpoint 13: step 13000
528
+ 2025-11-09 22:51:53,241 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
529
+ 2025-11-09 22:51:53,242 - INFO - πŸ“€ Uploading checkpoint-13000 to Hub...
530
+ 2025-11-09 22:51:57,319 - INFO - βœ… Checkpoint 13000 uploaded!
531
+ 2025-11-09 22:51:57,319 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
532
+ 2025-11-09 22:51:57,319 - INFO - ============================================================
533
+ 2025-11-09 22:52:43,577 - INFO - Step 13050/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
534
+ 2025-11-09 22:53:29,698 - INFO - Step 13100/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
535
+ 2025-11-09 22:54:15,799 - INFO - Step 13150/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
536
+ 2025-11-09 22:55:01,894 - INFO - Step 13200/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
537
+ 2025-11-09 22:55:47,892 - INFO - Step 13250/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
538
+ 2025-11-09 22:56:33,900 - INFO - Step 13300/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
539
+ 2025-11-09 22:57:19,898 - INFO - Step 13350/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
540
+ 2025-11-09 22:58:05,896 - INFO - Step 13400/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
541
+ 2025-11-09 22:58:51,876 - INFO - Step 13450/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
542
+ 2025-11-09 22:59:37,902 - INFO - Step 13500/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
543
+ 2025-11-09 23:00:23,910 - INFO - Step 13550/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
544
+ 2025-11-09 23:01:09,930 - INFO - Step 13600/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
545
+ 2025-11-09 23:01:56,001 - INFO - Step 13650/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
546
+ 2025-11-09 23:02:41,996 - INFO - Step 13700/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
547
+ 2025-11-09 23:03:27,968 - INFO - Step 13750/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
548
+ 2025-11-09 23:04:14,021 - INFO - Step 13800/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
549
+ 2025-11-09 23:05:00,113 - INFO - Step 13850/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
550
+ 2025-11-09 23:05:46,309 - INFO - Step 13900/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
551
+ 2025-11-09 23:06:32,328 - INFO - Step 13950/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
552
+ 2025-11-09 23:07:18,342 - INFO - Step 14000/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
553
+ 2025-11-09 23:08:51,807 - INFO - Step 14000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
554
+ 2025-11-09 23:08:51,807 - INFO - ============================================================
555
+ 2025-11-09 23:08:51,808 - INFO - πŸ“Š EVALUATION at step 14000
556
+ 2025-11-09 23:08:51,808 - INFO - eval_loss: nan
557
+ 2025-11-09 23:08:51,808 - INFO - eval_runtime: 93.4623
558
+ 2025-11-09 23:08:51,808 - INFO - eval_samples_per_second: 121.2470
559
+ 2025-11-09 23:08:51,808 - INFO - eval_steps_per_second: 7.5860
560
+ 2025-11-09 23:08:51,808 - INFO - epoch: 2.2000
561
+ 2025-11-09 23:08:51,808 - INFO - gpu_memory_gb: 0.8662
562
+ 2025-11-09 23:08:51,808 - INFO - system_memory_percent: 4.5000
563
+ 2025-11-09 23:08:51,808 - INFO - ============================================================
564
+ 2025-11-09 23:08:52,044 - INFO - ============================================================
565
+ 2025-11-09 23:08:52,044 - INFO - πŸ’Ύ Checkpoint 14: step 14000
566
+ 2025-11-09 23:08:52,044 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
567
+ 2025-11-09 23:08:52,044 - INFO - πŸ“€ Uploading checkpoint-14000 to Hub...
568
+ 2025-11-09 23:08:55,662 - INFO - βœ… Checkpoint 14000 uploaded!
569
+ 2025-11-09 23:08:55,662 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
570
+ 2025-11-09 23:08:55,662 - INFO - ============================================================
571
+ 2025-11-09 23:09:41,850 - INFO - Step 14050/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
572
+ 2025-11-09 23:10:27,930 - INFO - Step 14100/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
573
+ 2025-11-09 23:11:13,930 - INFO - Step 14150/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
574
+ 2025-11-09 23:11:59,909 - INFO - Step 14200/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
575
+ 2025-11-09 23:12:45,878 - INFO - Step 14250/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
576
+ 2025-11-09 23:13:31,857 - INFO - Step 14300/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
577
+ 2025-11-09 23:14:17,824 - INFO - Step 14350/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
578
+ 2025-11-09 23:15:03,920 - INFO - Step 14400/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
579
+ 2025-11-09 23:15:49,932 - INFO - Step 14450/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
580
+ 2025-11-09 23:16:35,935 - INFO - Step 14500/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
581
+ 2025-11-09 23:17:21,918 - INFO - Step 14550/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
582
+ 2025-11-09 23:18:07,902 - INFO - Step 14600/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
583
+ 2025-11-09 23:18:53,866 - INFO - Step 14650/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
584
+ 2025-11-09 23:19:39,854 - INFO - Step 14700/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
585
+ 2025-11-09 23:20:25,831 - INFO - Step 14750/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
586
+ 2025-11-09 23:21:11,921 - INFO - Step 14800/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
587
+ 2025-11-09 23:21:58,105 - INFO - Step 14850/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
588
+ 2025-11-09 23:22:44,174 - INFO - Step 14900/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
589
+ 2025-11-09 23:23:30,203 - INFO - Step 14950/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
590
+ 2025-11-09 23:24:16,303 - INFO - Step 15000/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
591
+ 2025-11-09 23:25:49,734 - INFO - Step 15000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
592
+ 2025-11-09 23:25:49,735 - INFO - ============================================================
593
+ 2025-11-09 23:25:49,735 - INFO - πŸ“Š EVALUATION at step 15000
594
+ 2025-11-09 23:25:49,735 - INFO - eval_loss: nan
595
+ 2025-11-09 23:25:49,735 - INFO - eval_runtime: 93.4289
596
+ 2025-11-09 23:25:49,735 - INFO - eval_samples_per_second: 121.2900
597
+ 2025-11-09 23:25:49,735 - INFO - eval_steps_per_second: 7.5890
598
+ 2025-11-09 23:25:49,735 - INFO - epoch: 2.3500
599
+ 2025-11-09 23:25:49,735 - INFO - gpu_memory_gb: 0.8662
600
+ 2025-11-09 23:25:49,735 - INFO - system_memory_percent: 4.5000
601
+ 2025-11-09 23:25:49,735 - INFO - ============================================================
602
+ 2025-11-09 23:25:49,944 - INFO - ============================================================
603
+ 2025-11-09 23:25:49,944 - INFO - πŸ’Ύ Checkpoint 15: step 15000
604
+ 2025-11-09 23:25:49,945 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
605
+ 2025-11-09 23:25:49,945 - INFO - πŸ“€ Uploading checkpoint-15000 to Hub...
606
+ 2025-11-09 23:25:53,264 - INFO - βœ… Checkpoint 15000 uploaded!
607
+ 2025-11-09 23:25:53,264 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
608
+ 2025-11-09 23:25:53,264 - INFO - ============================================================
609
+ 2025-11-09 23:26:39,492 - INFO - Step 15050/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
610
+ 2025-11-09 23:27:25,541 - INFO - Step 15100/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
611
+ 2025-11-09 23:28:11,500 - INFO - Step 15150/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
612
+ 2025-11-09 23:28:57,471 - INFO - Step 15200/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
613
+ 2025-11-09 23:29:43,437 - INFO - Step 15250/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
614
+ 2025-11-09 23:30:29,411 - INFO - Step 15300/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
615
+ 2025-11-09 23:31:15,372 - INFO - Step 15350/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
616
+ 2025-11-09 23:32:01,346 - INFO - Step 15400/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
617
+ 2025-11-09 23:32:47,304 - INFO - Step 15450/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
618
+ 2025-11-09 23:33:33,265 - INFO - Step 15500/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
619
+ 2025-11-09 23:34:19,238 - INFO - Step 15550/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
620
+ 2025-11-09 23:35:05,235 - INFO - Step 15600/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
621
+ 2025-11-09 23:35:51,212 - INFO - Step 15650/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
622
+ 2025-11-09 23:36:37,214 - INFO - Step 15700/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
623
+ 2025-11-09 23:37:23,193 - INFO - Step 15750/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
624
+ 2025-11-09 23:38:09,169 - INFO - Step 15800/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
625
+ 2025-11-09 23:38:55,138 - INFO - Step 15850/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
626
+ 2025-11-09 23:39:41,206 - INFO - Step 15900/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
627
+ 2025-11-09 23:40:27,185 - INFO - Step 15950/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
628
+ 2025-11-09 23:41:13,195 - INFO - Step 16000/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:42:46,638 - INFO - Step 16000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
+ 2025-11-09 23:42:46,639 - INFO - ============================================================
+ 2025-11-09 23:42:46,639 - INFO - πŸ“Š EVALUATION at step 16000
+ 2025-11-09 23:42:46,639 - INFO - eval_loss: nan
+ 2025-11-09 23:42:46,639 - INFO - eval_runtime: 93.4410
+ 2025-11-09 23:42:46,639 - INFO - eval_samples_per_second: 121.2740
+ 2025-11-09 23:42:46,639 - INFO - eval_steps_per_second: 7.5880
+ 2025-11-09 23:42:46,639 - INFO - epoch: 2.5100
+ 2025-11-09 23:42:46,639 - INFO - gpu_memory_gb: 0.8662
+ 2025-11-09 23:42:46,639 - INFO - system_memory_percent: 4.5000
+ 2025-11-09 23:42:46,639 - INFO - ============================================================
+ 2025-11-09 23:42:46,856 - INFO - ============================================================
+ 2025-11-09 23:42:46,856 - INFO - πŸ’Ύ Checkpoint 16: step 16000
+ 2025-11-09 23:42:46,856 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
+ 2025-11-09 23:42:46,857 - INFO - πŸ“€ Uploading checkpoint-16000 to Hub...
+ 2025-11-09 23:42:50,362 - INFO - βœ… Checkpoint 16000 uploaded!
+ 2025-11-09 23:42:50,362 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
+ 2025-11-09 23:42:50,362 - INFO - ============================================================
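Editor's note: the recurring "Step N/19128 | Loss | LR | GPU" lines have the shape of a custom logging hook. A minimal sketch of how such a line can be emitted from a transformers TrainerCallback follows; ProgressLineCallback is a name I chose, and this illustrates the format rather than the author's implementation.

```python
import logging

import torch
from transformers import TrainerCallback

logger = logging.getLogger(__name__)

class ProgressLineCallback(TrainerCallback):
    """Illustrative callback that mirrors the 'Step | Loss | LR | GPU' lines above."""

    def on_log(self, args, state, control, logs=None, **kwargs):
        if not logs:
            return
        gpu_gb = torch.cuda.memory_allocated() / 1024**3 if torch.cuda.is_available() else 0.0
        logger.info(
            "Step %d/%d | Loss: %.4f | LR: %.2e | GPU: %.2fGB",
            state.global_step,
            state.max_steps,
            logs.get("loss", 0.0),
            logs.get("learning_rate", 0.0),
            gpu_gb,
        )
```

A fallback such as logs.get("learning_rate", 0.0) would also account for the duplicate step lines around each evaluation reading "LR: 0.00e+00": evaluation logs carry no learning-rate entry, so the default gets printed.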
+ 2025-11-09 23:43:36,453 - INFO - Step 16050/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:44:22,557 - INFO - Step 16100/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:45:08,631 - INFO - Step 16150/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:45:54,752 - INFO - Step 16200/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:46:40,828 - INFO - Step 16250/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:47:26,921 - INFO - Step 16300/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:48:13,006 - INFO - Step 16350/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:48:59,108 - INFO - Step 16400/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:49:45,197 - INFO - Step 16450/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:50:31,295 - INFO - Step 16500/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:51:17,390 - INFO - Step 16550/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:52:03,493 - INFO - Step 16600/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:52:49,580 - INFO - Step 16650/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:53:35,582 - INFO - Step 16700/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:54:21,649 - INFO - Step 16750/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:55:07,722 - INFO - Step 16800/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:55:53,787 - INFO - Step 16850/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:56:39,973 - INFO - Step 16900/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:57:25,950 - INFO - Step 16950/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:58:11,920 - INFO - Step 17000/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-09 23:59:45,343 - INFO - Step 17000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
+ 2025-11-09 23:59:45,343 - INFO - ============================================================
+ 2025-11-09 23:59:45,344 - INFO - πŸ“Š EVALUATION at step 17000
+ 2025-11-09 23:59:45,344 - INFO - eval_loss: nan
+ 2025-11-09 23:59:45,344 - INFO - eval_runtime: 93.4207
+ 2025-11-09 23:59:45,344 - INFO - eval_samples_per_second: 121.3010
+ 2025-11-09 23:59:45,344 - INFO - eval_steps_per_second: 7.5890
+ 2025-11-09 23:59:45,344 - INFO - epoch: 2.6700
+ 2025-11-09 23:59:45,344 - INFO - gpu_memory_gb: 0.8662
+ 2025-11-09 23:59:45,344 - INFO - system_memory_percent: 4.5000
+ 2025-11-09 23:59:45,344 - INFO - ============================================================
+ 2025-11-09 23:59:45,569 - INFO - ============================================================
+ 2025-11-09 23:59:45,569 - INFO - πŸ’Ύ Checkpoint 17: step 17000
+ 2025-11-09 23:59:45,570 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
+ 2025-11-09 23:59:45,570 - INFO - πŸ“€ Uploading checkpoint-17000 to Hub...
+ 2025-11-09 23:59:48,742 - INFO - βœ… Checkpoint 17000 uploaded!
+ 2025-11-09 23:59:48,742 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
+ 2025-11-09 23:59:48,743 - INFO - ============================================================
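Editor's note: every evaluation so far has reported eval_loss: nan, so later checkpoints are unlikely to be better than earlier ones. If the run uses the standard transformers Trainer (an assumption; the log does not show the training code), one defensive option is a callback that halts training as soon as a non-finite eval loss appears:

```python
import math

from transformers import TrainerCallback

class StopOnNanEvalLoss(TrainerCallback):
    """Stop the run once evaluation produces a non-finite loss."""

    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        loss = (metrics or {}).get("eval_loss")
        if loss is not None and not math.isfinite(loss):
            print(f"eval_loss={loss} at step {state.global_step}; stopping training.")
            control.should_training_stop = True
        return control

# Hypothetical usage before trainer.train():
#   trainer.add_callback(StopOnNanEvalLoss())
```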
+ 2025-11-10 00:00:34,945 - INFO - Step 17050/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:01:20,924 - INFO - Step 17100/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:02:06,989 - INFO - Step 17150/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:02:53,095 - INFO - Step 17200/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:03:39,191 - INFO - Step 17250/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:04:25,278 - INFO - Step 17300/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:05:11,360 - INFO - Step 17350/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:05:57,350 - INFO - Step 17400/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:06:43,421 - INFO - Step 17450/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:07:29,522 - INFO - Step 17500/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:08:15,601 - INFO - Step 17550/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:09:01,680 - INFO - Step 17600/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:09:47,756 - INFO - Step 17650/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:10:33,763 - INFO - Step 17700/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:11:19,740 - INFO - Step 17750/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:12:05,720 - INFO - Step 17800/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:12:51,689 - INFO - Step 17850/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:13:37,765 - INFO - Step 17900/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:14:23,734 - INFO - Step 17950/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:15:09,741 - INFO - Step 18000/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:16:43,137 - INFO - Step 18000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
+ 2025-11-10 00:16:43,137 - INFO - ============================================================
+ 2025-11-10 00:16:43,137 - INFO - πŸ“Š EVALUATION at step 18000
+ 2025-11-10 00:16:43,137 - INFO - eval_loss: nan
+ 2025-11-10 00:16:43,137 - INFO - eval_runtime: 93.3927
+ 2025-11-10 00:16:43,137 - INFO - eval_samples_per_second: 121.3370
+ 2025-11-10 00:16:43,137 - INFO - eval_steps_per_second: 7.5920
+ 2025-11-10 00:16:43,137 - INFO - epoch: 2.8200
+ 2025-11-10 00:16:43,137 - INFO - gpu_memory_gb: 0.8662
+ 2025-11-10 00:16:43,137 - INFO - system_memory_percent: 4.5000
+ 2025-11-10 00:16:43,137 - INFO - ============================================================
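Editor's note: the gpu_memory_gb and system_memory_percent fields above, and the allocated/reserved pair printed in each checkpoint block, correspond to standard torch and psutil readings. A minimal sketch of how such figures are typically collected (memory_snapshot is a name I chose, not the script's):

```python
import psutil
import torch

def memory_snapshot() -> dict:
    """Return GPU/system memory figures analogous to the ones logged above."""
    gpu_alloc_gb = torch.cuda.memory_allocated() / 1024**3 if torch.cuda.is_available() else 0.0
    gpu_reserved_gb = torch.cuda.memory_reserved() / 1024**3 if torch.cuda.is_available() else 0.0
    return {
        "gpu_memory_gb": round(gpu_alloc_gb, 4),                     # e.g. 0.8662
        "gpu_reserved_gb": round(gpu_reserved_gb, 2),                # e.g. 1.15
        "system_memory_percent": psutil.virtual_memory().percent,    # e.g. 4.5
    }

print(memory_snapshot())
```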
+ 2025-11-10 00:16:43,342 - INFO - ============================================================
+ 2025-11-10 00:16:43,342 - INFO - πŸ’Ύ Checkpoint 18: step 18000
+ 2025-11-10 00:16:43,342 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
+ 2025-11-10 00:16:43,343 - INFO - πŸ“€ Uploading checkpoint-18000 to Hub...
+ 2025-11-10 00:16:46,820 - INFO - βœ… Checkpoint 18000 uploaded!
+ 2025-11-10 00:16:46,821 - INFO - πŸ“‚ https://huggingface.co/ranjan56cse/t5-base-xsum-lora
+ 2025-11-10 00:16:46,821 - INFO - ============================================================
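Editor's note: once a checkpoint like the one above is on the Hub, it can be pulled back for a quick smoke test. The sketch below assumes the adapter sits on top of t5-base (as the repository name suggests), that adapter files are available at the repository root, and that inputs use the usual "summarize:" T5 prefix; none of that is confirmed by this log.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

base_id = "google-t5/t5-base"                  # assumed base model for the LoRA adapter
adapter_id = "ranjan56cse/t5-base-xsum-lora"   # repository referenced throughout this log

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # assumes adapter files at the repo root
model.eval()

article = "The council said the new recycling scheme would begin next month across the county."
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Given the flat 0.0000 training loss and nan eval loss recorded above, summaries generated from these checkpoints should be inspected before the adapter is relied on.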
+ 2025-11-10 00:17:33,142 - INFO - Step 18050/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:18:19,206 - INFO - Step 18100/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:19:05,169 - INFO - Step 18150/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:19:51,150 - INFO - Step 18200/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:20:37,103 - INFO - Step 18250/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:21:23,068 - INFO - Step 18300/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:22:09,041 - INFO - Step 18350/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:22:55,013 - INFO - Step 18400/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:23:40,953 - INFO - Step 18450/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:24:26,918 - INFO - Step 18500/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:25:12,940 - INFO - Step 18550/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:25:59,016 - INFO - Step 18600/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:26:45,084 - INFO - Step 18650/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:27:31,176 - INFO - Step 18700/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:28:17,268 - INFO - Step 18750/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:29:03,381 - INFO - Step 18800/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:29:49,464 - INFO - Step 18850/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:30:35,581 - INFO - Step 18900/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:31:21,554 - INFO - Step 18950/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:32:07,529 - INFO - Step 19000/19128 | Loss: 0.0000 | LR: 1.11e-04 | GPU: 0.87GB
+ 2025-11-10 00:33:40,915 - INFO - Step 19000/19128 | Loss: 0.0000 | LR: 0.00e+00 | GPU: 0.87GB
+ 2025-11-10 00:33:40,916 - INFO - ============================================================
+ 2025-11-10 00:33:40,916 - INFO - πŸ“Š EVALUATION at step 19000
+ 2025-11-10 00:33:40,916 - INFO - eval_loss: nan
+ 2025-11-10 00:33:40,916 - INFO - eval_runtime: 93.3840
+ 2025-11-10 00:33:40,916 - INFO - eval_samples_per_second: 121.3480
+ 2025-11-10 00:33:40,916 - INFO - eval_steps_per_second: 7.5920
+ 2025-11-10 00:33:40,916 - INFO - epoch: 2.9800
+ 2025-11-10 00:33:40,916 - INFO - gpu_memory_gb: 0.8662
+ 2025-11-10 00:33:40,916 - INFO - system_memory_percent: 4.5000
+ 2025-11-10 00:33:40,916 - INFO - ============================================================
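Editor's note: as a consistency check, the schedule numbers in this final evaluation line up: an epoch counter of 2.98 at step 19000 of 19128 implies roughly 6,376 optimizer steps per epoch and a run of about three epochs. A short sketch of that arithmetic, using only values taken from the lines above (variable names are mine):

```python
total_steps = 19128     # planned optimizer steps, from the progress lines above
step_at_eval = 19000    # step of this final evaluation
epoch_at_eval = 2.98    # epoch counter reported at that step

steps_per_epoch = step_at_eval / epoch_at_eval   # ~6376 optimizer steps per epoch
num_epochs = total_steps / steps_per_epoch       # ~3.0 epochs in total
print(round(steps_per_epoch), round(num_epochs, 2))  # 6376 3.0
```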
+ 2025-11-10 00:33:41,136 - INFO - ============================================================
+ 2025-11-10 00:33:41,136 - INFO - πŸ’Ύ Checkpoint 19: step 19000
+ 2025-11-10 00:33:41,137 - INFO - GPU: 0.87GB allocated, 1.15GB reserved
+ 2025-11-10 00:33:41,137 - INFO - πŸ“€ Uploading checkpoint-19000 to Hub...