bhatta1 committed · verified · Commit 16069a4 · Parent(s): c8535c6

Update README.md

Files changed (1): README.md +3 -1
README.md CHANGED
@@ -60,8 +60,10 @@ We evaluated our ablation models using lm-evaluation-harness on two categories

Since ablations are performed by training ‘small’ models (1.4B parameter models) for a ‘few billion’ tokens (typically 35B tokens), it is important to identify benchmarks that provide good signal at this relatively small scale. Similar to FineWeb, we used the following criteria for selecting the 11 High-Signal/Early-Signal tasks: accuracy above random guessing, accuracy monotonically increasing over training epochs, and small variance across runs. These are shown in Fig 3 and cover the Commonsense Reasoning, Reading Comprehension, World Knowledge, and Language Understanding task categories. We used both the zero-shot and few-shot variations of these tasks.

-
+

+ ![HighSignal.png](HighSignal.png)
+

Figure 3: High Signal Tasks — provide good signal at relatively small scale (of 1.4B models trained on 35B to 100B tokens)

The High-Signal tasks were used to analyze individual ingredients and possible recipe combinations via ablations. After we narrowed a few candidate recipes using these signals, we used the extended set of benchmarks to evaluate the model’s ability to generalize.
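
To make the selection criteria above concrete, here is a minimal sketch of the three filters (accuracy above random guessing, monotonic improvement over training, small variance across runs) applied to per-checkpoint accuracy curves. The data layout, the threshold value, and the helper name `is_high_signal` are illustrative assumptions, not the exact procedure used for these ablations; the accuracy numbers themselves would come from lm-evaluation-harness runs at successive checkpoints.

```python
# Illustrative sketch of a High-Signal task filter; thresholds and data
# layout are assumptions, not the exact values used in these ablations.
from statistics import stdev

def is_high_signal(acc_by_run, random_baseline, var_threshold=0.01):
    """acc_by_run: one accuracy curve per run, each curve listing accuracy
    at successive training checkpoints (e.g. every few billion tokens)."""
    final_accs = [run[-1] for run in acc_by_run]

    # Criterion 1: accuracy clearly above random guessing.
    above_random = all(acc > random_baseline for acc in final_accs)

    # Criterion 2: accuracy monotonically increasing over training.
    monotonic = all(
        later >= earlier
        for run in acc_by_run
        for earlier, later in zip(run, run[1:])
    )

    # Criterion 3: small variance across runs (needs at least two runs).
    low_variance = len(final_accs) < 2 or stdev(final_accs) < var_threshold

    return above_random and monotonic and low_variance

# Two runs of a 4-way multiple-choice task (25% random baseline),
# evaluated at three checkpoints during training:
runs = [[0.31, 0.34, 0.38], [0.30, 0.35, 0.37]]
print(is_high_signal(runs, random_baseline=0.25))  # True
```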