**What is it?**

Data quantity and quality play a vital role in determining the performance of Large Language Models (LLMs). High-quality data, in particular, can significantly boost an LLM’s ability to generalize across a wide range of downstream tasks. To cater to the requirements of the Granite models, we set out to produce a roughly 10T-token dataset, named GneissWeb, that is of higher quality than other available datasets of similar size. Gneiss, pronounced “nice”, is a strong and durable rock used in building and construction.

The GneissWeb recipe consists of sharded exact substring deduplication and a judiciously constructed ensemble of quality filters. We present the key evaluations that guided our design choices and provide filtering thresholds that can be tuned to match the token and quality needs of Stage-1 (early pre-training) or Stage-2 (annealing) datasets.

Our evaluations demonstrate that GneissWeb outperforms state-of-the-art large open datasets (5T+ tokens). Specifically, ablation models trained on GneissWeb outperform those trained on FineWeb.V1.1 by 2.14 percentage points in terms of the average score computed on a set of 11 benchmarks (both zero-shot and few-shot) commonly used to evaluate pre-training datasets. When the evaluation set is extended to 20 benchmarks (both zero-shot and few-shot), ablation models trained on GneissWeb outperform those trained on FineWeb.V1.1 by 1.49 percentage points. In the future, we plan to release a detailed technical paper with fine-grained details and to open-source the recipe for creating the GneissWeb dataset via the IBM Data Prep Kit.

**The GneissWeb Recipe in a Nutshell : Building on Top of FineWeb**

Hugging Face introduced FineWeb V1.1, a large-scale dataset for LLM pre-training consisting of 15 trillion tokens (44TB of disk space). FineWeb is derived from 96 Common Crawl snapshots, focusing on English text by applying a series of processing steps, mainly language classification, deduplication, and heuristic rule-based quality filters. Models trained on FineWeb have been shown to outperform those trained on other publicly available datasets, such as C4, RefinedWeb, Dolma, RedPajama-V2, SlimPajama, and The Pile.

We started with the goal of distilling 10T+ high-quality tokens from FineWeb V1.1, so as to retain a sufficiently large number of quality tokens suitable for Stage-1 pre-training. Unlike the FineWeb.Edu families, which rely on a single quality annotator and perform aggressive filtering, we developed a multi-faceted ensemble of quality annotators to enable fine-grained quality filtering. This allowed us to achieve a finer trade-off between the quality and quantity of the tokens retained. While the GneissWeb recipe is focused on obtaining 10T+ high-quality tokens suitable for Stage-1 pre-training, it can also be adapted by tuning the filtering parameters to produce smaller, higher-quality datasets fit for Stage-2 (annealing) training.

&nbsp;&nbsp;**An Overview of the GneissWeb Recipe**

The GneissWeb dataset was obtained by applying the following processing steps to FineWeb:

     - Exact substring deduplication at line level

     - Custom-built fastText quality filter

     - Custom-built fastText category classifier

     - Custom-built category-aware readability score quality filter

     - Custom-built category-aware extreme_tokenized quality filter

These were applied in the order shown in Fig. 1.

<img src="GneissWeb3.png" alt="GneissWeb3" style="width:1000px;"/>

**Figure 1:** GneissWeb recipe

The net impact was that the original dataset of 15T tokens was filtered down to approximately 10T tokens. In the subsequent sections, we describe the overall performance obtained using GneissWeb compared to other baselines, and then dive deeper into each of these processing steps and the individual impact each has, through a series of ablations.
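
To make the ordering concrete, here is a minimal, illustrative sketch of the annotate-then-filter flow. The callables (`dedup`, the annotators, the ensemble rule) are placeholders standing in for the real components, not the actual IBM Data Prep Kit transforms.

```python
from typing import Callable, Iterable, List

def gneissweb_pipeline(
    docs: Iterable[str],
    dedup: Callable[[Iterable[str]], List[str]],
    annotators: List[Callable[[str], bool]],
    ensemble_rule: Callable[[List[bool]], bool],
) -> List[str]:
    """Apply the steps in the order of Fig. 1: exact substring deduplication first,
    then per-document quality/category annotations, then one ensemble filtering
    decision per document."""
    kept = []
    for doc in dedup(docs):
        votes = [annotate(doc) for annotate in annotators]  # e.g. fastText quality,
        if ensemble_rule(votes):                            # readability, extreme_tokenized
            kept.append(doc)
    return kept

# Toy usage with trivial stand-ins for the real components:
docs = ["a short doc", "another doc", "another doc"]
print(gneissweb_pipeline(
    docs,
    dedup=lambda ds: list(dict.fromkeys(ds)),   # placeholder deduplication
    annotators=[lambda d: len(d.split()) > 1],  # placeholder quality annotator
    ensemble_rule=all,
))
```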

&nbsp;&nbsp;**Evaluation Strategy**

To compare GneissWeb against the baselines, we trained decoder models with 1.4B, 3B, and 7B parameters based on the Llama architecture. These were trained on 35B tokens (roughly Chinchilla-optimal) to obtain signals and select hyperparameters for each processing step. We further trained ablation models on 100B (roughly 3x Chinchilla-optimal) as well as 350B tokens to validate the performance of each processing step. The data was tokenized using the StarCoder tokenizer, and training was done with a sequence length of 8192.
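
For reference, the ablation setup described above can be summarized as follows; the field names are our own shorthand rather than the schema of any particular training framework.

```python
# Ablation training setup as described in the text (shorthand field names, not a real config schema).
ABLATION_SETUP = {
    "architecture": "Llama-style decoder",
    "model_sizes": ["1.4B", "3B", "7B"],
    "token_budgets": {
        "signal_and_hyperparameter_runs": 35_000_000_000,        # ~Chinchilla-optimal
        "validation_runs": [100_000_000_000, 350_000_000_000],
    },
    "tokenizer": "StarCoder",
    "sequence_length": 8192,
}
```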

The baselines from which equivalent data was subsampled and used for this comparison included:

<img src="Fig2.jpg" alt="Fig2.jpg" style="width:1000px;"/>

Fig. 2 shows how the subsamples were created for the FineWeb baselines as well as for GneissWeb. A similar strategy to the one used for the FineWeb baseline was used for the other baselines.

<img src="ablation_strategy.png" alt="ablation_strategy.png" style="width:1000px;"/>

**Figure 2:**  Subsampling and Ablation Strategy

We trained and evaluated our models on an LSF (Load Sharing Facility) cluster, with each node equipped with eight H100 GPUs. For training tasks involving 35 billion tokens, we typically trained models with 1.4 billion trainable parameters across 64 GPUs. For more compute-intensive tasks, we scaled up to 128 or 256 GPUs to reduce training time, and for evaluation tasks we generally used 8 GPUs.

The tokens for an experimental dataset are read from IBM’s GPFS (General Parallel File System) to minimize network traffic during training. With this computational infrastructure, the training throughput of an FSDP model with 1.4 billion parameters is approximately 32,000 tokens/GPU/sec. Consequently, training the model with 35 billion tokens on 64 GPUs typically takes about 4.6 hours. Model checkpoints are saved regularly and evaluated in real time, with results automatically uploaded, stored, and visualized.
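
As a sanity check, the quoted training time follows directly from the throughput and GPU count above (all figures are the approximate values from the text):

```python
# Back-of-the-envelope check of the reported training time.
tokens = 35e9                     # token budget for a 1.4B-parameter ablation run
gpus = 64                         # H100 GPUs
tokens_per_gpu_per_sec = 32_000   # approximate FSDP throughput

hours = tokens / (tokens_per_gpu_per_sec * gpus) / 3600
print(f"{hours:.1f} h")  # ~4.7 h, in line with the reported ~4.6 hours,
                         # since both throughput and time are approximate figures
```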

&nbsp;&nbsp;**Evaluation Benchmark Selection**

We evaluated our ablation models using lm-evaluation-harness on two categories of tasks: 11 High-Signal tasks (0-shot and few-shot) and 20 Extended tasks (0-shot and few-shot).
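
As a rough illustration, evaluating one ablation checkpoint through the harness’s Python API might look like the sketch below. The checkpoint path and the task subset are placeholders, and the exact arguments vary across lm-evaluation-harness versions.

```python
from lm_eval import simple_evaluate  # EleutherAI lm-evaluation-harness

# Illustrative only: placeholder checkpoint path and a small subset of tasks.
results = simple_evaluate(
    model="hf",
    model_args="pretrained=/path/to/ablation-checkpoint",
    tasks=["hellaswag", "arc_easy", "winogrande"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```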

&nbsp;&nbsp;&nbsp;&nbsp;**High-Signal tasks:**

Since ablations are performed by training ‘small’ models (1.4B-parameter models) for a ‘few billion’ tokens (typically 35B), it is important to identify benchmarks that provide good signal at this relatively small scale. Similar to FineWeb, we used the following criteria for selecting the 11 High-Signal/Early-Signal tasks: accuracy above random guessing, accuracy increasing monotonically over training, and small variance across runs. These tasks are shown in Fig. 3 and cover the Commonsense Reasoning, Reading Comprehension, World Knowledge, and Language Understanding task categories. We used both the zero-shot and few-shot variations of these tasks.
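
A rough sketch of how such criteria could be checked programmatically is shown below; the thresholds and the tolerance on monotonicity are illustrative assumptions, not the values actually used.

```python
import numpy as np

def is_high_signal(acc: np.ndarray, random_baseline: float) -> bool:
    """Approximate check of the three selection criteria on an accuracy array
    of shape (seeds, checkpoints). Thresholds here are illustrative."""
    mean_curve = acc.mean(axis=0)
    above_random = mean_curve[-1] > random_baseline + 0.01   # clearly above chance
    trending_up = np.all(np.diff(mean_curve) > -0.005)       # ~monotonically increasing
    low_variance = acc[:, -1].std() < 0.01                   # small spread across seeds
    return bool(above_random and trending_up and low_variance)

# Example: 3 seeds x 4 checkpoints on a 4-way multiple-choice task (chance = 0.25)
curves = np.array([[0.27, 0.31, 0.34, 0.37],
                   [0.26, 0.30, 0.35, 0.38],
                   [0.28, 0.32, 0.34, 0.38]])
print(is_high_signal(curves, random_baseline=0.25))  # True
```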

  
<img src="HighSignal.png" alt="HighSignal.png" style="width:1000px;"/>
                                        
**Figure 3:**  High Signal Tasks — provide good signal at relatively small scale (of 1.4B models trained on 35B to 100B tokens)

The High-Signal tasks were used to analyze individual ingredients and possible recipe combinations via ablations. After we narrowed the choice down to a few candidate recipes using these signals, we used the extended set of benchmarks to evaluate the model’s ability to generalize.

&nbsp;&nbsp;&nbsp;&nbsp;**Extended tasks:**

The extended tasks shown in Fig. 4 are a superset of the High-Signal tasks. Besides the task categories of Commonsense Reasoning, Reading Comprehension, World Knowledge, and Language Understanding, the extended set also includes Symbolic Problem Solving. For the extended set, we again use both zero-shot and few-shot variations.

<img src="Extended_Tasks.png" alt="Extended_Tasks.png" style="width:1000px;"/>

**Figure 4:**  Extended Tasks — broader set of tasks to evaluate generalization at larger number of tokens and/or larger model sizes

The Extended Task set includes some tasks that are not in the High-Signal set. These tasks are useful, but at ablation scale they may have high standard deviation (like PubMedQA), remain at random guessing throughout the training cycle (like MMLU), or stay above random guessing without showing improvement with training (like GSM8K). However, these tasks are useful indicators of larger-model performance and have therefore been retained in the Extended Task set.

These differences between the High-Signal tasks and the Extended tasks can be seen in Fig. 5, which compares the High-Signal tasks with those that are in the Extended set but excluded from the High-Signal set. Average accuracy increases with training for the former and remains relatively static for the latter, which was a criterion for excluding those tasks from the High-Signal set.

<img src="accuracy_HS_vs_excluded_tasks_350b_no_stdev_v2.png" alt="accuracy_HS_vs_excluded_tasks_350b_no_stdev_v2.png" style="width:1000px;"/>

**Figure 5:**   High-Signal Tasks show increasing accuracy with more training

The high-signal tasks also show a lower coefficient of variation than the excluded tasks, as shown in Figure 6. The coefficient of variation is calculated as the standard deviation of the average score divided by its mean, where the statistics are computed across three random training seeds. A lower coefficient of variation indicates more stable results across random seeds, which makes the high-signal tasks more reliable at the ablation scale.
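
Concretely, the statistic can be computed as in the small sketch below (the scores are made up, and the use of the sample standard deviation is our own choice):

```python
import numpy as np

def coefficient_of_variation(avg_scores) -> float:
    """Standard deviation of the average eval score divided by its mean,
    computed across training seeds."""
    avg_scores = np.asarray(avg_scores, dtype=float)
    return float(avg_scores.std(ddof=1) / avg_scores.mean())

# Made-up average scores from three random training seeds:
print(coefficient_of_variation([0.512, 0.508, 0.515]))  # ~0.007, i.e. a stable task
```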

<img src="coeff_variation_HS_vs_excluded_v2.png" alt="coeff_variation_HS_vs_excluded_v2.png" style="width:1000px;"/> 

**Figure 6:** Coefficient of Variation (standard deviation divided by mean) for High-Signal Set and Excluded Set. 

&nbsp;&nbsp;**Evaluation Results**

**At 1.4 Billion Model Size Trained on 350 Billion Tokens**

<img src="fig7.jpg" alt="fig7.jpg" style="width:1400px;"/>

**Figure 7:** Average scores of 1.4 Billion parameter models trained on 350 Billion tokens randomly sampled from state-of-the-art open datasets. Scores are averaged over 3 random seeds used for data sampling and are reported along with standard deviations.  GneissWeb performs the best among the class of large datasets.

The datasets evaluated are broken down into those above 5 trillion tokens in size and those below 5 trillion. The former are suited to Stage-1 training and are the primary focus of this study; the latter are suited to Stage-2 training, and a version of GneissWeb for this space can be produced with certain tuning of the filtering parameters.

For the datasets with more than 5 trillion tokens, Fig. 8 shows the performance broken down into the various task categories: Commonsense Reasoning, Language Understanding, Reading Comprehension, World Knowledge, and Symbolic Problem Solving. As shown, GneissWeb is not only the best overall but also does best in all task categories except World Knowledge.

In Fig. 9, we show the progression of accuracy with training on the High-Signal tasks for the 1.4-billion-parameter model trained on 350 billion tokens. For all three datasets compared, the accuracy increases over training, and the accuracy of GneissWeb is consistently higher than that of FineWeb and FineWeb-Edu-score-2.

**Figure 9:** Average evaluation score on High-Signal tasks versus the number of tokens for 1.4 Billion parameter models. The model trained on GneissWeb consistently outperforms the ones trained on FineWeb.V1.1 and FineWeb-Edu-score-2. 

**At 3 and 7 Billion Model Sizes with 100 Billion Tokens**

Given that training models with 3 and 7 billion parameters requires a lot more compute, as does their evaluation, we limited training to 100 billion tokens. We see that the 7-billion-parameter models do better than the 3-billion-parameter models, and that the models trained on GneissWeb outperform those trained on FineWeb.V1.1 and FineWeb-Edu-score-2.

At the 3 billion model size, models trained on GneissWeb outperform those trained on FineWeb.V1.1 by 1.83 percentage points in terms of the average score computed on a set of 11 High-Signal benchmarks (both zero-shot and few-shot), and by 1.09 percentage points on Extended benchmarks (both zero-shot and few-shot).

**Figure 10:**  Comparison of Average Eval Scores on High Signal and Extended Eval Tasks at 3B model size.  Scores are averaged over 3 random seeds used for data sampling and are reported along with standard deviations.

**Figure 11:** Average evaluation score on High-Signal tasks versus the number of tokens at 3 Billion model size for 100 Billion tokens. The model trained on GneissWeb consistently outperforms the one trained on FineWeb.V1.1 throughout the training.

This gain further increases at the 7 billion model size: models trained on GneissWeb outperform those trained on FineWeb.V1.1 by 2.04 percentage points in terms of the average score computed on a set of 11 High-Signal benchmarks (both zero-shot and few-shot), and by 1.32 percentage points on Extended benchmarks (both zero-shot and few-shot).

<img src="fig12.jpg" alt="fig12.jpg" style="width:1000px;"/>

**Figure 12:**  Comparison of Average Eval Scores on High Signal and Extended Eval Tasks at 7B model size.  Scores are averaged over 3 random seeds used for data sampling and are reported along with standard deviations.

<img src="Fig18.png" alt="Fig18.png" style="width:1000px;"/>

**Figure 13:** Average evaluation score on High-Signal tasks versus the number of tokens at 7 Billion model size for 350 Billion tokens. The model trained on GneissWeb consistently outperforms the one trained on FineWeb.V1.1 throughout the training.

 

**Combining GneissWeb Components into a Winning Recipe**

There are various ways to combine the key ingredients and build a recipe, including deciding which components to include and their order as well as designing ensemble filtering rules using multiple quality annotators. We performed rigorous ablations by combining the key ingredients in multiple variations and sequences with the aim of maximizing downstream task performance under the constraint of retaining at least 10T tokens from FineWeb.V1.1.0.

<img src="Ingredients.png" alt="Ingredients.png" style="width:1000px;"/>

**Figure 19:** Key ingredients selected for building the GneissWeb recipe

The GneissWeb recipe illustrated in Figure 1 produces the highest performance gain. It consists of first applying exact substring deduplication, then computing the category and quality annotations, and finally applying the ensemble quality filter. We obtain the GneissWeb dataset of 10T tokens by applying this recipe to the 15T tokens in the 96 snapshots of FineWeb-V1.1.0. We prepared GneissWeb using a version of the IBM Data Prep Kit, which will be released in open source in the future.

Equipped with the fastText classifiers, the category-aware readability score filter, and the category-aware extreme-tokenized documents filter, we perform ablations over various ensemble filtering rules. We first select the thresholds for the category-aware readability score filter and the category-aware extreme-tokenized filter as discussed in the sections above. Then, for a given ensemble filtering rule, we tune the thresholds for the fastText classifiers such that at least 10T tokens are retained from the 15T tokens of FineWeb-V1.1.0. Specifically, we consider the following two ensemble aggregation rules:

Using the notation:

&nbsp;&nbsp;A: Custom-built fastText quality filter

&nbsp;&nbsp;B: Custom-built category-aware readability score quality filter, leveraging the custom-built fastText category classifier

&nbsp;&nbsp;C: Custom-built category-aware extreme_tokenized quality filter, leveraging the custom-built fastText category classifier

&nbsp;&nbsp;**GneissWeb Recipe:**

Exact substring deduplication → ((A AND B) OR (A AND C)) 

GneissWeb ensemble filtering rule: A document is retained if either the fastText combination and the category-aware readability score filter agree, or the fastText combination and the category-aware extreme-tokenized filter agree. Here the fastText combination is the logical OR of the fastText classifiers, i.e., either of the fastText classifiers agrees. See the detailed rule in Figure 1.

&nbsp;&nbsp;**Recipe 2:**

Exact substring deduplication → (A AND B AND C)

Ensemble filtering rule 2: A document is retained if either of the fastText classifiers agrees, the category-aware readability score filter agrees, and the category-aware extreme-tokenized filter agrees. Note that this rule is equivalent to applying the filters sequentially (in arbitrary order).
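
The two aggregation rules reduce to simple boolean expressions over the per-document filter decisions, as in the sketch below (each boolean stands for "the corresponding filter agrees to keep the document"; in the GneissWeb rule, A is itself the OR of the fastText classifiers):

```python
def gneissweb_rule(a: bool, b: bool, c: bool) -> bool:
    """(A AND B) OR (A AND C), equivalently A AND (B OR C)."""
    return (a and b) or (a and c)

def recipe2_rule(a: bool, b: bool, c: bool) -> bool:
    """A AND B AND C, equivalent to applying the three filters sequentially."""
    return a and b and c

# The rules differ exactly when one of B and C rejects a document that A and the other accept:
print(gneissweb_rule(True, True, False))  # True  -> document kept
print(recipe2_rule(True, True, False))    # False -> document dropped
```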

Figure 20 shows the average eval score on high-signal tasks as well as extended tasks for both filtering rules, along with the FineWeb-V1.1.0 baseline. We observe that the GneissWeb ensemble filtering rule outperforms the other rule on both high-signal and extended tasks.

<img src="fig20.jpg" alt="fig20.jpg" style="width:1000px;"/>

**Figure 20:** Comparison of ablations at 7 Billion model size for 100 Billion tokens

**Conclusion and Future Work**

IBM Research has produced a dataset called GneissWeb using an internal version of the IBM Data Prep Kit. GneissWeb is built from 96 Common Crawl snapshots and outperforms some state-of-the-art datasets of comparable size. We continue to perform further data ablation experiments and plan to open-source the recipe via the IBM Data Prep Kit. We are currently processing the latest 7 snapshots, which we aim to include in GneissWeb after conducting further evaluations and verifications.


**Dataset Summary**

Recently, IBM introduced GneissWeb, a large dataset yielding around 10 trillion tokens that caters to the data quality and quantity requirements of training LLMs. Models trained on the GneissWeb dataset outperform those trained on FineWeb V1.1.0 by 2.14 percentage points in terms of the average score computed on a set of 11 commonly used benchmarks.




&nbsp;&nbsp;&nbsp;&nbsp; **Developers**: IBM Research

&nbsp;&nbsp;&nbsp;&nbsp; **Release Date**: Feb 10th, 2025

&nbsp;&nbsp;&nbsp;&nbsp; **License**: Apache 2.0.

**Usage**