CHAOS-MatForge: Combinatorial High-throughput Analysis and Optimization for Synthesis Database
This dataset is a contribution from the Entropy for Energy (S4E) Laboratory at Johns Hopkins University, led by Prof. Corey Oses.
We release a large-scale dataset of millions of high-quality VASP calculations for training machine learning interatomic potentials (MLIPs). The database provides comprehensive coverage of both ordered and disordered high-entropy material systems.
For more information about the S4E Laboratory, visit https://s4e.ai/.
Dataset Overview and Statistics
The CHAOS-MatForge database contains millions of atomic structures with DFT-calculated energies, forces, and structural information. The dataset is designed to support the development and fine-tuning of state-of-the-art MLIP models.
Ordered Structures (Coming Soon)
Ordered crystal structures for comparison and validation will be available in future releases.
Disordered Structures
High-entropy systems with configurational disorder, generated through AFLOW-POCC (Partially Occupied Crystalline Configurations) methodology. Different snapshots are extracted to represent the configurational space.
The disordered structures currently include two main material categories:
- High-Entropy Alloys (HEA): Body-Centered Cubic (BCC) structure
- High-Entropy Oxides (HEO): Rocksalt structure
Structure Counts
Structure counts include all ionic steps from the relaxation trajectories:
| Category | Train | Validation | Test |
|---|---|---|---|
| alloys | 467017 | 59345 | 57753 |
| oxides | 2135365 | 264284 | 250559 |
| high-entropy-alloys | 168633 | 49675 | 21980 |
| high-entropy-oxides | 127086 | 9703 | 13890 |
Formula Counts
Number of unique chemical compositions:
| Category | Train | Validation | Test |
|---|---|---|---|
| alloys | 6824 | 805 | 885 |
| oxides | 7995 | 944 | 982 |
| high-entropy-alloys | 88 | 24 | 12 |
| high-entropy-oxides | 169 | 15 | 16 |
System Counts
Number of distinct snapshots (configurational instances):
| Category | Train | Validation | Test |
|---|---|---|---|
| alloys | 13681 | 1692 | 1665 |
| oxides | 38294 | 4814 | 4512 |
| high-entropy-alloys | 167 | 47 | 22 |
| high-entropy-oxides | 169 | 15 | 16 |
Note: A system is defined as a combination of formula and space group. High-entropy systems are approximated using the POCC formalism, and each contains multiple sets of structures as ordered representatives.
Each split is fully deterministic, based on the sha256sum of the formula.
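The deterministic split can be reproduced with a short hash-based rule. The sketch below is illustrative only: the dataset states that splitting is based on the sha256sum of the formula, but the split ratios and the hash-to-interval mapping shown here are assumptions.

```python
import hashlib

def split_for(formula: str, ratios=(0.8, 0.1, 0.1)) -> str:
    """Assign a chemical formula to a split deterministically.

    Illustrative assumption: the actual ratios and hash-to-split
    mapping used by the dataset authors are not documented here.
    """
    h = int(hashlib.sha256(formula.encode("utf-8")).hexdigest(), 16)
    x = (h % 10**8) / 10**8  # pseudo-uniform value in [0, 1)
    if x < ratios[0]:
        return "train"
    if x < ratios[0] + ratios[1]:
        return "val"
    return "test"

# The same formula always lands in the same split:
print(split_for("FeCoNiCrMn"))
```

Because the assignment depends only on the formula string, all ionic steps and snapshots of one composition stay in a single split, avoiding train/test leakage between near-identical structures.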
Data Format
- File Format: .tar.zst (zstandard-compressed tar archives)
- Database Format: .aselmdb (ASE database format)
- Each Structure Contains:
  - Atomic positions (3D coordinates)
  - Cell parameters (lattice vectors)
  - Energy (DFT-calculated, in eV)
  - Forces (on each atom, in eV/Å)
  - Chemical composition and metadata
Tutorials
Note: The helper scripts required for the tutorials (e.g., evaluate_uma.py, relax_uma.py, plot_energy.py) can be found in the GitHub repository.
Fine-tuning Meta's UMA Model
This requires the new version of fairchem, which does not support eSEN-OAM (as of 2025-07-16).
Set Up the Python Environment
Create and activate a virtual environment using uv.
Note: All of the helper scripts are based on uv, which allows multiple versions of Python to be used and manages all of the dependencies automatically.
If you do not use uv, you will need to manage the environments yourself with pip. We do not recommend this, as the different models require different versions of Python and torch, and the setup is difficult to maintain.
It is important to run the scripts directly (e.g. ./evaluate_uma.py) instead of invoking them with python, so that uv can manage the dependencies automatically.
curl -LsSf https://astral.sh/uv/install.sh | sh
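Direct execution works because a script can declare its own interpreter and dependencies inline (PEP 723 metadata), which `uv run` resolves into an ephemeral environment before executing it. A hypothetical header of this shape (the actual helper scripts' requirements may differ):

```python
#!/usr/bin/env -S uv run
# /// script
# requires-python = ">=3.12"
# dependencies = ["ase", "numpy"]
# ///

# ...script body follows; running `./script.py` lets uv pick a
# matching Python and install the dependencies above automatically.
```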
Prepare the Data
First, download the UMA model and save it to ./models/uma-s-p1.pt.
Select the data you want to use for fine-tuning. Move all of the training sets into a directory named train, and all of the validation sets into a directory named val.
For example, if we want to pick up the high-entropy alloys and oxides:
mkdir train
tar xaf high-entropy-alloys-train.tar.zst
tar xaf high-entropy-oxides-train.tar.zst
mv high-entropy-alloys-train/* train/
mv high-entropy-oxides-train/* train/
mkdir val
tar xaf high-entropy-alloys-val.tar.zst
tar xaf high-entropy-oxides-val.tar.zst
mv high-entropy-alloys-val/* val/
mv high-entropy-oxides-val/* val/
We also need to get the fine-tuning scripts from fairchem:
git clone https://github.com/facebookresearch/fairchem
mv fairchem/{src,configs} .
(Optional) Run Evaluations Before Fine-tuning
Force/Energy Errors
It is helpful to see the performance of the base model first before fine-tuning.
./evaluate_uma.py test/
This produces a jsonl file which can be plotted with another helper script:
./plot_energy.py test_uma_ef.jsonl
Relaxations
./relax_uma.py test_prototypes/
This produces a new folder of .aselmdb files named test_uma_relaxed, which can be plotted:
./plot_energy.py test_prototypes_uma_relaxed.jsonl
Run Fine-tuning
First, generate the configuration using fairchem's helper script:
uv run src/fairchem/core/scripts/create_uma_finetune_dataset.py --train-dir train/ --val-dir val/ --output-dir ./finetune_output --uma-task=omat --regression-task ef
(ef means energy + force; force is required to run relaxations. efs can also be used to include stress.)
The configuration file is at finetune_output/uma_sm_finetune_template.yaml. It should be edited to increase the number of steps between evaluations, so as not to slow down training excessively. Find the corresponding keys and change them to the following:
evaluate_every_n_steps: 5000
checkpoint_every_n_steps: 5000
If you have more than one GPU, you should select a GPU now (run nvidia-smi to see which GPU is free).
fairchem's finetuning scripts only work on a single GPU, but this should be relatively fast.
export CUDA_VISIBLE_DEVICES=0
Now, you can run the fine-tuning. This will take a while. It is recommended to use screen, tmux, or a similar tool to avoid interruptions.
uv run fairchem -c finetune_output/uma_sm_finetune_template.yaml job.run_dir=./finetune_out
After the fine tuning you should get a checkpoint inside the finetune_out directory.
Evaluation of the Fine-tuned Model
Now that you have a fine-tuned model, you can evaluate it on the test set.
Several metrics can be used for this. Common ones included in this repository are:
- Mean absolute error of force and energy predictions (vs DFT)
- Geometry error of relaxed structures (root-mean-squared displacement, RMSD).
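Both metrics are simple to state. Here is a minimal numpy sketch with toy inputs; it is not the repository's actual evaluate_uma.py or plot_energy.py implementation, and the RMSD shown skips the alignment a full geometry comparison would need:

```python
import numpy as np

def energy_force_mae(e_pred, e_ref, f_pred, f_ref):
    """Mean absolute error of energies (eV) and forces (eV/Å) vs DFT."""
    e_mae = float(np.mean(np.abs(np.asarray(e_pred) - np.asarray(e_ref))))
    f_mae = float(np.mean(np.abs(np.asarray(f_pred) - np.asarray(f_ref))))
    return e_mae, f_mae

def rmsd(pos_a, pos_b):
    """Root-mean-squared displacement between two aligned structures.

    Simplified: no rotational alignment or periodic wrapping, which a
    full comparison of relaxed geometries would have to handle.
    """
    d = np.asarray(pos_a) - np.asarray(pos_b)
    return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))

# Toy numbers, not taken from the dataset:
e_mae, f_mae = energy_force_mae([-1.0, -2.0], [-1.1, -1.9],
                                np.zeros((2, 3)), np.full((2, 3), 0.01))
print(e_mae, f_mae)
print(rmsd([[0.0, 0.0, 0.0]], [[3.0, 4.0, 0.0]]))
```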
Force/Energy Errors
Compute the force and energy errors of the fine-tuned model on the test set:
./evaluate_uma.py --model finetune_out/checkpoint.pt test/
This produces a jsonl file which can be plotted with another helper script:
./plot_energy.py test_uma_ef.jsonl
Relaxations
./relax_uma.py --model finetune_out/checkpoint.pt test_prototypes/
This produces a new folder of .aselmdb files named test_prototypes_uma_relaxed, which can be plotted.
Note that the energies here should be compared against the final relaxed structures, which are provided separately in *-test-final.tar.zst.
./plot_energy.py test_final/ test_prototypes_uma_relaxed/
Citation
If you use this dataset in your research, please cite:
@dataset{chaos_matforge_2024,
  title={CHAOS-MatForge: Combinatorial High-throughput Analysis and Optimization for Synthesis Database},
  author={Han, Guangshuai and Tseng, Shao-Yu and Li, Tianhao and Oses, Corey},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/entropy4energy/S4E-MatForge}
}