# EEG Dataset
This dataset was created using braindecode, a deep learning library for EEG/MEG/ECoG signals.
## Dataset Information
| Property | Value |
|---|---|
| Recordings | 1 |
| Type | Continuous (Raw) |
| Channels | 26 |
| Sampling frequency | 250 Hz |
| Total duration | 0:06:26 |
| Windows/samples | 96,735 |
| Size | 19.22 MB |
| Format | zarr |
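As a quick sanity check, the sample count and sampling frequency in the table line up with the stated duration (a small sketch using only the numbers above):

```python
# 96,735 samples at 250 Hz should match the reported total duration
# of 0:06:26 (truncated to whole seconds).
n_samples = 96_735
sfreq = 250  # Hz
duration_s = n_samples / sfreq  # 386.94 s
minutes, seconds = divmod(int(duration_s), 60)
print(f"{minutes}:{seconds:02d}")  # 6:26
```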
## Quick Start

```python
from braindecode.datasets import BaseConcatDataset

# Load from the Hugging Face Hub
dataset = BaseConcatDataset.pull_from_hub("username/dataset-name")

# Access a sample
X, y, metainfo = dataset[0]
# X: EEG data, shape [n_channels, n_times]
# y: target label
# metainfo: window indices
```
## Training with PyTorch

```python
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

for X, y, metainfo in loader:
    # X: [batch_size, n_channels, n_times]
    # y: [batch_size]
    pass  # Your training code
```
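To make the loop concrete, here is a minimal end-to-end sketch using random stand-in data with the card's channel count (26) and a deliberately tiny model. Note the real dataset also yields a `metainfo` element per sample; the stand-in below yields only `(X, y)` pairs.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Random stand-in data: 64 windows of 26 channels x 250 time points.
# In practice, the braindecode dataset would be used here instead.
X = torch.randn(64, 26, 250)
y = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(X, y), batch_size=32, shuffle=True)

# Deliberately tiny classifier, just to make the loop runnable.
model = nn.Sequential(nn.Flatten(), nn.Linear(26 * 250, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for Xb, yb in loader:
    optimizer.zero_grad()
    loss = criterion(model(Xb), yb)
    loss.backward()
    optimizer.step()
```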
## BIDS-inspired Structure
This dataset uses a BIDS-inspired organization. Metadata files follow BIDS conventions, while data is stored in Zarr format for efficient deep learning.
**BIDS-style metadata:**

- `dataset_description.json` - Dataset information
- `participants.tsv` - Subject metadata
- `*_events.tsv` - Trial/window events
- `*_channels.tsv` - Channel information
- `*_eeg.json` - Recording parameters
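The `.tsv` metadata files are ordinary tab-separated tables, so they can be inspected with pandas. The file content below is a hypothetical one-subject `participants.tsv`; real files may carry additional columns:

```python
import io
import pandas as pd

# Hypothetical participants.tsv content (tab-separated, one header row).
tsv = "participant_id\tage\tsex\nsub-01\t25\tM\n"
participants = pd.read_csv(io.StringIO(tsv), sep="\t")
print(participants.loc[0, "participant_id"])  # sub-01
```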
**Data storage:**

- `dataset.zarr/` - Zarr format (optimized for random access)
```
sourcedata/braindecode/
├── dataset_description.json
├── participants.tsv
├── dataset.zarr/
└── sub-<label>/
    └── eeg/
        ├── *_events.tsv
        ├── *_channels.tsv
        └── *_eeg.json
```
## Accessing Metadata

```python
# Participants info
if hasattr(dataset, "participants"):
    print(dataset.participants)

# Events for a recording
if hasattr(dataset.datasets[0], "bids_events"):
    print(dataset.datasets[0].bids_events)

# Channel info
if hasattr(dataset.datasets[0], "bids_channels"):
    print(dataset.datasets[0].bids_channels)
```
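The event files can also be read directly rather than through the dataset attributes. A BIDS-style `*_events.tsv` uses `onset` and `duration` columns in seconds; the rows below are hypothetical:

```python
import io
import pandas as pd

# Hypothetical *_events.tsv with standard BIDS onset/duration columns.
tsv = (
    "onset\tduration\ttrial_type\n"
    "0.0\t4.0\tleft_hand\n"
    "4.5\t4.0\tright_hand\n"
)
events = pd.read_csv(io.StringIO(tsv), sep="\t")
print(events["trial_type"].tolist())  # ['left_hand', 'right_hand']
```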
Created with braindecode