Dataset schema:

| Column | Type | Value stats |
|---|---|---|
| url | string | lengths 58-61 |
| repository_url | string | 1 class |
| labels_url | string | lengths 72-75 |
| comments_url | string | lengths 67-70 |
| events_url | string | lengths 65-68 |
| html_url | string | lengths 46-51 |
| id | int64 | 599M-2.41B |
| node_id | string | lengths 18-32 |
| number | int64 | 1-7.05k |
| title | string | lengths 1-290 |
| user | dict | |
| labels | list | lengths 0-4 |
| state | string | 2 classes |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | list | lengths 0-4 |
| milestone | dict | |
| comments | int64 | 0-70 |
| created_at | timestamp[ns, tz=UTC] | |
| updated_at | timestamp[ns, tz=UTC] | |
| closed_at | timestamp[ns, tz=UTC] | |
| author_association | string | 4 classes |
| active_lock_reason | float64 | |
| body | string | lengths 0-228k, nullable |
| reactions | dict | |
| timeline_url | string | lengths 67-70 |
| performed_via_github_app | float64 | |
| state_reason | string | 3 classes |
| draft | float64 | 0-1, nullable |
| pull_request | dict | |
| is_pull_request | bool | 2 classes |
https://api.github.com/repos/huggingface/datasets/issues/6539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6539/comments
https://api.github.com/repos/huggingface/datasets/issues/6539/events
https://github.com/huggingface/datasets/issues/6539
2,058,493,960
I_kwDODunzps56siAI
6,539
'Repo card metadata block was not found' when loading a pragmeval dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/3647577?v=4", "events_url": "https://api.github.com/users/lambdaofgod/events{/privacy}", "followers_url": "https://api.github.com/users/lambdaofgod/followers", "following_url": "https://api.github.com/users/lambdaofgod/following{/other_user}", "gists_url": "https://api.github.com/users/lambdaofgod/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lambdaofgod", "id": 3647577, "login": "lambdaofgod", "node_id": "MDQ6VXNlcjM2NDc1Nzc=", "organizations_url": "https://api.github.com/users/lambdaofgod/orgs", "received_events_url": "https://api.github.com/users/lambdaofgod/received_events", "repos_url": "https://api.github.com/users/lambdaofgod/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lambdaofgod/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lambdaofgod/subscriptions", "type": "User", "url": "https://api.github.com/users/lambdaofgod" }
[]
open
false
null
[]
null
0
2023-12-28T14:18:25Z
2023-12-28T14:18:37Z
null
NONE
null
### Describe the bug

I can't load dataset subsets of 'pragmeval'. The funny thing is that the dataset author's [colab notebook](https://colab.research.google.com/drive/1sg--LF4z7XR1wxAOfp0-3d4J6kQ9nj_A?usp=sharing) runs just fine. I tried to install exactly the same packages that are installed on Colab using poetry, so my environment only differs from the Colab one in the Linux version, and I still get the same bug outside Colab.

### Steps to reproduce the bug

Install dependencies with poetry using this pyproject.toml:

```
[tool.poetry]
name = "project"
version = "0.1.0"
description = ""
authors = []

[tool.poetry.dependencies]
python = "^3.10"
datasets = "2.16.0"
pandas = "1.5.3"
pyarrow = "10.0.1"
huggingface-hub = "0.19.4"
fsspec = "2023.6.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

Then `poetry run python -c "import datasets; print(datasets.get_dataset_config_names('pragmeval'))"` prints `['default']`.

### Expected behavior

The command should print

```
['emergent', 'emobank-arousal', 'emobank-dominance', 'emobank-valence', 'gum', 'mrda', 'pdtb', 'persuasiveness-claimtype', 'persuasiveness-eloquence', 'persuasiveness-premisetype', 'persuasiveness-relevance', 'persuasiveness-specificity', 'persuasiveness-strength', 'sarcasm', 'squinky-formality', 'squinky-implicature', 'squinky-informativeness', 'stac', 'switchboard', 'verifiability']
```

### Environment info

- `datasets` version: 2.16.0
- Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.19.4
- PyArrow version: 10.0.1
- Pandas version: 1.5.3
- `fsspec` version: 2023.6.0
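A quick way to probe the suspected environment mismatch is to print exactly which package versions and install locations the interpreter resolves; a minimal diagnostic sketch (nothing here is specific to `pragmeval`, and the module list just mirrors the pins above):

```python
# Minimal environment check: print version and install path of each pinned package,
# so the local run can be compared field by field against the Colab run.
import datasets
import huggingface_hub
import fsspec
import pyarrow
import pandas

for mod in (datasets, huggingface_hub, fsspec, pyarrow, pandas):
    print(f"{mod.__name__:18s} {mod.__version__:12s} {mod.__file__}")
```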
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6539/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6539/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6538
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6538/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6538/comments
https://api.github.com/repos/huggingface/datasets/issues/6538/events
https://github.com/huggingface/datasets/issues/6538
2,057,377,630
I_kwDODunzps56oRde
6,538
ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
{ "avatar_url": "https://avatars.githubusercontent.com/u/131662185?v=4", "events_url": "https://api.github.com/users/Sonali-Behera-TRT/events{/privacy}", "followers_url": "https://api.github.com/users/Sonali-Behera-TRT/followers", "following_url": "https://api.github.com/users/Sonali-Behera-TRT/following{/other_user}", "gists_url": "https://api.github.com/users/Sonali-Behera-TRT/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Sonali-Behera-TRT", "id": 131662185, "login": "Sonali-Behera-TRT", "node_id": "U_kgDOB9kBaQ", "organizations_url": "https://api.github.com/users/Sonali-Behera-TRT/orgs", "received_events_url": "https://api.github.com/users/Sonali-Behera-TRT/received_events", "repos_url": "https://api.github.com/users/Sonali-Behera-TRT/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Sonali-Behera-TRT/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Sonali-Behera-TRT/subscriptions", "type": "User", "url": "https://api.github.com/users/Sonali-Behera-TRT" }
[]
closed
false
null
[]
null
15
2023-12-27T13:31:16Z
2024-01-03T10:06:47Z
2024-01-03T10:04:58Z
NONE
null
### Describe the bug

I get the following ImportError while importing packages.

Code:

```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    TrainingArguments,
    pipeline,
    logging
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from huggingface_hub import login
import pandas as pd
```

Error:

```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
Cell In[5], line 14
      4 from transformers import (
      5     AutoModelForCausalLM,
      6     AutoTokenizer,
   (...)
     11     logging
     12 )
     13 from peft import LoraConfig, PeftModel
---> 14 from trl import SFTTrainer
     15 from huggingface_hub import login
     16 import pandas as pd

File /opt/conda/lib/python3.10/site-packages/trl/__init__.py:21
      8 from .import_utils import (
      9     is_diffusers_available,
     10     is_npu_available,
   (...)
     13     is_xpu_available,
     14 )
     15 from .models import (
     16     AutoModelForCausalLMWithValueHead,
     17     AutoModelForSeq2SeqLMWithValueHead,
     18     PreTrainedModelWrapper,
     19     create_reference_model,
     20 )
---> 21 from .trainer import (
     22     DataCollatorForCompletionOnlyLM,
     23     DPOTrainer,
     24     IterativeSFTTrainer,
     25     PPOConfig,
     26     PPOTrainer,
     27     RewardConfig,
     28     RewardTrainer,
     29     SFTTrainer,
     30 )
     33 if is_diffusers_available():
     34     from .models import (
     35         DDPOPipelineOutput,
     36         DDPOSchedulerOutput,
     37         DDPOStableDiffusionPipeline,
     38         DefaultDDPOStableDiffusionPipeline,
     39     )

File /opt/conda/lib/python3.10/site-packages/trl/trainer/__init__.py:44
     42 from .ppo_trainer import PPOTrainer
     43 from .reward_trainer import RewardTrainer, compute_accuracy
---> 44 from .sft_trainer import SFTTrainer
     45 from .training_configs import RewardConfig

File /opt/conda/lib/python3.10/site-packages/trl/trainer/sft_trainer.py:23
     21 import torch.nn as nn
     22 from datasets import Dataset
---> 23 from datasets.arrow_writer import SchemaInferenceError
     24 from datasets.builder import DatasetGenerationError
     25 from transformers import (
     26     AutoModelForCausalLM,
     27     AutoTokenizer,
   (...)
     33     TrainingArguments,
     34 )

ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
```

transformers version: 4.36.2
python version: 3.10.12
datasets version: 2.16.1

### Steps to reproduce the bug

1. Install packages

```
!pip install -U datasets trl accelerate peft bitsandbytes transformers trl huggingface_hub
```

2. Import packages

```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    HfArgumentParser,
    TrainingArguments,
    pipeline,
    logging
)
from peft import LoraConfig, PeftModel
from trl import SFTTrainer
from huggingface_hub import login
import pandas as pd
```

### Expected behavior

No error while importing.

### Environment info

- `datasets` version: 2.16.0
- Platform: Linux-5.15.133+-x86_64-with-glibc2.35
- Python version: 3.10.12
- `huggingface_hub` version: 0.20.1
- PyArrow version: 11.0.0
- Pandas version: 2.1.4
- `fsspec` version: 2023.10.0
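A hedged diagnostic sketch for this kind of failure: the environment info reports `datasets` 2.16.0 while the text above mentions 2.16.1, which hints at a stale or duplicate install, so it is worth checking which copy of `datasets` is actually imported and whether it exposes the symbol `trl` needs (the check only assumes that recent `datasets` releases define `SchemaInferenceError` in `datasets.arrow_writer`):

```python
# Print which datasets install is actually imported and whether it exposes SchemaInferenceError.
import importlib
import datasets

print(datasets.__version__, datasets.__file__)
arrow_writer = importlib.import_module("datasets.arrow_writer")
print(hasattr(arrow_writer, "SchemaInferenceError"))  # False points to a stale or duplicate install
```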
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6538/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6538/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6537/comments
https://api.github.com/repos/huggingface/datasets/issues/6537/events
https://github.com/huggingface/datasets/issues/6537
2,057,132,173
I_kwDODunzps56nViN
6,537
Adding support for netCDF (*.nc) files
{ "avatar_url": "https://avatars.githubusercontent.com/u/12627125?v=4", "events_url": "https://api.github.com/users/shermansiu/events{/privacy}", "followers_url": "https://api.github.com/users/shermansiu/followers", "following_url": "https://api.github.com/users/shermansiu/following{/other_user}", "gists_url": "https://api.github.com/users/shermansiu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shermansiu", "id": 12627125, "login": "shermansiu", "node_id": "MDQ6VXNlcjEyNjI3MTI1", "organizations_url": "https://api.github.com/users/shermansiu/orgs", "received_events_url": "https://api.github.com/users/shermansiu/received_events", "repos_url": "https://api.github.com/users/shermansiu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shermansiu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shermansiu/subscriptions", "type": "User", "url": "https://api.github.com/users/shermansiu" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
3
2023-12-27T09:27:29Z
2023-12-27T20:46:53Z
null
NONE
null
### Feature request

netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.

### Motivation

When uploading *.nc files onto the Hugging Face Hub through the `datasets` API, I would like to be able to preview the dataset without converting it to another format.

### Your contribution

I can submit a PR, provided I have the time.
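Until native support lands, one workaround is to go through `xarray` and pandas; a hedged sketch (the file name is a placeholder, a netCDF backend such as `netCDF4` must be installed, and how well the flattening works depends on the file's dimensions):

```python
# Convert a netCDF file to a datasets.Dataset by flattening it through xarray/pandas.
import xarray as xr
from datasets import Dataset

nc = xr.open_dataset("example.nc")        # placeholder path; requires a netCDF backend
df = nc.to_dataframe().reset_index()      # turn dims/coords into ordinary columns
hf_ds = Dataset.from_pandas(df)
print(hf_ds)
```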
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6537/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6537/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6536
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6536/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6536/comments
https://api.github.com/repos/huggingface/datasets/issues/6536/events
https://github.com/huggingface/datasets/issues/6536
2,056,863,239
I_kwDODunzps56mT4H
6,536
datasets.load_dataset raises FileNotFoundError for datasets==2.16.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/46237844?v=4", "events_url": "https://api.github.com/users/ArvinZhuang/events{/privacy}", "followers_url": "https://api.github.com/users/ArvinZhuang/followers", "following_url": "https://api.github.com/users/ArvinZhuang/following{/other_user}", "gists_url": "https://api.github.com/users/ArvinZhuang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArvinZhuang", "id": 46237844, "login": "ArvinZhuang", "node_id": "MDQ6VXNlcjQ2MjM3ODQ0", "organizations_url": "https://api.github.com/users/ArvinZhuang/orgs", "received_events_url": "https://api.github.com/users/ArvinZhuang/received_events", "repos_url": "https://api.github.com/users/ArvinZhuang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArvinZhuang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArvinZhuang/subscriptions", "type": "User", "url": "https://api.github.com/users/ArvinZhuang" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
2
2023-12-27T03:15:48Z
2023-12-30T18:58:04Z
2023-12-30T15:54:00Z
NONE
null
### Describe the bug

`datasets.load_dataset` seems to raise FileNotFoundError for some Hub datasets with the latest `datasets==2.16.0`.

### Steps to reproduce the bug

For example, `pip install datasets==2.16.0`, then

```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache1')["train"]
```

This will raise:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/load.py", line 2545, in load_dataset
    builder_instance.download_and_prepare(
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1003, in download_and_prepare
    self._download_and_prepare(
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/builder.py", line 1076, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 43, in _split_generators
    data_files = dl_manager.download_and_extract(self.config.data_files)
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 566, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 539, in extract
    extracted_paths = map_nested(
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 466, in map_nested
    mapped = [
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 467, in <listcomp>
    _single_map_nested((function, obj, types, None, True, None))
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in _single_map_nested
    mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 387, in <listcomp>
    mapped = [_single_map_nested((function, v, types, None, True, None)) for v in pbar]
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 370, in _single_map_nested
    return function(data_struct)
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/download/download_manager.py", line 451, in _download
    out = cached_path(url_or_filename, download_config=download_config)
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 188, in cached_path
    output_path = get_from_cache(
  File "/Users/xxx/miniconda3/envs/env/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 570, in get_from_cache
    raise FileNotFoundError(f"Couldn't find file at {url}")
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wentingzhao/anthropic-hh-first-prompt/resolve/11b393a5545f706a357ebcd4a5285d93db176715/cache1/downloads/87d66c365626feca116cba323c4856c9aae056e4503f09f23e34aa085eb9de15
```

However, it works fine for some datasets, for example it works fine for `datasets.load_dataset("ag_news", cache_dir='cache2')["test"]`.

The dataset also works fine with `datasets==2.15.0`, for example `pip install datasets==2.15.0`, then

```python
import datasets
datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir='cache3')["train"]
Dataset({
    features: ['user', 'system', 'source'],
    num_rows: 8552
})
```

### Expected behavior

2.16.0 should behave the same as 2.15.0 for all datasets.

### Environment info

Python 3.9 conda env, tested on macOS and Linux.
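The failing URL above has the local cache path appended to the Hub `resolve` URL, so a reasonable first check is to retry on a release newer than 2.16.0 with a cache directory that 2.16.0 never touched; a hedged sketch (the cache directory name is arbitrary):

```python
# After upgrading datasets past 2.16.0 (e.g. `pip install -U datasets`), retry with a fresh cache dir
# so that no cache entries created by the buggy version are reused.
import datasets

print(datasets.__version__)
ds = datasets.load_dataset("wentingzhao/anthropic-hh-first-prompt", cache_dir="cache_fresh")
print(ds["train"])
```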
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6536/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6536/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6535
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6535/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6535/comments
https://api.github.com/repos/huggingface/datasets/issues/6535/events
https://github.com/huggingface/datasets/issues/6535
2,056,264,339
I_kwDODunzps56kBqT
6,535
IndexError: Invalid key: 47682 is out of bounds for size 0 while using PEFT
{ "avatar_url": "https://avatars.githubusercontent.com/u/57484266?v=4", "events_url": "https://api.github.com/users/MahavirDabas18/events{/privacy}", "followers_url": "https://api.github.com/users/MahavirDabas18/followers", "following_url": "https://api.github.com/users/MahavirDabas18/following{/other_user}", "gists_url": "https://api.github.com/users/MahavirDabas18/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MahavirDabas18", "id": 57484266, "login": "MahavirDabas18", "node_id": "MDQ6VXNlcjU3NDg0MjY2", "organizations_url": "https://api.github.com/users/MahavirDabas18/orgs", "received_events_url": "https://api.github.com/users/MahavirDabas18/received_events", "repos_url": "https://api.github.com/users/MahavirDabas18/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MahavirDabas18/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MahavirDabas18/subscriptions", "type": "User", "url": "https://api.github.com/users/MahavirDabas18" }
[]
open
false
null
[]
null
3
2023-12-26T10:14:33Z
2024-02-05T08:42:31Z
null
NONE
null
### Describe the bug

I am trying to fine-tune the T5 model on the paraphrasing task. When I run the same code without `model = get_peft_model(model, config)`, the model trains without any issues. However, using the model returned from `get_peft_model` raises the following error from `datasets`: IndexError: Invalid key: 47682 is out of bounds for size 0.

I had raised this in https://github.com/huggingface/peft/issues/1299#issue-2056173386 and they suggested that I raise it here.

Here is the complete error:

```
IndexError                                Traceback (most recent call last)
in <cell line: 1>()
----> 1 trainer.train()

11 frames
/usr/local/lib/python3.10/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1553     hf_hub_utils.enable_progress_bars()
   1554 else:
-> 1555     return inner_training_loop(
   1556         args=args,
   1557         resume_from_checkpoint=resume_from_checkpoint,

/usr/local/lib/python3.10/dist-packages/transformers/trainer.py in _inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
   1836
   1837     step = -1
-> 1838     for step, inputs in enumerate(epoch_iterator):
   1839         total_batched_samples += 1
   1840         if rng_to_sync:

/usr/local/lib/python3.10/dist-packages/accelerate/data_loader.py in __iter__(self)
    446     # We iterate one batch ahead to check when we are at the end
    447     try:
--> 448         current_batch = next(dataloader_iter)
    449     except StopIteration:
    450         yield

/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py in __next__(self)
    628     # TODO(https://github.com/pytorch/pytorch/issues/76750)
    629     self._reset()  # type: ignore[call-arg]
--> 630     data = self._next_data()
    631     self._num_yielded += 1
    632     if self._dataset_kind == _DatasetKind.Iterable and \

/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py in _next_data(self)
    672 def _next_data(self):
    673     index = self._next_index()  # may raise StopIteration
--> 674     data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
    675     if self._pin_memory:
    676         data = _utils.pin_memory.pin_memory(data, self._pin_memory_device)

/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
     47 if self.auto_collation:
     48     if hasattr(self.dataset, "__getitems__") and self.dataset.__getitems__:
---> 49         data = self.dataset.__getitems__(possibly_batched_index)
     50     else:
     51         data = [self.dataset[idx] for idx in possibly_batched_index]

/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py in __getitems__(self, keys)
   2802 def __getitems__(self, keys: List) -> List:
   2803     """Can be used to get a batch using a list of integers indices."""
-> 2804     batch = self.__getitem__(keys)
   2805     n_examples = len(batch[next(iter(batch))])
   2806     return [{col: array[i] for col, array in batch.items()} for i in range(n_examples)]

/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py in __getitem__(self, key)
   2798 def __getitem__(self, key):  # noqa: F811
   2799     """Can be used to index columns (by string names) or rows (by integer index or iterable of indices or bools)."""
-> 2800     return self._getitem(key)
   2801
   2802 def __getitems__(self, keys: List) -> List:

/usr/local/lib/python3.10/dist-packages/datasets/arrow_dataset.py in _getitem(self, key, **kwargs)
   2782 format_kwargs = format_kwargs if format_kwargs is not None else {}
   2783 formatter = get_formatter(format_type, features=self._info.features, **format_kwargs)
-> 2784 pa_subtable = query_table(self._data, key, indices=self._indices if self._indices is not None else None)
   2785 formatted_output = format_table(
   2786     pa_subtable, key, formatter=formatter, format_columns=format_columns, output_all_columns=output_all_columns

/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py in query_table(table, key, indices)
    581 else:
    582     size = indices.num_rows if indices is not None else table.num_rows
--> 583 _check_valid_index_key(key, size)
    584 # Query the main table
    585 if indices is None:

/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py in _check_valid_index_key(key, size)
    534 elif isinstance(key, Iterable):
    535     if len(key) > 0:
--> 536         _check_valid_index_key(int(max(key)), size=size)
    537         _check_valid_index_key(int(min(key)), size=size)
    538 else:

/usr/local/lib/python3.10/dist-packages/datasets/formatting/formatting.py in _check_valid_index_key(key, size)
    524 if isinstance(key, int):
    525     if (key < 0 and key + size < 0) or (key >= size):
--> 526         raise IndexError(f"Invalid key: {key} is out of bounds for size {size}")
    527     return
    528 elif isinstance(key, slice):

IndexError: Invalid key: 47682 is out of bounds for size 0
```

### Steps to reproduce the bug

```python
device = "cuda:0" if torch.cuda.is_available() else "cpu"

# defining model name for tokenizer and model loading
model_name = "t5-small"

# loading the tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_name)

def preprocess_function(data, tokenizer):
    inputs = [f"Paraphrase this sentence: {doc}" for doc in data["text"]]
    model_inputs = tokenizer(inputs, max_length=150, truncation=True)
    labels = [ast.literal_eval(i)[0] for i in data['paraphrases']]
    labels = tokenizer(labels, max_length=150, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000))
val_dataset = load_dataset("humarin/chatgpt-paraphrases", split="train").shuffle(seed=42).select(range(50000, 55000))

tokenized_train = train_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True)
tokenized_val = val_dataset.map(lambda batch: preprocess_function(batch, tokenizer), batched=True)

def print_trainable_parameters(model):
    """Prints the number of trainable parameters in the model."""
    trainable_params = 0
    all_param = 0
    for _, param in model.named_parameters():
        all_param += param.numel()
        if param.requires_grad:
            trainable_params += param.numel()
    print(
        f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
    )

config = LoraConfig(
    r=16,  # attention heads
    lora_alpha=32,  # alpha scaling
    lora_dropout=0.05,
    bias="none",
    task_type="Seq2Seq"
)

# loading the model
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
model = get_peft_model(model, config)
print_trainable_parameters(model)

# loading the data collator
data_collator = DataCollatorForSeq2Seq(
    tokenizer=tokenizer,
    model=model,
    label_pad_token_id=-100,
    padding="longest"
)

# defining the training arguments
training_args = Seq2SeqTrainingArguments(
    output_dir=os.getcwd(),
    evaluation_strategy="epoch",
    save_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=1e-3,
    save_total_limit=3,
    load_best_model_at_end=True,
    num_train_epochs=1,
    predict_with_generate=True
)

def compute_metric_with_extra(tokenizer):
    def compute_metrics(eval_preds):
        metric = evaluate.load('rouge')
        preds, labels = eval_preds
        # decode preds and labels
        labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
        decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
        decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
        # rougeLSum expects newline after each sentence
        decoded_preds = ["\n".join(nltk.sent_tokenize(pred.strip())) for pred in decoded_preds]
        decoded_labels = ["\n".join(nltk.sent_tokenize(label.strip())) for label in decoded_labels]
        result = metric.compute(predictions=decoded_preds, references=decoded_labels, use_stemmer=True)
        return result
    return compute_metrics

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_val,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metric_with_extra(tokenizer)
)

trainer.train()
```

### Expected behavior

I would want the trainer to train normally, as it did before I used `model = get_peft_model(model, config)`.

### Environment info

- datasets version: 2.16.0
- peft version: 0.7.1
- transformers version: 4.35.2
- accelerate version: 0.25.0
- Python: 3.10.12
- environment: Google Colab
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6535/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6535/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6534/comments
https://api.github.com/repos/huggingface/datasets/issues/6534/events
https://github.com/huggingface/datasets/issues/6534
2,056,002,548
I_kwDODunzps56jBv0
6,534
How to configure multiple folders in the same zip package
{ "avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4", "events_url": "https://api.github.com/users/d710055071/events{/privacy}", "followers_url": "https://api.github.com/users/d710055071/followers", "following_url": "https://api.github.com/users/d710055071/following{/other_user}", "gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/d710055071", "id": 12895488, "login": "d710055071", "node_id": "MDQ6VXNlcjEyODk1NDg4", "organizations_url": "https://api.github.com/users/d710055071/orgs", "received_events_url": "https://api.github.com/users/d710055071/received_events", "repos_url": "https://api.github.com/users/d710055071/repos", "site_admin": false, "starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d710055071/subscriptions", "type": "User", "url": "https://api.github.com/users/d710055071" }
[]
open
false
null
[]
null
1
2023-12-26T03:56:20Z
2023-12-26T06:31:16Z
null
CONTRIBUTOR
null
How should I write the "config" section in the README when all the data (train and test) is in a single zip file, i.e. a train folder and a test folder inside data.zip?
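For reference, the dataset card supports declaring splits explicitly in its YAML header; a hedged sketch of that mechanism, assuming the archive is extracted so that `train/` and `test/` are plain folders in the repo (whether members inside `data.zip` can be addressed directly this way is not confirmed here):

```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: "train/*"
  - split: test
    path: "test/*"
```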
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6534/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6534/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6533
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6533/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6533/comments
https://api.github.com/repos/huggingface/datasets/issues/6533/events
https://github.com/huggingface/datasets/issues/6533
2,055,929,101
I_kwDODunzps56iv0N
6,533
ted_talks_iwslt | Error: Config name is missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/35850903?v=4", "events_url": "https://api.github.com/users/rayliuca/events{/privacy}", "followers_url": "https://api.github.com/users/rayliuca/followers", "following_url": "https://api.github.com/users/rayliuca/following{/other_user}", "gists_url": "https://api.github.com/users/rayliuca/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rayliuca", "id": 35850903, "login": "rayliuca", "node_id": "MDQ6VXNlcjM1ODUwOTAz", "organizations_url": "https://api.github.com/users/rayliuca/orgs", "received_events_url": "https://api.github.com/users/rayliuca/received_events", "repos_url": "https://api.github.com/users/rayliuca/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rayliuca/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rayliuca/subscriptions", "type": "User", "url": "https://api.github.com/users/rayliuca" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
2
2023-12-26T00:38:18Z
2023-12-30T18:58:21Z
2023-12-30T16:09:50Z
NONE
null
### Describe the bug

Running `load_dataset` with the newest `datasets` library (as below) on `ted_talks_iwslt` with year-pair data throws the error "Config name is missing".

See also: https://huggingface.co/datasets/ted_talks_iwslt/discussions/3

This is likely caused by #6493, where the `and not config_kwargs` part of the if logic was removed: https://github.com/huggingface/datasets/blob/ef3b5dd3633995c95d77f35fb17f89ff44990bc4/src/datasets/builder.py#L512

### Steps to reproduce the bug

Run:

```python
load_dataset("ted_talks_iwslt", language_pair=("ja", "en"), year="2015")
```

### Expected behavior

Load the data without error.

### Environment info

datasets 2.16.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6533/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6533/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6532/comments
https://api.github.com/repos/huggingface/datasets/issues/6532/events
https://github.com/huggingface/datasets/issues/6532
2,055,631,201
I_kwDODunzps56hnFh
6,532
[Feature request] Indexing datasets by a customly-defined id field to enable random access dataset items via the id
{ "avatar_url": "https://avatars.githubusercontent.com/u/3377221?v=4", "events_url": "https://api.github.com/users/Yu-Shi/events{/privacy}", "followers_url": "https://api.github.com/users/Yu-Shi/followers", "following_url": "https://api.github.com/users/Yu-Shi/following{/other_user}", "gists_url": "https://api.github.com/users/Yu-Shi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Yu-Shi", "id": 3377221, "login": "Yu-Shi", "node_id": "MDQ6VXNlcjMzNzcyMjE=", "organizations_url": "https://api.github.com/users/Yu-Shi/orgs", "received_events_url": "https://api.github.com/users/Yu-Shi/received_events", "repos_url": "https://api.github.com/users/Yu-Shi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Yu-Shi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Yu-Shi/subscriptions", "type": "User", "url": "https://api.github.com/users/Yu-Shi" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
5
2023-12-25T11:37:10Z
2024-06-05T07:54:54Z
null
NONE
null
### Feature request

Some datasets may contain an id-like field, for example the `id` field in [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and the `_id` field in [BeIR/dbpedia-entity](https://huggingface.co/datasets/BeIR/dbpedia-entity). HF datasets support efficient random access by row, but not by these kinds of id fields. I wonder if it is possible to add support for indexing by a custom "id-like" field to enable random access via such ids. The ids may be numbers or strings.

### Motivation

In some cases, especially during inference/evaluation, I may want to find the item that has a specific id, as defined by the dataset itself. For example, in a typical re-ranking setting in information retrieval, the user may want to re-rank the set of candidate documents for each query. The input is usually presented as a TREC-style run file, with the following format:

```
<qid> Q0 <docno> <rank> <score> <tag>
```

The re-ranking program should be able to fetch the queries and documents according to the `<qid>` and `<docno>`, which are the original ids defined in the query/document datasets. To accomplish this, I have to iterate over the whole HF dataset to get the mapping from real ids to row ids every time I start the program, which is time-consuming. I would therefore like HF datasets to provide an option to index by a custom id column, not only by row.

### Your contribution

I'm not an expert in this project and I'm afraid I'm not able to contribute code.
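As a stopgap with the current API, the mapping from the dataset's own ids to row indices can be built once and reused for O(1) lookups; a minimal sketch (the dataset name and the `_id` column are placeholders for any dataset with an id-like field):

```python
# Build an id -> row-index mapping once, then fetch items by their dataset-defined id.
from datasets import load_dataset

ds = load_dataset("some_user/some_corpus", split="train")        # placeholder dataset
id_to_row = {doc_id: i for i, doc_id in enumerate(ds["_id"])}    # one linear pass over the id column
doc = ds[id_to_row["doc-42"]]                                    # O(1) lookups afterwards
```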
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6532/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6532/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6531
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6531/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6531/comments
https://api.github.com/repos/huggingface/datasets/issues/6531/events
https://github.com/huggingface/datasets/pull/6531
2,055,201,605
PR_kwDODunzps5it5Sm
6,531
Add polars compatibility
{ "avatar_url": "https://avatars.githubusercontent.com/u/11325244?v=4", "events_url": "https://api.github.com/users/psmyth94/events{/privacy}", "followers_url": "https://api.github.com/users/psmyth94/followers", "following_url": "https://api.github.com/users/psmyth94/following{/other_user}", "gists_url": "https://api.github.com/users/psmyth94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/psmyth94", "id": 11325244, "login": "psmyth94", "node_id": "MDQ6VXNlcjExMzI1MjQ0", "organizations_url": "https://api.github.com/users/psmyth94/orgs", "received_events_url": "https://api.github.com/users/psmyth94/received_events", "repos_url": "https://api.github.com/users/psmyth94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/psmyth94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/psmyth94/subscriptions", "type": "User", "url": "https://api.github.com/users/psmyth94" }
[]
closed
false
null
[]
null
7
2023-12-24T20:03:23Z
2024-03-08T19:29:25Z
2024-03-08T15:22:58Z
CONTRIBUTOR
null
Hey there,

I've just finished adding support to convert and format to `polars.DataFrame`. This was in response to the open issue about integrating Polars ([#3334](https://github.com/huggingface/datasets/issues/3334)). Datasets can be switched to Polars format via `Dataset.set_format("polars")`. I've also included `to_polars` and `from_polars`. All polars functions are checked via `config.POLARS_AVAILABLE`.

A few notes: this only supports `DataFrame`s and not `LazyFrame`s. That could probably be integrated fairly easily via an `is_lazy` arg in `set_format` and `to_polars`.

Let me know your feedback.
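A hedged usage sketch of the API described above, assuming a `datasets` release that includes this PR and that `polars` is installed:

```python
# Rows and slices come back as polars DataFrames once the "polars" format is set.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": [0, 1, 0]})
ds.set_format("polars")
print(ds[:2])            # polars.DataFrame with two rows
df = ds.to_polars()      # full conversion; Dataset.from_polars(df) goes the other way
```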
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 4, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 8, "url": "https://api.github.com/repos/huggingface/datasets/issues/6531/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6531/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6531.diff", "html_url": "https://github.com/huggingface/datasets/pull/6531", "merged_at": "2024-03-08T15:22:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/6531.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6531" }
true
https://api.github.com/repos/huggingface/datasets/issues/6530
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6530/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6530/comments
https://api.github.com/repos/huggingface/datasets/issues/6530/events
https://github.com/huggingface/datasets/issues/6530
2,054,817,609
I_kwDODunzps56egdJ
6,530
Impossible to save a mapped dataset to disk
{ "avatar_url": "https://avatars.githubusercontent.com/u/17604849?v=4", "events_url": "https://api.github.com/users/kopyl/events{/privacy}", "followers_url": "https://api.github.com/users/kopyl/followers", "following_url": "https://api.github.com/users/kopyl/following{/other_user}", "gists_url": "https://api.github.com/users/kopyl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kopyl", "id": 17604849, "login": "kopyl", "node_id": "MDQ6VXNlcjE3NjA0ODQ5", "organizations_url": "https://api.github.com/users/kopyl/orgs", "received_events_url": "https://api.github.com/users/kopyl/received_events", "repos_url": "https://api.github.com/users/kopyl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kopyl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kopyl/subscriptions", "type": "User", "url": "https://api.github.com/users/kopyl" }
[]
open
false
null
[]
null
1
2023-12-23T15:18:27Z
2023-12-24T09:40:30Z
null
NONE
null
### Describe the bug

I want to play around with different hyperparameters when training, but I don't want to re-map my dataset with 3 million samples each time, which takes tens of hours when I [fully fine-tune SDXL](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py).

After I do the mapping like this:

```
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True)
train_dataset = train_dataset.map(
    compute_vae_encodings_fn,
    batched=True,
    batch_size=16,
)
```

and try to save it like this: `train_dataset.save_to_disk("test")`, I get this error ([full traceback](https://pastebin.com/kq3vt739)):

```
TypeError: Object of type function is not JSON serializable
The format kwargs must be JSON serializable, but key 'transform' isn't.
```

What is interesting is that pushing to the Hub works:

`train_dataset.push_to_hub("kopyl/mapped-833-icons-sdxl-1024-dataset", token=True)`

Here is the link to the pushed dataset: https://huggingface.co/datasets/kopyl/mapped-833-icons-sdxl-1024-dataset

### Steps to reproduce the bug

Here is a self-contained notebook: https://colab.research.google.com/drive/1RtCsEMVcwWcMwlWURk_cj_9xUBHz065M?usp=sharing

### Expected behavior

It should be easily saved to disk.

### Environment info

NVIDIA A100, Linux (NC24ads A100 v4 from Azure), CUDA 12.2. [pip freeze](https://pastebin.com/QTNb6iru)
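The error says the dataset's current format carries a Python `transform` function that cannot be serialized, so one hedged workaround sketch is to drop the on-the-fly transform before saving and re-apply it after loading (the toy dataset and identity transform below only stand in for the SDXL pipeline):

```python
# Strip the non-serializable transform before save_to_disk, then re-apply it after reloading.
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"x": [1, 2, 3]})
ds.set_transform(lambda batch: batch)   # stand-in for the embedding/VAE transform
plain = ds.with_format(None)            # the format (and its transform) is dropped here
plain.save_to_disk("test")

reloaded = load_from_disk("test")
# reloaded.set_transform(...) to restore the on-the-fly processing if needed
```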
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6530/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6530/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6529
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6529/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6529/comments
https://api.github.com/repos/huggingface/datasets/issues/6529/events
https://github.com/huggingface/datasets/issues/6529
2,054,209,449
I_kwDODunzps56cL-p
6,529
Impossible to only download a test split
{ "avatar_url": "https://avatars.githubusercontent.com/u/28439529?v=4", "events_url": "https://api.github.com/users/ysig/events{/privacy}", "followers_url": "https://api.github.com/users/ysig/followers", "following_url": "https://api.github.com/users/ysig/following{/other_user}", "gists_url": "https://api.github.com/users/ysig/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ysig", "id": 28439529, "login": "ysig", "node_id": "MDQ6VXNlcjI4NDM5NTI5", "organizations_url": "https://api.github.com/users/ysig/orgs", "received_events_url": "https://api.github.com/users/ysig/received_events", "repos_url": "https://api.github.com/users/ysig/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ysig/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ysig/subscriptions", "type": "User", "url": "https://api.github.com/users/ysig" }
[]
open
false
null
[]
null
2
2023-12-22T16:56:32Z
2024-02-02T00:05:04Z
null
NONE
null
I've spent a significant amount of time trying to locate the split object inside my custom `_split_generators()` function. After diving [into the code](https://github.com/huggingface/datasets/blob/5ff3670c18ed34fa8ddfa70a9aa403ae6cc9ad54/src/datasets/load.py#L2558) I realized that `download_and_prepare` is executed before the split is passed to the dataset builder in `as_dataset`.

If I'm not missing something, this seems like bad design for the following use case:

> Imagine there is a huge dataset that has an evaluation test set and you just want to download and run it to compare your method.

Is there a current workaround that can help me achieve the same result?

Thank you.
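As a possible workaround today, streaming skips `download_and_prepare`, so only the files backing the requested split are fetched while iterating; a hedged sketch (the dataset name is a placeholder, and for script-based datasets the behaviour depends on how the script handles streaming in `_split_generators`):

```python
# Stream just the evaluation split instead of preparing the whole dataset locally.
from datasets import load_dataset

test_stream = load_dataset("some_user/huge_dataset", split="test", streaming=True)
for example in test_stream.take(5):   # inspect a few examples without a full download
    print(example)
```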
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6529/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6529/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6528/comments
https://api.github.com/repos/huggingface/datasets/issues/6528/events
https://github.com/huggingface/datasets/pull/6528
2,053,996,494
PR_kwDODunzps5ip9JH
6,528
set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-22T14:23:18Z
2023-12-22T14:31:42Z
2023-12-22T14:25:34Z
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6528/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6528/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6528.diff", "html_url": "https://github.com/huggingface/datasets/pull/6528", "merged_at": "2023-12-22T14:25:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/6528.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6528" }
true
https://api.github.com/repos/huggingface/datasets/issues/6527
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6527/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6527/comments
https://api.github.com/repos/huggingface/datasets/issues/6527/events
https://github.com/huggingface/datasets/pull/6527
2,053,966,748
PR_kwDODunzps5ip2vd
6,527
Release: 2.16.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-22T13:59:56Z
2023-12-22T14:24:12Z
2023-12-22T14:17:55Z
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6527/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6527/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6527.diff", "html_url": "https://github.com/huggingface/datasets/pull/6527", "merged_at": "2023-12-22T14:17:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/6527.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6527" }
true
https://api.github.com/repos/huggingface/datasets/issues/6526
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6526/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6526/comments
https://api.github.com/repos/huggingface/datasets/issues/6526/events
https://github.com/huggingface/datasets/pull/6526
2,053,726,451
PR_kwDODunzps5ipB5v
6,526
Preserve order of configs and splits when using Parquet exports
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
2023-12-22T10:35:56Z
2023-12-22T11:42:22Z
2023-12-22T11:36:14Z
MEMBER
null
Preserve order of configs and splits, as defined in dataset infos. Fix #6521.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6526/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6526/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6526.diff", "html_url": "https://github.com/huggingface/datasets/pull/6526", "merged_at": "2023-12-22T11:36:14Z", "patch_url": "https://github.com/huggingface/datasets/pull/6526.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6526" }
true
https://api.github.com/repos/huggingface/datasets/issues/6525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6525/comments
https://api.github.com/repos/huggingface/datasets/issues/6525/events
https://github.com/huggingface/datasets/pull/6525
2,053,119,357
PR_kwDODunzps5im-lL
6,525
BBox type
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-21T22:13:27Z
2024-01-11T06:34:51Z
2023-12-21T22:39:27Z
MEMBER
null
see [internal discussion](https://huggingface.slack.com/archives/C02EK7C3SHW/p1703097195609209)

Draft to get some feedback on a possible `BBox` feature type that can be used to get object detection bounding box data in one format or another.

```python
>>> from datasets import load_dataset, BBox
>>> ds = load_dataset("svhn", "full_numbers", split="train")
>>> ds[0]
{
    'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=107x46 at 0x126409BE0>,
    'digits': {'bbox': [[38, 1, 21, 40], [57, 3, 16, 40]], 'label': [4, 6]}
}
>>> ds = ds.rename_column("digits", "annotations").cast_column("annotations", BBox(format="coco"))
>>> ds[0]
{
    'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=107x46 at 0x147730070>,
    'annotations': [{'bbox': [38, 1, 21, 40], 'category_id': 4}, {'bbox': [57, 3, 16, 40], 'category_id': 6}]
}
```

Note that it's a type for a list of bounding boxes, not just one, which would be needed to switch from one format to another using type casting.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6525/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6525/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6525.diff", "html_url": "https://github.com/huggingface/datasets/pull/6525", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6525.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6525" }
true
https://api.github.com/repos/huggingface/datasets/issues/6524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6524/comments
https://api.github.com/repos/huggingface/datasets/issues/6524/events
https://github.com/huggingface/datasets/issues/6524
2,053,076,311
I_kwDODunzps56X3VX
6,524
Streaming the Pile: Missing Files
{ "avatar_url": "https://avatars.githubusercontent.com/u/23347756?v=4", "events_url": "https://api.github.com/users/FelixLabelle/events{/privacy}", "followers_url": "https://api.github.com/users/FelixLabelle/followers", "following_url": "https://api.github.com/users/FelixLabelle/following{/other_user}", "gists_url": "https://api.github.com/users/FelixLabelle/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/FelixLabelle", "id": 23347756, "login": "FelixLabelle", "node_id": "MDQ6VXNlcjIzMzQ3NzU2", "organizations_url": "https://api.github.com/users/FelixLabelle/orgs", "received_events_url": "https://api.github.com/users/FelixLabelle/received_events", "repos_url": "https://api.github.com/users/FelixLabelle/repos", "site_admin": false, "starred_url": "https://api.github.com/users/FelixLabelle/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/FelixLabelle/subscriptions", "type": "User", "url": "https://api.github.com/users/FelixLabelle" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
2023-12-21T21:25:09Z
2023-12-22T09:17:05Z
2023-12-22T09:17:05Z
NONE
null
### Describe the bug The Pile does not stream; a "File not Found" error is returned. It looks like the Pile's files have been moved. ### Steps to reproduce the bug To reproduce, run the following code: ``` from datasets import load_dataset dataset = load_dataset('EleutherAI/pile', 'en', split='train', streaming=True) next(iter(dataset)) ``` I get the following error: `FileNotFoundError: https://the-eye.eu/public/AI/pile/train/00.jsonl.zst` ### Expected behavior Return the data in a stream. ### Environment info - `datasets` version: 2.12.0 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.11.5 - Huggingface_hub version: 0.15.1 - PyArrow version: 11.0.0 - Pandas version: 2.0.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6524/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6524/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6523
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6523/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6523/comments
https://api.github.com/repos/huggingface/datasets/issues/6523/events
https://github.com/huggingface/datasets/pull/6523
2,052,643,484
PR_kwDODunzps5ilV6d
6,523
fix tests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-21T15:36:21Z
2023-12-21T15:56:54Z
2023-12-21T15:50:38Z
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6523/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6523/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6523.diff", "html_url": "https://github.com/huggingface/datasets/pull/6523", "merged_at": "2023-12-21T15:50:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6523.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6523" }
true
https://api.github.com/repos/huggingface/datasets/issues/6522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6522/comments
https://api.github.com/repos/huggingface/datasets/issues/6522/events
https://github.com/huggingface/datasets/issues/6522
2,052,332,528
I_kwDODunzps56VBvw
6,522
Loading HF Hub Dataset (private org repo) fails to load all features
{ "avatar_url": "https://avatars.githubusercontent.com/u/6579034?v=4", "events_url": "https://api.github.com/users/versipellis/events{/privacy}", "followers_url": "https://api.github.com/users/versipellis/followers", "following_url": "https://api.github.com/users/versipellis/following{/other_user}", "gists_url": "https://api.github.com/users/versipellis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/versipellis", "id": 6579034, "login": "versipellis", "node_id": "MDQ6VXNlcjY1NzkwMzQ=", "organizations_url": "https://api.github.com/users/versipellis/orgs", "received_events_url": "https://api.github.com/users/versipellis/received_events", "repos_url": "https://api.github.com/users/versipellis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/versipellis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/versipellis/subscriptions", "type": "User", "url": "https://api.github.com/users/versipellis" }
[]
open
false
null
[]
null
0
2023-12-21T12:26:35Z
2023-12-21T13:24:31Z
null
NONE
null
### Describe the bug When pushing a `Dataset` with multiple `Features` (`input`, `output`, `tags`) to Huggingface Hub (private org repo), and later downloading the `Dataset`, only `input` and `output` load - I believe the expected behavior is for all `Features` to be loaded by default? ### Steps to reproduce the bug Pushing the data. `data_concat` is a `list` of `dict`s. ```python for datum in data_concat: datum_tags = {d["key"]: d["value"] for d in datum["tags"]} split_fraction = # some logic that generates a train/test split number if split_faction < test_fraction: data_test.append(datum) else: data_train.append(datum) dataset = DatasetDict( { "train": Dataset.from_list(data_train), "test": Dataset.from_list(data_test), "full": Dataset.from_list(data_concat), }, ) dataset_shuffled = dataset.shuffle(seed=shuffle_seed) dataset_shuffled.push_to_hub( repo_id=hf_repo_id, private=True, config_name=m, revision=revision, token=hf_token, ) ``` Loading it later: ```python dataset = datasets.load_dataset( path=hf_repo_id, name=name, token=hf_token, ) ``` Produces: ``` DatasetDict({ train: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) test: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) full: Dataset({ features: ['input', 'output'], num_rows: <obfuscated> }) }) ``` ### Expected behavior The expected result is below: ``` DatasetDict({ train: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) test: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) full: Dataset({ features: ['input', 'output', 'tags'], num_rows: <obfuscated> }) }) ``` My workaround is as follows: ```python dsinfo = datasets.get_dataset_config_info( path=data_files, config_name=data_config, token=hf_token, ) allfeatures = dsinfo.features.copy() if "tags" not in allfeatures: allfeatures["tags"] = [{"key": Value(dtype="string", id=None), "value": Value(dtype="string", id=None)}] dataset = datasets.load_dataset( path=data_files, name=data_config, features=allfeatures, token=hf_token, ) ``` Interestingly enough (and perhaps a related bug?), if I don't add the `tags` to `allfeatures` above (i.e. 
only loading `input` and `output`), it throws an error when executing `load_dataset`: ``` ValueError: Couldn't cast tags: list<element: struct<key: string, value: string>> child 0, element: struct<key: string, value: string> child 0, key: string child 1, value: string input: <obfuscated> output: <obfuscated> -- schema metadata -- huggingface: '{"info": {"features": {"tags": [{"key": {"dtype": "string",' + 532 to {'input': <obfuscated>, 'output': <obfuscated> because column names don't match ``` Traceback for this: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/load.py", line 2152, in load_dataset builder_instance.download_and_prepare( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 948, in download_and_prepare self._download_and_prepare( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1043, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1805, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/bt/github/core/.venv/lib/python3.11/site-packages/datasets/builder.py", line 1950, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Environment info - `datasets` version: 2.15.0 - Platform: macOS-14.0-arm64-arm-64bit - Python version: 3.11.5 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6522/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6522/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6521
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6521/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6521/comments
https://api.github.com/repos/huggingface/datasets/issues/6521/events
https://github.com/huggingface/datasets/issues/6521
2,052,229,538
I_kwDODunzps56Uomi
6,521
The order of the splits is not preserved
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
1
2023-12-21T11:17:27Z
2023-12-22T11:36:15Z
2023-12-22T11:36:15Z
MEMBER
null
We had a regression and the order of the splits is not preserved. They are alphabetically sorted, instead of preserving the original "train", "validation", "test" order. Check: In branch "main" ```python In [9]: dataset = load_dataset("adversarial_qa", "adversarialQA") In [10]: dataset Out[10]: DatasetDict({ test: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) train: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 30000 }) validation: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) }) ``` Before (2.15.0) it was: ```python DatasetDict({ train: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 30000 }) validation: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) test: Dataset({ features: ['id', 'title', 'context', 'question', 'answers', 'metadata'], num_rows: 3000 }) }) ``` See issues: - https://huggingface.co/datasets/adversarial_qa/discussions/3 - https://huggingface.co/datasets/beans/discussions/4 This is a regression because it was previously fixed. See: - #6196 - #5728
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6521/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6521/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6520
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6520/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6520/comments
https://api.github.com/repos/huggingface/datasets/issues/6520/events
https://github.com/huggingface/datasets/pull/6520
2,052,059,078
PR_kwDODunzps5ijUiw
6,520
Support commit_description parameter in push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
2023-12-21T09:36:11Z
2023-12-21T14:49:47Z
2023-12-21T14:43:35Z
MEMBER
null
Support `commit_description` parameter in `push_to_hub`. CC: @Wauplin
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6520/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6520/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6520.diff", "html_url": "https://github.com/huggingface/datasets/pull/6520", "merged_at": "2023-12-21T14:43:35Z", "patch_url": "https://github.com/huggingface/datasets/pull/6520.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6520" }
true
https://api.github.com/repos/huggingface/datasets/issues/6519
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6519/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6519/comments
https://api.github.com/repos/huggingface/datasets/issues/6519/events
https://github.com/huggingface/datasets/pull/6519
2,050,759,824
PR_kwDODunzps5ie4MA
6,519
Support push_to_hub canonical datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
4
2023-12-20T15:16:45Z
2023-12-21T14:48:20Z
2023-12-21T14:40:57Z
MEMBER
null
Support `push_to_hub` canonical datasets. This is necessary in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet Note that before this PR, the `repo_id` "dataset_name" was transformed to "user/dataset_name". This behavior was introduced by: - #6269
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6519/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6519/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6519.diff", "html_url": "https://github.com/huggingface/datasets/pull/6519", "merged_at": "2023-12-21T14:40:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/6519.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6519" }
true
https://api.github.com/repos/huggingface/datasets/issues/6518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6518/comments
https://api.github.com/repos/huggingface/datasets/issues/6518/events
https://github.com/huggingface/datasets/pull/6518
2,050,137,038
PR_kwDODunzps5icu-W
6,518
fix get_metadata_patterns function args error
{ "avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4", "events_url": "https://api.github.com/users/d710055071/events{/privacy}", "followers_url": "https://api.github.com/users/d710055071/followers", "following_url": "https://api.github.com/users/d710055071/following{/other_user}", "gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/d710055071", "id": 12895488, "login": "d710055071", "node_id": "MDQ6VXNlcjEyODk1NDg4", "organizations_url": "https://api.github.com/users/d710055071/orgs", "received_events_url": "https://api.github.com/users/d710055071/received_events", "repos_url": "https://api.github.com/users/d710055071/repos", "site_admin": false, "starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d710055071/subscriptions", "type": "User", "url": "https://api.github.com/users/d710055071" }
[]
closed
false
null
[]
null
3
2023-12-20T09:06:22Z
2023-12-21T15:14:17Z
2023-12-21T15:07:57Z
CONTRIBUTOR
null
Bug get_metadata_patterns arg error https://github.com/huggingface/datasets/issues/6517
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6518/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6518.diff", "html_url": "https://github.com/huggingface/datasets/pull/6518", "merged_at": "2023-12-21T15:07:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/6518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6518" }
true
https://api.github.com/repos/huggingface/datasets/issues/6517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6517/comments
https://api.github.com/repos/huggingface/datasets/issues/6517/events
https://github.com/huggingface/datasets/issues/6517
2,050,121,588
I_kwDODunzps56Ml90
6,517
Bug get_metadata_patterns arg error
{ "avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4", "events_url": "https://api.github.com/users/d710055071/events{/privacy}", "followers_url": "https://api.github.com/users/d710055071/followers", "following_url": "https://api.github.com/users/d710055071/following{/other_user}", "gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/d710055071", "id": 12895488, "login": "d710055071", "node_id": "MDQ6VXNlcjEyODk1NDg4", "organizations_url": "https://api.github.com/users/d710055071/orgs", "received_events_url": "https://api.github.com/users/d710055071/received_events", "repos_url": "https://api.github.com/users/d710055071/repos", "site_admin": false, "starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d710055071/subscriptions", "type": "User", "url": "https://api.github.com/users/d710055071" }
[]
closed
false
null
[]
null
0
2023-12-20T08:56:44Z
2023-12-22T00:24:23Z
2023-12-22T00:24:23Z
CONTRIBUTOR
null
https://github.com/huggingface/datasets/blob/3f149204a2a5948287adcade5e90707aa5207a92/src/datasets/load.py#L1240C1-L1240C69 metadata_patterns = get_metadata_patterns(base_path, download_config=self.download_config)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6517/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6517/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6516
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6516/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6516/comments
https://api.github.com/repos/huggingface/datasets/issues/6516/events
https://github.com/huggingface/datasets/pull/6516
2,050,033,322
PR_kwDODunzps5icYX0
6,516
Support huggingface-hub pre-releases
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
2023-12-20T07:52:29Z
2023-12-20T08:51:34Z
2023-12-20T08:44:44Z
MEMBER
null
Support `huggingface-hub` pre-releases. This way we will have our CI green when testing `huggingface-hub` release candidates. See: https://github.com/huggingface/datasets/tree/ci-test-huggingface-hub-v0.20.0.rc1 Close #6513.
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6516/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6516/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6516.diff", "html_url": "https://github.com/huggingface/datasets/pull/6516", "merged_at": "2023-12-20T08:44:44Z", "patch_url": "https://github.com/huggingface/datasets/pull/6516.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6516" }
true
https://api.github.com/repos/huggingface/datasets/issues/6515
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6515/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6515/comments
https://api.github.com/repos/huggingface/datasets/issues/6515/events
https://github.com/huggingface/datasets/issues/6515
2,049,724,251
I_kwDODunzps56LE9b
6,515
Why call http_head() when fsspec_head() succeeds
{ "avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4", "events_url": "https://api.github.com/users/d710055071/events{/privacy}", "followers_url": "https://api.github.com/users/d710055071/followers", "following_url": "https://api.github.com/users/d710055071/following{/other_user}", "gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/d710055071", "id": 12895488, "login": "d710055071", "node_id": "MDQ6VXNlcjEyODk1NDg4", "organizations_url": "https://api.github.com/users/d710055071/orgs", "received_events_url": "https://api.github.com/users/d710055071/received_events", "repos_url": "https://api.github.com/users/d710055071/repos", "site_admin": false, "starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d710055071/subscriptions", "type": "User", "url": "https://api.github.com/users/d710055071" }
[]
closed
false
null
[]
null
0
2023-12-20T02:25:51Z
2023-12-26T05:35:46Z
2023-12-26T05:35:46Z
CONTRIBUTOR
null
https://github.com/huggingface/datasets/blob/a91582de288d98e94bcb5ab634ca1cfeeff544c5/src/datasets/utils/file_utils.py#L510C1-L523C14
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6515/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6515/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6514/comments
https://api.github.com/repos/huggingface/datasets/issues/6514/events
https://github.com/huggingface/datasets/pull/6514
2,049,600,663
PR_kwDODunzps5ia6Os
6,514
Cache backward compatibility with 2.15.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
4
2023-12-19T23:52:25Z
2023-12-21T21:14:11Z
2023-12-21T21:07:55Z
MEMBER
null
...for datasets without scripts. It takes into account the changes in cache from: - https://github.com/huggingface/datasets/pull/6493: switch to `config/version/commit_sha` schema - https://github.com/huggingface/datasets/pull/6454: fix `DataFilesDict` keys ordering when hashing. Requires https://github.com/huggingface/datasets/pull/6493 to be merged
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6514/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6514/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6514.diff", "html_url": "https://github.com/huggingface/datasets/pull/6514", "merged_at": "2023-12-21T21:07:55Z", "patch_url": "https://github.com/huggingface/datasets/pull/6514.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6514" }
true
https://api.github.com/repos/huggingface/datasets/issues/6513
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6513/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6513/comments
https://api.github.com/repos/huggingface/datasets/issues/6513/events
https://github.com/huggingface/datasets/issues/6513
2,048,869,151
I_kwDODunzps56H0Mf
6,513
Support huggingface-hub 0.20.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
0
2023-12-19T15:15:46Z
2023-12-20T08:44:45Z
2023-12-20T08:44:45Z
MEMBER
null
CI to test the support of `huggingface-hub` 0.20.0: https://github.com/huggingface/datasets/compare/main...ci-test-huggingface-hub-v0.20.0.rc1 We need to merge: - #6510 - #6512 - #6516
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6513/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6513/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6512
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6512/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6512/comments
https://api.github.com/repos/huggingface/datasets/issues/6512/events
https://github.com/huggingface/datasets/pull/6512
2,048,795,819
PR_kwDODunzps5iYI5z
6,512
Remove deprecated HfFolder
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-19T14:40:49Z
2023-12-19T20:21:13Z
2023-12-19T20:14:30Z
MEMBER
null
...and use `huggingface_hub.get_token()` instead
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6512/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6512/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6512.diff", "html_url": "https://github.com/huggingface/datasets/pull/6512", "merged_at": "2023-12-19T20:14:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/6512.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6512" }
true
https://api.github.com/repos/huggingface/datasets/issues/6511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6511/comments
https://api.github.com/repos/huggingface/datasets/issues/6511/events
https://github.com/huggingface/datasets/pull/6511
2,048,465,958
PR_kwDODunzps5iXAXR
6,511
Implement get dataset default config name
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
3
2023-12-19T11:26:19Z
2023-12-21T14:48:57Z
2023-12-21T14:42:41Z
MEMBER
null
Implement `get_dataset_default_config_name`. Now that we support setting a configuration as default in `push_to_hub` (see #6500), we need a programmatic way to know in advance which configuration is the default. This will be used in the Space to convert script-datasets to Parquet: https://huggingface.co/spaces/albertvillanova/convert-dataset-to-parquet Follow-up of: - #6500 CC: @severo
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6511/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6511/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6511.diff", "html_url": "https://github.com/huggingface/datasets/pull/6511", "merged_at": "2023-12-21T14:42:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/6511.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6511" }
true
https://api.github.com/repos/huggingface/datasets/issues/6510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6510/comments
https://api.github.com/repos/huggingface/datasets/issues/6510/events
https://github.com/huggingface/datasets/pull/6510
2,046,928,742
PR_kwDODunzps5iRyiV
6,510
Replace `list_files_info` with `list_repo_tree` in `push_to_hub`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
3
2023-12-18T15:34:19Z
2023-12-19T18:05:47Z
2023-12-19T17:58:34Z
COLLABORATOR
null
Starting from `huggingface_hub` 0.20.0, `list_files_info` will be deprecated in favor of `list_repo_tree` (see https://github.com/huggingface/huggingface_hub/pull/1910)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6510/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6510/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6510.diff", "html_url": "https://github.com/huggingface/datasets/pull/6510", "merged_at": "2023-12-19T17:58:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/6510.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6510" }
true
https://api.github.com/repos/huggingface/datasets/issues/6509
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6509/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6509/comments
https://api.github.com/repos/huggingface/datasets/issues/6509/events
https://github.com/huggingface/datasets/pull/6509
2,046,720,869
PR_kwDODunzps5iREyE
6,509
Better cast error when generating dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
2023-12-18T13:57:24Z
2023-12-19T09:37:12Z
2023-12-19T09:31:03Z
MEMBER
null
I want to improve the error message for datasets like https://huggingface.co/datasets/m-a-p/COIG-CQIA Cc @albertvillanova @severo is this new error ok ? Or should I use a dedicated error class ? New: ```python Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1920, in _prepare_split_single writer.write_table(table) File "/Users/quentinlhoest/hf/datasets/src/datasets/arrow_writer.py", line 574, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2322, in table_cast return cast_table_to_schema(table, schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2276, in cast_table_to_schema raise CastError( datasets.table.CastError: Couldn't cast instruction: string other: string index: string domain: list<item: string> child 0, item: string output: string task_type: struct<major: list<item: string>, minor: list<item: string>> child 0, major: list<item: string> child 0, item: string child 1, minor: list<item: string> child 0, item: string task_name_in_eng: string input: string to {'answer_from': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'human_verified': Value(dtype='bool', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'output': Value(dtype='string', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'copyright': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None)} because column names don't match During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/playground/ttest.py", line 74, in <module> load_dataset("m-a-p/COIG-CQIA") File "/Users/quentinlhoest/hf/datasets/src/datasets/load.py", line 2529, in load_dataset builder_instance.download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 936, in download_and_prepare self._download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1031, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1791, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1922, in _prepare_split_single raise DatasetGenerationCastError.from_cast_error( datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset All the data files must have the same columns, but at some point there are 3 new columns (other, index, task_name_in_eng) and 3 missing columns (answer_from, copyright, human_verified). 
This happened while the json dataset builder was generating data using hf://datasets/m-a-p/COIG-CQIA/coig_pc/coig_pc_core_sample.json (at revision b7b7ecf290f6515036c7c04bd8537228ac2eb474) Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations) ``` Previously: ```python Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1931, in _prepare_split_single writer.write_table(table) File "/Users/quentinlhoest/hf/datasets/src/datasets/arrow_writer.py", line 574, in write_table pa_table = table_cast(pa_table, self._schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2295, in table_cast return cast_table_to_schema(table, schema) File "/Users/quentinlhoest/hf/datasets/src/datasets/table.py", line 2253, in cast_table_to_schema raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match") ValueError: Couldn't cast task_type: struct<major: list<item: string>, minor: list<item: string>> child 0, major: list<item: string> child 0, item: string child 1, minor: list<item: string> child 0, item: string other: string instruction: string task_name_in_eng: string domain: list<item: string> child 0, item: string index: string output: string input: string to {'human_verified': Value(dtype='bool', id=None), 'task_type': {'major': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'minor': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'answer_from': Value(dtype='string', id=None), 'copyright': Value(dtype='string', id=None), 'instruction': Value(dtype='string', id=None), 'domain': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'output': Value(dtype='string', id=None), 'input': Value(dtype='string', id=None)} because column names don't match The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/Users/quentinlhoest/hf/datasets/playground/ttest.py", line 74, in <module> load_dataset("m-a-p/COIG-CQIA") File "/Users/quentinlhoest/hf/datasets/src/datasets/load.py", line 2529, in load_dataset builder_instance.download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 949, in download_and_prepare self._download_and_prepare( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1044, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1804, in _prepare_split for job_id, done, content in self._prepare_split_single( File "/Users/quentinlhoest/hf/datasets/src/datasets/builder.py", line 1949, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6509/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6509/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6509.diff", "html_url": "https://github.com/huggingface/datasets/pull/6509", "merged_at": "2023-12-19T09:31:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/6509.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6509" }
true
https://api.github.com/repos/huggingface/datasets/issues/6508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6508/comments
https://api.github.com/repos/huggingface/datasets/issues/6508/events
https://github.com/huggingface/datasets/pull/6508
2,045,733,273
PR_kwDODunzps5iNvAu
6,508
Read GeoParquet files using parquet reader
{ "avatar_url": "https://avatars.githubusercontent.com/u/23487320?v=4", "events_url": "https://api.github.com/users/weiji14/events{/privacy}", "followers_url": "https://api.github.com/users/weiji14/followers", "following_url": "https://api.github.com/users/weiji14/following{/other_user}", "gists_url": "https://api.github.com/users/weiji14/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/weiji14", "id": 23487320, "login": "weiji14", "node_id": "MDQ6VXNlcjIzNDg3MzIw", "organizations_url": "https://api.github.com/users/weiji14/orgs", "received_events_url": "https://api.github.com/users/weiji14/received_events", "repos_url": "https://api.github.com/users/weiji14/repos", "site_admin": false, "starred_url": "https://api.github.com/users/weiji14/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/weiji14/subscriptions", "type": "User", "url": "https://api.github.com/users/weiji14" }
[]
closed
false
null
[]
null
13
2023-12-18T04:50:37Z
2024-01-26T18:22:35Z
2024-01-26T16:18:41Z
CONTRIBUTOR
null
Let GeoParquet files with the file extensions `*.geoparquet` or `*.gpq` be readable by the default parquet reader. Those two file extensions are the ones most commonly used for GeoParquet files, and are included in the `gpq` validator tool at https://github.com/planetlabs/gpq/blob/e5576b4ee7306b4d2259d56c879465a9364dab90/cmd/gpq/command/convert.go#L73-L75 Addresses https://github.com/huggingface/datasets/issues/6438
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6508/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6508/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6508.diff", "html_url": "https://github.com/huggingface/datasets/pull/6508", "merged_at": "2024-01-26T16:18:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/6508.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6508" }
true
https://api.github.com/repos/huggingface/datasets/issues/6507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6507/comments
https://api.github.com/repos/huggingface/datasets/issues/6507/events
https://github.com/huggingface/datasets/issues/6507
2,045,152,928
I_kwDODunzps555o6g
6,507
where is glue_metric.py> @Frankie123421 what was the resolution to this?
{ "avatar_url": "https://avatars.githubusercontent.com/u/119146162?v=4", "events_url": "https://api.github.com/users/Mcccccc1024/events{/privacy}", "followers_url": "https://api.github.com/users/Mcccccc1024/followers", "following_url": "https://api.github.com/users/Mcccccc1024/following{/other_user}", "gists_url": "https://api.github.com/users/Mcccccc1024/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Mcccccc1024", "id": 119146162, "login": "Mcccccc1024", "node_id": "U_kgDOBxoGsg", "organizations_url": "https://api.github.com/users/Mcccccc1024/orgs", "received_events_url": "https://api.github.com/users/Mcccccc1024/received_events", "repos_url": "https://api.github.com/users/Mcccccc1024/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Mcccccc1024/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Mcccccc1024/subscriptions", "type": "User", "url": "https://api.github.com/users/Mcccccc1024" }
[]
closed
false
null
[]
null
0
2023-12-17T09:58:25Z
2023-12-18T11:42:49Z
2023-12-18T11:42:49Z
NONE
null
> @Frankie123421 what was the resolution to this? use glue_metric.py instead of glue.py in load_metric _Originally posted by @Frankie123421 in https://github.com/huggingface/datasets/issues/2117#issuecomment-905093763_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6507/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6507/timeline
null
not_planned
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6506
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6506/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6506/comments
https://api.github.com/repos/huggingface/datasets/issues/6506/events
https://github.com/huggingface/datasets/issues/6506
2,044,975,038
I_kwDODunzps5549e-
6,506
Incorrect test set labels for RTE and CoLA datasets via load_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/73316684?v=4", "events_url": "https://api.github.com/users/emreonal11/events{/privacy}", "followers_url": "https://api.github.com/users/emreonal11/followers", "following_url": "https://api.github.com/users/emreonal11/following{/other_user}", "gists_url": "https://api.github.com/users/emreonal11/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emreonal11", "id": 73316684, "login": "emreonal11", "node_id": "MDQ6VXNlcjczMzE2Njg0", "organizations_url": "https://api.github.com/users/emreonal11/orgs", "received_events_url": "https://api.github.com/users/emreonal11/received_events", "repos_url": "https://api.github.com/users/emreonal11/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emreonal11/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emreonal11/subscriptions", "type": "User", "url": "https://api.github.com/users/emreonal11" }
[]
closed
false
null
[]
null
1
2023-12-16T22:06:08Z
2023-12-21T09:57:57Z
2023-12-21T09:57:57Z
NONE
null
### Describe the bug The test set labels for the RTE and CoLA datasets when loading via datasets load_dataset are all -1. Edit: It appears this is also the case for every other dataset except for MRPC (stsb, sst2, qqp, mnli (both matched and mismatched), qnli, wnli, ax). Is this intended behavior to safeguard the test set for evaluation purposes? ### Steps to reproduce the bug !pip install datasets from datasets import load_dataset rte_data = load_dataset('glue', 'rte') cola_data = load_dataset('glue', 'cola') print(rte_data['test'][0:30]['label']) print(cola_data['test'][0:30]['label']) Output: [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1] The non-label test data seems to be fine: e.g. rte_data['test'][1] is: {'sentence1': "Authorities in Brazil say that more than 200 people are being held hostage in a prison in the country's remote, Amazonian-jungle state of Rondonia.", 'sentence2': 'Authorities in Brazil hold 200 people as hostage.', 'label': -1, 'idx': 1} Training and validation data are also fine: e.g. rte_data['train][0] is: {'sentence1': 'No Weapons of Mass Destruction Found in Iraq Yet.', 'sentence2': 'Weapons of Mass Destruction Found in Iraq.', 'label': 1, 'idx': 0} ### Expected behavior Expected the labels to be binary 0/1 values; Got all -1s instead ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-6.1.58+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 10.0.1 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6506/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6506/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6505
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6505/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6505/comments
https://api.github.com/repos/huggingface/datasets/issues/6505/events
https://github.com/huggingface/datasets/issues/6505
2,044,721,288
I_kwDODunzps553_iI
6,505
Got stuck when I trying to load a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/18232551?v=4", "events_url": "https://api.github.com/users/yirenpingsheng/events{/privacy}", "followers_url": "https://api.github.com/users/yirenpingsheng/followers", "following_url": "https://api.github.com/users/yirenpingsheng/following{/other_user}", "gists_url": "https://api.github.com/users/yirenpingsheng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yirenpingsheng", "id": 18232551, "login": "yirenpingsheng", "node_id": "MDQ6VXNlcjE4MjMyNTUx", "organizations_url": "https://api.github.com/users/yirenpingsheng/orgs", "received_events_url": "https://api.github.com/users/yirenpingsheng/received_events", "repos_url": "https://api.github.com/users/yirenpingsheng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yirenpingsheng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yirenpingsheng/subscriptions", "type": "User", "url": "https://api.github.com/users/yirenpingsheng" }
[]
open
false
null
[]
null
6
2023-12-16T11:51:07Z
2024-05-10T05:01:52Z
null
NONE
null
### Describe the bug Hello, everyone. I met a problem when I am trying to load a data file using load_dataset method on a Debian 10 system. The data file is not very large, only 1.63MB with 600 records. Here is my code: from datasets import load_dataset dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json') I waited it for 20 minutes. It still no response. I cannot using Ctrl+C to cancel the command. I have to use Ctrl+Z to kill it. I also try it with a txt file, it still no response in a long time. I can load the same file successfully using my laptop (windows 10, python 3.8.5, datasets==2.14.5). I can also make it on another computer (Ubuntu 20.04.5 LTS, python 3.10.13, datasets 2.14.7). It only takes me 1-2 miniutes. Could you give me some suggestions? Thank you. ### Steps to reproduce the bug from datasets import load_dataset dataset = load_dataset('json', data_files='mypath/oaast_rm_zh.json') ### Expected behavior I hope it can load the file successfully. ### Environment info OS: Debian GNU/Linux 10 Python: Python 3.10.13 Pip list: Package Version ------------------------- ------------ accelerate 0.25.0 addict 2.4.0 aiofiles 23.2.1 aiohttp 3.9.1 aiosignal 1.3.1 aliyun-python-sdk-core 2.14.0 aliyun-python-sdk-kms 2.16.2 altair 5.2.0 annotated-types 0.6.0 anyio 3.7.1 async-timeout 4.0.3 attrs 23.1.0 certifi 2023.11.17 cffi 1.16.0 charset-normalizer 3.3.2 click 8.1.7 contourpy 1.2.0 crcmod 1.7 cryptography 41.0.7 cycler 0.12.1 datasets 2.14.7 dill 0.3.7 docstring-parser 0.15 einops 0.7.0 exceptiongroup 1.2.0 fastapi 0.105.0 ffmpy 0.3.1 filelock 3.13.1 fonttools 4.46.0 frozenlist 1.4.1 fsspec 2023.10.0 gast 0.5.4 gradio 3.50.2 gradio_client 0.6.1 h11 0.14.0 httpcore 1.0.2 httpx 0.25.2 huggingface-hub 0.19.4 idna 3.6 importlib-metadata 7.0.0 importlib-resources 6.1.1 jieba 0.42.1 Jinja2 3.1.2 jmespath 0.10.0 joblib 1.3.2 jsonschema 4.20.0 jsonschema-specifications 2023.11.2 kiwisolver 1.4.5 markdown-it-py 3.0.0 MarkupSafe 2.1.3 matplotlib 3.8.2 mdurl 0.1.2 modelscope 1.10.0 mpmath 1.3.0 multidict 6.0.4 multiprocess 0.70.15 networkx 3.2.1 nltk 3.8.1 numpy 1.26.2 nvidia-cublas-cu12 12.1.3.1 nvidia-cuda-cupti-cu12 12.1.105 nvidia-cuda-nvrtc-cu12 12.1.105 nvidia-cuda-runtime-cu12 12.1.105 nvidia-cudnn-cu12 8.9.2.26 nvidia-cufft-cu12 11.0.2.54 nvidia-curand-cu12 10.3.2.106 nvidia-cusolver-cu12 11.4.5.107 nvidia-cusparse-cu12 12.1.0.106 nvidia-nccl-cu12 2.18.1 nvidia-nvjitlink-cu12 12.3.101 nvidia-nvtx-cu12 12.1.105 orjson 3.9.10 oss2 2.18.3 packaging 23.2 pandas 2.1.4 peft 0.7.1 Pillow 10.1.0 pip 23.3.1 platformdirs 4.1.0 protobuf 4.25.1 psutil 5.9.6 pyarrow 14.0.1 pyarrow-hotfix 0.6 pycparser 2.21 pycryptodome 3.19.0 pydantic 2.5.2 pydantic_core 2.14.5 pydub 0.25.1 Pygments 2.17.2 pyparsing 3.1.1 python-dateutil 2.8.2 python-multipart 0.0.6 pytz 2023.3.post1 PyYAML 6.0.1 referencing 0.32.0 regex 2023.10.3 requests 2.31.0 rich 13.7.0 rouge-chinese 1.0.3 rpds-py 0.13.2 safetensors 0.4.1 scipy 1.11.4 semantic-version 2.10.0 sentencepiece 0.1.99 setuptools 68.2.2 shtab 1.6.5 simplejson 3.19.2 six 1.16.0 sniffio 1.3.0 sortedcontainers 2.4.0 sse-starlette 1.8.2 starlette 0.27.0 sympy 1.12 tiktoken 0.5.2 tokenizers 0.15.0 tomli 2.0.1 toolz 0.12.0 torch 2.1.2 tqdm 4.66.1 transformers 4.36.1 triton 2.1.0 trl 0.7.4 typing_extensions 4.9.0 tyro 0.6.0 tzdata 2023.3 urllib3 2.1.0 uvicorn 0.24.0.post1 websockets 11.0.3 wheel 0.41.2 xxhash 3.4.1 yapf 0.40.2 yarl 1.9.4 zipp 3.17.0
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6505/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6505/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6504
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6504/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6504/comments
https://api.github.com/repos/huggingface/datasets/issues/6504/events
https://github.com/huggingface/datasets/issues/6504
2,044,541,154
I_kwDODunzps553Tji
6,504
Error Pushing to Hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/55055083?v=4", "events_url": "https://api.github.com/users/Jiayi-Pan/events{/privacy}", "followers_url": "https://api.github.com/users/Jiayi-Pan/followers", "following_url": "https://api.github.com/users/Jiayi-Pan/following{/other_user}", "gists_url": "https://api.github.com/users/Jiayi-Pan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jiayi-Pan", "id": 55055083, "login": "Jiayi-Pan", "node_id": "MDQ6VXNlcjU1MDU1MDgz", "organizations_url": "https://api.github.com/users/Jiayi-Pan/orgs", "received_events_url": "https://api.github.com/users/Jiayi-Pan/received_events", "repos_url": "https://api.github.com/users/Jiayi-Pan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jiayi-Pan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jiayi-Pan/subscriptions", "type": "User", "url": "https://api.github.com/users/Jiayi-Pan" }
[]
closed
false
null
[]
null
0
2023-12-16T01:05:22Z
2023-12-16T06:20:53Z
2023-12-16T06:20:53Z
NONE
null
### Describe the bug Error when trying to push a dataset in a special format to hub ### Steps to reproduce the bug ``` import datasets from datasets import Dataset dataset_dict = { "filename": ["apple", "banana"], "token": [[[1,2],[3,4]],[[1,2],[3,4]]], "label": [0, 1], } dataset = Dataset.from_dict(dataset_dict) dataset = dataset.cast_column("token", datasets.features.features.Array2D(shape=(2, 2),dtype="int16")) dataset.push_to_hub("SequenceModel/imagenet_val_256") ``` Error: ``` ... ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple' in "<unicode string>", line 8, column 16: shape: !!python/tuple ^ ``` ### Expected behavior Dataset being pushed to hub ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-5.19.0-1022-gcp-x86_64-with-glibc2.35 - Python version: 3.11.5 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.4 - `fsspec` version: 2023.10.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6504/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6504/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6503
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6503/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6503/comments
https://api.github.com/repos/huggingface/datasets/issues/6503/events
https://github.com/huggingface/datasets/pull/6503
2,043,847,591
PR_kwDODunzps5iHgZf
6,503
Fix streaming xnli
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-15T14:40:57Z
2023-12-15T14:51:06Z
2023-12-15T14:44:47Z
MEMBER
null
This code was failing ```python In [1]: from datasets import load_dataset In [2]: ...: ds = load_dataset("xnli", "all_languages", split="test", streaming=True) ...: ...: sample_data = next(iter(ds))["premise"] # pick up one data ...: input_text = list(sample_data.values()) ``` ``` File ~/hf/datasets/src/datasets/features/translation.py:104, in TranslationVariableLanguages.encode_example(self, translation_dict) 102 return translation_dict 103 elif self.languages and set(translation_dict) - lang_set: --> 104 raise ValueError( 105 f'Some languages in example ({", ".join(sorted(set(translation_dict) - lang_set))}) are not in valid set ({", ".join(lang_set)}).' 106 ) 108 # Convert dictionary into tuples, splitting out cases where there are 109 # multiple translations for a single language. 110 translation_tuples = [] ValueError: Some languages in example (language, translation) are not in valid set (ur, fr, hi, sw, vi, el, de, th, en, tr, zh, ar, bg, ru, es). ``` because in streaming mode we expect features encode methods to be no-ops if the example is already encoded. I fixed `TranslationVariableLanguages` to account for that
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6503/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6503/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6503.diff", "html_url": "https://github.com/huggingface/datasets/pull/6503", "merged_at": "2023-12-15T14:44:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/6503.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6503" }
true
https://api.github.com/repos/huggingface/datasets/issues/6502
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6502/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6502/comments
https://api.github.com/repos/huggingface/datasets/issues/6502/events
https://github.com/huggingface/datasets/pull/6502
2,043,771,731
PR_kwDODunzps5iHPt-
6,502
Pickle support for `torch.Generator` objects
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
2
2023-12-15T13:55:12Z
2023-12-15T15:04:33Z
2023-12-15T14:58:22Z
COLLABORATOR
null
Fix for https://discuss.huggingface.co/t/caching-a-dataset-processed-with-randomness/65616
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6502/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6502/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6502.diff", "html_url": "https://github.com/huggingface/datasets/pull/6502", "merged_at": "2023-12-15T14:58:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/6502.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6502" }
true
https://api.github.com/repos/huggingface/datasets/issues/6501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6501/comments
https://api.github.com/repos/huggingface/datasets/issues/6501/events
https://github.com/huggingface/datasets/issues/6501
2,043,377,240
I_kwDODunzps55y3ZY
6,501
OverflowError: value too large to convert to int32_t
{ "avatar_url": "https://avatars.githubusercontent.com/u/47747764?v=4", "events_url": "https://api.github.com/users/zhangfan-algo/events{/privacy}", "followers_url": "https://api.github.com/users/zhangfan-algo/followers", "following_url": "https://api.github.com/users/zhangfan-algo/following{/other_user}", "gists_url": "https://api.github.com/users/zhangfan-algo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zhangfan-algo", "id": 47747764, "login": "zhangfan-algo", "node_id": "MDQ6VXNlcjQ3NzQ3NzY0", "organizations_url": "https://api.github.com/users/zhangfan-algo/orgs", "received_events_url": "https://api.github.com/users/zhangfan-algo/received_events", "repos_url": "https://api.github.com/users/zhangfan-algo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zhangfan-algo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangfan-algo/subscriptions", "type": "User", "url": "https://api.github.com/users/zhangfan-algo" }
[]
open
false
null
[]
null
0
2023-12-15T10:10:21Z
2023-12-15T10:10:21Z
null
NONE
null
### Describe the bug ![image](https://github.com/huggingface/datasets/assets/47747764/f58044fb-ddda-48b6-ba68-7bbfef781630) ### Steps to reproduce the bug just loading datasets ### Expected behavior how can I fix it ### Environment info pip install /mnt/cluster/zhangfan/study_info/LLaMA-Factory/peft-0.6.0-py3-none-any.whl pip install huggingface_hub-0.19.4-py3-none-any.whl tokenizers-0.15.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl transformers-4.36.1-py3-none-any.whl pyarrow_hotfix-0.6-py3-none-any.whl datasets-2.15.0-py3-none-any.whl tyro-0.5.18-py3-none-any.whl trl-0.7.4-py3-none-any.whl done
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6501/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6501/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6500/comments
https://api.github.com/repos/huggingface/datasets/issues/6500/events
https://github.com/huggingface/datasets/pull/6500
2,043,258,633
PR_kwDODunzps5iFc6e
6,500
Enable setting config as default when push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
8
2023-12-15T09:17:41Z
2023-12-18T11:56:11Z
2023-12-18T11:50:03Z
MEMBER
null
Fix #6497.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6500/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6500/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6500.diff", "html_url": "https://github.com/huggingface/datasets/pull/6500", "merged_at": "2023-12-18T11:50:03Z", "patch_url": "https://github.com/huggingface/datasets/pull/6500.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6500" }
true
https://api.github.com/repos/huggingface/datasets/issues/6499
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6499/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6499/comments
https://api.github.com/repos/huggingface/datasets/issues/6499/events
https://github.com/huggingface/datasets/pull/6499
2,043,166,976
PR_kwDODunzps5iFIUF
6,499
docs: add reference Git over SSH
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[]
closed
false
null
[]
null
2
2023-12-15T08:38:31Z
2023-12-15T11:48:47Z
2023-12-15T11:42:38Z
CONTRIBUTOR
null
see https://discuss.huggingface.co/t/update-datasets-getting-started-to-new-git-security/65893
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6499/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6499/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6499.diff", "html_url": "https://github.com/huggingface/datasets/pull/6499", "merged_at": "2023-12-15T11:42:38Z", "patch_url": "https://github.com/huggingface/datasets/pull/6499.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6499" }
true
https://api.github.com/repos/huggingface/datasets/issues/6498
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6498/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6498/comments
https://api.github.com/repos/huggingface/datasets/issues/6498/events
https://github.com/huggingface/datasets/pull/6498
2,042,075,969
PR_kwDODunzps5iBcFj
6,498
Fallback on dataset script if user wants to load default config
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
8
2023-12-14T16:46:01Z
2023-12-15T13:16:56Z
2023-12-15T13:10:48Z
MEMBER
null
Right now this code is failing on `main`: ```python load_dataset("openbookqa") ``` This is because it tries to load the dataset from the Parquet export but the dataset has multiple configurations and the Parquet export doesn't know which one is the default one. I fixed this by simply falling back on using the dataset script (which tells the user to pass `trust_remote_code=True`): ```python load_dataset("openbookqa", trust_remote_code=True) ``` Note that if the user happened to specify a config name I don't fall back on the script since we can use the Parquet export in this case (no need to know which config is the default) ```python load_dataset("openbookqa", "main") ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6498/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6498/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6498.diff", "html_url": "https://github.com/huggingface/datasets/pull/6498", "merged_at": "2023-12-15T13:10:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/6498.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6498" }
true
https://api.github.com/repos/huggingface/datasets/issues/6497
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6497/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6497/comments
https://api.github.com/repos/huggingface/datasets/issues/6497/events
https://github.com/huggingface/datasets/issues/6497
2,041,994,274
I_kwDODunzps55tlwi
6,497
Support setting a default config name in push_to_hub
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
2023-12-14T15:59:03Z
2023-12-18T11:50:04Z
2023-12-18T11:50:04Z
MEMBER
null
In order to convert script-datasets to no-script datasets, we need to support setting a default config name for those scripts that set one.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6497/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6497/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6496/comments
https://api.github.com/repos/huggingface/datasets/issues/6496/events
https://github.com/huggingface/datasets/issues/6496
2,041,589,386
I_kwDODunzps55sC6K
6,496
Error when writing a dataset to HF Hub: A commit has happened since. Please refresh and try again.
{ "avatar_url": "https://avatars.githubusercontent.com/u/35808396?v=4", "events_url": "https://api.github.com/users/GeorgesLorre/events{/privacy}", "followers_url": "https://api.github.com/users/GeorgesLorre/followers", "following_url": "https://api.github.com/users/GeorgesLorre/following{/other_user}", "gists_url": "https://api.github.com/users/GeorgesLorre/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/GeorgesLorre", "id": 35808396, "login": "GeorgesLorre", "node_id": "MDQ6VXNlcjM1ODA4Mzk2", "organizations_url": "https://api.github.com/users/GeorgesLorre/orgs", "received_events_url": "https://api.github.com/users/GeorgesLorre/received_events", "repos_url": "https://api.github.com/users/GeorgesLorre/repos", "site_admin": false, "starred_url": "https://api.github.com/users/GeorgesLorre/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GeorgesLorre/subscriptions", "type": "User", "url": "https://api.github.com/users/GeorgesLorre" }
[]
open
false
null
[]
null
1
2023-12-14T11:24:54Z
2023-12-14T12:22:21Z
null
NONE
null
**Describe the bug** Getting a `412 Client Error: Precondition Failed` when trying to write a dataset to the HF hub. ``` huggingface_hub.utils._errors.HfHubHTTPError: 412 Client Error: Precondition Failed for url: https://huggingface.co/api/datasets/GLorr/test-dask/commit/main (Request ID: Root=1-657ae26f-3bd92bf861bb254b2cc0826c;50a09ab7-9347-406a-ba49-69f98abee9cc) A commit has happened since. Please refresh and try again. ``` **Steps to reproduce the bug** This is a minimal reproducer: ``` import dask.dataframe as dd import pandas as pd import random import os import huggingface_hub import datasets huggingface_hub.login(token=os.getenv("HF_TOKEN")) data = {"number": [random.randint(0,10) for _ in range(1000)]} df = pd.DataFrame.from_dict(data) dataframe = dd.from_pandas(df, npartitions=1) dataframe = dataframe.repartition(npartitions=3) schema = datasets.Features({"number": datasets.Value("int64")}).arrow_schema repo_id = "GLorr/test-dask" repo_path = f"hf://datasets/{repo_id}" huggingface_hub.create_repo(repo_id=repo_id, repo_type="dataset", exist_ok=True) dd.to_parquet(dataframe, path=f"{repo_path}/data", schema=schema) ``` **Expected behavior** Would expect to write to the hub without any problem. **Environment info** ``` datasets==2.15.0 huggingface-hub==0.19.4 ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6496/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6496/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6494
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6494/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6494/comments
https://api.github.com/repos/huggingface/datasets/issues/6494/events
https://github.com/huggingface/datasets/issues/6494
2,039,684,839
I_kwDODunzps55kx7n
6,494
Image Data loaded Twice
{ "avatar_url": "https://avatars.githubusercontent.com/u/28867010?v=4", "events_url": "https://api.github.com/users/ArcaneLex/events{/privacy}", "followers_url": "https://api.github.com/users/ArcaneLex/followers", "following_url": "https://api.github.com/users/ArcaneLex/following{/other_user}", "gists_url": "https://api.github.com/users/ArcaneLex/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ArcaneLex", "id": 28867010, "login": "ArcaneLex", "node_id": "MDQ6VXNlcjI4ODY3MDEw", "organizations_url": "https://api.github.com/users/ArcaneLex/orgs", "received_events_url": "https://api.github.com/users/ArcaneLex/received_events", "repos_url": "https://api.github.com/users/ArcaneLex/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ArcaneLex/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ArcaneLex/subscriptions", "type": "User", "url": "https://api.github.com/users/ArcaneLex" }
[]
open
false
null
[]
null
0
2023-12-13T13:11:42Z
2023-12-13T13:11:42Z
null
NONE
null
### Describe the bug ![1702472610561](https://github.com/huggingface/datasets/assets/28867010/4b7ef5e7-32c3-4b73-84cb-5de059caa0b6) When I learn from https://huggingface.co/docs/datasets/image_load and try to load image data from a folder. I noticed that the image was read twice in the returned data. As you can see in the attached image, there are only four images in the train folder, but reading brings up eight images ### Steps to reproduce the bug from datasets import Dataset, load_dataset dataset = load_dataset("imagefolder", data_dir="data/", drop_labels=False) # print(dataset["train"][0]["image"] == dataset["train"][1]["image"]) print(dataset) print(dataset["train"]["image"]) print(len(dataset["train"]["image"])) ### Expected behavior DatasetDict({ train: Dataset({ features: ['image', 'label'], num_rows: 8 }) }) [<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D1CA8B0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D2452E0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245310>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2453A0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245460>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=2877x2129 at 0x1BD1D245430>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D2454F0>, <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=4208x3120 at 0x1BD1D245550>] 8 ### Environment info - `datasets` version: 2.14.5 - Platform: Windows-10-10.0.22621-SP0 - Python version: 3.9.17 - Huggingface_hub version: 0.19.4 - PyArrow version: 13.0.0 - Pandas version: 2.0.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6494/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6494/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6495
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6495/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6495/comments
https://api.github.com/repos/huggingface/datasets/issues/6495/events
https://github.com/huggingface/datasets/issues/6495
2,039,708,529
I_kwDODunzps55k3tx
6,495
Newline characters don't behave as expected when calling dataset.info
{ "avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4", "events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}", "followers_url": "https://api.github.com/users/gerald-wrona/followers", "following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}", "gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gerald-wrona", "id": 32300890, "login": "gerald-wrona", "node_id": "MDQ6VXNlcjMyMzAwODkw", "organizations_url": "https://api.github.com/users/gerald-wrona/orgs", "received_events_url": "https://api.github.com/users/gerald-wrona/received_events", "repos_url": "https://api.github.com/users/gerald-wrona/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions", "type": "User", "url": "https://api.github.com/users/gerald-wrona" }
[]
open
false
null
[]
null
0
2023-12-12T23:07:51Z
2023-12-13T13:24:22Z
null
NONE
null
### System Info - `transformers` version: 4.32.1 - Platform: Windows-10-10.0.19045-SP0 - Python version: 3.11.5 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.2 - Accelerate version: not installed - Accelerate config: not found - PyTorch version (GPU?): 2.1.1+cpu (False) - Tensorflow version (GPU?): 2.15.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @marios ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction [Source](https://huggingface.co/docs/datasets/v2.2.1/en/access) ``` from datasets import load_dataset dataset = load_dataset('glue', 'mrpc', split='train') dataset.info ``` DatasetInfo(description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(names=['not_equivalent', 'equivalent'], id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, task_templates=None, builder_name='glue', dataset_name=None, config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943843, num_examples=3668, shard_lengths=None, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105879, num_examples=408, shard_lengths=None, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442410, num_examples=1725, shard_lengths=None, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': None}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': None}}, download_size=1494541, post_processing_size=None, dataset_size=1492132, size_in_bytes=2986673) ### Expected behavior ``` from datasets import load_dataset dataset = load_dataset('glue', 'mrpc', split='train') dataset.info ``` DatasetInfo( description='GLUE, the General Language Understanding Evaluation benchmark\n(https://gluebenchmark.com/) is a collection of resources for training,\nevaluating, and analyzing natural language understanding systems.\n\n', citation='@inproceedings{dolan2005automatically,\n title={Automatically constructing a corpus of sentential paraphrases},\n author={Dolan, William B and Brockett, Chris},\n booktitle={Proceedings of the Third International Workshop on Paraphrasing (IWP2005)},\n year={2005}\n}\n@inproceedings{wang2019glue,\n title={{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding},\n author={Wang, Alex and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel R.},\n note={In the Proceedings of ICLR.},\n year={2019}\n}\n', homepage='https://www.microsoft.com/en-us/download/details.aspx?id=52398', license='', features={'sentence1': Value(dtype='string', id=None), 'sentence2': Value(dtype='string', id=None), 'label': ClassLabel(num_classes=2, names=['not_equivalent', 'equivalent'], names_file=None, id=None), 'idx': Value(dtype='int32', id=None)}, post_processed=None, supervised_keys=None, builder_name='glue', config_name='mrpc', version=1.0.0, splits={'train': SplitInfo(name='train', num_bytes=943851, num_examples=3668, dataset_name='glue'), 'validation': SplitInfo(name='validation', num_bytes=105887, num_examples=408, dataset_name='glue'), 'test': SplitInfo(name='test', num_bytes=442418, num_examples=1725, dataset_name='glue')}, download_checksums={'https://dl.fbaipublicfiles.com/glue/data/mrpc_dev_ids.tsv': {'num_bytes': 6222, 'checksum': '971d7767d81b997fd9060ade0ec23c4fc31cbb226a55d1bd4a1bac474eb81dc7'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_train.txt': {'num_bytes': 1047044, 'checksum': '60a9b09084528f0673eedee2b69cb941920f0b8cd0eeccefc464a98768457f89'}, 'https://dl.fbaipublicfiles.com/senteval/senteval_data/msr_paraphrase_test.txt': {'num_bytes': 441275, 'checksum': 'a04e271090879aaba6423d65b94950c089298587d9c084bf9cd7439bd785f784'}}, download_size=1494541, post_processing_size=None, dataset_size=1492156, size_in_bytes=2986697 )
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6495/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6495/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6493
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6493/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6493/comments
https://api.github.com/repos/huggingface/datasets/issues/6493/events
https://github.com/huggingface/datasets/pull/6493
2,038,221,490
PR_kwDODunzps5h0XJK
6,493
Lazy data files resolution and offline cache reload
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
8
2023-12-12T17:15:17Z
2023-12-21T15:19:20Z
2023-12-21T15:13:11Z
MEMBER
null
Includes both https://github.com/huggingface/datasets/pull/6458 and https://github.com/huggingface/datasets/pull/6459 This PR should be merged instead of the two individually, since they are conflicting ## Offline cache reload it can reload datasets that were pushed to hub if they exist in the cache. example: ```python >>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp") >>> load_dataset("lhoestq/tmp") DatasetDict({ train: Dataset({ features: ['a'], num_rows: 2 }) }) ``` and later, without connection: ```python >>> load_dataset("lhoestq/tmp") Using the latest cached version of the dataset since lhoestq/tmp couldn't be found on the Hugging Face Hub Found the latest cached dataset configuration 'default' at /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/default/0.0.0/da0e902a945afeb9 (last modified on Wed Dec 13 14:55:52 2023). DatasetDict({ train: Dataset({ features: ['a'], num_rows: 2 }) }) ``` - Updated `CachedDatasetModuleFactory` to look for datasets in the cache at `<namespace>___<dataset_name>/<config_id>` - Since the metadata configs parameters are not available in offline mode, we don't know which folder to load (config_id and hash change), so I simply load the latest one - I instantiate a BuilderConfig even if there is no metadata config with the right config_name - Its config_id is equal to the config_name to be able to retrieve it in the cache (no more suffix for configs from metadata configs) - We can reload this config if offline mode by specifying the right config_name (same as online !) - Consequences of this change: - Only when there are user's parameters it creates a custom builder config with config_id = config_name + user parameters hash - the hash used to name the cache folder takes into account the metadata config and the dataset info, so that the right cache can be reloaded when there is internet connection without redownloading the data or resolving the data files. For local directories I hash the builder configs and dataset info, and for datasets on the hub I use the commit sha as hash. - cache directories now look like `config/version/commit_sha` for hub datasets which is clean :) Fix https://github.com/huggingface/datasets/issues/3547 ## Lazy data files resolution this makes this code run in 2sec instead of >10sec ```python from datasets import load_dataset ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False) ``` For some datasets with many configs and files it can be up to 100x faster. This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts. The data files are only resolved in the builder `__init__`. To do so I added DataFilesPatternsList and DataFilesPatternsDict that have `.resolve()` to return resolved DataFilesList and DataFilesDict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6493/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6493/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6493.diff", "html_url": "https://github.com/huggingface/datasets/pull/6493", "merged_at": "2023-12-21T15:13:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/6493.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6493" }
true
https://api.github.com/repos/huggingface/datasets/issues/6492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6492/comments
https://api.github.com/repos/huggingface/datasets/issues/6492/events
https://github.com/huggingface/datasets/pull/6492
2,037,987,267
PR_kwDODunzps5hzjhQ
6,492
Make push_to_hub return CommitInfo
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
3
2023-12-12T15:18:16Z
2023-12-13T14:29:01Z
2023-12-13T14:22:41Z
MEMBER
null
Make `push_to_hub` return `CommitInfo`. This is useful, for example, if we pass `create_pr=True` and we want to know the created PR ID. CC: @severo for the use case in https://huggingface.co/datasets/jmhessel/newyorker_caption_contest/discussions/4
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6492/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6492/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6492.diff", "html_url": "https://github.com/huggingface/datasets/pull/6492", "merged_at": "2023-12-13T14:22:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/6492.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6492" }
true
https://api.github.com/repos/huggingface/datasets/issues/6491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6491/comments
https://api.github.com/repos/huggingface/datasets/issues/6491/events
https://github.com/huggingface/datasets/pull/6491
2,037,690,643
PR_kwDODunzps5hyiTY
6,491
Fix metrics dead link
{ "avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4", "events_url": "https://api.github.com/users/qgallouedec/events{/privacy}", "followers_url": "https://api.github.com/users/qgallouedec/followers", "following_url": "https://api.github.com/users/qgallouedec/following{/other_user}", "gists_url": "https://api.github.com/users/qgallouedec/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qgallouedec", "id": 45557362, "login": "qgallouedec", "node_id": "MDQ6VXNlcjQ1NTU3MzYy", "organizations_url": "https://api.github.com/users/qgallouedec/orgs", "received_events_url": "https://api.github.com/users/qgallouedec/received_events", "repos_url": "https://api.github.com/users/qgallouedec/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qgallouedec/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qgallouedec/subscriptions", "type": "User", "url": "https://api.github.com/users/qgallouedec" }
[]
closed
false
null
[]
null
2
2023-12-12T12:51:49Z
2023-12-21T15:15:08Z
2023-12-21T15:08:53Z
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6491/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6491/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6491.diff", "html_url": "https://github.com/huggingface/datasets/pull/6491", "merged_at": "2023-12-21T15:08:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/6491.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6491" }
true
https://api.github.com/repos/huggingface/datasets/issues/6490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6490/comments
https://api.github.com/repos/huggingface/datasets/issues/6490/events
https://github.com/huggingface/datasets/issues/6490
2,037,204,892
I_kwDODunzps55bUec
6,490
`load_dataset(...,save_infos=True)` not working without loading script
{ "avatar_url": "https://avatars.githubusercontent.com/u/114978051?v=4", "events_url": "https://api.github.com/users/morganveyret/events{/privacy}", "followers_url": "https://api.github.com/users/morganveyret/followers", "following_url": "https://api.github.com/users/morganveyret/following{/other_user}", "gists_url": "https://api.github.com/users/morganveyret/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/morganveyret", "id": 114978051, "login": "morganveyret", "node_id": "U_kgDOBtptAw", "organizations_url": "https://api.github.com/users/morganveyret/orgs", "received_events_url": "https://api.github.com/users/morganveyret/received_events", "repos_url": "https://api.github.com/users/morganveyret/repos", "site_admin": false, "starred_url": "https://api.github.com/users/morganveyret/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/morganveyret/subscriptions", "type": "User", "url": "https://api.github.com/users/morganveyret" }
[]
open
false
null
[]
null
1
2023-12-12T08:09:18Z
2023-12-12T08:36:22Z
null
NONE
null
### Describe the bug It seems that saving a dataset infos back into the card file is not working for datasets without a loading script. After tracking the problem a bit it looks like saving the infos uses `Builder.get_imported_module_dir()` as its destination directory. Internally this is a call to `inspect.getfile()` but since the actual builder class used is dynamically created (cf. `datasets.load.configure_builder_class`) this method actually return te path to the parent builder class (e.g. `datasets.packaged_modules.json.JSON`). ### Steps to reproduce the bug 1. Have a local dataset without any loading script 2. Make sure there are no dataset infos in the README.md 3. Load with `save_infos=True` 4. No change in the dataset README.md 5. A new README.md file is created in the directory of the parent builder class (e.g. for json in `.../site-packages/datasets/packaged_modules/json/README.md`) ### Expected behavior The dataset README.md should be updated and no file should be created in the python environment. ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-6.2.0-37-generic-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.6.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6490/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6490/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6489/comments
https://api.github.com/repos/huggingface/datasets/issues/6489/events
https://github.com/huggingface/datasets/issues/6489
2,036,743,777
I_kwDODunzps55Zj5h
6,489
load_dataset imagefolder for aws s3 path
{ "avatar_url": "https://avatars.githubusercontent.com/u/9353106?v=4", "events_url": "https://api.github.com/users/segalinc/events{/privacy}", "followers_url": "https://api.github.com/users/segalinc/followers", "following_url": "https://api.github.com/users/segalinc/following{/other_user}", "gists_url": "https://api.github.com/users/segalinc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/segalinc", "id": 9353106, "login": "segalinc", "node_id": "MDQ6VXNlcjkzNTMxMDY=", "organizations_url": "https://api.github.com/users/segalinc/orgs", "received_events_url": "https://api.github.com/users/segalinc/received_events", "repos_url": "https://api.github.com/users/segalinc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/segalinc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/segalinc/subscriptions", "type": "User", "url": "https://api.github.com/users/segalinc" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
2023-12-12T00:08:43Z
2023-12-12T00:09:27Z
null
NONE
null
### Feature request

I would like to load a dataset from S3 using the imagefolder option, something like:

`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True)`

### Motivation

No need for `data_files`.

### Your contribution

No experience with this.
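A hedged interim workaround sketch, under the assumption that the images can be mirrored locally first; the bucket path and the `s3fs` credentials setup are placeholders, and the requested `fs=` argument does not exist today.

```python
# Workaround sketch (assumptions: s3fs is configured with credentials and the
# bucket path is illustrative): mirror the S3 prefix locally, then point
# imagefolder at the local copy.
import s3fs
from datasets import load_dataset

fs = s3fs.S3FileSystem()
fs.get("s3://my-bucket/lsun/train/bedroom/", "lsun_bedroom/", recursive=True)

dataset = load_dataset("imagefolder", data_dir="lsun_bedroom")
```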
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6489/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6489/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6488/comments
https://api.github.com/repos/huggingface/datasets/issues/6488/events
https://github.com/huggingface/datasets/issues/6488
2,035,899,898
I_kwDODunzps55WV36
6,488
429 Client Error
{ "avatar_url": "https://avatars.githubusercontent.com/u/7882383?v=4", "events_url": "https://api.github.com/users/sasaadi/events{/privacy}", "followers_url": "https://api.github.com/users/sasaadi/followers", "following_url": "https://api.github.com/users/sasaadi/following{/other_user}", "gists_url": "https://api.github.com/users/sasaadi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sasaadi", "id": 7882383, "login": "sasaadi", "node_id": "MDQ6VXNlcjc4ODIzODM=", "organizations_url": "https://api.github.com/users/sasaadi/orgs", "received_events_url": "https://api.github.com/users/sasaadi/received_events", "repos_url": "https://api.github.com/users/sasaadi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sasaadi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sasaadi/subscriptions", "type": "User", "url": "https://api.github.com/users/sasaadi" }
[]
open
false
null
[]
null
2
2023-12-11T15:06:01Z
2024-06-20T05:55:45Z
null
NONE
null
Hello, I was downloading the following dataset, and after 20% of the data was downloaded, I started getting error 429. It has not been resolved for a few days now. How should I resolve it? Thanks.

Dataset: https://huggingface.co/datasets/cerebras/SlimPajama-627B

Error: `requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://huggingface.co/datasets/cerebras/SlimPajama-627B/resolve/2d0accdd58c5d5511943ca1f5ff0e3eb5e293543/train/chunk1/example_train_3300.jsonl.zst`
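A hedged retry sketch for this kind of rate limiting; the retry count and backoff schedule are arbitrary assumptions, not maintainer advice. Files that already downloaded stay in the local cache, so each retry should resume roughly where the previous attempt stopped.

```python
# Sketch: retry load_dataset with exponential backoff when the Hub returns
# HTTP 429. The waits (1, 2, 4, 8, 16 minutes) are arbitrary.
import time
from datasets import load_dataset

for attempt in range(5):
    try:
        ds = load_dataset("cerebras/SlimPajama-627B", split="train")
        break
    except Exception as err:  # e.g. requests.exceptions.HTTPError: 429
        wait = 60 * 2**attempt
        print(f"Attempt {attempt + 1} failed ({err}); retrying in {wait}s")
        time.sleep(wait)
```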
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6488/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6488/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6487/comments
https://api.github.com/repos/huggingface/datasets/issues/6487/events
https://github.com/huggingface/datasets/pull/6487
2,035,424,254
PR_kwDODunzps5hqyfV
6,487
Update builder hash with info
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-11T11:09:16Z
2024-01-11T06:35:07Z
2023-12-11T11:41:34Z
MEMBER
null
Currently, if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change. This is problematic because you want to regenerate a dataset if you change the features or the split sizes, for example (e.g. after `push_to_hub`).

Ideally we should take the resolved files into account as well, but this will be for another PR.
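For context, a hedged sketch of the manual workaround this change makes unnecessary; the dataset name is a placeholder, not from the PR.

```python
# Sketch (placeholder repo name): before this fix, picking up a dataset_info
# change in the README YAML required bypassing the cache explicitly.
from datasets import load_dataset

ds = load_dataset("my-org/my-dataset", download_mode="force_redownload")
```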
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6487/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6487/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6487.diff", "html_url": "https://github.com/huggingface/datasets/pull/6487", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6487.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6487" }
true
https://api.github.com/repos/huggingface/datasets/issues/6486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6486/comments
https://api.github.com/repos/huggingface/datasets/issues/6486/events
https://github.com/huggingface/datasets/pull/6486
2,035,206,206
PR_kwDODunzps5hqCSc
6,486
Fix docs phrasing about supported formats when sharing a dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
2023-12-11T09:21:22Z
2023-12-13T14:21:29Z
2023-12-13T14:15:21Z
MEMBER
null
Fix docs phrasing.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6486/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6486/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6486.diff", "html_url": "https://github.com/huggingface/datasets/pull/6486", "merged_at": "2023-12-13T14:15:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/6486.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6486" }
true
https://api.github.com/repos/huggingface/datasets/issues/6485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6485/comments
https://api.github.com/repos/huggingface/datasets/issues/6485/events
https://github.com/huggingface/datasets/issues/6485
2,035,141,884
I_kwDODunzps55Tcz8
6,485
FileNotFoundError: [Errno 2] No such file or directory: 'nul'
{ "avatar_url": "https://avatars.githubusercontent.com/u/73683903?v=4", "events_url": "https://api.github.com/users/amanyara/events{/privacy}", "followers_url": "https://api.github.com/users/amanyara/followers", "following_url": "https://api.github.com/users/amanyara/following{/other_user}", "gists_url": "https://api.github.com/users/amanyara/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/amanyara", "id": 73683903, "login": "amanyara", "node_id": "MDQ6VXNlcjczNjgzOTAz", "organizations_url": "https://api.github.com/users/amanyara/orgs", "received_events_url": "https://api.github.com/users/amanyara/received_events", "repos_url": "https://api.github.com/users/amanyara/repos", "site_admin": false, "starred_url": "https://api.github.com/users/amanyara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/amanyara/subscriptions", "type": "User", "url": "https://api.github.com/users/amanyara" }
[]
closed
false
null
[]
null
1
2023-12-11T08:52:13Z
2023-12-14T08:09:08Z
2023-12-14T08:09:08Z
NONE
null
### Describe the bug

It seems that something is wrong on my side. When I run this code, `import datasets`, I get this error:

FileNotFoundError: [Errno 2] No such file or directory: 'nul'

![image](https://github.com/huggingface/datasets/assets/73683903/3973c120-ebb1-42b7-bede-b9de053e861d)
![image](https://github.com/huggingface/datasets/assets/73683903/0496adff-a7a7-4dcb-929e-ec11ede71f04)

### Steps to reproduce the bug

1. import datasets

### Expected behavior

I just run a single line of code and get stuck on this bug.

### Environment info

OS: Windows10
Datasets==2.15.0
python=3.10
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6485/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6485/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6483/comments
https://api.github.com/repos/huggingface/datasets/issues/6483/events
https://github.com/huggingface/datasets/issues/6483
2,032,946,981
I_kwDODunzps55LE8l
6,483
Iterable Dataset: rename column clashes with remove column
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[ { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
null
[]
null
4
2023-12-08T16:11:30Z
2023-12-08T16:27:16Z
2023-12-08T16:27:04Z
CONTRIBUTOR
null
### Describe the bug

Suppose I have two iterable datasets, one with the features:
* `{"audio", "text", "column_a"}`

And the other with the features:
* `{"audio", "sentence", "column_b"}`

I want to combine both datasets using `interleave_datasets`, which requires me to unify the column names. I would typically do this by:
1. Renaming the common columns to the same name (e.g. `"text"` -> `"sentence"`)
2. Removing the unwanted columns (e.g. `"column_a"`, `"column_b"`)

However, the process of renaming and removing columns in an iterable dataset doesn't work, since we need to preserve the original text column, meaning we can't combine the datasets.

### Steps to reproduce the bug

```python
from datasets import load_dataset

# load LS in streaming mode
dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

# check original features
dataset_features = dataset.features.keys()
print("Original features: ", dataset_features)

# rename "text" -> "sentence"
dataset = dataset.rename_column("text", "sentence")

# remove unwanted columns
COLUMNS_TO_KEEP = {"audio", "sentence"}
dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))

# stream first sample, should return "audio" and "sentence" columns
print(next(iter(dataset)))
```

Traceback:

```python
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
Cell In[5], line 17
     14 COLUMNS_TO_KEEP = {"audio", "sentence"}
     15 dataset = dataset.remove_columns(set(dataset_features - COLUMNS_TO_KEEP))
---> 17 print(next(iter(dataset)))

File ~/datasets/src/datasets/iterable_dataset.py:1353, in IterableDataset.__iter__(self)
   1350     yield formatter.format_row(pa_table)
   1351     return
-> 1353 for key, example in ex_iterable:
   1354     if self.features:
   1355         # `IterableDataset` automatically fills missing columns with None.
   1356         # This is done with `_apply_feature_types_on_example`.
   1357         example = _apply_feature_types_on_example(
   1358             example, self.features, token_per_repo_id=self._token_per_repo_id
   1359         )

File ~/datasets/src/datasets/iterable_dataset.py:652, in MappedExamplesIterable.__iter__(self)
    650     yield from ArrowExamplesIterable(self._iter_arrow, {})
    651 else:
--> 652     yield from self._iter()

File ~/datasets/src/datasets/iterable_dataset.py:729, in MappedExamplesIterable._iter(self)
    727 if self.remove_columns:
    728     for c in self.remove_columns:
--> 729         del transformed_example[c]
    730 yield key, transformed_example
    731 current_idx += 1

KeyError: 'text'
```

=> we see that `datasets` is looking for the column "text", even though we've renamed it to "sentence" and then removed the unwanted "text" column from our dataset.

### Expected behavior

Should be able to rename and remove columns from an iterable dataset.

### Environment info

- `datasets` version: 2.15.1.dev0
- Platform: macOS-13.5.1-arm64-arm-64bit
- Python version: 3.11.6
- `huggingface_hub` version: 0.19.4
- PyArrow version: 14.0.1
- Pandas version: 2.1.2
- `fsspec` version: 2023.9.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6483/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6483/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6484
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6484/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6484/comments
https://api.github.com/repos/huggingface/datasets/issues/6484/events
https://github.com/huggingface/datasets/issues/6484
2,033,333,294
I_kwDODunzps55MjQu
6,484
[Feature Request] Dataset versioning
{ "avatar_url": "https://avatars.githubusercontent.com/u/47979198?v=4", "events_url": "https://api.github.com/users/kenfus/events{/privacy}", "followers_url": "https://api.github.com/users/kenfus/followers", "following_url": "https://api.github.com/users/kenfus/following{/other_user}", "gists_url": "https://api.github.com/users/kenfus/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kenfus", "id": 47979198, "login": "kenfus", "node_id": "MDQ6VXNlcjQ3OTc5MTk4", "organizations_url": "https://api.github.com/users/kenfus/orgs", "received_events_url": "https://api.github.com/users/kenfus/received_events", "repos_url": "https://api.github.com/users/kenfus/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kenfus/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kenfus/subscriptions", "type": "User", "url": "https://api.github.com/users/kenfus" }
[]
open
false
null
[]
null
2
2023-12-08T16:01:35Z
2023-12-11T19:13:46Z
null
NONE
null
**Is your feature request related to a problem? Please describe.**

I am working on a project where I would like to test different preprocessing methods for my ML data. Thus, I would like to work a lot with revisions and compare them. Currently, I was not able to make it work with the revision keyword because it was not redownloading the data; it was reading in some cached data until I put `download_mode="force_redownload"`, even though the revision was different. Of course, I may have done something wrong or missed a setting somewhere!

**Describe the solution you'd like**

The solution would allow me to easily work with revisions:
- create a new dataset (by combining things, different preprocessing, ..) and give it a new revision (v.1.2.3), maybe like this: `dataset_audio.push_to_hub('kenfus/xy', revision='v1.0.2')`
- then, get the current revision as follows:
```
dataset = load_dataset(
    'kenfus/xy',
    revision='v1.0.2',
)
```
this downloads the new version and does not load in a different revision, and all future map, filter, .. operations are done on this dataset and not loaded from a cache produced from a different revision.
- if I rerun the run, the caching should be smart enough in every step to not reuse a mapping operation on a different revision.

**Describe alternatives you've considered**

I created my own caching, putting `download_mode="force_redownload"` and `load_from_cache_file=False,` everywhere.

**Additional context**

Thanks a lot for your great work! Creating NLP datasets and training a model with them is really easy and straightforward with huggingface.

This is the data loading in my script:
```
## CREATE PATHS
prepared_dataset_path = os.path.join(
    DATA_FOLDER, str(DATA_VERSION), "prepared_dataset"
)
os.makedirs(os.path.join(DATA_FOLDER, str(DATA_VERSION)), exist_ok=True)

## LOAD DATASET
if os.path.exists(prepared_dataset_path):
    print("Loading prepared dataset from disk...")
    dataset_prepared = load_from_disk(prepared_dataset_path)
else:
    print("Loading dataset from HuggingFace Datasets...")
    dataset = load_dataset(
        PATH_TO_DATASET, revision=DATA_VERSION, download_mode="force_redownload"
    )

    print("Preparing dataset...")
    dataset_prepared = dataset.map(
        prepare_dataset,
        remove_columns=["audio", "transcription"],
        num_proc=os.cpu_count(),
        load_from_cache_file=False,
    )
    dataset_prepared.save_to_disk(prepared_dataset_path)
    del dataset

if CHECK_DATASET:
    ## CHECK DATASET
    dataset_prepared = dataset_prepared.map(
        check_dimensions, num_proc=os.cpu_count(), load_from_cache_file=False
    )
    dataset_filtered = dataset_prepared.filter(
        lambda example: not example["incorrect_dimension"],
        load_from_cache_file=False,
    )

    for example in dataset_prepared.filter(
        lambda example: example["incorrect_dimension"], load_from_cache_file=False
    ):
        print(example["path"])

    print(
        f"Number of examples with incorrect dimension: {len(dataset_prepared) - len(dataset_filtered)}"
    )
    print("Number of examples train: ", len(dataset_filtered["train"]))
    print("Number of examples test: ", len(dataset_filtered["test"]))
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6484/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6484/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6482/comments
https://api.github.com/repos/huggingface/datasets/issues/6482/events
https://github.com/huggingface/datasets/pull/6482
2,032,675,918
PR_kwDODunzps5hhl23
6,482
Fix max lock length on unix
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
2023-12-08T13:39:30Z
2023-12-12T11:53:32Z
2023-12-12T11:47:27Z
MEMBER
null
reported in https://github.com/huggingface/datasets/pull/6482
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6482/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6482/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6482.diff", "html_url": "https://github.com/huggingface/datasets/pull/6482", "merged_at": "2023-12-12T11:47:27Z", "patch_url": "https://github.com/huggingface/datasets/pull/6482.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6482" }
true
https://api.github.com/repos/huggingface/datasets/issues/6481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6481/comments
https://api.github.com/repos/huggingface/datasets/issues/6481/events
https://github.com/huggingface/datasets/issues/6481
2,032,650,003
I_kwDODunzps55J8cT
6,481
using torchrun, save_to_disk suddenly shows SIGTERM
{ "avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4", "events_url": "https://api.github.com/users/Ariya12138/events{/privacy}", "followers_url": "https://api.github.com/users/Ariya12138/followers", "following_url": "https://api.github.com/users/Ariya12138/following{/other_user}", "gists_url": "https://api.github.com/users/Ariya12138/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Ariya12138", "id": 85916625, "login": "Ariya12138", "node_id": "MDQ6VXNlcjg1OTE2NjI1", "organizations_url": "https://api.github.com/users/Ariya12138/orgs", "received_events_url": "https://api.github.com/users/Ariya12138/received_events", "repos_url": "https://api.github.com/users/Ariya12138/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Ariya12138/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ariya12138/subscriptions", "type": "User", "url": "https://api.github.com/users/Ariya12138" }
[]
open
false
null
[]
null
0
2023-12-08T13:22:03Z
2023-12-08T13:22:03Z
null
NONE
null
### Describe the bug When I run my code using the "torchrun" command, when the code reaches the "save_to_disk" part, suddenly I get the following warning and error messages: Because the dataset is too large, the "save_to_disk" function splits it into 70 parts for saving. However, an error occurs suddenly when it reaches the 14th shard. WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967. ### Steps to reproduce the bug ds_shard = ds_shard.map(map_fn, *args, **kwargs) ds_shard.save_to_disk(ds_shard_filepaths[rank]) Saving the dataset (14/70 shards): 20%|β–ˆβ–ˆ | 875350/4376702 [00:19<01:53, 30863.15 examples/s] WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python Traceback (most recent call last): File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module> sys.exit(main()) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper return f(*args, **kwargs) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main run(args) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run elastic_launch( File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__ return launch_agent(self._config, self._entrypoint, list(args)) File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent raise ChildFailedError( torch.distributed.elastic.multiprocessing.errors.ChildFailedError: ========================================================== run.py FAILED ---------------------------------------------------------- Failures: <NO_OTHER_FAILURES> ---------------------------------------------------------- Root Cause (first observed failure): [0]: time : 2023-12-08_20:09:04 rank : 0 (local_rank: 0) exitcode : -7 (pid: 2224967) error_file: <N/A> traceback : Signal 7 (SIGBUS) received by PID 2224967 ### Expected behavior I hope it can save successfully without any issues, but it seems there is a problem. ### Environment info `datasets` version: 2.14.6 - Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28 - Python version: 3.10.11 - Huggingface_hub version: 0.17.3 - PyArrow version: 14.0.0 - Pandas version: 2.1.2
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6481/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6480
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6480/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6480/comments
https://api.github.com/repos/huggingface/datasets/issues/6480/events
https://github.com/huggingface/datasets/pull/6480
2,031,116,653
PR_kwDODunzps5hcS7P
6,480
Add IterableDataset `__repr__`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-07T16:31:50Z
2023-12-08T13:33:06Z
2023-12-08T13:26:54Z
MEMBER
null
Example for glue sst2: Dataset ``` DatasetDict({ test: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 1821 }) train: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 67349 }) validation: Dataset({ features: ['sentence', 'label', 'idx'], num_rows: 872 }) }) ``` IterableDataset (new) ``` IterableDatasetDict({ test: IterableDataset({ features: ['sentence', 'label', 'idx'], n_shards: 1 }) train: IterableDataset({ features: ['sentence', 'label', 'idx'], n_shards: 1 }) validation: IterableDataset({ features: ['sentence', 'label', 'idx'], n_shards: 1 }) }) ``` IterableDataset (before) ``` {'test': <datasets.iterable_dataset.IterableDataset object at 0x130d421f0>, 'train': <datasets.iterable_dataset.IterableDataset object at 0x136f3aaf0>, 'validation': <datasets.iterable_dataset.IterableDataset object at 0x136f4b100>} {'sentence': 'hide new secretions from the parental units ', 'label': 0, 'idx': 0} ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6480/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6480/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6480.diff", "html_url": "https://github.com/huggingface/datasets/pull/6480", "merged_at": "2023-12-08T13:26:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/6480.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6480" }
true
https://api.github.com/repos/huggingface/datasets/issues/6479
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6479/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6479/comments
https://api.github.com/repos/huggingface/datasets/issues/6479/events
https://github.com/huggingface/datasets/pull/6479
2,029,040,121
PR_kwDODunzps5hVLom
6,479
More robust preupload retry mechanism
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
2
2023-12-06T17:19:38Z
2023-12-06T19:47:29Z
2023-12-06T19:41:06Z
COLLABORATOR
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6479/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6479/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6479.diff", "html_url": "https://github.com/huggingface/datasets/pull/6479", "merged_at": "2023-12-06T19:41:06Z", "patch_url": "https://github.com/huggingface/datasets/pull/6479.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6479" }
true
https://api.github.com/repos/huggingface/datasets/issues/6478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6478/comments
https://api.github.com/repos/huggingface/datasets/issues/6478/events
https://github.com/huggingface/datasets/issues/6478
2,028,071,596
I_kwDODunzps544eqs
6,478
How to load data from lakefs
{ "avatar_url": "https://avatars.githubusercontent.com/u/12895488?v=4", "events_url": "https://api.github.com/users/d710055071/events{/privacy}", "followers_url": "https://api.github.com/users/d710055071/followers", "following_url": "https://api.github.com/users/d710055071/following{/other_user}", "gists_url": "https://api.github.com/users/d710055071/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/d710055071", "id": 12895488, "login": "d710055071", "node_id": "MDQ6VXNlcjEyODk1NDg4", "organizations_url": "https://api.github.com/users/d710055071/orgs", "received_events_url": "https://api.github.com/users/d710055071/received_events", "repos_url": "https://api.github.com/users/d710055071/repos", "site_admin": false, "starred_url": "https://api.github.com/users/d710055071/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/d710055071/subscriptions", "type": "User", "url": "https://api.github.com/users/d710055071" }
[]
closed
false
null
[]
null
3
2023-12-06T09:04:11Z
2024-07-03T19:13:57Z
2024-07-03T19:13:56Z
CONTRIBUTOR
null
My dataset is stored on the company's lakeFS server. How can I write code to load the dataset? It would be great if you could provide code examples or some references.
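One possible approach, sketched under the assumption that the lakeFS server exposes its S3-compatible gateway and that the data is stored as Parquet; the endpoint, repository ("my-repo"), branch ("main"), credentials, and file layout below are all placeholders, not details from this thread.

```python
# Sketch: read files from lakeFS through its S3-compatible gateway using
# fsspec/s3fs storage options (requires s3fs to be installed).
from datasets import load_dataset

storage_options = {
    "key": "<LAKEFS_ACCESS_KEY>",        # placeholder credentials
    "secret": "<LAKEFS_SECRET_KEY>",
    "client_kwargs": {"endpoint_url": "https://lakefs.example.com"},
}

ds = load_dataset(
    "parquet",
    data_files="s3://my-repo/main/datasets/train/*.parquet",
    storage_options=storage_options,
)
```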
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6478/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6478/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6477/comments
https://api.github.com/repos/huggingface/datasets/issues/6477/events
https://github.com/huggingface/datasets/pull/6477
2,028,022,374
PR_kwDODunzps5hRq_N
6,477
Fix PermissionError on Windows CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
2023-12-06T08:34:53Z
2023-12-06T09:24:11Z
2023-12-06T09:17:52Z
MEMBER
null
Fix #6476.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6477/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6477/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6477.diff", "html_url": "https://github.com/huggingface/datasets/pull/6477", "merged_at": "2023-12-06T09:17:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/6477.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6477" }
true
https://api.github.com/repos/huggingface/datasets/issues/6476
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6476/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6476/comments
https://api.github.com/repos/huggingface/datasets/issues/6476/events
https://github.com/huggingface/datasets/issues/6476
2,028,018,596
I_kwDODunzps544Ruk
6,476
CI on windows is broken: PermissionError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
2023-12-06T08:32:53Z
2023-12-06T09:17:53Z
2023-12-06T09:17:53Z
MEMBER
null
See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394

```
FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\\default\\0.0.0\\c240e2be3370bdbd\\dataset_with_script-train.arrow'
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6476/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6476/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6475
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6475/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6475/comments
https://api.github.com/repos/huggingface/datasets/issues/6475/events
https://github.com/huggingface/datasets/issues/6475
2,027,373,734
I_kwDODunzps5410Sm
6,475
laion2B-en failed to load on Windows with PrefetchVirtualMemory failed
{ "avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4", "events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}", "followers_url": "https://api.github.com/users/doctorpangloss/followers", "following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}", "gists_url": "https://api.github.com/users/doctorpangloss/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/doctorpangloss", "id": 2229300, "login": "doctorpangloss", "node_id": "MDQ6VXNlcjIyMjkzMDA=", "organizations_url": "https://api.github.com/users/doctorpangloss/orgs", "received_events_url": "https://api.github.com/users/doctorpangloss/received_events", "repos_url": "https://api.github.com/users/doctorpangloss/repos", "site_admin": false, "starred_url": "https://api.github.com/users/doctorpangloss/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/doctorpangloss/subscriptions", "type": "User", "url": "https://api.github.com/users/doctorpangloss" }
[]
open
false
null
[]
null
6
2023-12-06T00:07:34Z
2023-12-06T23:26:23Z
null
NONE
null
### Describe the bug I have downloaded laion2B-en, and I'm receiving the following error trying to load it: ``` Resolving data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 128/128 [00:00<00:00, 1173.79it/s] Traceback (most recent call last): File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 31, in <module> count = compute_frequencies() ^^^^^^^^^^^^^^^^^^^^^ File "D:\Art-Workspace\src\artworkspace\tokeneval\compute_frequencies.py", line 17, in compute_frequencies laion2b_dataset = load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\load.py", line 2165, in load_dataset ds = builder_instance.as_dataset(split=split, verification_mode=verification_mode, in_memory=keep_in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1187, in as_dataset datasets = map_nested( ^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\utils\py_utils.py", line 456, in map_nested return function(data_struct) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1217, in _build_single_dataset ds = self._as_dataset( ^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\builder.py", line 1291, in _as_dataset dataset_kwargs = ArrowReader(cache_dir, self.info).read( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 244, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 265, in read_files pa_table = self._read_files(files, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 200, in _read_files pa_table: Table = self._get_table_from_filename(f_dict, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 336, in _get_table_from_filename table = ArrowReader.read_table(filename, in_memory=in_memory) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\arrow_reader.py", line 357, in read_table return table_cls.from_file(filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 1059, in from_file table = _memory_mapped_arrow_table_from_file(filename) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\bberman\Documents\Art-Workspace\venv\Lib\site-packages\datasets\table.py", line 66, in _memory_mapped_arrow_table_from_file pa_table = opened_stream.read_all() ^^^^^^^^^^^^^^^^^^^^^^^^ File "pyarrow\ipc.pxi", line 757, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow\error.pxi", line 91, in pyarrow.lib.check_status OSError: [WinError 8] PrefetchVirtualMemory 
failed. Detail: [Windows error 8] Not enough memory resources are available to process this command. ``` This error is probably a red herring: https://stackoverflow.com/questions/50263929/numpy-memmap-returns-not-enough-memory-while-there-are-plenty-available In other words, the issue is related to asking for a memory mapping of length N > M the length of the file on Windows. This gracefully succeeds on Linux. I have 1024 arrow files in my cache instead of 128 like in the repository for it. Probably related. I don't know why `datasets` reorganized/rewrote the dataset in my cache to be 1024 slices instead of the original 128. ### Steps to reproduce the bug ``` # as a huggingface developer, you may already have laion2B-en somewhere _CACHE_DIR = "." from datasets import load_dataset load_dataset("laion/laion2B-en", split="train", cache_dir=_CACHE_DIR, keep_in_memory=False) ``` ### Expected behavior This should correctly load as a memory mapped Arrow dataset. ### Environment info - `datasets` version: 2.15.0 - Platform: Windows-10-10.0.20348-SP0 (this is windows 2022) - Python version: 3.11.4 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.2 - `fsspec` version: 2023.10.0
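A hedged workaround sketch (an assumption, not something confirmed in the report): streaming the dataset should sidestep memory-mapping the local Arrow files, and with it the failing `PrefetchVirtualMemory` call on Windows.

```python
# Sketch: stream instead of memory-mapping the local Arrow cache.
from datasets import load_dataset

ds = load_dataset("laion/laion2B-en", split="train", streaming=True)
for example in ds.take(5):
    print(example)
```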
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6475/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6475/timeline
null
reopened
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6474/comments
https://api.github.com/repos/huggingface/datasets/issues/6474/events
https://github.com/huggingface/datasets/pull/6474
2,027,006,715
PR_kwDODunzps5hONZc
6,474
Deprecate Beam API and download from HF GCS bucket
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
2
2023-12-05T19:51:33Z
2024-03-12T14:56:25Z
2024-03-12T14:50:12Z
COLLABORATOR
null
Deprecate the Beam API and download from the HF GCS bucket.

TODO:
- [x] Convert the Beam-based [`wikipedia`](https://huggingface.co/datasets/wikipedia) to an Arrow-based dataset ([Hub PR](https://huggingface.co/datasets/wikipedia/discussions/19))
- [x] Make [`natural_questions`](https://huggingface.co/datasets/natural_questions) a no-code dataset ([Hub PR](https://huggingface.co/datasets/natural_questions/discussions/7))
- [x] Make [`wiki40b`](https://huggingface.co/datasets/wiki40b) a no-code dataset ([Hub PR](https://huggingface.co/datasets/wiki40b/discussions/5))
- [x] Make [`wiki_dpr`](https://huggingface.co/datasets/wiki_dpr) an Arrow-based dataset ([Hub PR](https://huggingface.co/datasets/wiki_dpr/discussions/14))
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6474/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6474/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6474.diff", "html_url": "https://github.com/huggingface/datasets/pull/6474", "merged_at": "2024-03-12T14:50:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/6474.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6474" }
true
https://api.github.com/repos/huggingface/datasets/issues/6473
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6473/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6473/comments
https://api.github.com/repos/huggingface/datasets/issues/6473/events
https://github.com/huggingface/datasets/pull/6473
2,026,495,084
PR_kwDODunzps5hMbvz
6,473
Fix CI quality
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
2
2023-12-05T15:36:23Z
2023-12-05T18:14:50Z
2023-12-05T18:08:41Z
MEMBER
null
Fix #6472.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6473/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6473/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6473.diff", "html_url": "https://github.com/huggingface/datasets/pull/6473", "merged_at": "2023-12-05T18:08:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/6473.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6473" }
true
https://api.github.com/repos/huggingface/datasets/issues/6472
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6472/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6472/comments
https://api.github.com/repos/huggingface/datasets/issues/6472/events
https://github.com/huggingface/datasets/issues/6472
2,026,493,439
I_kwDODunzps54ydX_
6,472
CI quality is broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "d4c5f9", "default": false, "description": "Maintenance tasks", "id": 4296013012, "name": "maintenance", "node_id": "LA_kwDODunzps8AAAABAA_01A", "url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
0
2023-12-05T15:35:34Z
2023-12-06T08:17:34Z
2023-12-05T18:08:43Z
MEMBER
null
See: https://github.com/huggingface/datasets/actions/runs/7100835633/job/19327734359

```
Would reformat: src/datasets/features/image.py
1 file would be reformatted, 253 files left unchanged
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6472/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6472/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6471
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6471/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6471/comments
https://api.github.com/repos/huggingface/datasets/issues/6471/events
https://github.com/huggingface/datasets/pull/6471
2,026,100,761
PR_kwDODunzps5hLEni
6,471
Remove delete doc CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-05T12:37:50Z
2023-12-05T12:44:59Z
2023-12-05T12:38:50Z
MEMBER
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6471/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6471/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6471.diff", "html_url": "https://github.com/huggingface/datasets/pull/6471", "merged_at": "2023-12-05T12:38:50Z", "patch_url": "https://github.com/huggingface/datasets/pull/6471.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6471" }
true
https://api.github.com/repos/huggingface/datasets/issues/6470
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6470/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6470/comments
https://api.github.com/repos/huggingface/datasets/issues/6470/events
https://github.com/huggingface/datasets/issues/6470
2,024,724,319
I_kwDODunzps54rtdf
6,470
If an image in a dataset is corrupted, we get unescapable error
{ "avatar_url": "https://avatars.githubusercontent.com/u/14337872?v=4", "events_url": "https://api.github.com/users/chigozienri/events{/privacy}", "followers_url": "https://api.github.com/users/chigozienri/followers", "following_url": "https://api.github.com/users/chigozienri/following{/other_user}", "gists_url": "https://api.github.com/users/chigozienri/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/chigozienri", "id": 14337872, "login": "chigozienri", "node_id": "MDQ6VXNlcjE0MzM3ODcy", "organizations_url": "https://api.github.com/users/chigozienri/orgs", "received_events_url": "https://api.github.com/users/chigozienri/received_events", "repos_url": "https://api.github.com/users/chigozienri/repos", "site_admin": false, "starred_url": "https://api.github.com/users/chigozienri/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/chigozienri/subscriptions", "type": "User", "url": "https://api.github.com/users/chigozienri" }
[]
open
false
null
[]
null
0
2023-12-04T20:58:49Z
2023-12-04T20:58:49Z
null
NONE
null
### Describe the bug Example discussed in detail here: https://huggingface.co/datasets/sasha/birdsnap/discussions/1 ### Steps to reproduce the bug ``` from datasets import load_dataset, VerificationMode dataset = load_dataset( 'sasha/birdsnap', split="train", verification_mode=VerificationMode.ALL_CHECKS, streaming=True # I recommend using streaming=True when reproducing, as this dataset is large ) for idx, row in enumerate(dataset): # Iterating to 9287 took 7 minutes for me # If you already have the data locally cached and set streaming=False, you see the same error just by with dataset[9287] pass # error at 9287 OSError: image file is truncated (45 bytes not processed) # note that we can't avoid the error using a try/except + continue inside the loop ``` ### Expected behavior Able to escape errors in casting to Image() without killing the whole loop ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-5.15.0-84-generic-x86_64-with-glibc2.31 - Python version: 3.11.5 - `huggingface_hub` version: 0.19.4 - PyArrow version: 14.0.1 - Pandas version: 2.1.3 - `fsspec` version: 2023.10.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6470/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6470/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6469/comments
https://api.github.com/repos/huggingface/datasets/issues/6469/events
https://github.com/huggingface/datasets/pull/6469
2,023,695,839
PR_kwDODunzps5hC6xf
6,469
Don't expand_info in HF glob
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
2023-12-04T12:00:37Z
2023-12-15T13:18:37Z
2023-12-15T13:12:30Z
MEMBER
null
Finally fix https://github.com/huggingface/datasets/issues/5537
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6469/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6469/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6469.diff", "html_url": "https://github.com/huggingface/datasets/pull/6469", "merged_at": "2023-12-15T13:12:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/6469.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6469" }
true
https://api.github.com/repos/huggingface/datasets/issues/6468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6468/comments
https://api.github.com/repos/huggingface/datasets/issues/6468/events
https://github.com/huggingface/datasets/pull/6468
2,023,617,877
PR_kwDODunzps5hCpbN
6,468
Use auth to get parquet export
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-04T11:18:27Z
2023-12-04T17:21:22Z
2023-12-04T17:15:11Z
MEMBER
null
added `token` to the `_datasets_server` functions
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6468/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6468/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6468.diff", "html_url": "https://github.com/huggingface/datasets/pull/6468", "merged_at": "2023-12-04T17:15:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/6468.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6468" }
true
https://api.github.com/repos/huggingface/datasets/issues/6467
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6467/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6467/comments
https://api.github.com/repos/huggingface/datasets/issues/6467/events
https://github.com/huggingface/datasets/issues/6467
2,023,174,233
I_kwDODunzps54lzBZ
6,467
New version release request
{ "avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4", "events_url": "https://api.github.com/users/LZHgrla/events{/privacy}", "followers_url": "https://api.github.com/users/LZHgrla/followers", "following_url": "https://api.github.com/users/LZHgrla/following{/other_user}", "gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LZHgrla", "id": 36994684, "login": "LZHgrla", "node_id": "MDQ6VXNlcjM2OTk0Njg0", "organizations_url": "https://api.github.com/users/LZHgrla/orgs", "received_events_url": "https://api.github.com/users/LZHgrla/received_events", "repos_url": "https://api.github.com/users/LZHgrla/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions", "type": "User", "url": "https://api.github.com/users/LZHgrla" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
2
2023-12-04T07:08:26Z
2023-12-04T15:42:22Z
2023-12-04T15:42:22Z
CONTRIBUTOR
null
### Feature request Hi! I am using `datasets` in library `xtuner` and am highly interested in the features introduced since v2.15.0. To avoid installation from source in our pypi wheels, we are eagerly waiting for the new release. So, Does your team have a new release plan for v2.15.1 and could you please share it with us? Thanks very much! ### Motivation . ### Your contribution .
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6467/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6467/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6466/comments
https://api.github.com/repos/huggingface/datasets/issues/6466/events
https://github.com/huggingface/datasets/issues/6466
2,022,601,176
I_kwDODunzps54jnHY
6,466
Can't align optional features of struct
{ "avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4", "events_url": "https://api.github.com/users/Dref360/events{/privacy}", "followers_url": "https://api.github.com/users/Dref360/followers", "following_url": "https://api.github.com/users/Dref360/following{/other_user}", "gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Dref360", "id": 8976546, "login": "Dref360", "node_id": "MDQ6VXNlcjg5NzY1NDY=", "organizations_url": "https://api.github.com/users/Dref360/orgs", "received_events_url": "https://api.github.com/users/Dref360/received_events", "repos_url": "https://api.github.com/users/Dref360/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Dref360/subscriptions", "type": "User", "url": "https://api.github.com/users/Dref360" }
[]
closed
false
null
[]
null
3
2023-12-03T15:57:07Z
2024-02-15T15:19:33Z
2024-02-08T14:38:34Z
CONTRIBUTOR
null
### Describe the bug Hello! I'm currently experiencing an issue where I can't concatenate datasets if an inner field of a Feature is Optional. I have a column named `speaker`, and this holds some information about a speaker. ```python @dataclass class Speaker: name: str email: Optional[str] ``` If I have two datasets, one happens to have `email` always None, then I get `The features can't be aligned because the key email of features` ### Steps to reproduce the bug You can run the following script: ```python ds = Dataset.from_dict({'speaker': [{'name': 'Ben', 'email': None}]}) ds2 = Dataset.from_dict({'speaker': [{'name': 'Fred', 'email': 'abc@aol.com'}]}) concatenate_datasets([ds, ds2]) >>>The features can't be aligned because the key speaker of features {'speaker': {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)}} has unexpected type - {'email': Value(dtype='string', id=None), 'name': Value(dtype='string', id=None)} (expected either {'email': Value(dtype='null', id=None), 'name': Value(dtype='string', id=None)} or Value("null"). ``` ### Expected behavior I think this should work; if two top-level columns were in the same situation it would properly cast to `string`. ```python ds = Dataset.from_dict({'email': [None, None]}) ds2 = Dataset.from_dict({'email': ['abc@aol.com', 'one@yahoo.com']}) concatenate_datasets([ds, ds2]) >>> # Works! ``` ### Environment info - `datasets` version: 2.15.1.dev0 - Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35 - Python version: 3.9.13 - `huggingface_hub` version: 0.19.4 - PyArrow version: 9.0.0 - Pandas version: 1.4.4 - `fsspec` version: 2023.6.0 I would be happy to fix this issue.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6466/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6466/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6465
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6465/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6465/comments
https://api.github.com/repos/huggingface/datasets/issues/6465/events
https://github.com/huggingface/datasets/issues/6465
2,022,212,468
I_kwDODunzps54iIN0
6,465
`load_dataset` uses out-of-date cache instead of re-downloading a changed dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/3391297?v=4", "events_url": "https://api.github.com/users/mnoukhov/events{/privacy}", "followers_url": "https://api.github.com/users/mnoukhov/followers", "following_url": "https://api.github.com/users/mnoukhov/following{/other_user}", "gists_url": "https://api.github.com/users/mnoukhov/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mnoukhov", "id": 3391297, "login": "mnoukhov", "node_id": "MDQ6VXNlcjMzOTEyOTc=", "organizations_url": "https://api.github.com/users/mnoukhov/orgs", "received_events_url": "https://api.github.com/users/mnoukhov/received_events", "repos_url": "https://api.github.com/users/mnoukhov/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mnoukhov/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mnoukhov/subscriptions", "type": "User", "url": "https://api.github.com/users/mnoukhov" }
[]
open
false
null
[]
null
1
2023-12-02T21:35:17Z
2023-12-04T16:13:10Z
null
NONE
null
### Describe the bug When a dataset is updated on the hub, using `load_dataset` will load the locally cached dataset instead of re-downloading the updated dataset ### Steps to reproduce the bug Here is a minimal example script to 1. create an initial dataset and upload 2. download it so it is stored in cache 3. change the dataset and re-upload 4. redownload ```python import time from datasets import Dataset, DatasetDict, DownloadMode, load_dataset username = "YOUR_USERNAME_HERE" initial = Dataset.from_dict({"foo": [1, 2, 3]}) print(f"Intial {initial['foo']}") initial_ds = DatasetDict({"train": initial}) initial_ds.push_to_hub("test") time.sleep(1) download = load_dataset(f"{username}/test", split="train") changed = download.map(lambda x: {"foo": x["foo"] + 1}) print(f"Changed {changed['foo']}") changed.push_to_hub("test") time.sleep(1) download_again = load_dataset(f"{username}/test", split="train") print(f"Download Changed {download_again['foo']}") # >>> gives the out-dated [1,2,3] when it should be changed [2,3,4] ``` The redownloaded dataset should be the changed dataset but it is actually the cached, initial dataset. Force-redownloading gives the correct dataset ```python download_again_force = load_dataset(f"{username}/test", split="train", download_mode=DownloadMode.FORCE_REDOWNLOAD) print(f"Force Download Changed {download_again_force['foo']}") # >>> [2,3,4] ``` ### Expected behavior I assumed there should be some sort of hashing that should check for changes in the dataset and re-download if the hashes don't match ### Environment info - `datasets` version: 2.15.0 β”‚ - Platform: Linux-5.15.0-1028-nvidia-x86_64-with-glibc2.17 β”‚ - Python version: 3.8.17 β”‚ - `huggingface_hub` version: 0.19.4 β”‚ - PyArrow version: 13.0.0 β”‚ - Pandas version: 2.0.3 β”‚ - `fsspec` version: 2023.6.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6465/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6465/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6464
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6464/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6464/comments
https://api.github.com/repos/huggingface/datasets/issues/6464/events
https://github.com/huggingface/datasets/pull/6464
2,020,860,462
PR_kwDODunzps5g5djo
6,464
Add concurrent loading of shards to datasets.load_from_disk
{ "avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4", "events_url": "https://api.github.com/users/kkoutini/events{/privacy}", "followers_url": "https://api.github.com/users/kkoutini/followers", "following_url": "https://api.github.com/users/kkoutini/following{/other_user}", "gists_url": "https://api.github.com/users/kkoutini/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kkoutini", "id": 51880718, "login": "kkoutini", "node_id": "MDQ6VXNlcjUxODgwNzE4", "organizations_url": "https://api.github.com/users/kkoutini/orgs", "received_events_url": "https://api.github.com/users/kkoutini/received_events", "repos_url": "https://api.github.com/users/kkoutini/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kkoutini/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kkoutini/subscriptions", "type": "User", "url": "https://api.github.com/users/kkoutini" }
[]
closed
false
null
[]
null
8
2023-12-01T13:13:53Z
2024-01-26T15:17:43Z
2024-01-26T15:10:26Z
CONTRIBUTOR
null
In some file systems (like luster), memory mapping arrow files takes time. This can be accelerated by performing the mmap in parallel on processes or threads. - Threads seem to be faster than processes when gathering the list of tables from the workers (see https://github.com/huggingface/datasets/issues/2252). - I'm not sure if using threads would respect theΒ `IN_MEMORY_MAX_SIZE` config. - I'm not sure if we need to exposeΒ num_procΒ fromΒ `BaseReader.read`Β toΒ `DatasetBuilder.as_dataset`. Since `Β DatasetBuilder.as_dataset` is used in many places beside `load_dataset`. ### Tests on luster file system (on a shared partial node): Loading 1231 shards of ~2GBs. The files were pre-loaded in another process before the script runs (couldn't get a fresh node). ```python import logging from time import perf_counter import datasets logger = datasets.logging.get_logger(__name__) datasets.logging.set_verbosity_info() logging.basicConfig(level=logging.DEBUG, format="%(message)s") class catchtime: # context to measure loading time: https://stackoverflow.com/questions/33987060/python-context-manager-that-measures-time def __init__(self, debug_print="Time", logger=logger): self.debug_print = debug_print self.logger = logger def __enter__(self): self.start = perf_counter() return self def __exit__(self, type, value, traceback): self.time = perf_counter() - self.start readout = f"{self.debug_print}: {self.time:.3f} seconds" self.logger.info(readout) dataset_path="" # warmup with catchtime("Loading in parallel", logger=logger): ds = datasets.load_from_disk(dataset_path,num_proc=16) # num_proc=16 with catchtime("Loading in parallel", logger=logger): ds = datasets.load_from_disk(dataset_path,num_proc=16) # num_proc=32 with catchtime("Loading in parallel", logger=logger): ds = datasets.load_from_disk(dataset_path,num_proc=32) # num_proc=1 with catchtime("Loading in conseq", logger=logger): ds = datasets.load_from_disk(dataset_path,num_proc=1) ``` #### Run 1 ``` open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:28<00:00, 13.96shards/s] Loading in parallel: 88.690 seconds open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:48<00:00, 11.31shards/s] Loading in parallel: 109.339 seconds open file: .../dataset_dict.json Loading the dataset from disk using 32 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:06<00:00, 18.56shards/s] Loading in parallel: 66.931 seconds open file: .../dataset_dict.json Loading the dataset from disk: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [05:09<00:00, 3.98shards/s] Loading in conseq: 309.792 seconds ``` #### Run 2 ``` open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:38<00:00, 12.53shards/s] Loading in parallel: 98.831 seconds open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [02:01<00:00, 10.16shards/s] Loading in parallel: 121.669 seconds open file: .../dataset_dict.json Loading the dataset from disk using 32 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:07<00:00, 18.18shards/s] Loading in parallel: 68.192 seconds open file: .../dataset_dict.json Loading the dataset from disk: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [05:19<00:00, 3.86shards/s] Loading in conseq: 319.759 seconds ``` #### Run 3 ``` open file: 
.../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:36<00:00, 12.74shards/s] Loading in parallel: 96.936 seconds open file: .../dataset_dict.json Loading the dataset from disk using 16 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [02:00<00:00, 10.24shards/s] Loading in parallel: 120.761 seconds open file: .../dataset_dict.json Loading the dataset from disk using 32 threads: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [01:08<00:00, 18.04shards/s] Loading in parallel: 68.666 seconds open file: .../dataset_dict.json Loading the dataset from disk: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 1231/1231 [05:35<00:00, 3.67shards/s] Loading in conseq: 335.777 seconds ``` fix #2252
{ "+1": 2, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6464/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6464/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6464.diff", "html_url": "https://github.com/huggingface/datasets/pull/6464", "merged_at": "2024-01-26T15:10:26Z", "patch_url": "https://github.com/huggingface/datasets/pull/6464.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6464" }
true
https://api.github.com/repos/huggingface/datasets/issues/6463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6463/comments
https://api.github.com/repos/huggingface/datasets/issues/6463/events
https://github.com/huggingface/datasets/pull/6463
2,020,702,967
PR_kwDODunzps5g46_4
6,463
Disable benchmarks in PRs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-12-01T11:35:30Z
2023-12-01T12:09:09Z
2023-12-01T12:03:04Z
MEMBER
null
In order to keep PR pages less spammy / more readable. Having the benchmarks on commits on `main` is enough imo
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6463/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6463/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6463.diff", "html_url": "https://github.com/huggingface/datasets/pull/6463", "merged_at": "2023-12-01T12:03:04Z", "patch_url": "https://github.com/huggingface/datasets/pull/6463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6463" }
true
https://api.github.com/repos/huggingface/datasets/issues/6462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6462/comments
https://api.github.com/repos/huggingface/datasets/issues/6462/events
https://github.com/huggingface/datasets/pull/6462
2,019,238,388
PR_kwDODunzps5gz68T
6,462
Missing DatasetNotFoundError
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
2
2023-11-30T18:09:43Z
2023-11-30T18:36:40Z
2023-11-30T18:30:30Z
MEMBER
null
continuation of https://github.com/huggingface/datasets/pull/6431 this should fix the CI in https://github.com/huggingface/datasets/pull/6458 too
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6462/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6462/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6462.diff", "html_url": "https://github.com/huggingface/datasets/pull/6462", "merged_at": "2023-11-30T18:30:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/6462.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6462" }
true
https://api.github.com/repos/huggingface/datasets/issues/6461
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6461/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6461/comments
https://api.github.com/repos/huggingface/datasets/issues/6461/events
https://github.com/huggingface/datasets/pull/6461
2,018,850,731
PR_kwDODunzps5gykvO
6,461
Fix shard retry mechanism in `push_to_hub`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
5
2023-11-30T14:57:14Z
2023-12-01T17:57:39Z
2023-12-01T17:51:33Z
COLLABORATOR
null
When it fails, `preupload_lfs_files` throws a [`RuntimeError`](https://github.com/huggingface/huggingface_hub/blob/5eefebee2c150a2df950ab710db350e96c711433/src/huggingface_hub/_commit_api.py#L402) error and chains the original HTTP error. This PR modifies the retry mechanism's error handling to account for that. Fix https://github.com/huggingface/datasets/issues/6392
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6461/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6461/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6461.diff", "html_url": "https://github.com/huggingface/datasets/pull/6461", "merged_at": "2023-12-01T17:51:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/6461.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6461" }
true
https://api.github.com/repos/huggingface/datasets/issues/6460
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6460/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6460/comments
https://api.github.com/repos/huggingface/datasets/issues/6460/events
https://github.com/huggingface/datasets/issues/6460
2,017,433,899
I_kwDODunzps54P5kr
6,460
jsonlines files don't load with `load_dataset`
{ "avatar_url": "https://avatars.githubusercontent.com/u/41377532?v=4", "events_url": "https://api.github.com/users/serenalotreck/events{/privacy}", "followers_url": "https://api.github.com/users/serenalotreck/followers", "following_url": "https://api.github.com/users/serenalotreck/following{/other_user}", "gists_url": "https://api.github.com/users/serenalotreck/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/serenalotreck", "id": 41377532, "login": "serenalotreck", "node_id": "MDQ6VXNlcjQxMzc3NTMy", "organizations_url": "https://api.github.com/users/serenalotreck/orgs", "received_events_url": "https://api.github.com/users/serenalotreck/received_events", "repos_url": "https://api.github.com/users/serenalotreck/repos", "site_admin": false, "starred_url": "https://api.github.com/users/serenalotreck/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/serenalotreck/subscriptions", "type": "User", "url": "https://api.github.com/users/serenalotreck" }
[]
closed
false
null
[]
null
4
2023-11-29T21:20:11Z
2023-12-29T02:58:29Z
2023-12-05T13:30:53Z
NONE
null
### Describe the bug While [the docs](https://huggingface.co/docs/datasets/upload_dataset#upload-dataset) seem to state that `.jsonl` is a supported extension for `datasets`, loading the dataset results in a `JSONDecodeError`. ### Steps to reproduce the bug Code: ``` from datasets import load_dataset dset = load_dataset('slotreck/pickle') ``` Traceback: ``` Downloading readme: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 925/925 [00:00<00:00, 3.11MB/s] Downloading and preparing dataset json/slotreck--pickle to /mnt/home/lotrecks/.cache/huggingface/datasets/slotreck___json/slotreck--pickle-0c311f36ed032b04/0.0.0/8bb11242116d547c741b2e8a1f18598ffdd40a1d4f2a2872c7a28b697434bc96... Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 589k/589k [00:00<00:00, 18.9MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 104k/104k [00:00<00:00, 4.61MB/s] Downloading data: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 170k/170k [00:00<00:00, 7.71MB/s] Downloading data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 3.77it/s] Extracting data files: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3/3 [00:00<00:00, 523.92it/s] Generating train split: 0 examples [00:00, ? 
examples/s]Failed to read file '/mnt/home/lotrecks/.cache/huggingface/datasets/downloads/6ec07bb2f279c9377036af6948532513fa8f48244c672d2644a2d7018ee5c9cb' with error <class 'pyarrow.lib.ArrowInvalid'>: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0 Traceback (most recent call last): File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 144, in _generate_tables dataset = json.load(f) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 296, in load parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/__init__.py", line 348, in loads return _default_decoder.decode(s) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/json/decoder.py", line 340, in decode raise JSONDecodeError("Extra data", s, end) json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 3086) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1879, in _prepare_split_single for _, table in generator: File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 147, in _generate_tables raise e File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables io.BytesIO(batch), read_options=paj.ReadOptions(block_size=block_size) File "pyarrow/_json.pyx", line 259, in pyarrow._json.read_json File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: JSON parse error: Column(/ner/[]/[]/[]) changed from number to string in row 0 The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/load.py", line 1815, in load_dataset storage_options=storage_options, File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 913, in download_and_prepare **download_and_prepare_kwargs, File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1004, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1768, in _prepare_split gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args File "/mnt/home/lotrecks/anaconda3/envs/pickle/lib/python3.7/site-packages/datasets/builder.py", line 1912, in _prepare_split_single raise DatasetGenerationError("An error occurred while generating the dataset") from e datasets.builder.DatasetGenerationError: An error occurred while generating the dataset ``` ### Expected behavior For the dataset to be loaded without error. ### Environment info - `datasets` version: 2.13.1 - Platform: Linux-3.10.0-1160.80.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core - Python version: 3.7.12 - Huggingface_hub version: 0.15.1 - PyArrow version: 8.0.0 - Pandas version: 1.3.5
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6460/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6460/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6459/comments
https://api.github.com/repos/huggingface/datasets/issues/6459/events
https://github.com/huggingface/datasets/pull/6459
2,017,029,380
PR_kwDODunzps5gsWlz
6,459
Retrieve cached datasets that were pushed to hub when offline
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
2023-11-29T16:56:15Z
2024-03-25T13:55:42Z
2024-03-25T13:55:42Z
MEMBER
null
I drafted the logic to retrieve a no-script dataset in the cache. For example it can reload datasets that were pushed to hub if they exist in the cache. example: ```python >>> Dataset.from_dict({"a": [1, 2]}).push_to_hub("lhoestq/tmp") >>> load_dataset("lhoestq/tmp") DatasetDict({ train: Dataset({ features: ['a'], num_rows: 2 }) }) ``` and later, without connection: ```python >>> load_dataset("lhoestq/tmp") Using the latest cached version of the dataset from /Users/quentinlhoest/.cache/huggingface/datasets/lhoestq___tmp/*/*/0b3caccda1725efb(last modified on Wed Nov 29 16:50:27 2023) since it couldn't be found locally at lhoestq/tmp. DatasetDict({ train: Dataset({ features: ['a'], num_rows: 2 }) }) ``` fix https://github.com/huggingface/datasets/issues/3547 ## Implementation details (EDITED) I continued in https://github.com/huggingface/datasets/pull/6493, see the changes there TODO: - [x] tests - [ ] compatible with https://github.com/huggingface/datasets/pull/6458
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6459/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6459/timeline
null
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6459.diff", "html_url": "https://github.com/huggingface/datasets/pull/6459", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6459.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6459" }
true
https://api.github.com/repos/huggingface/datasets/issues/6458
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6458/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6458/comments
https://api.github.com/repos/huggingface/datasets/issues/6458/events
https://github.com/huggingface/datasets/pull/6458
2,016,577,761
PR_kwDODunzps5gqy4M
6,458
Lazy data files resolution
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
20
2023-11-29T13:18:44Z
2024-02-08T14:41:35Z
2024-02-08T14:41:35Z
MEMBER
null
Related to discussion at https://github.com/huggingface/datasets/pull/6255 this makes this code run in 2sec instead of >10sec ```python from datasets import load_dataset ds = load_dataset("glue", "sst2", streaming=True, trust_remote_code=False) ``` For some datasets with many configs and files it can be up to 100x faster. This is particularly important now that some datasets will be loaded from the Parquet export instead of the scripts. The data files are only resolved in the builder `__init__`. To do so I added DataFilesPatternsList and DataFilesPatternsDict that have `.resolve()` to return resolved DataFilesList and DataFilesDict
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6458/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6458/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6458.diff", "html_url": "https://github.com/huggingface/datasets/pull/6458", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6458.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6458" }
true
https://api.github.com/repos/huggingface/datasets/issues/6457
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6457/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6457/comments
https://api.github.com/repos/huggingface/datasets/issues/6457/events
https://github.com/huggingface/datasets/issues/6457
2,015,650,563
I_kwDODunzps54JGMD
6,457
`TypeError`: huggingface_hub.hf_file_system.HfFileSystem.find() got multiple values for keyword argument 'maxdepth'
{ "avatar_url": "https://avatars.githubusercontent.com/u/79070834?v=4", "events_url": "https://api.github.com/users/wasertech/events{/privacy}", "followers_url": "https://api.github.com/users/wasertech/followers", "following_url": "https://api.github.com/users/wasertech/following{/other_user}", "gists_url": "https://api.github.com/users/wasertech/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/wasertech", "id": 79070834, "login": "wasertech", "node_id": "MDQ6VXNlcjc5MDcwODM0", "organizations_url": "https://api.github.com/users/wasertech/orgs", "received_events_url": "https://api.github.com/users/wasertech/received_events", "repos_url": "https://api.github.com/users/wasertech/repos", "site_admin": false, "starred_url": "https://api.github.com/users/wasertech/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/wasertech/subscriptions", "type": "User", "url": "https://api.github.com/users/wasertech" }
[]
closed
false
null
[]
null
5
2023-11-29T01:57:36Z
2023-11-29T15:39:03Z
2023-11-29T02:02:38Z
NONE
null
### Describe the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Steps to reproduce the bug Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Expected behavior Please see https://github.com/huggingface/huggingface_hub/issues/1872 ### Environment info Please see https://github.com/huggingface/huggingface_hub/issues/1872
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6457/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6457/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6456/comments
https://api.github.com/repos/huggingface/datasets/issues/6456/events
https://github.com/huggingface/datasets/pull/6456
2,015,186,090
PR_kwDODunzps5gmDJY
6,456
Don't require trust_remote_code in inspect_dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
3
2023-11-28T19:47:07Z
2023-11-30T10:40:23Z
2023-11-30T10:34:12Z
MEMBER
null
don't require `trust_remote_code` in (deprecated) `inspect_dataset` (it defeats its purpose) (not super important but we might as well keep it until the next major release) this is needed to fix the tests in https://github.com/huggingface/datasets/pull/6448
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6456/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6456/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6456.diff", "html_url": "https://github.com/huggingface/datasets/pull/6456", "merged_at": "2023-11-30T10:34:12Z", "patch_url": "https://github.com/huggingface/datasets/pull/6456.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6456" }
true
https://api.github.com/repos/huggingface/datasets/issues/6454
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6454/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6454/comments
https://api.github.com/repos/huggingface/datasets/issues/6454/events
https://github.com/huggingface/datasets/pull/6454
2,013,001,584
PR_kwDODunzps5gej3H
6,454
Refactor `dill` logic
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
5
2023-11-27T20:01:25Z
2023-11-28T16:29:58Z
2023-11-28T16:29:31Z
COLLABORATOR
null
Refactor the `dill` logic to make it easier to maintain (and fix some issues along the way) It makes the following improvements to the serialization API: * consistent order of a `dict`'s keys * support for hashing `torch.compile`-ed modules and functions * deprecates `datasets.fingerprint.hashregister` as the `hashregister`-ed reducers are never invoked anyways (does not support nested data as `pickle`/`dill` do) ~~TODO: optimize hashing of `pa.Table` and `datasets.table.Table`~~ The `pa_array.to_string` approach is faster for large arrays because it outputs the first 10 and last 10 elements (by default). The problem is that this can produce identical hashes for non-identical arrays if their differing elements get ellipsed... Fix https://github.com/huggingface/datasets/issues/6440, fix https://github.com/huggingface/datasets/issues/5839
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6454/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6454/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6454.diff", "html_url": "https://github.com/huggingface/datasets/pull/6454", "merged_at": "2023-11-28T16:29:31Z", "patch_url": "https://github.com/huggingface/datasets/pull/6454.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6454" }
true
https://api.github.com/repos/huggingface/datasets/issues/6453
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6453/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6453/comments
https://api.github.com/repos/huggingface/datasets/issues/6453/events
https://github.com/huggingface/datasets/pull/6453
2,011,907,787
PR_kwDODunzps5ga0rv
6,453
Update hub-docs reference
{ "avatar_url": "https://avatars.githubusercontent.com/u/11827707?v=4", "events_url": "https://api.github.com/users/mishig25/events{/privacy}", "followers_url": "https://api.github.com/users/mishig25/followers", "following_url": "https://api.github.com/users/mishig25/following{/other_user}", "gists_url": "https://api.github.com/users/mishig25/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mishig25", "id": 11827707, "login": "mishig25", "node_id": "MDQ6VXNlcjExODI3NzA3", "organizations_url": "https://api.github.com/users/mishig25/orgs", "received_events_url": "https://api.github.com/users/mishig25/received_events", "repos_url": "https://api.github.com/users/mishig25/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mishig25/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mishig25/subscriptions", "type": "User", "url": "https://api.github.com/users/mishig25" }
[]
closed
false
null
[]
null
3
2023-11-27T09:57:20Z
2023-11-27T10:23:44Z
2023-11-27T10:17:34Z
NONE
null
Follow up to huggingface/huggingface.js#296
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6453/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6453/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6453.diff", "html_url": "https://github.com/huggingface/datasets/pull/6453", "merged_at": "2023-11-27T10:17:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/6453.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6453" }
true
https://api.github.com/repos/huggingface/datasets/issues/6452
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6452/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6452/comments
https://api.github.com/repos/huggingface/datasets/issues/6452/events
https://github.com/huggingface/datasets/pull/6452
2,011,632,708
PR_kwDODunzps5gZ5oe
6,452
Praveen_repo_pull_req
{ "avatar_url": "https://avatars.githubusercontent.com/u/151713216?v=4", "events_url": "https://api.github.com/users/Praveenhh/events{/privacy}", "followers_url": "https://api.github.com/users/Praveenhh/followers", "following_url": "https://api.github.com/users/Praveenhh/following{/other_user}", "gists_url": "https://api.github.com/users/Praveenhh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Praveenhh", "id": 151713216, "login": "Praveenhh", "node_id": "U_kgDOCQr1wA", "organizations_url": "https://api.github.com/users/Praveenhh/orgs", "received_events_url": "https://api.github.com/users/Praveenhh/received_events", "repos_url": "https://api.github.com/users/Praveenhh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Praveenhh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Praveenhh/subscriptions", "type": "User", "url": "https://api.github.com/users/Praveenhh" }
[]
closed
false
null
[]
null
0
2023-11-27T07:07:50Z
2023-11-27T09:28:00Z
2023-11-27T09:28:00Z
NONE
null
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6452/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6452/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6452.diff", "html_url": "https://github.com/huggingface/datasets/pull/6452", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6452.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6452" }
true
https://api.github.com/repos/huggingface/datasets/issues/6451
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6451/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6451/comments
https://api.github.com/repos/huggingface/datasets/issues/6451/events
https://github.com/huggingface/datasets/issues/6451
2,010,693,912
I_kwDODunzps532MEY
6,451
Unable to read "marsyas/gtzan" data
{ "avatar_url": "https://avatars.githubusercontent.com/u/32300890?v=4", "events_url": "https://api.github.com/users/gerald-wrona/events{/privacy}", "followers_url": "https://api.github.com/users/gerald-wrona/followers", "following_url": "https://api.github.com/users/gerald-wrona/following{/other_user}", "gists_url": "https://api.github.com/users/gerald-wrona/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gerald-wrona", "id": 32300890, "login": "gerald-wrona", "node_id": "MDQ6VXNlcjMyMzAwODkw", "organizations_url": "https://api.github.com/users/gerald-wrona/orgs", "received_events_url": "https://api.github.com/users/gerald-wrona/received_events", "repos_url": "https://api.github.com/users/gerald-wrona/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gerald-wrona/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gerald-wrona/subscriptions", "type": "User", "url": "https://api.github.com/users/gerald-wrona" }
[]
closed
false
null
[]
null
3
2023-11-25T15:13:17Z
2023-12-01T12:53:46Z
2023-11-27T09:36:25Z
NONE
null
Hi, this is my code and the error: ``` from datasets import load_dataset gtzan = load_dataset("marsyas/gtzan", "all") ``` [error_trace.txt](https://github.com/huggingface/datasets/files/13464397/error_trace.txt) [audio_yml.txt](https://github.com/huggingface/datasets/files/13464410/audio_yml.txt) Python 3.11.5 Jupyter Notebook 6.5.4 Windows 10 I'm able to download and work with other datasets, but not this one. For example, both these below work fine: ``` from datasets import load_dataset dataset = load_dataset("facebook/voxpopuli", "pl", split="train", streaming=True) minds = load_dataset("PolyAI/minds14", name="en-US", split="train") ``` Thanks for your help https://huggingface.co/datasets/marsyas/gtzan/tree/main
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6451/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6451/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6450
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6450/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6450/comments
https://api.github.com/repos/huggingface/datasets/issues/6450/events
https://github.com/huggingface/datasets/issues/6450
2,009,491,386
I_kwDODunzps53xme6
6,450
Support multiple image/audio columns in ImageFolder/AudioFolder
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "cfd3d7", "default": true, "description": "This issue or pull request already exists", "id": 1935892865, "name": "duplicate", "node_id": "MDU6TGFiZWwxOTM1ODkyODY1", "url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate" }, { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
1
2023-11-24T10:34:09Z
2023-11-28T11:07:17Z
2023-11-24T17:24:38Z
CONTRIBUTOR
null
### Feature request Have a metadata.csv file with multiple columns that point to relative image or audio files. ### Motivation Currently, ImageFolder allows one column, called `file_name`, pointing to relative image files. Similarly, AudioFolder allows one column, called `file_name`, pointing to relative audio files. But it's not possible to have two image columns, two audio columns, or one audio column and one image column. ### Your contribution No specific contribution
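For context, a hedged sketch of the single-column convention this request would extend; the folder path is a placeholder and the idea of a second relative-path column is purely illustrative:

```python
# Today, ImageFolder/AudioFolder resolve exactly one relative-path column,
# `file_name`, from metadata.csv / metadata.jsonl. A hypothetical second
# column (e.g. `segmentation_file_name`) is what this request asks for.
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/my_dataset", split="train")
print(ds.column_names)  # e.g. ['image', 'caption'] -- at most one image column today
```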
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6450/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6450/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6449/comments
https://api.github.com/repos/huggingface/datasets/issues/6449/events
https://github.com/huggingface/datasets/pull/6449
2,008,617,992
PR_kwDODunzps5gQCVZ
6,449
Fix metadata file resolution when inferred pattern is `**`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
6
2023-11-23T17:35:02Z
2023-11-27T10:02:56Z
2023-11-24T17:13:02Z
COLLABORATOR
null
Refetch metadata files in case they were dropped by `filter_extensions` in the previous step. Fix #6442
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6449/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6449/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6449.diff", "html_url": "https://github.com/huggingface/datasets/pull/6449", "merged_at": "2023-11-24T17:13:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/6449.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6449" }
true
https://api.github.com/repos/huggingface/datasets/issues/6448
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6448/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6448/comments
https://api.github.com/repos/huggingface/datasets/issues/6448/events
https://github.com/huggingface/datasets/pull/6448
2,008,614,985
PR_kwDODunzps5gQBsE
6,448
Use parquet export if possible
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
24
2023-11-23T17:31:57Z
2023-12-01T17:57:17Z
2023-12-01T17:50:59Z
MEMBER
null
The idea is to make this code work for datasets with scripts if they have a Parquet export ```python ds = load_dataset("squad", trust_remote_code=False) ``` And more generally, it means we use the Parquet export whenever it's possible (it's safer and faster than dataset scripts). I also added a `config.USE_PARQUET_EXPORT` variable to use in the datasets-server parquet conversion job - [x] Needs https://github.com/huggingface/datasets/pull/6429 to be merged first cc @severo I use the /parquet and /info endpoints from datasets-server
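A hedged sketch of the intended usage once merged, assuming `USE_PARQUET_EXPORT` is exposed via `datasets.config` as the description suggests:

```python
# Sketch only: relies on the behaviour described in this PR, not on a verified release.
from datasets import config, load_dataset

config.USE_PARQUET_EXPORT = True  # also what the datasets-server conversion job would use

# For a script-based dataset that has a Parquet export on the Hub, this should
# load from the (safer, faster) Parquet files instead of running the script.
ds = load_dataset("squad", trust_remote_code=False)
```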
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 2, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/6448/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6448/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6448.diff", "html_url": "https://github.com/huggingface/datasets/pull/6448", "merged_at": "2023-12-01T17:50:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/6448.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6448" }
true
https://api.github.com/repos/huggingface/datasets/issues/6447
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6447/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6447/comments
https://api.github.com/repos/huggingface/datasets/issues/6447/events
https://github.com/huggingface/datasets/issues/6447
2,008,195,298
I_kwDODunzps53sqDi
6,447
Support one dataset loader per config when using YAML
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
open
false
null
[]
null
0
2023-11-23T13:03:07Z
2023-11-23T13:03:07Z
null
CONTRIBUTOR
null
### Feature request See https://huggingface.co/datasets/datasets-examples/doc-unsupported-1 I would like to use CSV loader for the "csv" config, JSONL loader for the "jsonl" config, etc. ### Motivation It would be more flexible for the users ### Your contribution No specific contribution
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6447/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6447/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6446
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6446/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6446/comments
https://api.github.com/repos/huggingface/datasets/issues/6446/events
https://github.com/huggingface/datasets/issues/6446
2,007,092,708
I_kwDODunzps53oc3k
6,446
Speech Commands v2 dataset doesn't match AST-v2 config
{ "avatar_url": "https://avatars.githubusercontent.com/u/18024303?v=4", "events_url": "https://api.github.com/users/vymao/events{/privacy}", "followers_url": "https://api.github.com/users/vymao/followers", "following_url": "https://api.github.com/users/vymao/following{/other_user}", "gists_url": "https://api.github.com/users/vymao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vymao", "id": 18024303, "login": "vymao", "node_id": "MDQ6VXNlcjE4MDI0MzAz", "organizations_url": "https://api.github.com/users/vymao/orgs", "received_events_url": "https://api.github.com/users/vymao/received_events", "repos_url": "https://api.github.com/users/vymao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vymao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vymao/subscriptions", "type": "User", "url": "https://api.github.com/users/vymao" }
[]
closed
false
null
[]
null
3
2023-11-22T20:46:36Z
2023-11-28T14:46:08Z
2023-11-28T14:46:08Z
NONE
null
### Describe the bug [According](https://huggingface.co/MIT/ast-finetuned-speech-commands-v2) to `MIT/ast-finetuned-speech-commands-v2`, the model was trained on the Speech Commands v2 dataset. However, while the model config says the model should have 35 class labels, the dataset itself has 36 class labels. Moreover, the class labels themselves don't match between the model config and the dataset. It is difficult to reproduce the data used to fine-tune `MIT/ast-finetuned-speech-commands-v2`. ### Steps to reproduce the bug ``` >>> model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2") >>> model.config.id2label {0: 'backward', 1: 'follow', 2: 'five', 3: 'bed', 4: 'zero', 5: 'on', 6: 'learn', 7: 'two', 8: 'house', 9: 'tree', 10: 'dog', 11: 'stop', 12: 'seven', 13: 'eight', 14: 'down', 15: 'six', 16: 'forward', 17: 'cat', 18: 'right', 19: 'visual', 20: 'four', 21: 'wow', 22: 'no', 23: 'nine', 24: 'off', 25: 'three', 26: 'left', 27: 'marvin', 28: 'yes', 29: 'up', 30: 'sheila', 31: 'happy', 32: 'bird', 33: 'go', 34: 'one'} >>> dataset = load_dataset("speech_commands", "v0.02", split="test") >>> torch.unique(torch.Tensor(dataset['label'])) tensor([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 10., 11., 12., 13., 14., 15., 16., 17., 18., 19., 20., 21., 22., 23., 24., 25., 26., 27., 28., 29., 30., 31., 32., 33., 34., 35.]) ``` If you try to explore the [dataset itself](https://huggingface.co/datasets/speech_commands/viewer/v0.02/test), you can see that the id-to-label mapping does not match what is provided by `model.config.id2label`. ### Expected behavior The labels should match completely and there should be the same number of label classes between the model config and the dataset itself. ### Environment info datasets = 2.14.6, transformers = 4.33.3
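A small, hedged way to inspect the mismatch directly is to compare the dataset's `ClassLabel` names with the model's `id2label` values; this assumes the `v0.02` config exposes a `label` ClassLabel feature, as the dataset viewer suggests:

```python
# Sketch: counts and diffs the label sets on both sides of the reported mismatch.
from datasets import load_dataset
from transformers import ASTForAudioClassification

model = ASTForAudioClassification.from_pretrained("MIT/ast-finetuned-speech-commands-v2")
dataset = load_dataset("speech_commands", "v0.02", split="test")

model_labels = set(model.config.id2label.values())
dataset_labels = set(dataset.features["label"].names)

print(len(model_labels), len(dataset_labels))  # 35 vs 36 per the report above
print(sorted(dataset_labels - model_labels))   # labels present only in the dataset
```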
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6446/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6446/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6445/comments
https://api.github.com/repos/huggingface/datasets/issues/6445/events
https://github.com/huggingface/datasets/pull/6445
2,006,958,595
PR_kwDODunzps5gKg2d
6,445
Use `filelock` package for file locking
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
4
2023-11-22T19:04:45Z
2023-11-23T18:47:30Z
2023-11-23T18:41:23Z
COLLABORATOR
null
Use the `filelock` package instead of `datasets.utils.filelock` for file locking to be consistent with `huggingface_hub` and not to be responsible for improving the `filelock` capabilities πŸ™‚. (Reverts https://github.com/huggingface/datasets/pull/859, but these `INFO` logs are not printed by default (anymore?), so this should be okay)
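For reference, a minimal sketch of the third-party `filelock` API being switched to; this is standard usage of the package, with nothing `datasets`-specific assumed:

```python
# The lock file path is an arbitrary example.
from filelock import FileLock

lock = FileLock("/tmp/example_cache.lock")
with lock:  # blocks until acquired, released automatically on exit
    pass    # critical section: work that must not run concurrently
```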
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6445/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6445/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6445.diff", "html_url": "https://github.com/huggingface/datasets/pull/6445", "merged_at": "2023-11-23T18:41:22Z", "patch_url": "https://github.com/huggingface/datasets/pull/6445.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6445" }
true
https://api.github.com/repos/huggingface/datasets/issues/6444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6444/comments
https://api.github.com/repos/huggingface/datasets/issues/6444/events
https://github.com/huggingface/datasets/pull/6444
2,006,842,179
PR_kwDODunzps5gKG_e
6,444
Remove `Table.__getstate__` and `Table.__setstate__`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36994684?v=4", "events_url": "https://api.github.com/users/LZHgrla/events{/privacy}", "followers_url": "https://api.github.com/users/LZHgrla/followers", "following_url": "https://api.github.com/users/LZHgrla/following{/other_user}", "gists_url": "https://api.github.com/users/LZHgrla/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LZHgrla", "id": 36994684, "login": "LZHgrla", "node_id": "MDQ6VXNlcjM2OTk0Njg0", "organizations_url": "https://api.github.com/users/LZHgrla/orgs", "received_events_url": "https://api.github.com/users/LZHgrla/received_events", "repos_url": "https://api.github.com/users/LZHgrla/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LZHgrla/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LZHgrla/subscriptions", "type": "User", "url": "https://api.github.com/users/LZHgrla" }
[]
closed
false
null
[]
null
4
2023-11-22T17:55:10Z
2023-11-23T15:19:43Z
2023-11-23T15:13:28Z
CONTRIBUTOR
null
When using distributed training, the code of `os.remove(filename)` may be executed separately by each rank, leading to `FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmprxxxxxxx.arrow'` ```python from torch import distributed as dist if dist.get_rank() == 0: dataset = process_dataset(*args, **kwargs) objects = [dataset] else: objects = [None] dist.broadcast_object_list(objects, src=0) dataset = objects[0] ```
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6444/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6444/timeline
null
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6444.diff", "html_url": "https://github.com/huggingface/datasets/pull/6444", "merged_at": "2023-11-23T15:13:28Z", "patch_url": "https://github.com/huggingface/datasets/pull/6444.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6444" }
true
https://api.github.com/repos/huggingface/datasets/issues/6443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6443/comments
https://api.github.com/repos/huggingface/datasets/issues/6443/events
https://github.com/huggingface/datasets/issues/6443
2,006,568,368
I_kwDODunzps53mc2w
6,443
Trouble loading files defined in YAML explicitly
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
2
2023-11-22T15:18:10Z
2023-11-23T09:06:20Z
null
CONTRIBUTOR
null
Look at https://huggingface.co/datasets/severo/doc-yaml-2 It's a reproduction of the example given in the docs at https://huggingface.co/docs/hub/datasets-manual-configuration ``` You can select multiple files per split using a list of paths: my_dataset_repository/ β”œβ”€β”€ README.md β”œβ”€β”€ data/ β”‚ β”œβ”€β”€ abc.csv β”‚ └── def.csv └── holdout/ └── ghi.csv --- configs: - config_name: default data_files: - split: train path: - "data/abc.csv" - "data/def.csv" - split: test path: "holdout/ghi.csv" --- ``` It raises the following error: ``` Error code: ConfigNamesError Exception: FileNotFoundError Message: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] Traceback: Traceback (most recent call last): File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 65, in compute_config_names_response for config in sorted(get_dataset_config_names(path=dataset, token=hf_token)) File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 351, in get_dataset_config_names dataset_module = dataset_module_factory( File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1507, in dataset_module_factory raise FileNotFoundError( FileNotFoundError: Couldn't find a dataset script at /src/services/worker/severo/doc-yaml-2/doc-yaml-2.py or any data file in the same directory. 
Couldn't find 'severo/doc-yaml-2' on the Hugging Face Hub either: FileNotFoundError: Unable to find 'hf://datasets/severo/doc-yaml-2@938a0578fb4c6bc9da7d80b06a3ba39c2834b0c2/data/def.csv' with any supported extension ['.csv', '.tsv', '.json', '.jsonl', '.parquet', '.arrow', '.txt', '.blp', '.bmp', '.dib', '.bufr', '.cur', '.pcx', '.dcx', '.dds', '.ps', '.eps', '.fit', '.fits', '.fli', '.flc', '.ftc', '.ftu', '.gbr', '.gif', '.grib', '.h5', '.hdf', '.png', '.apng', '.jp2', '.j2k', '.jpc', '.jpf', '.jpx', '.j2c', '.icns', '.ico', '.im', '.iim', '.tif', '.tiff', '.jfif', '.jpe', '.jpg', '.jpeg', '.mpg', '.mpeg', '.msp', '.pcd', '.pxr', '.pbm', '.pgm', '.ppm', '.pnm', '.psd', '.bw', '.rgb', '.rgba', '.sgi', '.ras', '.tga', '.icb', '.vda', '.vst', '.webp', '.wmf', '.emf', '.xbm', '.xpm', '.BLP', '.BMP', '.DIB', '.BUFR', '.CUR', '.PCX', '.DCX', '.DDS', '.PS', '.EPS', '.FIT', '.FITS', '.FLI', '.FLC', '.FTC', '.FTU', '.GBR', '.GIF', '.GRIB', '.H5', '.HDF', '.PNG', '.APNG', '.JP2', '.J2K', '.JPC', '.JPF', '.JPX', '.J2C', '.ICNS', '.ICO', '.IM', '.IIM', '.TIF', '.TIFF', '.JFIF', '.JPE', '.JPG', '.JPEG', '.MPG', '.MPEG', '.MSP', '.PCD', '.PXR', '.PBM', '.PGM', '.PPM', '.PNM', '.PSD', '.BW', '.RGB', '.RGBA', '.SGI', '.RAS', '.TGA', '.ICB', '.VDA', '.VST', '.WEBP', '.WMF', '.EMF', '.XBM', '.XPM', '.aiff', '.au', '.avr', '.caf', '.flac', '.htk', '.svx', '.mat4', '.mat5', '.mpc2k', '.ogg', '.paf', '.pvf', '.raw', '.rf64', '.sd2', '.sds', '.ircam', '.voc', '.w64', '.wav', '.nist', '.wavex', '.wve', '.xi', '.mp3', '.opus', '.AIFF', '.AU', '.AVR', '.CAF', '.FLAC', '.HTK', '.SVX', '.MAT4', '.MAT5', '.MPC2K', '.OGG', '.PAF', '.PVF', '.RAW', '.RF64', '.SD2', '.SDS', '.IRCAM', '.VOC', '.W64', '.WAV', '.NIST', '.WAVEX', '.WVE', '.XI', '.MP3', '.OPUS', '.zip'] ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6443/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6443/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6442/comments
https://api.github.com/repos/huggingface/datasets/issues/6442/events
https://github.com/huggingface/datasets/issues/6442
2,006,086,907
I_kwDODunzps53knT7
6,442
Trouble loading image folder with additional features - metadata file ignored
{ "avatar_url": "https://avatars.githubusercontent.com/u/57615435?v=4", "events_url": "https://api.github.com/users/linoytsaban/events{/privacy}", "followers_url": "https://api.github.com/users/linoytsaban/followers", "following_url": "https://api.github.com/users/linoytsaban/following{/other_user}", "gists_url": "https://api.github.com/users/linoytsaban/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/linoytsaban", "id": 57615435, "login": "linoytsaban", "node_id": "MDQ6VXNlcjU3NjE1NDM1", "organizations_url": "https://api.github.com/users/linoytsaban/orgs", "received_events_url": "https://api.github.com/users/linoytsaban/received_events", "repos_url": "https://api.github.com/users/linoytsaban/repos", "site_admin": false, "starred_url": "https://api.github.com/users/linoytsaban/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/linoytsaban/subscriptions", "type": "User", "url": "https://api.github.com/users/linoytsaban" }
[]
closed
false
null
[]
null
1
2023-11-22T11:01:35Z
2023-11-24T17:13:03Z
2023-11-24T17:13:03Z
NONE
null
### Describe the bug Loading an image folder with a caption column using `load_dataset("<image_folder_path>")` doesn't load the captions. When loading a local image folder with captions using `datasets==2.13.0` ``` from datasets import load_dataset data = load_dataset("<image_folder_path>") data.column_names ``` yields `{'train': ['image', 'prompt']}`, but when using `datasets==2.15.0` it yields `{'train': ['image']}`. Putting the images and `metadata.jsonl` file into a nested `train` folder **or** loading with `load_dataset("imagefolder", data_dir=<image_folder_path>)` solves the issue and yields `{'train': ['image', 'prompt']}` ### Steps to reproduce the bug 1. Create a folder `<image_folder_path>` that contains images and a metadata file with additional features, e.g. "prompt" 2. Run: ``` from datasets import load_dataset data = load_dataset("<image_folder_path>") data.column_names ``` ### Expected behavior `{'train': ['image', 'prompt']}` ### Environment info - `datasets` version: 2.15.0 - Platform: Linux-5.15.120+-x86_64-with-glibc2.35 - Python version: 3.10.12 - `huggingface_hub` version: 0.19.4 - PyArrow version: 9.0.0 - Pandas version: 1.5.3 - `fsspec` version: 2023.6.0
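A hedged sketch of the two workarounds described above; `<image_folder_path>` stays a placeholder:

```python
# Both reported to restore the extra metadata columns on datasets 2.15.0.
from datasets import load_dataset

# 1) Use the imagefolder builder explicitly and point it at the folder:
data = load_dataset("imagefolder", data_dir="<image_folder_path>")
print(data.column_names)  # expected: {'train': ['image', 'prompt']}

# 2) Alternatively, keep load_dataset("<image_folder_path>") but move the images
#    and metadata.jsonl into a nested train/ subfolder first.
```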
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6442/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6442/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6441/comments
https://api.github.com/repos/huggingface/datasets/issues/6441/events
https://github.com/huggingface/datasets/issues/6441
2,004,985,857
I_kwDODunzps53gagB
6,441
Trouble Loading a Gated Dataset For User with Granted Permission
{ "avatar_url": "https://avatars.githubusercontent.com/u/124715309?v=4", "events_url": "https://api.github.com/users/e-trop/events{/privacy}", "followers_url": "https://api.github.com/users/e-trop/followers", "following_url": "https://api.github.com/users/e-trop/following{/other_user}", "gists_url": "https://api.github.com/users/e-trop/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/e-trop", "id": 124715309, "login": "e-trop", "node_id": "U_kgDOB28BLQ", "organizations_url": "https://api.github.com/users/e-trop/orgs", "received_events_url": "https://api.github.com/users/e-trop/received_events", "repos_url": "https://api.github.com/users/e-trop/repos", "site_admin": false, "starred_url": "https://api.github.com/users/e-trop/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/e-trop/subscriptions", "type": "User", "url": "https://api.github.com/users/e-trop" }
[]
closed
false
null
[]
null
3
2023-11-21T19:24:36Z
2023-12-13T08:27:16Z
2023-12-13T08:27:16Z
NONE
null
### Describe the bug I have granted permissions to several users to access a gated Hugging Face dataset. The users accepted the invite, but when trying to load the dataset using their access token they get `FileNotFoundError: Couldn't find a dataset script at .....`. Also, when they try to open the dataset URL they get a 404 error. ### Steps to reproduce the bug 1. Grant access to the gated dataset for specific users 2. Users accept the invitation 3. Users log in to the Hugging Face Hub using the CLI login 4. Users run `load_dataset` ### Expected behavior The dataset loads normally for users who were granted access to the gated dataset. ### Environment info datasets==2.15.0
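One hedged way to rule out a token-propagation problem on the users' side is to pass the token to `load_dataset` explicitly; the dataset id and token below are placeholders:

```python
# Sketch: explicit token instead of relying on the cached CLI login.
from datasets import load_dataset

ds = load_dataset("<gated-dataset-id>", token="hf_xxx")  # placeholder values
```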
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6441/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6441/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6440
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6440/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6440/comments
https://api.github.com/repos/huggingface/datasets/issues/6440/events
https://github.com/huggingface/datasets/issues/6440
2,004,509,301
I_kwDODunzps53emJ1
6,440
`.map` not hashing under python 3.9
{ "avatar_url": "https://avatars.githubusercontent.com/u/9058204?v=4", "events_url": "https://api.github.com/users/changyeli/events{/privacy}", "followers_url": "https://api.github.com/users/changyeli/followers", "following_url": "https://api.github.com/users/changyeli/following{/other_user}", "gists_url": "https://api.github.com/users/changyeli/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/changyeli", "id": 9058204, "login": "changyeli", "node_id": "MDQ6VXNlcjkwNTgyMDQ=", "organizations_url": "https://api.github.com/users/changyeli/orgs", "received_events_url": "https://api.github.com/users/changyeli/received_events", "repos_url": "https://api.github.com/users/changyeli/repos", "site_admin": false, "starred_url": "https://api.github.com/users/changyeli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/changyeli/subscriptions", "type": "User", "url": "https://api.github.com/users/changyeli" }
[]
closed
false
null
[]
null
2
2023-11-21T15:14:54Z
2023-11-28T16:29:33Z
2023-11-28T16:29:33Z
NONE
null
### Describe the bug The `.map` function cannot hash its transform under Python 3.9. Tried to use [the solution here](https://github.com/huggingface/datasets/issues/4521#issuecomment-1205166653), but still get the same message: `Parameter 'function'=<function map_to_pred at 0x7fa0b49ead30> of the transform datasets.arrow_dataset.Dataset._map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed.` ### Steps to reproduce the bug ```python def map_to_pred(batch): """ Perform inference on an audio batch Parameters: batch (dict): A dictionary containing audio data and other related information. Returns: dict: The input batch dictionary with added prediction and transcription fields. """ audio = batch['audio'] input_features = processor( audio['array'], sampling_rate=audio['sampling_rate'], return_tensors="pt").input_features input_features = input_features.to('cuda') with torch.no_grad(): predicted_ids = model.generate(input_features) preds = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0] batch['prediction'] = processor.tokenizer._normalize(preds) batch["transcription"] = processor.tokenizer._normalize(batch['transcription']) return batch MODEL_CARD = "openai/whisper-small" MODEL_NAME = MODEL_CARD.rsplit('/', maxsplit=1)[-1] model = WhisperForConditionalGeneration.from_pretrained(MODEL_CARD) processor = AutoProcessor.from_pretrained( MODEL_CARD, language="english", task="transcribe") model = torch.compile(model) dt = load_dataset("audiofolder", data_dir=config['DATA']['dataset'], split="test") dt = dt.cast_column("audio", Audio(sampling_rate=16000)) result = dt.map(map_to_pred, num_proc=16) ``` ### Expected behavior The transform is hashed, the dataset is cached, and inference starts ### Environment info - `transformers` version: 4.35.0 - Platform: Linux-5.14.0-284.30.1.el9_2.x86_64-x86_64-with-glibc2.34 - Python version: 3.9.18 - Huggingface_hub version: 0.17.3 - Safetensors version: 0.4.0 - Accelerate version: 0.24.1 - Accelerate config: not found - PyTorch version (GPU?): 2.1.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
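Until hashing of `torch.compile`-ed models is supported, a hedged workaround is to stop relying on the automatic fingerprint for this call; both APIs shown exist in current `datasets` releases, and the names `dt` and `map_to_pred` refer to the reproduction script above:

```python
# Sketch: avoids the "couldn't be hashed properly" warning for map_to_pred.
import datasets

datasets.disable_caching()  # results go to temporary files instead of the cache
result = dt.map(map_to_pred, num_proc=16)

# Alternatively, keep caching but provide the fingerprint yourself:
# result = dt.map(map_to_pred, num_proc=16, new_fingerprint="whisper-small-eval-v1")
```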
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6440/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6440/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/6439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6439/comments
https://api.github.com/repos/huggingface/datasets/issues/6439/events
https://github.com/huggingface/datasets/issues/6439
2,002,916,514
I_kwDODunzps53YhSi
6,439
Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loding
{ "avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4", "events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}", "followers_url": "https://api.github.com/users/AntreasAntoniou/followers", "following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}", "gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AntreasAntoniou", "id": 10792502, "login": "AntreasAntoniou", "node_id": "MDQ6VXNlcjEwNzkyNTAy", "organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs", "received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events", "repos_url": "https://api.github.com/users/AntreasAntoniou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions", "type": "User", "url": "https://api.github.com/users/AntreasAntoniou" }
[]
open
false
null
[]
null
0
2023-11-20T20:07:23Z
2023-11-20T20:07:37Z
null
NONE
null
### Describe the bug I am working with a dataset I am trying to publish. The path is Antreas/TALI. It's a fairly large dataset, and contains images, video, audio and text. I have been having multiple problems when the dataset is being downloaded using the load_dataset function -- even with 64 workers taking more than 7 days to process. With snapshot download it takes 12 hours, and that includes the dataset preparation done using load_dataset and passing the dataset parquet file paths. Find the script I am using below: ```python import multiprocessing as mp import pathlib from typing import Optional import datasets from rich import print from tqdm import tqdm def download_dataset_via_hub( dataset_name: str, dataset_download_path: pathlib.Path, num_download_workers: int = mp.cpu_count(), ): import huggingface_hub as hf_hub download_folder = hf_hub.snapshot_download( repo_id=dataset_name, repo_type="dataset", cache_dir=dataset_download_path, resume_download=True, max_workers=num_download_workers, ignore_patterns=[], ) return pathlib.Path(download_folder) / "data" def load_dataset_via_hub( dataset_download_path: pathlib.Path, num_download_workers: int = mp.cpu_count(), dataset_name: Optional[str] = None, ): from dataclasses import dataclass, field from datasets import ClassLabel, Features, Image, Sequence, Value dataset_path = download_dataset_via_hub( dataset_download_path=dataset_download_path, num_download_workers=num_download_workers, dataset_name=dataset_name, ) # Building a list of file paths for validation set train_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "train" in file.as_posix() ] val_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "val" in file.as_posix() ] test_files = [ file.as_posix() for file in pathlib.Path(dataset_path).glob("*.parquet") if "test" in file.as_posix() ] print( f"Found {len(test_files)} files for testing set, {len(train_files)} for training set and {len(val_files)} for validation set" ) data_files = { "test": test_files, "val": val_files, "train": train_files, } features = Features( { "image": Image( decode=True ), # Set `decode=True` if you want to decode the images, otherwise `decode=False` "image_url": Value("string"), "item_idx": Value("int64"), "wit_features": Sequence( { "attribution_passes_lang_id": Value("bool"), "caption_alt_text_description": Value("string"), "caption_reference_description": Value("string"), "caption_title_and_reference_description": Value("string"), "context_page_description": Value("string"), "context_section_description": Value("string"), "hierarchical_section_title": Value("string"), "is_main_image": Value("bool"), "language": Value("string"), "page_changed_recently": Value("bool"), "page_title": Value("string"), "page_url": Value("string"), "section_title": Value("string"), } ), "wit_idx": Value("int64"), "youtube_title_text": Value("string"), "youtube_description_text": Value("string"), "youtube_video_content": Value("binary"), "youtube_video_starting_time": Value("string"), "youtube_subtitle_text": Value("string"), "youtube_video_size": Value("int64"), "youtube_video_file_path": Value("string"), } ) dataset = datasets.load_dataset( "parquet" if dataset_name is None else dataset_name, data_files=data_files, features=features, num_proc=1, cache_dir=dataset_download_path / "cache", ) return dataset if __name__ == "__main__": dataset_cache = pathlib.Path("/disk/scratch_fast0/tali/") dataset = load_dataset_via_hub(dataset_cache, 
dataset_name="Antreas/TALI")[ "test" ] for sample in tqdm(dataset): print(list(sample.keys())) ``` Also, streaming this dataset has been a very painfully slow process. Streaming the train set takes 15m to start, and streaming the test and val sets takes 3 hours to start! ### Steps to reproduce the bug 1. Run the code I provided to get a sense of how fast snapshot + manual is 2. Run datasets.load_dataset("Antreas/TALI") to get a sense of the speed of that OP. 3. You should now have an appreciation of how long these things take. ### Expected behavior The load dataset function should be at least as fast as the huggingface snapshot download function in terms of downloading dataset files. Not 20 times slower. ### Environment info - `datasets` version: 2.14.5 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35 - Python version: 3.10.13 - Huggingface_hub version: 0.17.3 - PyArrow version: 13.0.0 - Pandas version: 2.1.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6439/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6439/timeline
null
null
null
null
false