Datasets:
Tasks: Text Retrieval
Sub-tasks: fact-checking-retrieval
Languages: English
Size: 10K<n<100K
ArXiv:
License:
fix: licence and add parquet format #9
by vincentkoc - opened
- README.md +92 -64
- data/test.parquet +3 -0
- data/train.parquet +3 -0
- data/validation.parquet +3 -0
README.md
CHANGED
@@ -7,7 +7,7 @@ language_creators:
 language:
 - en
 license:
--
+- mit
 multilinguality:
 - monolingual
 size_categories:
@@ -54,13 +54,16 @@ dataset_info:
   - name: test
     num_bytes: 927513
     num_examples: 4000
-  download_size:
+  download_size: 3428352
   dataset_size: 7758943
 ---

 # Dataset Card for HoVer

+> **Note**: This is a scriptless, Parquet-based version of the HoVer dataset for seamless integration with HuggingFace `datasets` library. No `trust_remote_code` required!
+
 ## Table of Contents
+- [Quick Start](#quick-start)
 - [Dataset Description](#dataset-description)
 - [Dataset Summary](#dataset-summary)
 - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
@@ -73,119 +76,144 @@
 - [Curation Rationale](#curation-rationale)
 - [Source Data](#source-data)
 - [Annotations](#annotations)
-- [Personal and Sensitive Information](#personal-and-sensitive-information)
-- [Considerations for Using the Data](#considerations-for-using-the-data)
-- [Social Impact of Dataset](#social-impact-of-dataset)
-- [Discussion of Biases](#discussion-of-biases)
-- [Other Known Limitations](#other-known-limitations)
 - [Additional Information](#additional-information)
-- [Dataset Curators](#dataset-curators)
 - [Licensing Information](#licensing-information)
 - [Citation Information](#citation-information)
 - [Contributions](#contributions)

+## Quick Start
+
+```python
+from datasets import load_dataset
+
+# Load the dataset (no trust_remote_code needed!)
+dataset = load_dataset("hover-nlp/hover")
+
+# Access splits
+train = dataset["train"]
+validation = dataset["validation"]
+test = dataset["test"]
+
+# Example usage
+print(train[0])
+# {
+# 'id': 0,
+# 'uid': '330ca632-e83f-4011-b11b-0d0158145036',
+# 'claim': 'Skagen Painter Peder Severin Krøyer favored naturalism...',
+# 'supporting_facts': [{'key': 'Kristian Zahrtmann', 'value': 0}, ...],
+# 'label': 1, # 0: NOT_SUPPORTED, 1: SUPPORTED
+# 'num_hops': 3,
+# 'hpqa_id': '5ab7a86d5542995dae37e986'
+# }
+```
+
 ## Dataset Description

 - **Homepage:** https://hover-nlp.github.io/
 - **Repository:** https://github.com/hover-nlp/hover
 - **Paper:** https://arxiv.org/abs/2011.03088
 - **Leaderboard:** https://hover-nlp.github.io/
-- **Point of Contact:** [More Information Needed]

 ### Dataset Summary

-
+HoVer (HOP VERification) is an open-domain, many-hop fact extraction and claim verification dataset built upon the Wikipedia corpus. The dataset contains claims that require reasoning over multiple documents (multi-hop) to verify whether they are supported or not supported by evidence.
+
+The original 2-hop claims are adapted from question-answer pairs from HotpotQA. It was collected by a team of NLP researchers at UNC Chapel Hill and Verisk Analytics.
+
+This version provides the dataset in Parquet format for efficient loading and compatibility with modern data processing pipelines, eliminating the need for custom loading scripts.

 ### Supported Tasks and Leaderboards

-
+- **Fact Verification**: Determine whether a claim is SUPPORTED or NOT_SUPPORTED based on evidence from Wikipedia articles
+- **Multi-hop Reasoning**: Claims require reasoning across multiple documents (indicated by `num_hops` field)
+- **Evidence Retrieval**: Identify relevant supporting facts from source documents
+
+The official leaderboard is available at https://hover-nlp.github.io/

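For the verification task above, a quick first check is how the labels and hop counts are distributed in the labelled splits. A minimal sketch, assuming the `hover-nlp/hover` repo id used in the Quick Start and the `label`/`num_hops` fields documented under Data Fields below:

```python
from collections import Counter

from datasets import load_dataset

# Repo id taken from the Quick Start above; adjust if the Parquet version lives elsewhere.
dataset = load_dataset("hover-nlp/hover")

# Label balance of the labelled splits (test labels are withheld and stored as -1).
for split in ("train", "validation"):
    print(split, Counter(dataset[split]["label"]))

# Hop counts as a rough difficulty indicator.
print("num_hops:", Counter(dataset["train"]["num_hops"]))
```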
 ### Languages

-
+English (en)

 ## Dataset Structure

 ### Data Instances

-A sample training set
-
-```
-{
+A sample training set example:
+
+```json
+{
+  "id": 14856,
+  "uid": "a0cf45ea-b5cd-4c4e-9ffa-73b39ebd78ce",
+  "claim": "The park at which Tivolis Koncertsal is located opened on 15 August 1843.",
+  "supporting_facts": [
+    {"key": "Tivolis Koncertsal", "value": 0},
+    {"key": "Tivoli Gardens", "value": 1}
+  ],
+  "label": 1,
+  "num_hops": 2,
+  "hpqa_id": "5abca1a55542993a06baf937"
+}
 ```

-
-
+**Note**: In the test set, only `id`, `uid`, and `claim` fields contain meaningful data. The `label` is set to `-1`, `num_hops` to `-1`, `hpqa_id` to `"None"`, and `supporting_facts` is an empty list, as these are withheld for evaluation purposes.

 ### Data Fields

-
+- **id** (`int32`): Sequential identifier for the example within its split
+- **uid** (`string`): Unique identifier (UUID) for the claim
+- **claim** (`string`): The claim statement to be verified
+- **supporting_facts** (`list`): List of evidence facts, where each fact contains:
+  - **key** (`string`): Title of the Wikipedia article
+  - **value** (`int32`): Sentence index within that article
+- **label** (`ClassLabel`): Verification label with values:
+  - `0`: NOT_SUPPORTED - The claim is not supported by the evidence
+  - `1`: SUPPORTED - The claim is supported by the evidence
+  - `-1`: Unknown (used in test set)
+- **num_hops** (`int32`): Number of reasoning hops required (typically 2-4 for this dataset)
+- **hpqa_id** (`string`): Original HotpotQA question ID from which the claim was derived

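The field list above maps directly onto the `datasets` API. A minimal sketch, again assuming the `hover-nlp/hover` load from the Quick Start and that `label` is exposed as the ClassLabel described above:

```python
from datasets import load_dataset

dataset = load_dataset("hover-nlp/hover")
train = dataset["train"]

example = train[0]

# ClassLabel lets us turn 0/1 back into the NOT_SUPPORTED/SUPPORTED names;
# the -1 placeholder in the test split has no name, so only decode labelled splits.
label_name = train.features["label"].int2str(example["label"])

print(example["claim"])
print("label:", label_name, "| hops:", example["num_hops"])

# Wikipedia article titles holding the evidence sentences for this claim.
print("evidence pages:", [fact["key"] for fact in example["supporting_facts"]])
```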
 ### Data Splits

-
+| Split | Examples |
+|-------|----------|
+| Train | 18,171 |
+| Validation | 4,000 |
+| Test | 4,000 |
+| **Total** | **26,171** |
+
+The splits maintain the original distribution from the HoVer dataset.

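The table can be verified against the loaded `DatasetDict`; a short sketch under the same loading assumption as above:

```python
from datasets import load_dataset

dataset = load_dataset("hover-nlp/hover")

# Expected per the table: train 18,171 / validation 4,000 / test 4,000.
for split, ds in dataset.items():
    print(f"{split}: {ds.num_rows} examples")
print("total:", sum(ds.num_rows for ds in dataset.values()))
```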
 ## Dataset Creation

 ### Curation Rationale

-
+HoVer was created to address the challenge of multi-hop fact verification, where claims require reasoning across multiple documents. The dataset was built to push the boundaries of claim verification systems beyond single-document fact-checking.

 ### Source Data

-
-
-#### Initial Data Collection and Normalization
-
-[More Information Needed]
-
-#### Who are the source language producers?
-
-[More Information Needed]
+The dataset is built upon Wikipedia as the knowledge source. Claims are adapted from HotpotQA question-answer pairs and modified to create verification statements that require multi-hop reasoning.

 ### Annotations

-
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
+The dataset was annotated by expert annotators who identified supporting facts across multiple Wikipedia articles and determined whether claims were supported or not supported by the evidence.

 ## Additional Information

-### Dataset Curators
-
-[More Information Needed]
-
 ### Licensing Information

-
+This dataset is licensed under the MIT License.

 ### Citation Information

-
+```bibtex
+@inproceedings{jiang2020hover,
+  title={{HoVer}: A Dataset for Many-Hop Fact Extraction And Claim Verification},
+  author={Yichen Jiang and Shikha Bordia and Zheng Zhong and Charles Dognin and Maneesh Singh and Mohit Bansal},
+  booktitle={Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)},
+  year={2020}
+}
+```
+
 ### Contributions

-Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding
+Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding the original dataset and [@vincentkoc](https://github.com/vincentkoc) for creating this Parquet version.
data/test.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96aba0e2f8f9a3deedb0357128a9ddacd7f655caf3f1b3238f2880c2fc0dfbac
+size 539974
data/train.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db36f7537f8b7c9d3c1a4e0bc145940b08ecc22dddf65aec25c40c88113fb13c
+size 2191600
data/validation.parquet
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f8ea4b11e7f489329ed29a412d8948ec8099c1e20ba5085ea9b1e8a703e7d98
+size 668296
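The three files added above are ordinary Parquet files behind Git LFS pointers, so the splits can also be read without the `datasets` library. A minimal sketch, assuming the repository has been cloned and `git lfs pull` has materialised `data/train.parquet`, `data/validation.parquet`, and `data/test.parquet` locally:

```python
import pandas as pd  # reads Parquet via pyarrow (or fastparquet)

paths = {
    "train": "data/train.parquet",
    "validation": "data/validation.parquet",
    "test": "data/test.parquet",
}

frames = {name: pd.read_parquet(path) for name, path in paths.items()}
for name, df in frames.items():
    print(name, df.shape, list(df.columns))

# Test rows keep label == -1 and an empty supporting_facts list as placeholders.
print(frames["train"][["claim", "label", "num_hops"]].head())
```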