---
license: cc-by-4.0
task_categories:
- image-segmentation
- keypoint-detection
- object-detection
language:
- en
tags:
- document-detection
- corner-detection
- document-scanner
- quadrilateral-detection
- perspective-correction
- computer-vision
size_categories:
- 10K<n<100K
---

# DocCornerDataset

A comprehensive dataset for **document corner detection** and **quadrilateral localization**. This dataset is designed for training models that detect the four corners of documents in natural images, enabling applications like document scanning, perspective correction, and automatic document cropping.

## Dataset Description

DocCornerDataset contains **27,860 images** with precise corner annotations:

- **23,496 training samples**
- **4,364 validation samples**
- Includes both positive samples (with documents) and negative samples (without documents)

### Key Features

- **High-quality annotations**: 4-corner coordinates (TL, TR, BR, BL) in normalized [0-1] format
- **Diverse sources**: Aggregated from multiple public datasets covering various document types
- **Negative samples**: Non-document images to reduce false positives
- **Pre-split data**: Ready-to-use train/validation splits
- **Parquet format**: Efficient storage with embedded images

## Dataset Structure

The dataset is stored in Parquet format with the following columns:

| Column | Type | Description |
|--------|------|-------------|
| `image_bytes` | bytes | Raw JPEG image data |
| `filename` | string | Original filename |
| `has_document` | bool | `True` if the image contains a document |
| `x0`, `y0` | float32 | Top-left corner (normalized 0-1) |
| `x1`, `y1` | float32 | Top-right corner (normalized 0-1) |
| `x2`, `y2` | float32 | Bottom-right corner (normalized 0-1) |
| `x3`, `y3` | float32 | Bottom-left corner (normalized 0-1) |
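
Since the corner coordinates are normalized by image width and height, they must be rescaled before drawing or cropping. Below is a minimal sketch of the conversion; the `corners_to_pixels` helper is illustrative rather than part of any dataset tooling, `row` is assumed to be a dict-like record with the columns above, and rows with `has_document == False` contain no document, so their corner columns should be ignored:

```python
import io

from PIL import Image

def corners_to_pixels(row):
    """Convert one row's normalized corners to pixel coordinates (TL, TR, BR, BL)."""
    image = Image.open(io.BytesIO(row["image_bytes"]))
    w, h = image.size
    # x values scale with image width, y values with image height
    return [(row[f"x{i}"] * w, row[f"y{i}"] * h) for i in range(4)]
```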

## Source Datasets

This dataset aggregates and re-annotates images from multiple public sources:

| Source Dataset | Samples | Description |
|----------------|---------|-------------|
| **MIDV-500** | ~9,500 | Mobile Identity Document Video dataset |
| **AutoCapture** | ~8,000 | Auto-captured document images |
| **MIDV-2019** | ~1,400 | Extended mobile ID document dataset |
| **SmartDoc-QA** | ~1,400 | Document images for QA tasks |
| **Sample Dataset** | ~1,000 | Mixed document samples |
| **Four Corners Detection** | ~950 | Corner-detection-focused dataset |
| **Document Segmentation** | ~950 | Curated segmentation samples |
| **ReceiptExtractor** | ~620 | Receipt and ticket images |
| **Receipt Instance Segmentation** | ~200 | Receipt instance annotations |
| **CORD v2** | ~80 | Consolidated Receipt Dataset |
| **Negative Samples** | ~4,300 | Non-document background images |

## Loading the Dataset

### Using PyArrow/Pandas

```python
import io

import pandas as pd
from PIL import Image

# Load train data (the hf:// protocol requires huggingface_hub and fsspec)
train_df = pd.read_parquet("hf://datasets/mapo80/DocCornerDataset/data/train_chunk000.parquet")

# View a sample
sample = train_df.iloc[0]
image = Image.open(io.BytesIO(sample['image_bytes']))
corners = [sample['x0'], sample['y0'], sample['x1'], sample['y1'],
           sample['x2'], sample['y2'], sample['x3'], sample['y3']]
print(f"Filename: {sample['filename']}")
print(f"Has document: {sample['has_document']}")
print(f"Corners: {corners}")
image.show()
```
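
To visually verify an annotation, the quadrilateral can be drawn on top of the image. A small sketch using PIL's `ImageDraw`, reusing `sample` and `image` from the snippet above:

```python
from PIL import ImageDraw

w, h = image.size
# Scale normalized corners to pixels; order is TL, TR, BR, BL
pts = [(sample[f'x{i}'] * w, sample[f'y{i}'] * h) for i in range(4)]
draw = ImageDraw.Draw(image)
draw.line(pts + [pts[0]], fill='red', width=3)  # close the polygon
image.show()
```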

### Using HuggingFace Datasets

```python
from datasets import load_dataset
from PIL import Image
import io

# Load the dataset
dataset = load_dataset("mapo80/DocCornerDataset", data_files={
    "train": "data/train_chunk*.parquet",
    "validation": "data/val_chunk*.parquet"
})

# View a sample
sample = dataset["train"][0]
image = Image.open(io.BytesIO(sample['image_bytes']))
print(f"Filename: {sample['filename']}")
print(f"Corners: x0={sample['x0']:.3f}, y0={sample['y0']:.3f}, ...")
```

### Using PyTorch DataLoader

```python
import glob
import io

import pyarrow.parquet as pq
import torch
import torchvision.transforms as T
from PIL import Image
from torch.utils.data import Dataset, DataLoader

class DocCornerDataset(Dataset):
    def __init__(self, parquet_files, transform=None):
        self.data = pq.ParquetDataset(parquet_files).read().to_pandas()
        self.transform = transform or T.Compose([
            T.Resize((224, 224)),
            T.ToTensor(),
            T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        ])

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        row = self.data.iloc[idx]
        image = Image.open(io.BytesIO(row['image_bytes'])).convert('RGB')
        image = self.transform(image)

        # Corners are normalized, so they remain valid after the Resize above
        corners = torch.tensor([
            row['x0'], row['y0'], row['x1'], row['y1'],
            row['x2'], row['y2'], row['x3'], row['y3']
        ], dtype=torch.float32)

        has_doc = torch.tensor(row['has_document'], dtype=torch.float32)

        return image, corners, has_doc

# Usage: glob all locally downloaded train chunks instead of listing them by hand
train_files = sorted(glob.glob("data/train_chunk*.parquet"))
dataset = DocCornerDataset(train_files)
loader = DataLoader(dataset, batch_size=32, shuffle=True)
```
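
As a quick sanity check, one batch can be pulled from the loader and its shapes inspected (batch size 32 as configured above):

```python
images, corners, has_doc = next(iter(loader))
print(images.shape)   # torch.Size([32, 3, 224, 224])
print(corners.shape)  # torch.Size([32, 8])
print(has_doc.shape)  # torch.Size([32])
```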

## Use Cases

- **Document Corner Detection**: Train models to localize document corners
- **Document Scanning Apps**: Build automatic document capture features
- **Perspective Correction**: Detect quadrilaterals for perspective transformation (see the sketch after this list)
- **Document Segmentation**: Segment documents from background
- **OCR Preprocessing**: Improve OCR accuracy with proper document alignment
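
For the perspective-correction use case mentioned above, the four annotated corners map directly onto `cv2.getPerspectiveTransform`. A minimal sketch, assuming OpenCV is available and estimating the output size from the quadrilateral's edge lengths:

```python
import cv2
import numpy as np

def warp_document(image_bgr, corners_norm):
    """Warp the annotated quadrilateral to a fronto-parallel view.

    corners_norm: [x0, y0, ..., x3, y3] in TL, TR, BR, BL order,
    normalized to [0, 1] as stored in this dataset.
    """
    h, w = image_bgr.shape[:2]
    pts = np.array(corners_norm, dtype=np.float32).reshape(4, 2)
    pts *= np.array([w, h], dtype=np.float32)  # to pixel coordinates
    # Estimate output size from opposite edge lengths
    out_w = int(max(np.linalg.norm(pts[1] - pts[0]), np.linalg.norm(pts[2] - pts[3])))
    out_h = int(max(np.linalg.norm(pts[3] - pts[0]), np.linalg.norm(pts[2] - pts[1])))
    dst = np.array([[0, 0], [out_w - 1, 0], [out_w - 1, out_h - 1], [0, out_h - 1]],
                   dtype=np.float32)
    M = cv2.getPerspectiveTransform(pts, dst)
    return cv2.warpPerspective(image_bgr, M, (out_w, out_h))
```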

## Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{doccornerdataset2024,
  title={DocCornerDataset: A Comprehensive Dataset for Document Corner Detection},
  author={mapo80},
  year={2024},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/mapo80/DocCornerDataset}
}
```

### Source Dataset Citations

Please also consider citing the original source datasets:

- **MIDV-500/2019**: Bulatov et al., "MIDV-500: A Dataset for Identity Documents Analysis and Recognition on Mobile Devices in Video Stream"
- **SmartDoc**: Burie et al., "ICDAR 2015 Competition on Smartphone Document Capture and OCR"
- **CORD**: Park et al., "CORD: A Consolidated Receipt Dataset for Post-OCR Parsing"

## License

This dataset is released under the **CC-BY-4.0** license. Please respect the licenses of the original source datasets when using this data.

## Acknowledgments

This dataset was created by aggregating and re-annotating images from multiple public document datasets. We thank the creators of the original datasets for making their data publicly available.