Update README.md

README.md (CHANGED)
---
license: openrail
tags:
- robotics
- trajectory-prediction
- manipulation
- computer-vision
- time-series
pretty_name: Codatta Robotic Manipulation Trajectory
configs:
- config_name: default
  data_files:
  # ... (unchanged lines collapsed in the diff)
    num_examples: 50
  download_size: 38738419
  dataset_size: 39054025
language:
- en
size_categories:
- n<1K
---

# Codatta Robotic Manipulation Trajectory (Sample)

## Dataset Summary

This dataset contains high-quality annotated trajectories of robotic gripper manipulations. It is designed to train models for fine-grained control, trajectory prediction, and object interaction tasks.

Produced by **Codatta**, this dataset focuses on third-person views of robotic arms performing pick-and-place and other manipulation tasks. Each sample includes the raw video, a visualization of the trajectory, and a rigorous JSON annotation of keyframes and coordinate points.

**Note:** This is a sample dataset containing **50 annotated examples**.

## Supported Tasks

* **Trajectory Prediction:** Predicting the path of a gripper based on visual context.
* **Keyframe Extraction:** Identifying critical moments in a manipulation task (e.g., contact, velocity change).
* **Robotic Control:** Imitation learning from human-demonstrated or teleoperated data.

## Dataset Structure

### Data Fields

* **`id`** (string): Unique identifier for the trajectory sequence.
* **`total_frames`** (int32): Total number of frames in the video sequence.
* **`video_path`** (string): Path to the source MP4 video file recording the manipulation action.
* **`trajectory_image`** (image): A JPEG preview showing the overlaid trajectory path or keyframe visualization.
* **`annotations`** (string): A JSON-formatted string containing the detailed coordinate data.
  * *Structure:* A list of keyframes, each with a timestamp and the 5-point gripper coordinates for that frame (see the illustrative sketch below).
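The card does not spell out the exact JSON schema, so the snippet below is only a sketch of how one keyframe entry *might* look and how it could be parsed after `json.loads`; the field names `frame_index`, `timestamp`, and `points` are assumptions, not the published format.

```python
import json

# Hypothetical shape of the `annotations` string for one trajectory.
# Field names are illustrative assumptions; inspect a real sample for the actual schema.
raw_annotations = json.dumps([
    {
        "frame_index": 0,        # assumed: index of the keyframe within the video
        "timestamp": 0.0,        # assumed: time of the keyframe in seconds
        "points": [              # assumed: the 5 gripper points as [x, y] pixel coordinates
            [412, 305],          # Point 1: fingertip
            [448, 308],          # Point 2: fingertip
            [405, 260],          # Point 3: gripper end
            [455, 262],          # Point 4: gripper end
            [430, 240],          # Point 5: "tiger's mouth" (gripper base)
        ],
    },
])

keyframes = json.loads(raw_annotations)
for kf in keyframes:
    xs = [x for x, _ in kf["points"]]
    ys = [y for _, y in kf["points"]]
    print(f"frame {kf['frame_index']}: gripper bounding box "
          f"({min(xs)}, {min(ys)}) to ({max(xs)}, {max(ys)})")
```

On real samples you would replace `raw_annotations` with `sample['annotations']` obtained from the loader shown in the Usage Example below.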

### Data Preview

*(The Hugging Face dataset viewer automatically renders the `trajectory_image` field here.)*

## Annotation Standards

The data was annotated following a strict protocol to ensure precision and consistency.

### 1. Viewpoint Scope

* **Included:** Third-person views (a fixed camera recording the robot).
* **Excluded:** First-person (eye-in-hand) views are explicitly excluded to ensure consistent coordinate mapping.

### 2. Keyframe Selection

Annotations are sparse rather than dense: not every frame is labeled. Instead, annotation focuses on **keyframes** that define the motion logic. A keyframe is defined by any of the following events:

1. **Start Frame:** The gripper first appears on screen.
2. **End Frame:** The gripper leaves the screen.
3. **Velocity Change:** Frames where the direction of motion changes abruptly, marking the minimum-speed point (see the sketch after this list).
4. **State Change:** Frames where the gripper opens or closes.
5. **Contact:** The precise moment the gripper touches the object.
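As a rough illustration of the velocity-change rule only (this is not the annotators' tooling, and the `track` format and `min_drop` threshold are assumptions), the sketch below flags local speed minima in a 2D point track:

```python
import numpy as np

def velocity_change_keyframes(track, min_drop=0.5):
    """Flag frames whose speed is a local minimum after a clear slowdown.

    track: (T, 2) sequence of per-frame [x, y] pixel positions of one gripper point.
    Returns a list of frame indices that look like velocity-change keyframes.
    Illustrative heuristic only; not the dataset's annotation procedure.
    """
    track = np.asarray(track, dtype=float)
    speed = np.linalg.norm(np.diff(track, axis=0), axis=1)  # displacement per frame step
    keyframes = []
    for t in range(1, len(speed) - 1):
        is_local_min = speed[t] <= speed[t - 1] and speed[t] <= speed[t + 1]
        slowed_down = speed[t] < min_drop * max(speed[t - 1], 1e-6)
        if is_local_min and slowed_down:
            keyframes.append(t + 1)  # speed[t] covers the step from frame t to t + 1
    return keyframes

# Synthetic track that decelerates, pauses, then accelerates again.
track = [[0, 0], [10, 0], [18, 0], [22, 0], [23, 0], [23, 0], [30, 0], [40, 0]]
print(velocity_change_keyframes(track))  # -> [5]
```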

### 3. The 5-Point Annotation Method

For every annotated keyframe, the gripper is labeled with **5 specific coordinate points** to capture its pose and state accurately (an illustrative sketch of quantities derivable from these points follows the table):

| Point ID | Description | Location Detail |
| :--- | :--- | :--- |
| **Points 1 & 2** | **Fingertips** | Center of the bottom edge of each gripper tip. |
| **Points 3 & 4** | **Gripper Ends** | The rearmost points of the closing area (indicating the finger direction). |
| **Point 5** | **Tiger's Mouth** | The center of the crossbeam (the base of the gripper). |
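Purely as an illustration of what these 5 points make computable (not an official utility of this dataset), the sketch below derives the gripper opening width and a rough approach direction; the point ordering follows the table above, and the pixel values are made up:

```python
import math

def gripper_features(points):
    """points: five (x, y) tuples in the order Point 1 through Point 5 from the table above."""
    tip_a, tip_b, end_a, end_b, mouth = points
    # Opening width: distance between the two fingertip points (Points 1 and 2).
    opening = math.dist(tip_a, tip_b)
    # Approach direction: from the gripper base (Point 5) toward the fingertip midpoint.
    mid_tip = ((tip_a[0] + tip_b[0]) / 2, (tip_a[1] + tip_b[1]) / 2)
    approach = math.atan2(mid_tip[1] - mouth[1], mid_tip[0] - mouth[0])
    return opening, approach

opening_px, approach_rad = gripper_features(
    [(412, 305), (448, 308), (405, 260), (455, 262), (430, 240)]  # made-up pixel coordinates
)
print(f"opening: {opening_px:.1f} px, approach angle: {math.degrees(approach_rad):.1f} deg")
```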

### 4. Quality Control

* **Accuracy:** All data passed a rigorous quality-assurance process with a minimum **95% accuracy rate**.
* **Occlusion Handling:** If the gripper is partially occluded, its points are estimated from the object geometry. Sequences where the gripper is fully occluded, or shows only a side profile without clear features, are discarded.

## Usage Example

```python
from datasets import load_dataset
import json

# Load the dataset
ds = load_dataset("Codatta/robotic-manipulation-trajectory", split="train")

# Access a sample
sample = ds[0]

# View the trajectory preview image
print(f"Trajectory ID: {sample['id']}")
sample['trajectory_image'].show()

# Parse the JSON-encoded annotations
annotations = json.loads(sample['annotations'])
print(f"Keyframe count: {len(annotations)}")
```
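To go beyond the preview image and read the underlying frames, something along the following lines may work; it continues from the `sample` variable above, assumes `opencv-python` is installed, and assumes `video_path` resolves to a locally available MP4 file (neither is guaranteed by this card):

```python
import cv2  # assumption: opencv-python is installed separately

# Assumption: the MP4 referenced by `video_path` has been downloaded alongside the dataset
# and the path resolves locally; adjust the prefix for your environment if it does not.
cap = cv2.VideoCapture(sample["video_path"])
cap.set(cv2.CAP_PROP_POS_FRAMES, 0)       # seek to a frame index, e.g. a keyframe's index
ok, frame = cap.read()
if ok:
    cv2.imwrite("keyframe_0.jpg", frame)  # dump the frame for visual inspection
cap.release()
```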