# RPX: Robot Perception X
A real-world RGB-D benchmark for evaluating robot perception under embodied deployment conditions.
## Dataset at a glance

| | |
|---|---|
| Multi-object scenes (MOS) | 12 (3 phases each: clutter / interaction / clean) |
| Single-object scenes (SOS) | 0 (one 360° collection per object) |
| Total files | 133,506 |
| Total bytes | 23.5 GB |
## Modality inventory

| modality | files | bytes |
|---|---|---|
| cam_pose | 9,006 | 5.0 MB |
| depth | 9,006 | 1.5 GB |
| fisheye | 18,012 | 5.6 GB |
| jpg | 250 | 10.7 MB |
| rgb | 9,006 | 4.0 GB |
| sam2/_meta | 127 | 64.0 KB |
| sam2/bbox_overlay | 72 | 23.8 MB |
| sam2/contour_gt_masks | 8,960 | 3.1 GB |
| sam2/dino_output | 43,227 | 3.5 GB |
| sam2/masks | 8,960 | 26.8 MB |
| sam2/masks_contour_with_hidden | 8,960 | 3.1 GB |
| sam2/palette | 8,960 | 28.9 MB |
| sam2/rgb_and_mask | 8,960 | 2.7 GB |
## Quick start

```bash
pip install "rpx-benchmark[hub]"
hf auth login
```

```python
from rpx_benchmark.dataset_hub import download_for_task

# Pull just RGB + masks for the Easy difficulty tier — never the whole repo.
res = download_for_task(task="segmentation", split="easy",
                        repo_id="itaykadosh/rpx-test")
print(res.local_dir, res.matched_scenes)
```

Or from the CLI:

```bash
python -m rpx_benchmark.dataset_hub.cli download \
    --task segmentation --split easy \
    --repo-id itaykadosh/rpx-test
```
A subsequent call for a different task on the same split (e.g.
relative_pose) reuses the cached RGB tars and only fetches the new
modality (cam_pose) as the delta.
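The delta-fetch behavior described above can be sketched as a set difference over required modalities. This is a hypothetical illustration: the task-to-modality mapping and the helper name are assumptions, not the `rpx_benchmark` API.

```python
# Illustrative sketch of delta fetching; names here are assumptions,
# not the rpx_benchmark API.
TASK_MODALITIES = {
    "segmentation": {"rgb", "masks"},
    "relative_pose": {"rgb", "cam_pose"},
}

def modalities_to_fetch(task: str, cached: set) -> set:
    """Return only the modalities a task needs that are not cached yet."""
    return TASK_MODALITIES[task] - cached

cached = set()
cached |= modalities_to_fetch("segmentation", cached)  # first call pulls rgb + masks
delta = modalities_to_fetch("relative_pose", cached)   # second call fetches only cam_pose
```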
## Repo layout

```
itaykadosh/rpx-test/
├── manifest/
│   ├── frames_v1.parquet        # per-frame metadata (always pulled, ~30 MB)
│   └── current.json             # default version per label modality
├── splits/
│   ├── scene_splits.json
│   └── easy.txt medium.txt hard.txt
├── scenes/<scene_id>/<phase>/   # MOS
│   ├── rgb.tar depth.tar fisheye.tar
│   └── labels/{cam_pose,masks,masks_aux,sam2_meta,vqa}/v1.tar
├── objects/<object_id>/0/       # SOS
│   └── (same modality structure)
├── objects_meta/                # questionnaire dedup
│   ├── _index.json
│   └── <object_id>/questionnaire.json
└── README.md                    # ← this file
```
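The layout above maps directly to tar paths. A minimal sketch, assuming the path shapes shown in the tree; the helper names and the example scene_id/phase values are hypothetical:

```python
from pathlib import PurePosixPath

# Hypothetical helpers mirroring the repo layout; names and example
# arguments are illustrative assumptions.
def modality_tar(scene_id: str, phase: str, modality: str) -> str:
    """Raw-modality tar under scenes/<scene_id>/<phase>/."""
    return str(PurePosixPath("scenes", scene_id, phase, f"{modality}.tar"))

def label_tar(scene_id: str, phase: str, name: str, version: str = "v1") -> str:
    """Versioned label tar under scenes/<scene_id>/<phase>/labels/<name>/."""
    return str(PurePosixPath("scenes", scene_id, phase, "labels", name,
                             f"{version}.tar"))

print(modality_tar("scene57.jsom.atrium", "phase1", "rgb"))
# scenes/scene57.jsom.atrium/phase1/rgb.tar
```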
## Tasks

### Multi-object (use a difficulty split)

| recipe | inputs → labels |
|---|---|
| monocular_depth | ['rgb'] → ['depth'] |
| rgbd_segmentation | ['depth', 'rgb'] → ['masks'] |
| segmentation | ['rgb'] → ['masks'] |
| relative_pose | ['rgb'] → ['cam_pose'] |
| rgbd_relative_pose | ['depth', 'rgb'] → ['cam_pose'] |
| stereo_depth | ['fisheye'] → ['depth'] |
| object_tracking | ['rgb'] → ['masks'] |
| vqa | ['rgb'] → ['questionnaire', 'vqa'] |
### Single-object (no split — these are object templates)

| recipe | inputs → labels |
|---|---|
| object_templates | ['rgb'] → ['masks'] |
| object_templates_rgbd | ['depth', 'rgb'] → ['masks'] |
| object_pose_library | ['depth', 'rgb'] → ['cam_pose', 'masks'] |
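The two recipe tables can be held as a small registry; this dict layout and helper are illustrative assumptions, not the package's API (recipe names and their inputs/labels come from the tables above):

```python
# Hypothetical recipe registry; structure is an assumption, contents
# mirror the MOS/SOS tables in this README.
MOS_RECIPES = {
    "monocular_depth":    (["rgb"], ["depth"]),
    "rgbd_segmentation":  (["depth", "rgb"], ["masks"]),
    "segmentation":       (["rgb"], ["masks"]),
    "relative_pose":      (["rgb"], ["cam_pose"]),
    "rgbd_relative_pose": (["depth", "rgb"], ["cam_pose"]),
    "stereo_depth":       (["fisheye"], ["depth"]),
    "object_tracking":    (["rgb"], ["masks"]),
    "vqa":                (["rgb"], ["questionnaire", "vqa"]),
}
SOS_RECIPES = {
    "object_templates":      (["rgb"], ["masks"]),
    "object_templates_rgbd": (["depth", "rgb"], ["masks"]),
    "object_pose_library":   (["depth", "rgb"], ["cam_pose", "masks"]),
}

def recipe_io(task: str):
    """Return (inputs, labels, needs_split) for a recipe name."""
    if task in MOS_RECIPES:
        inputs, labels = MOS_RECIPES[task]
        return inputs, labels, True   # MOS recipes take a difficulty split
    inputs, labels = SOS_RECIPES[task]
    return inputs, labels, False      # SOS recipes are object templates
```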
## Label versioning

Labels live at `labels/<name>/v<N>.tar`. Newer versions land at new paths; old versions stay reachable for reproducibility.

| modality | current version |
|---|---|
| masks | v1 |
| masks_aux | v1 |
| sam2_meta | v1 |
| cam_pose | v1 |
To pin to a specific version:

```python
download_for_task(
    task="relative_pose", split="easy", repo_id="itaykadosh/rpx-test",
    label_versions={"cam_pose": "v1"},  # don't auto-upgrade to v2
)
```
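Version resolution can be sketched as "explicit pin wins, otherwise fall back to the repo default". The `current.json` contents shown here are assumptions; only the filename comes from the repo layout above.

```python
import json

# Illustrative current.json contents (format is an assumption; the table
# of current versions in this README supplies the values).
current_defaults = json.loads(
    '{"masks": "v1", "masks_aux": "v1", "sam2_meta": "v1", "cam_pose": "v1"}'
)

def resolve_version(modality: str, pins=None) -> str:
    """An explicit pin wins; otherwise fall back to the repo default."""
    return (pins or {}).get(modality, current_defaults[modality])
```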
## Citation

```bibtex
@misc{rpx2026,
  title  = {RPX: Robot Perception X — A real-world RGB-D benchmark for
            embodied perception},
  author = {IRVL UT Dallas},
  year   = {2026},
  url    = {https://huggingface.co/datasets/itaykadosh/rpx-test},
}
```
## License

Released under the cc-by-4.0 license.