Ego-Exo Manufacturing Dataset

Ego-Exo Manufacturing is a video dataset of real shoe manufacturing operations with two configs.

  • synced_pairs — temporally synchronized ego+exo pairs with expert annotations. Videos are hosted in a separate gated repo: skill-ai/ego-exo-manufacturing-synced.
  • standalone — individual, unmatched worker sessions with per-session action labels.

Dataset Statistics

Attribute                 Value
Total video content       ~275 hours
Synced ego-exo pairs      40 groups
Synced ego duration       ~36 hours
Synced exo duration       ~36 hours
Standalone ego sessions   80 (~101 hours)
Standalone exo sessions   137 (~102 hours)
Storage size              ~1.1 TiB
Ego frame rate            30 fps
Exo frame rate            ~25–30 fps (varies per group)
Resolution                1080p
Format                    H.264 / MP4
Audio                     No
Face privacy              Exo faces Gaussian-blurred
Annotations               Atomic action descriptions, expert commentary

Dataset Structure

The dataset has two configs: synced_pairs (temporally aligned ego+exo pairs with full annotations) and standalone (individual unmatched sessions with action labels).

ego-exo-manufacturing/
├── synced_pairs/
│   ├── takes.json                          # Root metadata — one entry per group
│   ├── takes/
│   │   └── groupXX/
│   │       ├── ego01.mp4                   # Trimmed first-person video
│   │       └── exo01_blurred.mp4           # Trimmed overhead video (faces blurred)
│   └── annotations/
│       ├── atomic_descriptions/groupXX.json
│       └── expert_commentary/groupXX.json
└── standalone/
    ├── ego/NNN/*.mp4                       # NNN = 001–080
    ├── exo/NNN/*.mp4                       # NNN = 001–137
    └── labels/
        ├── ego/NNN.json
        └── exo/NNN.json

takes.json

Root-level index for synced_pairs. Both videos in each group are trimmed so that frame 0 of ego01.mp4 and frame 0 of exo01_blurred.mp4 correspond to the same real-world moment.

[
  {
    "group_id": "group34",
    "ego_video": "takes/group34/ego01.mp4",
    "exo_video": "takes/group34/exo01_blurred.mp4",
    "exo_blurred": true,
    "ego_duration_s": 3670.018,
    "ego_fps": 30.0,
    "exo_fps": 26.336
  }
]
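
Because the exo frame rate varies per group (here 26.336 fps vs. the ego stream's 30 fps), frame indices only coincide at frame 0. A small helper can map an ego frame index to the temporally closest exo frame index; this is a sketch that assumes constant frame rates and no dropped frames, and the function name is ours, not part of the dataset:

```python
def ego_frame_to_exo_frame(ego_idx: int, ego_fps: float, exo_fps: float) -> int:
    """Map an ego frame index to the temporally closest exo frame index.

    Assumes both streams start at the same real-world moment (frame 0)
    and play back at a constant frame rate with no dropped frames.
    """
    t = ego_idx / ego_fps          # seconds since the shared frame 0
    return round(t * exo_fps)

# With the group34 entry above: ego frame 300 at 30 fps is t = 10 s,
# i.e. exo frame round(10 * 26.336) = 263.
print(ego_frame_to_exo_frame(300, 30.0, 26.336))
```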

annotations/atomic_descriptions/groupXX.json

Timestamped action narrations in Ego-Exo4D atomic description format, transcribed from worker narration audio.

{
  "annotations": {
    "group01_ego_shoe_001": [
      {
        "annotation_uid": "9a9d8b89-1a7e-580a-8434-fc13586af3c0",
        "annotator_id": "worker_annotator_001",
        "rejected": false,
        "reject_reason": null,
        "descriptions": [
          {
            "text": "C folds the foam paper with both hands.",
            "timestamp_sec": 2.0,
            "narration_subject": "C",
            "ego_visible": true,
            "unsure": false,
            "_ext": {
              "hand_used": "both",
              "is_essential": true,
              "notes": "An older worker nearby was teaching, so this took longer."
            }
          }
        ]
      }
    ]
  }
}
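
Working with this nesting (annotation key → entries → descriptions) usually starts by flattening it into one time-ordered list. The helper below is our own sketch, not a dataset utility; it assumes the rejected and timestamp_sec fields shown in the sample above:

```python
def flatten_descriptions(ann: dict) -> list[dict]:
    """Collect all non-rejected timestamped descriptions, sorted by time."""
    out = []
    for entries in ann["annotations"].values():
        for entry in entries:
            if entry.get("rejected"):
                continue  # skip annotations rejected during QA
            out.extend(entry["descriptions"])
    return sorted(out, key=lambda d: d["timestamp_sec"])

# Minimal example mirroring the JSON sample above:
sample = {
    "annotations": {
        "group01_ego_shoe_001": [
            {"rejected": False,
             "descriptions": [{"text": "C folds the foam paper with both hands.",
                               "timestamp_sec": 2.0}]}
        ]
    }
}
for d in flatten_descriptions(sample):
    print(f"t={d['timestamp_sec']:.1f}s  {d['text']}")
```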

annotations/expert_commentary/groupXX.json

Rich structured expert annotation covering task description, procedural keysteps, proficiency assessment, and mistake labels.

{
  "meta": {
    "dataset": "Shoe Manufacturing v1",
    "annotation_method": "worker_narration_transcribed"
  },
  "take_uid": "group01",
  "task_name": "Shoe box packaging paper folding and placement",
  "scenario_name": "shoe_packaging",
  "action_descriptions": {
    "annotator_id": "worker_annotator_001",
    "descriptions": [
      { "text": "The worker folds the packaging paper in half...", "type": "task_overview" },
      { "text": "Align the short sides facing each other, then fold.", "type": "how" },
      { "text": "The purpose of folding is to create a barrier...", "type": "why" }
    ]
  },
  "proficiency_scores": {
    "overall": 4.3,
    "dimensions": {
      "speed":     { "score": 5, "max": 5 },
      "precision": { "score": 4, "max": 5 },
      "fluency":   { "score": 4, "max": 5 }
    }
  },
  "skill_level": {
    "label": "proficient",
    "reasoning": "Worker was being taught at the beginning but quickly became proficient."
  },
  "mistake_labels": {
    "has_mistakes": false,
    "annotations": [],
    "common_failure_modes": [
      { "description": "Not aligning edges before folding.", "severity": "minor" }
    ]
  },
  "procedure": {
    "keysteps": [
      { "step_id": 1, "label": "Align short edges facing each other" },
      { "step_id": 2, "label": "Fold along short edge" }
    ],
    "dependencies": [
      { "from": 1, "to": 2, "type": "required" }
    ],
    "observed_order": [1, 2, 3, 4, 5, 6],
    "deviations_from_standard": []
  },
  "expert_commentary": {
    "commentary_data": [
      { "text": "The most common mistake is not aligning edges before folding.", "type": "tip_for_improvement" },
      { "text": "Overall: the worker is fast and proficient.", "type": "overall_assessment" }
    ]
  }
}
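
The procedure block pairs a dependency graph with the order actually observed, so one natural use is checking whether a take violated any required keystep ordering. A sketch of that check, assuming the dependencies and observed_order fields shown above (the function name is ours):

```python
def order_violations(procedure: dict) -> list:
    """Return (from, to) pairs of required keystep dependencies that the
    observed execution order violates."""
    pos = {step: i for i, step in enumerate(procedure["observed_order"])}
    violations = []
    for dep in procedure.get("dependencies", []):
        if dep.get("type") != "required":
            continue
        a, b = dep["from"], dep["to"]
        # "a" must be observed before "b"; steps missing from
        # observed_order are ignored rather than flagged.
        if a in pos and b in pos and pos[a] > pos[b]:
            violations.append((a, b))
    return violations

# The sample procedure above executes step 1 before step 2, so no violations:
procedure = {
    "dependencies": [{"from": 1, "to": 2, "type": "required"}],
    "observed_order": [1, 2, 3, 4, 5, 6],
}
print(order_violations(procedure))  # []
```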

standalone/labels/ego|exo/NNN.json

Per-session action label.

{
  "stage_name": "cementing sole",
  "shoe_component": "sole",
  "activity": "applying adhesive",
  "tools_or_materials": ["cement brush", "rubber sole", "adhesive"],
  "camera_perspective": "ego",
  "hands_visible": true,
  "anomalies": ["none"],
  "confidence": "high",
  "sop": "Worker applies adhesive evenly to the shoe sole contact surface before pressing."
}
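
Since standalone sessions are addressed by a zero-padded NNN directory, a small path builder avoids formatting mistakes when fetching labels. This helper is our own convenience, assuming the standalone/labels/ego|exo/NNN.json layout shown above; its output can be passed as the filename argument to hf_hub_download(..., repo_type="dataset"):

```python
def standalone_label_path(perspective: str, session: int) -> str:
    """In-repo path for a standalone label file; NNN is zero-padded to 3 digits."""
    if perspective not in ("ego", "exo"):
        raise ValueError("perspective must be 'ego' or 'exo'")
    return f"standalone/labels/{perspective}/{session:03d}.json"

print(standalone_label_path("ego", 5))   # standalone/labels/ego/005.json
```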

Loading the Dataset

Load metadata

import json
from huggingface_hub import hf_hub_download

takes_path = hf_hub_download(
    repo_id="skill-ai/ego-exo-manufacturing",
    filename="synced_pairs/takes.json",
    repo_type="dataset",
)
with open(takes_path) as f:
    takes = json.load(f)

print(f"{len(takes)} groups")
# 40 groups
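
A quick sanity check on the loaded metadata is to total the per-group durations and compare against the ~36 hours in the statistics table. The helper name is ours, and it assumes each entry carries the ego_duration_s field shown in the takes.json sample:

```python
def total_hours(takes: list, key: str = "ego_duration_s") -> float:
    """Sum per-group durations (in seconds) and convert to hours."""
    return sum(t[key] for t in takes) / 3600.0

# Two illustrative entries (real takes.json has 40):
sample_takes = [{"ego_duration_s": 3670.018}, {"ego_duration_s": 3600.0}]
print(f"{total_hours(sample_takes):.2f} hours")
```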

Load a synced video pair

import cv2
from huggingface_hub import hf_hub_download

REPO = "skill-ai/ego-exo-manufacturing"
GROUP = "group34"

ego_path = hf_hub_download(REPO, f"synced_pairs/takes/{GROUP}/ego01.mp4", repo_type="dataset")
exo_path = hf_hub_download(REPO, f"synced_pairs/takes/{GROUP}/exo01_blurred.mp4", repo_type="dataset")

ego_cap = cv2.VideoCapture(ego_path)
exo_cap = cv2.VideoCapture(exo_path)

# Frame 0 of each stream corresponds to the same real-world moment.
# Because the ego and exo frame rates differ, later frame indices drift
# apart, so align subsequent frames by timestamp rather than by index.
ret_e, ego_frame = ego_cap.read()   # first synced ego frame
ret_x, exo_frame = exo_cap.read()   # first synced exo frame

Load annotations

import json
from huggingface_hub import hf_hub_download

REPO = "skill-ai/ego-exo-manufacturing"
GROUP = "group34"

# Atomic descriptions — timestamped narrations
ann_path = hf_hub_download(REPO, f"synced_pairs/annotations/atomic_descriptions/{GROUP}.json", repo_type="dataset")
with open(ann_path) as f:
    ann = json.load(f)

for key, entries in ann["annotations"].items():
    for entry in entries:
        for desc in entry["descriptions"]:
            print(f"t={desc['timestamp_sec']:.1f}s  {desc['text']}")

# Expert commentary — proficiency, keysteps, mistakes
ec_path = hf_hub_download(REPO, f"synced_pairs/annotations/expert_commentary/{GROUP}.json", repo_type="dataset")
with open(ec_path) as f:
    ec = json.load(f)

print(f"Task: {ec['task_name']}")
print(f"Skill level: {ec['skill_level']['label']}  ({ec['proficiency_scores']['overall']}/5)")
for step in ec["procedure"]["keysteps"]:
    print(f"  Step {step['step_id']}: {step['label']}")

License

Licensed under Apache 2.0.

Citation

@dataset{egoexomanufacturing2026,
  title  = {Ego-Exo Manufacturing},
  year   = {2026},
  url    = {https://huggingface.co/datasets/skill-ai/ego-exo-manufacturing},
}