COMFI (human-robot Collaboration Oriented Markerless For Industry) is a multimodal dataset designed to advance markerless motion capture, ergonomics, and Human–Robot Collaboration (HRC) research in factory settings. COMFI contains 4.5 hours of synchronized and spatially co-registered streams acquired from 18 participants performing 24 tasks that span everyday movements (e.g., walking, sit-to-stand) and ergonomically demanding industrial actions (lifting, overhead work, bolting, sanding, welding), plus two HRC scenarios in which a Franka Emika Panda robot is guided by the human while holding a tool.

Totaling 86.5 GB, the dataset includes:
- calibrated multi-view RGB videos (40 Hz),
- optical motion-capture marker and joint-center positions, as well as joint angles (100 Hz and 40 Hz),
- 6D ground-reaction forces (1000 Hz and 40 Hz),
- robot telemetry (200 Hz and 40 Hz).

Camera intrinsics/extrinsics, global triggers, and software-barrier synchronization for the webcams are distributed, along with participant-scaled human Unified Robot Description Format (URDF) files that adhere to International Society of Biomechanics conventions, enabling kinematics, dynamics, and joint-torque estimation. Videos are anonymized while preserving the facial cues useful to markerless pipelines. Accompanying code supports loading, calibration, and visualization.

COMFI enables rigorous benchmarking of markerless pose estimation under occlusion and clutter against reference systems, allowing current state-of-the-art algorithms to be extended to complex industrial scenarios. COMFI is expected to catalyze reproducible, cross-disciplinary research toward safer, more ergonomic HRC.

For easier use, we recommend saving all the video zip files into a single folder named videos.
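The recommended layout (all video zip archives in one `videos` folder) can be set up with a small helper. This is a minimal sketch, not part of the official COMFI tooling; the `*.zip` glob pattern and directory names are assumptions you should adapt to the actual archive names after download.

```python
import shutil
from pathlib import Path

def collect_video_zips(source_dir, dest_dir="videos"):
    """Move every .zip archive found under source_dir (recursively)
    into a single dest_dir folder, as recommended for COMFI.

    Returns the list of destination paths, sorted by source path.
    NOTE: the glob pattern below is a guess; narrow it (e.g. to
    '*videos*.zip') if other archives live in the same tree.
    """
    src = Path(source_dir)
    dst = Path(dest_dir)
    dst.mkdir(parents=True, exist_ok=True)
    moved = []
    for zip_path in sorted(src.rglob("*.zip")):
        target = dst / zip_path.name
        shutil.move(str(zip_path), str(target))
        moved.append(target)
    return moved
```

After running it, pointing the dataset-loading code at the `videos` folder should find every camera archive in one place.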