Skeletal Data Matching and Merging from Multiple RGB-D Sensors for Room-Scale Distant Interaction with Multiple Surfaces

Authors

Coppens A., Maquil V.

Reference

Electronics (Switzerland), vol. 14, no. 4, art. no. 790, 2025

Description

Using a commodity RGB-D sensor is a popular and cost-effective way to enable interaction at room scale, as such a device provides body tracking functionality at a reasonable price point. While the capabilities of a single device may suffice for applications such as entertainment systems where a person plays in front of a television, this type of sensor is unfortunately sensitive to occlusions from objects or other people, which are likely to occur in more sophisticated room-scale set-ups. Multiple RGB-D sensors can be used and their data aggregated to address the occlusion problem, increase the tracking range, and improve accuracy. However, doing so requires gathering calibration information about the sensors themselves as well as about their placement relative to the interactable surfaces. Another challenging consequence of relying on multiple sensors is the need to match and merge skeletons across their respective body tracking data (e.g., so that skeletons from different sensors but belonging to the same person are recognised as such). The present contribution focuses on approaches to tackling these issues. Ultimately, it contributes a working human interaction tracking system that leverages multiple RGB-D sensors to provide unobtrusive and occlusion-resilient understanding capabilities. This constitutes a suitable basis for room-scale experiences such as those based on wall-sized displays.
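To illustrate the kind of matching and merging step described above, the following is a minimal sketch, not the paper's actual method: it assumes skeletons have already been transformed into a shared world frame via extrinsic calibration, and it greedily pairs skeletons from two sensors whose mean joint distance falls below a threshold (the joint names and the 0.3 m threshold are illustrative assumptions).

```python
def mean_joint_distance(a, b):
    """Average Euclidean distance over the joints both skeletons share.

    Skeletons are dicts mapping joint names to (x, y, z) tuples in a
    common world frame (an assumption of this sketch).
    """
    common = set(a) & set(b)
    if not common:
        return float("inf")
    total = 0.0
    for j in common:
        total += sum((a[j][k] - b[j][k]) ** 2 for k in range(3)) ** 0.5
    return total / len(common)

def match_and_merge(skels_a, skels_b, threshold=0.3):
    """Greedily pair skeletons from two sensors whose mean joint distance
    is below the threshold, then merge each pair by averaging joints."""
    # Candidate pairs, cheapest distance first.
    pairs = sorted(
        (mean_joint_distance(a, b), i, j)
        for i, a in enumerate(skels_a)
        for j, b in enumerate(skels_b)
    )
    used_a, used_b, merged = set(), set(), []
    for d, i, j in pairs:
        if d > threshold or i in used_a or j in used_b:
            continue
        used_a.add(i)
        used_b.add(j)
        a, b = skels_a[i], skels_b[j]
        fused = {}
        for joint in set(a) | set(b):
            pts = [s[joint] for s in (a, b) if joint in s]
            fused[joint] = tuple(sum(c) / len(pts) for c in zip(*pts))
        merged.append(fused)
    # Skeletons seen by only one sensor pass through unchanged,
    # which is how a second sensor extends the tracking range.
    merged += [s for i, s in enumerate(skels_a) if i not in used_a]
    merged += [s for j, s in enumerate(skels_b) if j not in used_b]
    return merged
```

A production system would replace the greedy pairing with an optimal assignment (e.g., the Hungarian algorithm) and weight joints by per-sensor tracking confidence, but the structure above captures the core idea: a distance gate to decide whether two tracked skeletons are the same person, followed by joint-wise fusion.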

Link

doi:10.3390/electronics14040790
