A multi-sensor dataset of human-human handover

This article describes a multi-sensor dataset of human-human handovers comprising over 1000 recordings collected from 18 volunteers. The recordings cover 76 test configurations, which vary the volunteers' starting positions and roles, the objects to pass, and the motion strategies. In all experiments, we acquire 6-axis inertial data from two smartwatches, the 15-joint skeleton model of one volunteer from an RGB-D camera, and the upper-body model of both persons from a total of 20 motion capture markers. The recordings are annotated with videos and with questionnaires about the perceived characteristics of the handover.
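The per-recording sensor streams described above can be pictured as a small container of time-indexed arrays. The sketch below is illustrative only: the field names, array shapes, and sampling lengths are assumptions for exposition, not the dataset's actual file format.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class HandoverRecording:
    """One handover trial; shapes and names are hypothetical."""
    imu_giver: np.ndarray     # (T_imu, 6): accel x/y/z + gyro x/y/z, giver's smartwatch
    imu_receiver: np.ndarray  # (T_imu, 6): same layout, receiver's smartwatch
    skeleton: np.ndarray      # (T_cam, 15, 3): 15-joint RGB-D skeleton of one volunteer
    mocap: np.ndarray         # (T_mocap, 20, 3): 20 upper-body markers across both persons
    config_id: int            # index of the test configuration (1..76)


# Example: a short synthetic recording with placeholder data
rec = HandoverRecording(
    imu_giver=np.zeros((500, 6)),
    imu_receiver=np.zeros((500, 6)),
    skeleton=np.zeros((150, 15, 3)),
    mocap=np.zeros((600, 20, 3)),
    config_id=1,
)
print(rec.skeleton.shape)  # (150, 15, 3)
```

A layout like this keeps each modality at its native sampling rate; any cross-modal analysis would then resample or align the streams on a shared timeline.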
