Structured Human-Head Pose Representation for Estimation Using Fuzzy Lattice Reasoning (FLR)

Human-robot interaction has become a significant research area with the widespread adoption of social robots. Interaction can be achieved through many modalities, including vision. For each modality, numerous methodologies have been proposed, with varying degrees of effectiveness and of efficiency in terms of the computational power required. The varied nature of these algorithms makes data fusion a complex and application-specific task. This paper introduces a novel Lattice Computing (LC)-based methodology for interpreting visual stimuli for head pose estimation. The various parameters involved are investigated, and initial results are presented. The aim is to estimate head pose in robot-assisted therapy settings and use it in decision making. This work is part of a broader effort to employ the LC paradigm as a unified methodology for sensory data interpretation in human-robot interaction.