This work proposes a probabilistic framework that combines high-level information, such as the activities of a user wearing a smart watch, with spatial information, such as the room connectivity observed by an assistive mobile robot, for semantic mapping and room-level user localization in domestic environments. The main idea is to leverage the semantic information carried by the user's activities together with the accurate metric map built by the robot. The conceptual information is modeled as a probabilistic chain graph. The user is equipped only with a smart watch, from whose inertial data we detect complex activities and a coarse trajectory; activity detection is performed with a Long Short-Term Memory (LSTM) recurrent neural network. The robot is equipped with an RGB-D camera and builds a topological map of the environment. Both the user and the robot construct a conceptual map, composed of room categories, on top of their low-level trajectories. When the robot and the user meet, the user's conceptual map is fused with the robot's. The robot can then match activities to room types, learning a semantic representation of the environment over time, while the user can be localized at room level by exploiting the precise map built by the robot. Preliminary tests show the feasibility of the approach.
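As a concrete illustration of the activity-detection step, the sketch below shows a minimal LSTM classifier over windows of wrist-worn inertial data. The abstract specifies only that an LSTM recurrent network is used, so the 6-axis IMU input, the 50 Hz sampling rate, the window length, the layer sizes, and the activity labels are illustrative assumptions rather than the paper's actual configuration.

```python
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    """Minimal LSTM classifier for windows of wrist IMU data.

    Input:  (batch, T, 6) accelerometer + gyroscope samples (assumed).
    Output: logits over activity classes (e.g. cooking, sleeping).
    """
    def __init__(self, n_features=6, hidden_size=64, n_activities=8):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_activities)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)   # h_n: (num_layers, batch, hidden)
        return self.head(h_n[-1])    # classify from the last hidden state

# Example: classify one 2 s window sampled at 50 Hz (100 samples).
model = ActivityLSTM()
window = torch.randn(1, 100, 6)      # hypothetical IMU window
probs = torch.softmax(model(window), dim=-1)
```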
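Similarly, the conceptual-map fusion that happens when robot and user meet can be pictured as combining, for each room node, the robot's appearance-based belief over room categories with the user's activity-based belief. The paper models the conceptual layer as a probabilistic chain graph whose inference is not detailed in the abstract, so the naive independent-evidence product rule below is only a hedged stand-in, with hypothetical room types and belief values.

```python
import numpy as np

ROOM_TYPES = ["kitchen", "bedroom", "bathroom", "living_room"]  # illustrative

def fuse_beliefs(robot_belief, user_belief, eps=1e-9):
    """Fuse two categorical beliefs over room types for one room node.

    Multiplies the distributions and renormalizes (treating the two
    sources as independent evidence); the paper's actual chain-graph
    inference is not specified in the abstract, so this rule is an
    assumption.
    """
    fused = np.asarray(robot_belief) * np.asarray(user_belief) + eps
    return fused / fused.sum()

# Robot's appearance-based belief vs. the user's activity-based belief:
robot = [0.50, 0.20, 0.10, 0.20]  # from RGB-D observations (hypothetical)
user  = [0.70, 0.05, 0.05, 0.20]  # "cooking" detected -> kitchen likely
print(dict(zip(ROOM_TYPES, fuse_beliefs(robot, user).round(3))))
```

Under this toy rule, agreement between the two sources sharpens the fused belief (here, toward "kitchen"), which matches the abstract's claim that the robot learns room types from activities while the user benefits from the robot's map.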