This article proposes a method for understanding user commands based on visual attention. Voice commands commonly include fuzzy linguistic terms such as “very little,” so a robot’s capacity to interpret such information is vital for effective human-robot interaction. However, the quantitative meaning of these terms depends strongly on the spatial arrangement of the surrounding environment. A visual attention system (VAS) is therefore introduced to evaluate fuzzy linguistic information under the prevailing environmental conditions. On the assumption that the distance corresponding to a particular fuzzy linguistic command depends on the spatial arrangement of the surrounding objects, a fuzzy-logic-based voice command evaluation system (VCES) is proposed to assess the uncertain information in user commands from the average distance to those objects. The system is illustrated by simulating an object-manipulation task that rearranges the user’s working space, and it is demonstrated on a PA-10 robot manipulator.
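To make the idea concrete, the following is a minimal sketch of how a fuzzy linguistic distance term could be mapped to a crisp distance by scaling its fuzzy set with the average distance to the surrounding objects. The membership-function shapes, the scaling fractions in `peaks`, and the function names are illustrative assumptions, not the parameters derived by the paper’s VAS/VCES.

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function with peak at b and support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def evaluate_fuzzy_distance(term, surrounding_distances):
    """Map a fuzzy linguistic distance term to a crisp distance (metres).

    The fuzzy set for each term is scaled by the average distance to the
    surrounding objects, so the same command yields different distances
    in cluttered versus sparse workspaces.
    """
    d_avg = float(np.mean(surrounding_distances))
    # Peak of each term's fuzzy set as a fraction of the average distance
    # (hypothetical values; the paper derives context from the VAS).
    peaks = {"very little": 0.25, "little": 0.5, "medium": 0.75, "far": 1.0}
    b = peaks[term] * d_avg
    a, c = 0.5 * b, 1.5 * b  # symmetric support around the peak
    # Centre-of-gravity defuzzification over the scaled fuzzy set.
    xs = np.linspace(a, c, 200)
    mu = np.array([triangular(x, a, b, c) for x in xs])
    return float(np.sum(xs * mu) / np.sum(mu))

# Example: objects at 0.2 m, 0.4 m and 0.6 m from the manipulator.
print(evaluate_fuzzy_distance("very little", [0.2, 0.4, 0.6]))  # ~0.10 m
```

With the same command but distant surroundings (say objects at 1.0–2.0 m), the returned distance grows proportionally, which captures the paper’s core claim that the quantitative meaning of a fuzzy term is environment-dependent.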