Cognitive Map Architecture Facilitation of Human–Robot Interaction in Humanoid Robots

Certain components serve as intermediaries between high-level decision components and low-level robot behavior, transforming high-level directives into low-level expressions of behavior. The Task Matrix translates high-level symbolic commands into physically realizable actions, while the Multimodal Communication component takes symbolic utterances and coordinates the corresponding speech and gestures. Superficially, this role resembles the executive layer in layered robot architectures such as 3T. In our architecture, however, the entire layer is encapsulated and partitioned into several components with clear responsibilities. This modularity allows these behaviors to be managed and maintained separately, without refactoring other parts of the architecture. In the Task Matrix, for example, a plug-in mechanism for expanding the set of tasks, combined with access to all tasks through a single component interface, yields a clean way to add and remove tasks dynamically. If the robot's hardware or joint configuration is modified, changes need only be made in the Task Matrix while the rest of the system remains untouched. The Multimodal Communication component separates the content of the application from the style in which that content is expressed during communication, so any change made in this component immediately alters the behavior of all applications that use it.
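The plug-in mechanism described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the class names (`Task`, `TaskMatrix`, `WaveTask`) and method signatures are hypothetical, chosen only to show how a single component interface can expose a dynamically extensible set of tasks.

```python
from abc import ABC, abstractmethod

class Task(ABC):
    """A plug-in task mapping a symbolic command to a realizable action."""
    name: str  # symbolic command this task handles

    @abstractmethod
    def execute(self, robot: str, **params) -> str: ...

class WaveTask(Task):
    """Hypothetical example plug-in."""
    name = "wave"
    def execute(self, robot: str, **params) -> str:
        return f"{robot}: waving with {params}"

class TaskMatrix:
    """Single component interface; plug-in tasks are added and
    removed dynamically without touching the rest of the system."""
    def __init__(self):
        self._tasks: dict[str, Task] = {}

    def register(self, task: Task) -> None:
        self._tasks[task.name] = task

    def unregister(self, name: str) -> None:
        self._tasks.pop(name, None)

    def perform(self, name: str, robot: str, **params) -> str:
        if name not in self._tasks:
            raise KeyError(f"unknown task: {name}")
        return self._tasks[name].execute(robot, **params)

matrix = TaskMatrix()
matrix.register(WaveTask())
print(matrix.perform("wave", "ASIMO", hand="left"))
```

Because callers go through `perform` alone, a change in robot hardware or joint configuration is absorbed inside the individual `Task` plug-ins, mirroring the isolation property claimed for the Task Matrix.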
