Intuitive Robotic Operator Control (IROC): Integration of Gesture Recognition With An Unmanned Ground Vehicle and Heads Up Display
Currently fielded ground robotic platforms are controlled by a human operator via constant, direct input from a controller. This approach requires constant attention from the operator, decreasing situational awareness (SA). In scenarios where the robotic asset is non-line-of-sight (non-LOS), the operator must monitor visual feedback, typically a video feed and/or a visualization. With the increasing use of personal radios, smart devices/wearable computers, and network connectivity by individual warfighters, an unobtrusive means of robotic control and feedback is becoming increasingly necessary. This paper describes a proposed intuitive robotic operator control (IROC) system comprising a heads up display (HUD), an instrumented gesture recognition glove, and a ground robotic asset. Under the direction of the Marine Corps Warfighting Laboratory (MCWL) Futures Directorate, AnthroTronix, Inc. (ATinc) is implementing the described integration for completion and demonstration by 30 September 2016.

Background

Currently fielded ground robotic platforms are directly controlled by an operator via direct human input, often using a gamepad-like joystick controller operated by hand. Robots are operated through these direct operator control interfaces either where the operator has view of the robot, and hence direct visual feedback of its performance, or where the operator cannot see the robot and visual feedback is provided by a video feed and/or a visualization of the robot within its environment. At the same time, individual warfighters are increasing their use of personal radios, smart devices/wearable computers, and network connectivity at the squad level.

Direct operator control of robots is tactically undesirable and limits the robot's usefulness: it requires at least one warfighter's attention to operate, and usually more to provide the operator with security, since the operator is "heads-down" and therefore vulnerable in tactical situations. These operational conditions for using ground robots diminish a unit's (e.g., a squad's) warfighting capability. The multitude of demands that operating a ground robot places on a warfighter's attention can negatively impact performance (Mitchell, Samms, Glumm, Krausman, Brelsford, & Garrett, 2004). These demands can lead to poor decision-making, reduced response time, and generally degraded performance, as the individual must divert attentional resources toward processing information rather than performing tasks (Wickens, 2002, 2008). Given the increase in cognitive demands on soldiers from complex technological systems, a need exists for more intuitive operator control of ground robots.

Military operations are inherently dynamic environments and thus present physical and cognitive challenges to human operators. For human teams to work together effectively, they must have accurate shared mental models, defined as knowledge structures held by team members that enable them to understand task conditions, coordinate their actions, and adapt their behavior to task demands and to the actions of other team members (Cannon-Bowers, 1993). In the case of human-robot
teams, these shared mental models are also necessary for successful team performance. However, humans and robots do not always perceive and process information in the same manner, which creates a barrier to information and task sharing. Moreover, the current state of artificial intelligence (AI) is such that humans are still necessary for direct or supervisory control to perform most tasks.

Effort Goals and Scope

AnthroTronix, Inc. (ATinc), a research and development engineering firm specializing in advanced human-machine interface devices, has extensive experience developing multimodal interfaces for communication and command/control of computer-based systems such as wearable computers and robotic platforms (Vice et al., 2001; Vice et al., 2005). ATinc has expertise in basic and applied research and development related to military training and has conducted extensive work involving multimodal interfaces, including sensor-based motion tracking and gestural interface technologies, multimodal feedback devices, and mobile computing systems.

To address the complications of tactical situations, ATinc, under contract to and with direction from the Marine Corps Warfighting Lab (MCWL), is implementing an intuitive, integrated, interactive approach to robotic control. This approach covers both the method and the implementation of robotic control. Method refers to command input via hand gestures, while implementation refers to the use of NuGlove, an instrumented glove that recognizes hand gestures. The user will wear NuGlove and make gestures that correspond to commands sent to the robot. Video feedback will be displayed on the heads up display. This comprehensive system allows the warfighter both control and information access without introducing an interruption into the task flow. The complete system that ATinc is implementing comprises a Heads Up Display (HUD), a NuGlove instrumented glove, an Android device, and an Endeavor Robotics PackBot 510 with FasTac. Figure 1 shows how the components of the system will interact.

Figure 1. Diagram of IROC System

Heads Up Display (HUD)

Heads up displays (HUDs) allow users to view data and information without requiring them to move their heads or look away from their normal viewpoints. Users do not have to switch between heads down and heads up in order to obtain crucial mission information. This is especially relevant in tactical environments, where maintaining situation awareness is key. The HUD can provide necessary information on command, yet the user can return focus to the current task almost instantly.

Figure 2. Heads Up Display

NuGlove Instrumented Gesture Recognition System

During combat maneuvers, dismounted warfighters typically use hand-and-arm signals for communication. The warfighters use an established set of hand-and-arm signals, which aids in the maintenance of shared mental models within teams. It also allows the warfighter to maintain noise discipline. The use of NuGlove to capture and relay this information allows commands to be sent to multiple team members simultaneously and without the need for line-of-sight. NuGlove uses sensors that are small, lightweight, and unobtrusively incorporated into the warfighters' current field gloves.
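As a rough illustration of the component interaction described above (Figure 1), the following Python sketch shows one way the gesture-to-command pipeline could be structured on the Android relay device: a recognized gesture label is mapped to a discrete robot command, forwarded to the robot, and video is returned to the HUD. The class names, gesture labels, and message formats are hypothetical and are not taken from the paper or from any NuGlove or PackBot API.

```python
# Hypothetical sketch of the IROC gesture-to-command pipeline
# (glove -> Android relay -> robot; video returned to the HUD).
# All names and message formats are illustrative, not an actual NuGlove/PackBot API.

from dataclasses import dataclass
from typing import Optional

# Static hand postures mapped to discrete robot commands (assumed mapping).
GESTURE_COMMANDS = {
    "halt":       {"type": "stop"},
    "point":      {"type": "drive", "direction": "forward"},
    "peace_sign": {"type": "camera", "action": "toggle_feed"},
}

@dataclass
class RobotCommand:
    type: str
    payload: dict

def gesture_to_command(gesture_label: str) -> Optional[RobotCommand]:
    """Translate a recognized gesture label into a robot command message."""
    spec = GESTURE_COMMANDS.get(gesture_label)
    if spec is None:
        return None  # unrecognized gesture: send nothing
    return RobotCommand(type=spec["type"],
                        payload={k: v for k, v in spec.items() if k != "type"})

def relay_loop(glove, radio_link, hud):
    """Main loop on the Android relay device (all three objects are placeholders)."""
    while True:
        gesture = glove.read_gesture()              # e.g. "halt", "point", ...
        command = gesture_to_command(gesture)
        if command is not None:
            radio_link.send(command)                # forward to the ground robot
        hud.show_frame(radio_link.latest_video())   # return video feed to the HUD

if __name__ == "__main__":
    print(gesture_to_command("halt"))   # RobotCommand(type='stop', payload={})
```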
The concept of gesture recognition is a key component of what developers refer to as a perceptual user interface (PUI). The goal of such a design is to enhance the efficiency and ease of use of the underlying application in order to maximize usability. The use of inertial measurement unit (IMU) sensor technologies for gesture recognition allows for a technically feasible, near-term approach within uncontrolled environments.

Figure 3. NuGlove Instrumented Glove

NuGlove for Robotic Control

NuGlove provides an efficient means of control over a robotic asset. It allows for single-hand control, whereas standard gamepad controllers require two-handed control. The control is directly scalable to the range of motion of the hand. Additionally, hand gestures are an intuitive motion known to humans, and the range of hand postures provides a wide range of potential commands for a robotic asset. Dynamic gesture recognition also allows for an intuitive means of direct control without interaction with an additional controller.

Methods of Gesture Recognition

There are many ways to accomplish gesture recognition as a whole. The capabilities of the NuGlove IMUs, specifically, support multiple methods of gesture recognition implementation. Gesture recognition is broken down into two main categories: static and dynamic gestures.

Static Gestures

Recognition of static gestures can be separated into two methods, based on the way the software recognizes the user input.

Discrete Hand Postures – the hand itself is oriented in a unique manner recognized by the software. A basic example is the difference between making a "point" gesture and a halt (closed fist) gesture, with the hand location in space staying the same.

Unique Overall Hand Positions – the hand posture stays the same, but the position of the hand changes over time. An example would be making a two-finger ("peace sign") gesture while moving the wrist to change hand location but keeping the hand posture the same.

Dynamic Gestures

Dynamic gesture recognition can be accomplished through multiple methods. In this categorization, it is important to highlight that dynamic gestures are characterized both by how the user performs them and by how the software implements the resulting commands. This categorization provides an overall depiction of the dynamic gesture recognition process.

Proportional Control – the movement of a static gesture is tied to the output response. This is typically used for direct control over the movement of the responding system. An example would be direct drive control via hand movement.

Dynamic Gesture Recognition via a Series of Static Gestures – this is accomplished by recognizing a series of discrete static gestures in succession, during a distinct time period.
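To make the static-gesture case concrete, the following Python sketch classifies a discrete hand posture from per-finger flexion estimates using simple thresholding. The sensor layout, angle conventions, thresholds, and gesture names are assumptions for the example, not details of the NuGlove implementation.

```python
# Illustrative static-posture classifier from per-finger flexion angles.
# Assumes one flexion estimate per finger in degrees (0 = extended,
# 90 = fully curled); thresholds and gesture names are hypothetical.

from typing import Dict, Optional

CURL_THRESHOLD_DEG = 60.0   # above this, treat a finger as "curled"

def classify_posture(flexion: Dict[str, float]) -> Optional[str]:
    """Map per-finger flexion angles to a discrete hand posture label."""
    curled = {finger for finger, angle in flexion.items()
              if angle > CURL_THRESHOLD_DEG}

    if curled == {"thumb", "index", "middle", "ring", "pinky"}:
        return "halt"          # closed fist
    if curled == {"thumb", "middle", "ring", "pinky"}:
        return "point"         # index extended only
    if curled == {"thumb", "ring", "pinky"}:
        return "peace_sign"    # index and middle extended
    return None                # unrecognized posture

# Example: an index-finger "point" with every other finger curled.
sample = {"thumb": 75.0, "index": 10.0, "middle": 80.0, "ring": 85.0, "pinky": 82.0}
print(classify_posture(sample))   # -> "point"
```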
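For the proportional-control method, one common approach (assumed here, not specified in the paper) is to map hand pitch and roll from a wrist-mounted IMU to the robot's linear and angular drive velocities. The scaling constants, deadband, and axis sign conventions below are illustrative values for the sketch.

```python
# Illustrative proportional drive control: hand orientation -> drive velocities.
# Pitch/roll come from a wrist-mounted IMU (degrees); scaling, deadband, and
# velocity limits are hypothetical values chosen for the example.

MAX_LINEAR_MPS = 1.5      # assumed robot speed limit, m/s
MAX_ANGULAR_RADPS = 1.0   # assumed turn-rate limit, rad/s
MAX_TILT_DEG = 45.0       # hand tilt that maps to full output
DEADBAND_DEG = 5.0        # ignore small tilts so a level hand means "stop"

def _scale(tilt_deg: float, max_out: float) -> float:
    """Scale a tilt angle into an output, with a deadband and clamping."""
    if abs(tilt_deg) < DEADBAND_DEG:
        return 0.0
    fraction = max(-1.0, min(1.0, tilt_deg / MAX_TILT_DEG))
    return fraction * max_out

def drive_command(pitch_deg: float, roll_deg: float) -> tuple[float, float]:
    """Return (linear m/s, angular rad/s): tilt forward to drive, roll to turn."""
    linear = _scale(-pitch_deg, MAX_LINEAR_MPS)    # pitch down = forward (assumed sign)
    angular = _scale(roll_deg, MAX_ANGULAR_RADPS)  # roll right = turn right (assumed sign)
    return linear, angular

# Example: hand tilted 20 degrees forward and rolled 10 degrees to the right.
print(drive_command(pitch_deg=-20.0, roll_deg=10.0))
```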
[1] Druin, A., et al. Therapeutic play with a storytelling robot. CHI Extended Abstracts, 2001.
[2] Wickens, C. D., et al. Multiple resources and performance prediction. 2002.
[3] Wickens, C. D., et al. Multiple resources and mental workload. Human Factors, 2008.