Building a Mobile Augmented Reality System for Embedded Training: Lessons Learned

Mobile augmented reality (AR) provides a “head-up display” to individual dismounted users. A user wears a miniaturized computer system, tracking sensors, and a see-through graphics display, and the system superimposes three-dimensional, spatially registered graphics and sounds onto the user’s perception of the real world. Because information is presented head-up and hands-free, mobile AR has the potential to fundamentally change how information is delivered to individuals. A mobile AR system can insert friendly, neutral, and enemy computer-generated forces (CGFs) into the real world for training and mission rehearsal applications. The CGFs are drawn realistically and are properly occluded by real-world objects. Their behaviors are generated by two Semi-Automated Forces (SAF) systems, JointSAF and OneSAF, in which the AR user appears as an individual combatant entity: the AR user's position and orientation are fed to the SAF system, and the state of the SAF entities is reflected in the AR display. The SAF entities react to the AR user just as they do to any other individual combatant entity, and the AR user interacts with the CGFs in real time. In this paper, we document the development of a prototype mobile AR system for embedded training and its use in situations resembling military operations in urban terrain (MOUT). We discuss the tradeoffs among the hardware components (tracking, display, and computing technologies) and the software components (networking, SAF systems, CGF generation, and model construction), and we describe the lessons learned from implementing several scenarios.
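
To make the AR/SAF coupling concrete, the following is a minimal sketch of the per-frame exchange loop described above: the AR user's tracked pose is published to the SAF system as an entity state, and the states of the SAF entities are pulled back and rendered as registered CGFs. This is not the authors' implementation; all class and function names here are hypothetical stand-ins, and a real system would exchange entity states with JointSAF or OneSAF over a distributed-simulation protocol rather than through these stubs.

```python
# Hypothetical sketch of the AR <-> SAF exchange loop. All names are
# illustrative stand-ins, not the paper's actual code or the SAF APIs.
import time
from dataclasses import dataclass


@dataclass
class Pose:
    x: float                 # position in the shared world frame (meters)
    y: float
    z: float
    heading: float           # orientation (degrees)
    pitch: float
    roll: float


@dataclass
class EntityState:
    entity_id: int
    pose: Pose
    force: str               # "friendly" | "neutral" | "enemy"


class SafLink:
    """Stand-in for the network link to the SAF system. A real link would
    encode/decode entity-state messages for JointSAF or OneSAF; this stub
    just fabricates one scripted enemy entity."""

    def publish_user_state(self, state: EntityState) -> None:
        pass  # real version: emit the AR user's entity state to the SAF

    def receive_entity_states(self) -> list:
        return [EntityState(42, Pose(10.0, 0.0, 5.0, 180.0, 0.0, 0.0), "enemy")]


def read_tracker_pose() -> Pose:
    """Stand-in for the head-tracking sensors (position + orientation)."""
    return Pose(0.0, 0.0, 1.8, 90.0, 0.0, 0.0)


def render_cgf(state: EntityState, user: Pose) -> None:
    """Stand-in for drawing a spatially registered, occluded CGF."""
    print(f"draw {state.force} entity {state.entity_id} at "
          f"({state.pose.x:.1f}, {state.pose.y:.1f}, {state.pose.z:.1f}) "
          f"for user at ({user.x:.1f}, {user.y:.1f})")


def main() -> None:
    link = SafLink()
    user_id = 1
    for _ in range(3):                      # stands in for the render loop
        pose = read_tracker_pose()          # 1. track the AR user
        link.publish_user_state(EntityState(user_id, pose, "friendly"))
        for entity in link.receive_entity_states():   # 2. pull SAF entities
            render_cgf(entity, pose)        # 3. draw registered CGFs
        time.sleep(0.016)                   # ~60 Hz frame budget


if __name__ == "__main__":
    main()
```

Because the AR user is represented as an ordinary individual combatant entity on the SAF side, no special-case logic is needed there: the CGFs react to the published entity state exactly as they would to any other simulated combatant.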