Guest Editorial: Special Issue on Haptics, Virtual, and Augmented Reality

OVER the last decade or so, the related fields of virtual reality, augmented reality, and haptics have seen tremendous progress and expanding application domains. New graphics and haptic displays, faster and distributed computing hardware, better authoring software, and human factors studies have led to systems that come closer to meeting users' high expectations. This growing awareness in the visualization and graphics community led to a call for papers for this special issue of our transactions on haptics, virtual, and augmented reality. The four guest editors and the editorial office had the demanding task of selecting 11 papers from the nearly 100 manuscripts received. We relied heavily on the many reviewers of this special issue, and we thank them warmly for their diligent work.

One of the current challenges in virtual and augmented reality is omnidirectional stereo graphics, specifically how to capture video that can then be viewed in stereo regardless of the user's viewing direction. The special issue starts with the paper by Tanaka and Tachi, "Tornado: Omnistereo Video Imaging with Rotating Optics." Their system uses two cameras and an optics assembly inside a rotating cylindrical shutter surface. Depending on the rotation speed, the stereo image is synthesized through optics (low rpm) or through postprocessing (high rpm).

The second paper, "Development of Anthropomorphic Multi-D.O.F. Master-Slave Arm for Mutual Telexistence" by Tadakuma, Asahara, Kajimoto, Kawakami, and Tachi, describes a novel robotic arm. The authors are working toward telexistence, which places a physical avatar of the operator, a robot, in a remote location. Such technology enables humans to work in remote or hostile environments, such as contaminated sites, unstable mines, burning structures, or battlefields. Here, the authors describe the mechanical design and control algorithms for an anthropomorphic arm on such a robot.
The operator controls the robot's arm using an exoskeleton worn on his or her own arm and sees what the robot sees through a head-mounted display coupled with cameras on the robot's head. Tadakuma et al. compared three control methods and found impedance control the most appropriate for their system.

Another issue related to telepresence and distributed virtual environments is fast and smooth communication between remote locations. This is the subject of "Data Streaming in Telepresence Environments" by Lamboray, Wurmlin, and Gross. The paper analyzes and classifies data streams in networked virtual environments according to their traffic characteristics, with special emphasis on geometry-enhanced (3D) video. It presents a simulated analysis of the network latency and bandwidth occurring in a system that connects two CAVE-like environments and displays the user's entire real body as an avatar. The 3D avatar model is constructed in real time using computer vision techniques applied to images from a collection of cameras that view the user. Remote collaboration in virtual environments is an important application area; the paper's combination of a detailed system description and careful analysis will provide a solid basis for application development and future research in this area.

One way to improve simulation response in distributed virtual environments is to reduce network load by incorporating more "intelligence" into object models (including avatars). This is the subject of "Dynamic Interactions in Physically Realistic Collaborative Virtual Environments" by Jorissen, Wijnants, and Lamotte. Their approach uses inverse kinematics to provide more realistic avatar movements and to make object interactions application independent. A further way to improve user-object interaction is to model the accompanying haptic feedback.
In the paper "6-DOF Haptic Rendering Using Spatialized Normal Cone Search," Johnson, Willemsen, and Cohen present a novel haptic rendering algorithm for displaying the interaction forces and torques between two polygonal objects. Using spatialized normal cones, the algorithm detects collisions robustly and maintains local distance extrema between the virtual environment and the object moved by the haptic device to compute the resulting haptic feedback. The approach is demonstrated on models consisting of tens of thousands of triangles, tested on several complex geometric scenes, and applied to a virtual prototyping application.

IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 11, NO. 6, NOVEMBER/DECEMBER 2005