Guest Editors' Introduction: Special Section on the ACM Symposium on Virtual Reality Software and Technology 2015

THIS special section of IEEE Transactions on Visualization and Computer Graphics (TVCG) presents extended versions of three selected papers from the 2015 ACM Symposium on Virtual Reality Software and Technology (VRST'15). Past VRST symposia were held in Hong Kong (2010), Toronto (2012), Singapore (2013), and Edinburgh (2014). In 2015, VRST was held in mainland China for the first time, in Beijing, from November 13 to 15, 2015. VRST is an international forum for the exchange of experience and knowledge among researchers and developers concerned with virtual reality software and technology. A major goal is to give VR researchers an opportunity to interact, share new results, show live demonstrations of their work, and discuss emerging directions for the field. In 2015, VRST received more than 100 submissions, of which 18 regular papers were accepted and published in the conference proceedings. In addition, 11 short papers and 16 posters were selected and included in the proceedings. Papers were evaluated on the basis of their significance, novelty, and technical rigor, and all papers went through two rounds of review. Each submission received at least three reviews: at least one from a Program Committee (PC) member and two from external reviewers. The PC members and external reviewers then discussed the submissions online for a period of one week. From the accepted submissions, we carefully selected the three best papers and invited their authors to submit extended versions to this special section of IEEE TVCG. These extended papers underwent the full TVCG review process. Below, we briefly describe each paper included in this special section.
The paper “Optimal Camera Placement for Motion Capture Systems” presents a thorough analysis of the problems involved in placing cameras in a CAVE-like environment and shows how camera placement can be optimized by taking object point distributions and occlusions into account while respecting placement constraints, triangulation convergence angles, and camera-object distances. The paper introduces two methods for camera placement: one based on a metric that computes target point visibility under dynamic occlusion from cameras with “good” views, and the other based on the distribution of views of target points. The paper also proposes efficient algorithms for estimating the optimal camera configuration for the two metrics and a given distribution of target points.

The paper “JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera” introduces a visual telepresence system built around an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video captured around the head of a local user is stabilized and then broadcast, allowing remote users to explore the scene independently of the local user's head direction. The authors also conducted a user study and analyzed its results. Their findings show that establishing shared understanding, or common ground, in a process known as “grounding,” is important, and that typical real-world collaborative tasks primarily consist of three phases: object/location identification, procedural instruction, and comprehension monitoring. They confirm that these three phases are unavoidable and remain important even in immersive remote collaboration systems.

The paper “Dynamic Projection Mapping onto Deforming Non-rigid Surface using Deformable Dot Cluster Marker” describes a high-speed tracking algorithm for localizing an IR dot pattern on deformable objects.
The tracking algorithm is a four-step process that uses two pieces of information unique to each dot in the pattern: the dot's tracking state and its location in the camera coordinate frame. Tracking begins with an initialization step in which the initial location of each dot is recorded. Using a high-speed camera, the first step of the tracking process updates the location of each dot in each new frame. The second step extrapolates from the visible dots to estimate the locations of occluded or otherwise non-visible dots. The third step uses a heuristic function to identify false positives among the visible dots. These three steps are optimized with a parallelization scheme that processes each dot independently. The final step resolves duplicate locations through hashing. The algorithm is applied to a projector-based AR system in which a perspectively correct image is projected onto deformable materials marked with an invisible IR dot pattern.

For information on obtaining reprints of this article, please send e-mail to: reprints@ieee.org, and reference the Digital Object Identifier below.
Digital Object Identifier no. 10.1109/TVCG.2016.2643818
IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, VOL. 23, NO. 3, MARCH 2017