Development of VR visualization system including deep learning architecture for improving teleoperability

Recognizing and visualizing the situation in a remote area helps the tele-operator work more effectively and safely. This paper proposes a visualization system that acquires and displays 2D and 3D image-based environmental information for the operator. The proposed system consists of an egocentric viewer and an exocentric viewer. The egocentric viewer presents VR information based on object detection results produced by a deep learning architecture. The exocentric viewer displays the pose and spatial constraint information of the remotely operated robot within a 3D point cloud scene that supports viewpoint conversion.
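As a rough illustration of the two-viewer idea, the sketch below assumes a Faster R-CNN style detector (via torchvision) for the egocentric overlays and Open3D for rendering the robot pose inside the point cloud scene; the function names and thresholds are hypothetical, not taken from the paper.

```python
# Minimal sketch of the two viewers described above (assumptions: torchvision
# Faster R-CNN as the deep learning detector, Open3D for the point cloud scene).
import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
import open3d as o3d

# Egocentric viewer: detect objects in the on-board camera image so their
# labels and boxes can be shown as VR overlays to the operator.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_objects(rgb_image: np.ndarray, score_thresh: float = 0.7):
    """Run the detector on an HxWx3 uint8 image; return boxes, labels, scores."""
    with torch.no_grad():
        out = detector([to_tensor(rgb_image)])[0]
    keep = out["scores"] > score_thresh
    return (out["boxes"][keep].numpy(),
            out["labels"][keep].numpy(),
            out["scores"][keep].numpy())

# Exocentric viewer: place the robot pose (a 4x4 homogeneous transform) into
# the 3D point cloud scene so the operator can freely change the viewpoint.
def build_exocentric_scene(points_xyz: np.ndarray, robot_pose: np.ndarray):
    """Return Open3D geometries: the environment cloud plus a frame at the robot pose."""
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(points_xyz)
    robot_frame = o3d.geometry.TriangleMesh.create_coordinate_frame(size=0.5)
    robot_frame.transform(robot_pose)  # move the frame to the robot base pose
    return [cloud, robot_frame]

# Example usage (hypothetical data):
# boxes, labels, scores = detect_objects(camera_frame)
# o3d.visualization.draw_geometries(build_exocentric_scene(points, pose))
```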
