Considerations for accurate inclusion of staff member body tracking in a top-down view virtual reality display of a scattered radiation dose map during fluoroscopic interventional procedures

The functionality of a real-time, top-down view virtual reality (VR) display of scattered radiation during fluoroscopic interventional procedures is being expanded to incorporate automatic input of staff member locations. Depth data from a Microsoft Kinect V2 camera were integrated into an open-source Robot Operating System (ROS) wrapper to automatically extract relative coordinates of landmark body features. Torso coordinates are selected to represent each staff member's location in the chosen plane of scatter; these coordinates are written to a text file that is read by the real-time scatter display system (SDS). The accuracy of the depth-sensing camera was evaluated using a pinhole camera model, which was also implemented in a ROS wrapper to calibrate the Kinect V2. The calibrated values were then used in a coordinate transformation algorithm that converts physical distance measurements in the Kinect frame to the normalized coordinates used in MATLAB to visualize the top-down horizontal plane of the interventional suite. The impact on real-time performance was evaluated for both the on-screen update of staff member positions and the update of SDS image frames.
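The pipeline described above can be illustrated with a minimal sketch of the two geometric steps: back-projecting a tracked torso pixel into metric camera coordinates with the pinhole model, then mapping those coordinates to normalized top-down display coordinates. All numeric values below (intrinsics, room extents, the example torso pixel) are illustrative assumptions, not the calibrated values from the study, and the function names are hypothetical.

```python
def pixel_to_camera(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a depth pixel into the camera frame (pinhole model).

    (fx, fy) are focal lengths in pixels and (cx, cy) the principal point,
    as obtained from a pinhole-model calibration of the depth camera.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m


def camera_to_normalized(x_m, z_m, room_width_m, room_depth_m):
    """Map camera-frame (x, z) in the horizontal plane to [0, 1] coordinates
    for a top-down display. Assumes the camera sits at the midpoint of one
    wall and faces along +z into the room (an illustrative convention)."""
    nx = (x_m + room_width_m / 2.0) / room_width_m
    nz = z_m / room_depth_m
    # Clamp so staff just outside the mapped region stay on-screen.
    return min(max(nx, 0.0), 1.0), min(max(nz, 0.0), 1.0)


if __name__ == "__main__":
    # Nominal Kinect V2 depth intrinsics (approximate, for illustration only).
    fx, fy, cx, cy = 365.0, 365.0, 256.0, 212.0
    # Hypothetical torso joint at pixel (300, 200) with 2.5 m depth.
    x, y, z = pixel_to_camera(300, 200, 2.5, fx, fy, cx, cy)
    nx, nz = camera_to_normalized(x, z, room_width_m=6.0, room_depth_m=5.0)
    # The (nx, nz) pair is what would be written to the text file consumed
    # by the scatter display system.
    print(round(nx, 3), round(nz, 3))
```

In practice the intrinsics would come from the calibration step and the per-frame torso pixel and depth from the ROS body-tracking wrapper; only the resulting normalized pair is handed to the MATLAB visualization.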