Crowdsourcing video-based workface assessment for construction activity analysis

Today, the availability of multiple cameras on every jobsite is reshaping the way construction activities are monitored. Research has focused on addressing the limitations of manual workface assessment from these videos via computer vision algorithms. Despite the rapid growth of such algorithms, the ability to automatically recognize worker and equipment activities from videos remains limited. By crowdsourcing the task of workface assessment from jobsite videos, this paper aims to overcome the limitations of current practice and to provide a large empirical dataset that can serve as the basis for developing video-based activity recognition methods. To this end, an intuitive web-based platform for massive online marketplaces such as Amazon Mechanical Turk (AMT) is introduced. The platform engages the intelligence of a non-expert crowd to interpret a selected group of frames from these videos and then automates the remaining workface assessment tasks based on these initial interpretations. To validate the platform, several experiments are conducted on videos of concrete placement operations. The results show that engaging AMT non-experts together with computer vision algorithms can provide assessment results with an accuracy of 85%. This minimizes the time needed for workface assessment and allows practitioners to focus on the more important task of root-cause analysis for performance improvement. The platform also provides significantly large datasets with ground truth for algorithmic development.
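To make the described workflow concrete, the sketch below illustrates one simple way the second stage could work: crowd workers label a sparse set of sampled frames with activity categories, and the labels are propagated automatically across the remaining frames to produce a workface summary. This is a minimal illustration, not the authors' implementation; the frame indices, activity names, and frame rate are hypothetical.

```python
from collections import Counter


def propagate_labels(keyframe_labels, total_frames):
    """Assign every frame the activity of the most recent annotated keyframe.

    keyframe_labels: dict mapping frame index -> activity string (crowd input)
    total_frames:    total number of frames in the video clip
    """
    labels = []
    current = None
    for frame in range(total_frames):
        if frame in keyframe_labels:
            current = keyframe_labels[frame]
        labels.append(current)
    return labels


def workface_summary(labels, fps=30):
    """Summarize the share of observed time spent on each activity category."""
    counts = Counter(label for label in labels if label is not None)
    total = sum(counts.values())
    return {activity: {"seconds": n / fps, "share": n / total}
            for activity, n in counts.items()}


if __name__ == "__main__":
    # Hypothetical crowd annotations on every 90th frame of a 360-frame clip.
    annotations = {0: "direct work", 90: "direct work",
                   180: "waiting", 270: "direct work"}
    per_frame = propagate_labels(annotations, total_frames=360)
    print(workface_summary(per_frame))
```

In practice, the propagation step could be replaced by a vision-based tracker or an interpolation scheme; the point of the sketch is only to show how sparse crowd interpretations can be expanded into per-frame labels and aggregated into activity proportions for workface assessment.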
