Do you see what I see: crowdsource annotation of captured scenes
The Archive of Many Outdoor Scenes has captured 400 million images. Many of these webcams monitor street intersections, a subset of which have undergone built environment improvements during the past seven years. We identified six such cameras in Washington, DC, and uploaded 120 images from each camera captured before a built environment change (2007) and 120 captured after (2010) to the crowdsourcing platform Amazon Mechanical Turk (n=1,440). Five unique MTurk workers annotated each image, counting the number of pedestrians, cyclists, and vehicles; two trained research assistants completed the same tasks. Reliability and validity statistics for the MTurk workers revealed substantial agreement in annotating pedestrians and vehicles in the captured images. Using the mean annotation of four MTurk workers was the most parsimonious approach that still yielded valid results. Crowdsourcing was shown to be a reliable and valid workforce for annotating images of outdoor human behavior.
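As a rough illustration of the averaging step described above, the sketch below uses synthetic per-image counts to show how the mean of the first k workers' annotations could be compared against a trained annotator's counts. The data, the use of five workers' pedestrian counts, and the choice of Pearson correlation as a validity proxy are illustrative assumptions; the study's actual reliability and validity statistics are not reproduced here.

```python
import numpy as np

# Hypothetical layout: one row per image, one column per MTurk worker,
# each cell holding that worker's pedestrian count for the image.
rng = np.random.default_rng(0)
true_counts = rng.poisson(lam=3, size=1440)                       # stand-in "true" counts
worker_counts = true_counts[:, None] + rng.integers(-1, 2, size=(1440, 5))
worker_counts = np.clip(worker_counts, 0, None)                   # counts cannot be negative
ra_counts = np.clip(true_counts + rng.integers(-1, 2, size=1440), 0, None)  # trained annotator

def validity_of_k_workers(workers, reference, k):
    """Average the first k workers' counts per image and correlate with the reference counts."""
    mean_of_k = workers[:, :k].mean(axis=1)
    return np.corrcoef(mean_of_k, reference)[0, 1]

for k in range(1, 6):
    r = validity_of_k_workers(worker_counts, ra_counts, k)
    print(f"mean of {k} worker(s) vs. trained annotator: r = {r:.3f}")
```

With real annotation data, a comparison of this kind could indicate the point at which adding further workers no longer improves agreement with trained annotators, which is the sense in which four workers are described as most parsimonious.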