Do you see what I see: crowdsource annotation of captured scenes

The Archive of Many Outdoor Scenes has captured 400 million images. Many of these cameras capture street intersections, a subset of which has experienced built environment improvements during the past seven years. We identified six cameras in Washington, DC, and uploaded 120 images from each camera taken before a built environment change (2007) and 120 taken after (2010) to the crowdsourcing website Amazon Mechanical Turk (MTurk; n = 1,440). Five unique MTurk workers annotated each image, counting the number of pedestrians, cyclists, and vehicles. Two trained research assistants completed the same tasks. Reliability and validity statistics for the MTurk workers revealed substantial agreement in annotating pedestrians and vehicles in the captured images. Using the mean annotation of four MTurk workers proved the most parsimonious approach to obtaining valid results. Crowdsourcing was shown to be a reliable and valid method of annotating images of outdoor human behavior.
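
To make the kind of validity check described above concrete, the sketch below shows how one might compare the mean of k workers' counts against trained annotators' counts as k grows from one to five. It is a hypothetical Python illustration only: the counts are simulated, and the Poisson noise model and Pearson correlation are assumptions for demonstration, not the study's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: stand-in "ground truth" counts (trained research
# assistants) for 1,440 images, plus noisy counts from five MTurk workers
# per image. All values are simulated for illustration only.
n_images, n_workers = 1440, 5
ra_counts = rng.poisson(lam=3.0, size=n_images)             # trained RA counts
worker_counts = rng.poisson(lam=ra_counts[:, None] + 0.5,   # each worker's count
                            size=(n_images, n_workers))

def pearson_r(x, y):
    """Pearson correlation between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

# Validity check: how closely does the mean of the first k workers' counts
# track the trained annotators' counts as k increases?
for k in range(1, n_workers + 1):
    mean_of_k = worker_counts[:, :k].mean(axis=1)
    print(f"mean of {k} worker(s) vs. RA counts: "
          f"r = {pearson_r(mean_of_k, ra_counts):.3f}")
```

In a sketch like this, the correlation typically levels off after a few workers, which mirrors the finding that averaging four MTurk annotations was the most parsimonious choice.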