A common challenge with processing naturalistic driving data is that humans may need to categorize great volumes of recorded visual information. By means of the online platform CrowdFlower, we investigated the potential of crowdsourcing to categorize driving scene features (e.g., presence of other road users, straight road segments) at greater scale than a single person or a small team of researchers would be capable of. In total, 200 workers from 46 countries participated within 1.5 days. Validity and reliability were examined both with and without researcher-generated control questions embedded via the CrowdFlower mechanism known as Gold Test Questions (GTQs). By employing GTQs, we found significantly more valid (accurate) and reliable (consistent) identification of driving scene items from external workers. Specifically, in a small-scale CrowdFlower Job of 48 three-second video segments, an accuracy of 91% on items (relative to the ratings of a confederate researcher) was found with GTQs, compared to 78% without. A difference in bias was also found: without GTQs, external workers returned more false positives than with GTQs. In a larger-scale CrowdFlower Job making exclusive use of GTQs, 12,862 three-second video segments were released for annotation. Because checking the accuracy of every categorization at this scale would be infeasible (and self-defeating), a random subset of 1012 categorizations was validated and returned a similar level of accuracy (95%). In the small-scale Job, where full video segments were rated in triplicate, the percentage of unanimous agreement on items was significantly higher when using GTQs (90%) than without them (65%). Additionally, in the larger-scale Job (where a single second of a video segment was overlapped by ratings of three sequentially neighboring segments), a mean unanimity of 94% was obtained for validated-as-correct ratings and 91% for non-validated ratings. Because the video segments overlapped in full for the small-scale Job and only in part for the larger-scale Job, the reliability figures reported here may not be directly comparable; nonetheless, both results indicate high levels of rating reliability. Overall, our results provide compelling evidence that CrowdFlower, when used with GTQs, can yield more accurate and consistent crowdsourced categorizations of naturalistic driving scene content than when used without such a control mechanism. The ability to obtain such annotations in short periods of time presents a potentially powerful resource for driving research and the development of driving automation.
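To make the reported metrics concrete, the following is a minimal sketch (not taken from the study's materials) of how item accuracy, false-positive bias, and triplicate unanimity could be computed, assuming each item has one boolean reference rating from the researcher and three boolean crowd ratings per segment; all function and variable names are illustrative.

```python
# Illustrative computation of accuracy, false-positive rate, and unanimity
# for crowdsourced boolean scene-item ratings (hypothetical data layout).
from typing import List, Tuple


def accuracy_and_false_positive_rate(reference: List[bool],
                                     crowd: List[bool]) -> Tuple[float, float]:
    """Share of crowd ratings matching the reference, and share of false positives."""
    assert len(reference) == len(crowd)
    n = len(reference)
    matches = sum(r == c for r, c in zip(reference, crowd))
    false_positives = sum((not r) and c for r, c in zip(reference, crowd))
    return matches / n, false_positives / n


def unanimity(triplicate_ratings: List[Tuple[bool, bool, bool]]) -> float:
    """Share of items on which all three workers gave the same rating."""
    unanimous = sum(len(set(triple)) == 1 for triple in triplicate_ratings)
    return unanimous / len(triplicate_ratings)


if __name__ == "__main__":
    reference = [True, False, True, True]
    crowd = [True, True, True, False]
    print(accuracy_and_false_positive_rate(reference, crowd))  # (0.5, 0.25)
    print(unanimity([(True, True, True), (True, False, True)]))  # 0.5
```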