Influencing and Measuring Behaviour in Crowdsourced Activities
[1] Katharina Reinecke, et al. Personalized Feedback Versus Money: The Effect on Reliability of Subjective Data in Online Experimental Platforms, 2017, CSCW Companion.
[2] Serge Egelman, et al. The Anatomy of Smartphone Unlocking: A Field Study of Android Lock Screens, 2016, CHI.
[3] Tanja Aitamurto, et al. Unmasking the crowd: participants' motivation factors, expectations, and profile in a crowdsourced law reform, 2017.
[4] Jennifer Preece, et al. Citizen Science: New Research Challenges for Human–Computer Interaction, 2016, Int. J. Hum. Comput. Interact.
[5] Alex S. Taylor, et al. Re-Making Places: HCI, 'Community Building' and Change, 2016, CHI.
[6] A. Acquisti, et al. Beyond the Turk: Alternative Platforms for Crowdsourcing Behavioral Research, 2016.
[7] Edward Cutrell, et al. Deterring Cheating in Online Environments, 2015, TCHI.
[8] Duncan P. Brumby, et al. Now Check Your Input: Brief Task Lockouts Encourage Checking, Longer Lockouts Encourage Task Switching, 2016, CHI.
[9] Ann Blandford, et al. Designing for dabblers and deterring drop-outs in citizen science, 2014, CHI.
[10] Anna L. Cox, et al. Home is Where the Lab is: A Comparison of Online and Lab Data From a Time-sensitive Study of Interruption, 2015, Hum. Comput.
[11] Katharina Reinecke, et al. Doodle around the world: online scheduling behavior reflects cultural differences in time perception and group decision-making, 2013, CSCW.
[12] Andrea Wiggins, et al. Community-based Data Validation Practices in Citizen Science, 2016, CSCW.
[13] Aaron D. Shaw, et al. Designing incentives for inexpert human raters, 2011, CSCW.
[14] Qingming Huang, et al. Robust evaluation for quality of experience in crowdsourcing, 2013, ACM Multimedia.
[15] Anna L. Cox, et al. Diminished Control in Crowdsourcing, 2016, ACM Trans. Comput. Hum. Interact.
[16] Tara S. Behrend, et al. The viability of crowdsourcing for survey research, 2011, Behavior Research Methods.
[17] James D. Abbey, et al. Attention by design: Using attention checks to detect inattentive respondents and improve data quality, 2017.
[18] Chris Callison-Burch, et al. A Data-Driven Analysis of Workers' Earnings on Amazon Mechanical Turk, 2017, CHI.
[19] Demetrios Zeinalipour-Yazti, et al. Crowdsourcing with Smartphones, 2012, IEEE Internet Computing.
[20] Christopher G. Harris. The Effects of Pay-to-Quit Incentives on Crowdworker Task Quality, 2015, CSCW.
[21] Penelope M. Sanderson, et al. The Effect of Individual Differences on How People Handle Interruptions, 2013.
[22] Michael S. Bernstein, et al. Examining Crowd Work and Gig Work Through the Historical Lens of Piecework, 2017, CHI.
[23] C. Lintott, et al. Galaxy Zoo Green Peas: discovery of a class of compact extremely star-forming galaxies, 2009, arXiv:0907.4155.
[24] Antti Oulasvirta, et al. Model of visual search and selection time in linear menus, 2014, CHI.
[25] Duncan P. Brumby, et al. Frequency and Duration of Self-Initiated Task-Switching in an Online Investigation of Interrupted Performance, 2013, HCOMP.
[26] Sean A. Munson, et al. Beyond Abandonment to Next Steps: Understanding and Designing for Life after Personal Informatics Tool Use, 2016, CHI.
[27] Andrew Howes, et al. Strategies for Guiding Interactive Search: An Empirical Investigation Into the Consequences of Label Relevance for Assessment and Selection, 2008, Hum. Comput. Interact.
[28] D. Dunning. The Dunning–Kruger Effect, 2011.
[29] Mirco Musolesi, et al. My Phone and Me: Understanding People's Receptivity to Mobile Notifications, 2016, CHI.
[30] Sun Young Park, et al. Technological and Organizational Adaptation of EMR Implementation in an Emergency Department, 2015, TCHI.
[31] Daniel J. Wigdor, et al. Slide to X: unlocking the potential of smartphone unlocking, 2014, CHI.
[32] Hojung Cha, et al. Piggyback CrowdSensing (PCS): energy efficient crowdsourcing of mobile sensor data by exploiting smartphone app opportunities, 2013, SenSys '13.
[33] Anna L. Cox, et al. Always On(line)?: User Experience of Smartwatches and their Role within Multi-Device Ecologies, 2017, CHI.
[34] Patrick Olivier, et al. Digital Civics: Citizen Empowerment With and Through Technology, 2016, CHI Extended Abstracts.
[35] Bonnie A. Nardi, et al. Not Just in it for the Money: A Qualitative Investigation of Workers' Perceived Benefits of Micro-task Crowdsourcing, 2015, 48th Hawaii International Conference on System Sciences (HICSS).
[36] Michael S. Bernstein, et al. Break It Down: A Comparison of Macro- and Microtasks, 2015, CHI.
[37] M. Graber, et al. Internet-based crowdsourcing and research ethics: the case for IRB review, 2012, Journal of Medical Ethics.
[38] Shaowen Bardzell, et al. Social Justice and Design: Power and oppression in collaborative systems, 2017, CSCW Companion.
[39] Katharina Reinecke, et al. LabintheWild: Conducting Large-Scale Online Experiments With Uncompensated Samples, 2015, CSCW.
[40] David J. Hauser, et al. It's a Trap! Instructional Manipulation Checks Prompt Systematic Thinking on "Tricky" Tasks, 2015.
[41] Jon Froehlich, et al. Differences in Crowdsourced vs. Lab-based Mobile and Desktop Input Performance Data, 2017, CHI.
[42] C. Lintott, et al. Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey, 2008, arXiv:0804.4483.
[43] Marta E. Cecchinato, et al. Working 9-5?: Professional Differences in Email and Boundary Management Practices, 2015, CHI.
[44] Rick A. Adams, et al. Crowdsourcing for Cognitive Science – The Utility of Smartphones, 2014, PLoS ONE.
[45] Aniket Kittur, et al. CrowdScape: interactively visualizing user behavior and output, 2012, UIST.
[46] Jennifer Preece, et al. Accounting for Privacy in Citizen Science: Ethical Research in a Context of Openness, 2017, CSCW.
[47] Kate A. Ratliff, et al. Using Nonnaive Participants Can Reduce Effect Sizes, 2015, Psychological Science.
[48] Lydia B. Chilton, et al. The labor economics of paid crowdsourcing, 2010, EC '10.
[49] Peter Hoonakker, et al. Questionnaire Survey Nonresponse: A Comparison of Postal Mail and Internet Surveys, 2009, Int. J. Hum. Comput. Interact.
[50] M. Haklay. How Good is Volunteered Geographical Information? A Comparative Study of OpenStreetMap and Ordnance Survey Datasets, 2010.
[51] Katharina Reinecke, et al. Types of Motivation Affect Study Selection, Attention, and Dropouts in Online Experiments, 2017, Proc. ACM Hum. Comput. Interact.
[52] Duncan J. Watts, et al. Financial incentives and the "performance of crowds", 2009, HCOMP '09.
[53] C. Potter, et al. Citizen science as seen by scientists: Methodological, epistemological and ethical dimensions, 2014, Public Understanding of Science.
[54] Adam Marcus, et al. The Effects of Sequence and Delay on Crowd Work, 2015, CHI.
[55] A. Cox, et al. Motivations, learning and creativity in online citizen science, 2016.
[56] Denzil Ferreira, et al. AWARE: Mobile Context Instrumentation Framework, 2015, Front. ICT.
[57] Jesse Chandler, et al. Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers, 2013, Behavior Research Methods.
[58] Anna L. Cox, et al. Media Multitasking at Home: A Video Observation Study of Concurrent TV and Mobile Device Usage, 2017, TVX.
[59] Michael B. Twidale, et al. Design Facets of Crowdsourcing, 2015.
[60] Robert E. Kraut, et al. Why pay?: exploring how financial incentives are used for question & answer, 2010, CHI.
[61] Martha Larson, et al. Crowdsourcing as self-fulfilling prophecy: Influence of discarding workers in subjective assessment tasks, 2016, 14th International Workshop on Content-Based Multimedia Indexing (CBMI).
[62] Anna L. Cox, et al. Exploring Citizen Psych-Science and the Motivations of Errordiary Volunteers, 2014, Hum. Comput.
[63] Gary Marsden, et al. After access: challenges facing mobile-only internet users in the developing world, 2010, CHI.
[64] Stefan Dietze, et al. Using Worker Self-Assessments for Competence-Based Pre-Selection in Crowdsourcing Microtasks, 2017, ACM Trans. Comput. Hum. Interact.
[65] M. Six Silberman, et al. Ethics and tactics of professional crowdwork, 2010, XRDS.
[66] Jim Foster, et al. Using eDNA to develop a national citizen science-based monitoring programme for the great crested newt (Triturus cristatus), 2015.
[67] Per Ola Kristensson, et al. Improving two-thumb text entry on touchscreen devices, 2013, CHI.
[68] Martin Pielot, et al. An in-situ study of mobile phone notifications, 2014, MobileHCI '14.
[69] Katharina Reinecke, et al. Crowdsourcing performance evaluations of user interfaces, 2013, CHI.
[70] Peng Dai, et al. And Now for Something Completely Different: Improving Crowdsourcing Workflows with Micro-Diversions, 2015, CSCW.
[71] Wayne D. Gray. Game-XP: Action Games as Experimental Paradigms for Cognitive Science, 2017, Top. Cogn. Sci.
[72] H. G. D. Zúñiga, et al. Influence of social media use on discussion network heterogeneity and civic engagement: The moderating role of personality traits, 2013.
[73] Anne M. Land-Zandstra, et al. Citizen science on a smartphone: Participants' motivations and learning, 2016, Public Understanding of Science.
[74] Duncan P. Brumby, et al. Task Lockouts Induce Crowdworkers to Switch to Other Activities, 2015, CHI Extended Abstracts.
[75] Peng Dai, et al. Inserting Micro-Breaks into Crowdsourcing Workflows, 2013, HCOMP.
[76] Daniel M. Oppenheimer, et al. Instructional Manipulation Checks: Detecting Satisficing to Increase Statistical Power, 2009.
[77] Kevin B. Wright, et al. Researching Internet-Based Populations: Advantages and Disadvantages of Online Survey Research, Online Questionnaire Authoring Software Packages, and Web Survey Services, 2006, J. Comput. Mediat. Commun.
[78] David G. Rand, et al. The promise of Mechanical Turk: how online labor markets can help theorists run behavioral experiments, 2012, Journal of Theoretical Biology.
[79] J. Suls, et al. Flawed Self-Assessment, 2004, Psychological Science in the Public Interest.
[80] Mark D. Dunlop, et al. Multidimensional pareto optimization of touchscreen keyboards for speed, familiarity and improved spell checking, 2012, CHI.
[81] Caroline Jay, et al. To Sign Up, or not to Sign Up?: Maximizing Citizen Science Contribution Rates through Optional Registration, 2016, CHI.
[82] Susanne Bødker, et al. Third-wave HCI, 10 years later – participation and sharing, 2015, Interactions.
[83] Duncan P. Brumby, et al. How does knowing what you are looking for change visual search behavior?, 2014, CHI.
[84] Duncan P. Brumby, et al. Visual Grouping in Menu Interfaces, 2015, CHI.
[85] Derek Ruths, et al. How One Microtask Affects Another, 2016, CHI.
[86] Aniket Kittur, et al. Instrumenting the crowd: using implicit behavioral measures to predict task performance, 2011, UIST.
[87] Johannes Schöning, et al. The Geography of Pokémon GO: Beneficial and Problematic Effects on Places and Movement, 2017, CHI.
[88] Brian L. Sullivan, et al. eBird: A citizen-based bird observation network in the biological sciences, 2009.
[89] Olivier Festor, et al. CrowdOut: A mobile crowdsourcing service for road safety in digital cities, 2014, IEEE International Conference on Pervasive Computing and Communication Workshops (PerCom Workshops).
[90] Ann Blandford, et al. Beyond Self-Tracking and Reminders: Designing Smartphone Apps That Support Habit Formation, 2015, CHI.
[91] J. Silvertown. A new dawn for citizen science, 2009, Trends in Ecology & Evolution.
[92] Kevin C. Elliott, et al. A framework for addressing ethical issues in citizen science, 2015.
[93] Elena Paslaru Bontas Simperl, et al. From Crowd to Community: A Survey of Online Community Features in Citizen Science Projects, 2017, CSCW.
[94] Michael S. Bernstein, et al. We Are Dynamo: Overcoming Stalling and Friction in Collective Action for Crowd Workers, 2015, CHI.
[95] Anna L. Cox, et al. Exploring the effects of non-monetary reimbursement for participants in HCI research, 2017, Hum. Comput.
[96] Michael S. Bernstein, et al. Twitch crowdsourcing: crowd contributions in short bursts of time, 2014, CHI.
[97] Dana Chandler, et al. Preventing Satisficing in Online Surveys: A "Kapcha" to Ensure Higher Quality Data, 2010.
[98] K. Nakayama, et al. Is the Web as good as the lab? Comparable performance from Web and lab in cognitive/perceptual experiments, 2012, Psychonomic Bulletin & Review.
[99] M. Six Silberman, et al. Turkopticon: interrupting worker invisibility in Amazon Mechanical Turk, 2013, CHI.
[100] Jesse J. Chandler, et al. Crowdsourcing Samples in Cognitive Science, 2017, Trends in Cognitive Sciences.
[101] Eben M. Haber, et al. Creek Watch: pairing usefulness and usability for successful citizen science, 2011, CHI.
[102] Michael S. Bernstein, et al. The future of crowd work, 2013, CSCW.