Human-Computer Interaction and Collective Intelligence

The lessons of HCI can therefore be brought to bear on many aspects of collective intelligence. On the one hand, the people in the collective (the crowd) will contribute only if there are proper incentives and if the interface guides them in usable and meaningful ways. On the other hand, those interested in leveraging the collective need usable ways of coordinating, making sense of, and extracting value from the collective work being done, often on their behalf. Ultimately, collective intelligence involves the co-design of technical infrastructure and human-human interaction: a socio-technical system.

In crowdsourcing, we might differentiate between two broad classes of users: requesters and crowd members. Requesters are the individuals or groups for whom work is done, or who take responsibility for aggregating the work done by the collective. A crowd member (or crowd worker) is one of the many people who contribute. While we often use the word “worker,” crowd workers need not be (and often are not) contributing as part of what we might consider standard “work.” They may work for pay or not, work for short periods of time or contribute for days to a project they care about, and they may work in such a way that each individual’s contribution is difficult to discern in the final collective output.

HCI has a long history of studying not only the interaction of individuals with technology, but also the interaction of groups with, or mediated by, technology. For example, computer-supported cooperative work (CSCW) investigates how to allow groups to accomplish tasks together using shared or distributed computer interfaces, either synchronously or asynchronously. Current crowdsourcing research alters some of the standard assumptions about the size, composition, and stability of these groups, but the fundamental approaches remain the same. For instance, workers drawn from the crowd may be less reliable than groups of employees working on a shared task, and group membership in the crowd may change more quickly.

There are three main vectors of study for HCI and collective intelligence. The first is directed crowdsourcing, in which a single individual attempts to recruit and guide a large set of people to help accomplish a goal. The second is collaborative crowdsourcing, in which a group gathers based on shared interest and self-determines its organization and work. The third is passive crowdsourcing, in which the crowd or collective may never meet or coordinate, but their collective behavior patterns can still be mined for information. We cover each vector in turn, and conclude with a list of challenges for researchers in HCI related to crowdsourcing and collective intelligence.
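As an illustration of the requester's aggregation role described above, a common baseline for combining redundant answers from unreliable crowd workers is simple majority voting over each item. The sketch below is illustrative only; the function and data names are hypothetical and not drawn from any particular platform:

```python
from collections import Counter

def aggregate_labels(answers):
    """Majority-vote aggregation: map each item to the most common
    answer among the redundant labels collected from the crowd."""
    return {item: Counter(votes).most_common(1)[0][0]
            for item, votes in answers.items()}

# Hypothetical example: three workers label two images.
crowd_answers = {
    "img_1": ["cat", "cat", "dog"],
    "img_2": ["dog", "dog", "dog"],
}
print(aggregate_labels(crowd_answers))  # {'img_1': 'cat', 'img_2': 'dog'}
```

Majority voting treats all workers as equally reliable; much of the research surveyed below refines this baseline, for example by weighting workers by their estimated accuracy.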
