Harnessing Collective Intelligence on Social Networks
[1] Push Singh,et al. The Public Acquisition of Commonsense Knowledge , 2002 .
[2] Marco Baroni,et al. Bootstrapping a Game with a Purpose for Commonsense Collection , 2012, TIST.
[3] E. Deci,et al. Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions , 2000, Contemporary Educational Psychology.
[4] Dan Klein,et al. Learning Accurate, Compact, and Interpretable Tree Annotation , 2006, ACL.
[5] D. Clery. Galaxy evolution. Galaxy Zoo volunteers share pain and glory of research , 2011, Science.
[6] Robert P. W. Duin,et al. Limits on the majority vote accuracy in classifier fusion , 2003, Pattern Analysis & Applications.
[7] Michael Rosemann,et al. Crowdsourcing Information Systems - A Systems Theory Perspective , 2011 .
[8] Michael S. Bernstein,et al. Soylent: a word processor with a crowd inside , 2010, UIST.
[9] Anne C. Rouse,et al. A Preliminary Taxonomy of Crowdsourcing , 2010 .
[10] Kevin Crowston,et al. Exploring Data Quality in Games With a Purpose , 2014 .
[11] Lennart E. Nacke,et al. From game design elements to gamefulness: defining "gamification" , 2011, MindTrek.
[12] Roberto Navigli,et al. It’s All Fun and Games until Someone Annotates: Video Games with a Purpose for Linguistic Annotation , 2014, TACL.
[13] Sameer Pradhan,et al. Unrestricted Coreference: Identifying Entities and Events in OntoNotes , 2007, International Conference on Semantic Computing (ICSC 2007).
[14] Xixi Luo,et al. MultiRank : Reputation Ranking for Generic Semantic Social Networks , 2009 .
[15] A. P. Dawid,et al. Maximum Likelihood Estimation of Observer Error‐Rates Using the EM Algorithm , 1979 .
[16] Panagiotis G. Ipeirotis. Demographics of Mechanical Turk , 2010 .
[17] Daren C. Brabham. Motivations for Participation in a Crowdsourcing Application to Improve Public Engagement in Transit Planning , 2012 .
[18] Benjamin B. Bederson,et al. Human computation: a survey and taxonomy of a growing field , 2011, CHI.
[19] Kyumin Lee,et al. The social honeypot project: protecting online communities from spammers , 2010, WWW '10.
[20] Chris Callison-Burch,et al. Fast, Cheap, and Creative: Evaluating Translation Quality Using Amazon’s Mechanical Turk , 2009, EMNLP.
[21] Renata Vieira,et al. A Corpus-based Investigation of Definite Description Use , 1997, CL.
[22] Johan Bos,et al. Gamification for Word Sense Labeling , 2013, IWCS.
[23] Timothy Chklovski,et al. Collecting paraphrase corpora from volunteer contributors , 2005, K-CAP '05.
[24] Jane Yung-jen Hsu,et al. Community-based game design: experiments on social games for commonsense data collection , 2009, HCOMP '09.
[25] Udo Kruschwitz,et al. Assessing Crowdsourcing Quality through Objective Tasks , 2012, LREC.
[26] Mathieu Lafourcade,et al. Making people play for Lexical Acquisition with the JeuxDeMots prototype , 2007 .
[27] Nils Diewald,et al. Web-based Annotation of Anaphoric Relations and Lexical Chains , 2007, LAW@ACL.
[28] Cliff O'Reilly,et al. User Performance Indicators In Task-Based Data Collection Systems , 2014, MindTheGap@iConference.
[29] L. Dodd,et al. On Estimating Diagnostic Accuracy From Studies With Multiple Raters and Partial Gold Standard Evaluation , 2008, Journal of the American Statistical Association.
[30] H. Kucera,et al. Computational analysis of present-day American English , 1967 .
[31] Faiza Khan Khattak. Quality Control of Crowd Labeling through Expert Evaluation , 2011 .
[32] Michael S. Bernstein,et al. Analytic Methods for Optimizing Realtime Crowdsourcing , 2012, ArXiv.
[33] Helmut Debelius,et al. Nudibranchs and Sea Snails: Indo-Pacific Field Guide , 1996 .
[34] Gerardo Hermosillo,et al. Learning From Crowds , 2010, J. Mach. Learn. Res..
[35] Luis von Ahn. Games with a Purpose , 2006, Computer.
[36] Andreas Vlachos,et al. Active Annotation , 2006 .
[37] R. P. Fishburne,et al. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel , 1975 .
[38] Saul Sternberg,et al. The discovery of processing stages: Extensions of Donders' method , 1969 .
[39] Ron Artstein,et al. Underspecification and Anaphora: Theoretical Issues and Preliminary Evidence , 2006, Discourse Processes.
[40] Paul Wais,et al. Towards Building a High-Quality Workforce with Mechanical Turk , 2010 .
[41] Simone Paolo Ponzetto,et al. Knowledge Derived From Wikipedia For Computing Semantic Relatedness , 2007, J. Artif. Intell. Res..
[42] Yolanda Gil,et al. Improving the design of intelligent acquisition interfaces for collecting world knowledge from web contributors , 2005, K-CAP '05.
[43] Jon Chamberlain,et al. Groupsourcing: Distributed Problem Solving Using Social Networks , 2014, HCOMP.
[44] Antonio Torralba,et al. LabelMe: A Database and Web-Based Tool for Image Annotation , 2008, International Journal of Computer Vision.
[45] Renata Vieira,et al. An Empirically-based System for Processing Definite Descriptions , 2000, CL.
[46] Cynthia Rudin,et al. Approximating the crowd , 2014, Data Mining and Knowledge Discovery.
[47] Chris Callison-Burch,et al. Creating Speech and Language Data With Amazon’s Mechanical Turk , 2010, Mturk@HLT-NAACL.
[48] Noga Alon,et al. How Robust Is the Wisdom of the Crowds? , 2015, IJCAI.
[49] Laura A. Dabbish,et al. Designing games with a purpose , 2008, CACM.
[50] Udo Kruschwitz,et al. Using Games to Create Language Resources: Successes and Limitations of the Approach , 2013, The People's Web Meets NLP.
[51] Donghui Feng,et al. Acquiring High Quality Non-Expert Knowledge from On-Demand Workforce , 2009, PWNLP@IJCNLP.
[52] Eric Horvitz,et al. Combining human and machine intelligence in large-scale crowdsourcing , 2012, AAMAS.
[53] Johanna D. Moore,et al. Report on the Second NLG Challenge on Generating Instructions in Virtual Environments (GIVE-2) , 2010, INLG.
[54] Gerardo Hermosillo,et al. Supervised learning from multiple experts: whom to trust when everyone lies a bit , 2009, ICML '09.
[55] Kevin Crowston,et al. From Conservation to Crowdsourcing: A Typology of Citizen Science , 2011, 2011 44th Hawaii International Conference on System Sciences.
[56] Chrysanthos Dellarocas,et al. Harnessing Crowds: Mapping the Genome of Collective Intelligence , 2009 .
[57] Y. Benkler,et al. Commons‐based Peer Production and Virtue* , 2006 .
[58] Javier R. Movellan,et al. Whose Vote Should Count More: Optimal Integration of Labels from Labelers of Unknown Expertise , 2009, NIPS.
[59] Christopher Cunningham,et al. Gamification by Design - Implementing Game Mechanics in Web and Mobile Apps , 2011 .
[60] Christiane Fellbaum,et al. Book Reviews: WordNet: An Electronic Lexical Database , 1999, CL.
[61] Nicola L. Foster,et al. Coral-characterized benthic assemblages of the deep Northeast Atlantic: defining “Coral Gardens” to support future habitat mapping efforts , 2013 .
[62] James D. Herbsleb,et al. Transparency and Coordination in Peer Production , 2014, ArXiv.
[63] Malvina Nissim,et al. Uncovering Noun-Noun Compound Relations by Gamification , 2015, NODALIDA.
[64] Daniela Goecke,et al. SGF - An integrated model for multiple annotations and its application in a linguistic domain , 2008 .
[65] H. Lieberman. Common Consensus : a web-based game for collecting commonsense goals , 2007 .
[66] Lawrence G. Roberts,et al. Machine Perception of Three-Dimensional Solids , 1963, Outstanding Dissertations in the Computer Sciences.
[67] Christoph Meinel,et al. On Measuring Expertise in Collaborative Tagging Systems , 2009 .
[68] Massimo Poesio,et al. Long Distance Pronominalisation and Global Focus , 1998, COLING-ACL.
[69] P. Culverhouse,et al. Do experts make mistakes? A comparison of human and machine identification of dinoflagellates , 2003 .
[70] Pietro Perona,et al. Learning Generative Visual Models from Few Training Examples: An Incremental Bayesian Approach Tested on 101 Object Categories , 2004, 2004 Conference on Computer Vision and Pattern Recognition Workshop.
[71] Eduard H. Hovy,et al. A Taxonomy, Dataset, and Classifier for Automatic Noun Compound Interpretation , 2010, ACL.
[72] Benoît Sagot,et al. Influence of Pre-Annotation on POS-Tagged Corpus Development , 2010, Linguistic Annotation Workshop.
[73] J. Gee. Glued to Games: How Video Games Draw Us in and Hold Us Spellbound , 2012 .
[74] Lauretta Burke,et al. The economics of worldwide coral reef degradation , 2003 .
[75] I. Biederman. Recognition-by-components: a theory of human image understanding. , 1987, Psychological review.
[76] Qihao Weng,et al. A survey of image classification methods and techniques for improving classification performance , 2007 .
[77] Bob Carpenter,et al. The Benefits of a Model of Annotation , 2013, Transactions of the Association for Computational Linguistics.
[78] Johan Bos,et al. Developing a large semantically annotated corpus , 2012, LREC.
[79] Eric Schenk,et al. Towards a characterization of crowdsourcing practices , 2011 .
[80] Udo Kruschwitz,et al. Methods for Engaging and Evaluating Users of Human Computation Systems , 2013, Handbook of Human Computation.
[81] L. Jeppesen,et al. The Value of Openness in Scientific Problem Solving , 2007 .
[82] Jirí Mírovský,et al. Play the Language: Play Coreference , 2009, ACL.
[83] R. Paine. Food Web Complexity and Species Diversity , 1966, The American Naturalist.
[84] Michael Vitale,et al. The Wisdom of Crowds , 2015, Cell.
[85] Peta Wyeth,et al. GameFlow: a model for evaluating player enjoyment in games , 2005, CIE.
[86] Gianluca Stringhini,et al. Detecting spammers on social networks , 2010, ACSAC '10.
[87] Vladimir Zwass,et al. Co-Creation: Toward a Taxonomy and an Integrated Research Perspective , 2010, Int. J. Electron. Commer..
[88] Panagiotis G. Ipeirotis. Analyzing the Amazon Mechanical Turk marketplace , 2010, XRDS.
[89] Andrew B. Whinston,et al. Social Computing: An Overview , 2007, Commun. Assoc. Inf. Syst..
[90] David J. Kriegman,et al. Automated annotation of coral reef survey images , 2012, 2012 IEEE Conference on Computer Vision and Pattern Recognition.
[91] Jon Chamberlain. The annotation-validation (AV) model: rewarding contribution using retrospective agreement , 2014, GamifIR '14.
[92] Rajarshi Das,et al. Emerging theories and models of human computation systems: a brief survey , 2011, UbiCrowd '11.
[93] C. Wilkinson. Status of coral reefs of the world , 2000 .
[94] Jeffrey P. Bigham,et al. VizWiz: nearly real-time answers to visual questions , 2010, W4A.
[95] Udo Kruschwitz,et al. Motivations for Participation in Socially Networked Collective Intelligence Systems , 2012, ArXiv.
[96] Luc Van Gool,et al. The Pascal Visual Object Classes (VOC) Challenge , 2010, International Journal of Computer Vision.
[97] Daren C. Brabham. THE MYTH OF AMATEUR CROWDS , 2012 .
[98] Min-Yen Kan,et al. Perspectives on crowdsourcing annotations for natural language processing , 2012, Language Resources and Evaluation.
[99] E. Prince. The ZPG Letter: Subjects, Definiteness, and Information-status , 1992 .
[100] Jeff Donahue,et al. Annotator rationales for visual recognition , 2011, 2011 International Conference on Computer Vision.
[101] Yang Liu,et al. Non-Expert Evaluation of Summarization Systems is Risky , 2010, Mturk@HLT-NAACL.
[102] Martin Hepp,et al. OntoGame: Weaving the Semantic Web by Online Games , 2008, ESWC.
[103] Monojit Choudhury,et al. Complex Linguistic Annotation – No Easy Way Out! A Case from Bangla and Hindi POS Labeling Tasks , 2009, Linguistic Annotation Workshop.
[104] Mitchell P. Marcus,et al. OntoNotes: The 90% Solution , 2006, NAACL.
[105] Paul Resnick,et al. Eliciting Informative Feedback: The Peer-Prediction Method , 2005, Manag. Sci..
[106] Fernando González-Ladrón-de-Guevara,et al. Towards an integrated crowdsourcing definition , 2012, J. Inf. Sci..
[107] Deborah I. Fels,et al. Reimagining leaderboards: towards gamifying competency models through social game mechanics , 2013, Gamification.
[108] Krista A. Ehinger,et al. SUN database: Large-scale scene recognition from abbey to zoo , 2010, 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
[109] Panagiotis G. Ipeirotis,et al. Get another label? improving data quality and data mining using multiple, noisy labelers , 2008, KDD.
[110] Kenneth Y. Goldberg,et al. Opinion space: a scalable tool for browsing online comments , 2010, CHI.
[111] Devavrat Shah,et al. Iterative Learning for Reliable Crowdsourcing Systems , 2011, NIPS.
[112] A. W. Woolley,et al. Evidence for a Collective Intelligence Factor in the Performance of Human Groups , 2010, Science.
[113] Rebecca F. Johnson,et al. Traditional Taxonomic Groupings Mask Evolutionary History: A Molecular Phylogeny and New Classification of the Chromodorid Nudibranchs , 2012, PloS one.
[114] Claire Martin,et al. When Good Enough Is Best , 2006, Neuron.
[115] Trevor van Mierlo. The 1% Rule in Four Digital Health Social Networks: An Observational Study , 2014, Journal of medical Internet research.
[116] Massimo Poesio,et al. Discourse Annotation and Semantic Annotation in the GNOME corpus , 2004, Proceedings of the 2004 ACL Workshop on Discourse Annotation - DiscAnnotation '04.
[117] Kôiti Hasida,et al. ISO 24617-2: A semantically-based standard for dialogue annotation , 2012, LREC.
[118] Alexander I. Rudnicky,et al. Using the Amazon Mechanical Turk for transcription of spoken language , 2010, 2010 IEEE International Conference on Acoustics, Speech and Signal Processing.
[119] Carsten Rahbek,et al. Comparing diversity data collected using a protocol designed for volunteers with results from a professional alternative , 2013 .
[120] Filip Radlinski,et al. Comparing the sensitivity of information retrieval metrics , 2010, SIGIR.
[121] Michael B. Twidale,et al. Design Facets of Crowdsourcing , 2015 .
[122] Allen Newell,et al. The psychology of human-computer interaction , 1983 .
[123] Udo Kruschwitz,et al. Phrase detectives: Utilizing collective intelligence for internet-scale language resource creation , 2013, TIIS.
[124] Udo Kruschwitz,et al. Markup Infrastructure for the Anaphoric Bank: Supporting Web Collaboration , 2012, Modeling, Learning, and Processing of Text Technological Data Structures.
[125] Panagiotis G. Ipeirotis,et al. Quality management on Amazon Mechanical Turk , 2010, HCOMP '10.
[126] Ron Artstein,et al. Anaphoric Annotation in the ARRAU Corpus , 2008, LREC.
[127] Luis Mateus Rocha,et al. Symbiotic intelligence: Self-organizing knowledge on distributed networks, driven by human interaction , 1998 .
[128] Roberto Navigli,et al. Validating and Extending Semantic Knowledge Bases using Video Games with a Purpose , 2014, ACL.
[129] Dana Chandler,et al. Breaking Monotony with Meaning: Motivation in Crowdsourcing Markets , 2012, ArXiv.
[130] Jisup Hong,et al. How Good is the Crowd at "real" WSD? , 2011, Linguistic Annotation Workshop.
[131] Andrew McCallum,et al. Integrating Probabilistic Extraction Models and Data Mining to Discover Relations and Patterns in Text , 2006, NAACL.
[132] Oren Etzioni,et al. The Tradeoffs Between Open and Traditional Relation Extraction , 2008, ACL.
[133] Fei-Fei Li,et al. ImageNet: A large-scale hierarchical image database , 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition.
[134] K. Bretonnel Cohen,et al. Last Words: Amazon Mechanical Turk: Gold Mine or Coal Mine? , 2011, CL.
[135] R. Steneck,et al. Coral Reefs Under Rapid Climate Change and Ocean Acidification , 2007, Science.
[136] Udo Kruschwitz,et al. Constructing an Anaphorically Annotated Corpus with Non-Experts: Assessing the Quality of Collaborative Annotations , 2009, PWNLP@IJCNLP.
[137] Craig MacDonald,et al. Learning to predict response times for online query scheduling , 2012, SIGIR '12.
[138] J. Roberts,et al. Recommendations for best practice in deep-sea habitat classification: Bullimore et al. as a case study , 2014 .
[139] Yannick Versley,et al. SemEval-2010 Task 1: Coreference Resolution in Multiple Languages , 2009, *SEMEVAL.
[140] Udo Kruschwitz,et al. A new life for a dead parrot: Incentive structures in the Phrase Detectives game , 2009 .
[141] Duncan J. Watts,et al. Financial incentives and the "performance of crowds" , 2009, HCOMP '09.
[142] Huan Liu,et al. Promoting Coordination for Disaster Relief - From Crowdsourcing to Coordination , 2011, SBP.
[143] Peter Norvig,et al. Can Distributed Volunteers Accomplish Massive Data Analysis Tasks , 2001 .
[144] Yu-An Sun,et al. When majority voting fails: Comparing quality assurance methods for noisy human computation environment , 2012, ArXiv.
[145] Carlos Castillo,et al. Emotions and dialogue in a peer-production community: the case of Wikipedia , 2012, WikiSym '12.
[146] Pierre Lévy,et al. Collective Intelligence: Mankind's Emerging World in Cyberspace , 1997 .
[147] Manuel Blum,et al. reCAPTCHA: Human-Based Character Recognition via Web Security Measures , 2008, Science.
[148] D. Maynard,et al. Challenges in developing opinion mining tools for social media , 2012 .
[149] Brian A Vander Schee. Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business , 2009 .
[150] Ron Artstein,et al. Survey Article: Inter-Coder Agreement for Computational Linguistics , 2008, CL.
[151] Paulo Minatel Gonella,et al. Drosera magnifica (Droseraceae): the largest New World sundew, discovered on Facebook , 2015 .
[152] Dan Klein,et al. Named Entity Recognition with Character-Level Models , 2003, CoNLL.
[153] George Sugihara,et al. Food-web theory provides guidelines for marine conservation , 2005 .
[154] Carsten Eickhoff. Crowd-powered experts: helping surgeons interpret breast cancer images , 2014, GamifIR '14.
[155] Annie Zaenen. Mark-up Barking Up the Wrong Tree , 2006, Computational Linguistics.
[156] Steve Holmes,et al. User-generated content and the law , 2007 .
[157] Arno Scharl,et al. Games with a purpose for social networking platforms , 2009, HT '09.
[158] Leslie G. Ungerleider,et al. The neural systems that mediate human perceptual decision making , 2008, Nature Reviews Neuroscience.
[159] Beatrice Santorini,et al. Building a Large Annotated Corpus of English: The Penn Treebank , 1993, CL.
[160] Qi Su,et al. Internet-scale collection of human-reviewed data , 2007, WWW '07.
[161] Jerry R. Hobbs. Resolving pronoun references , 1986 .
[162] M. Csikszentmihalyi. Flow: The Psychology of Optimal Experience , 1990 .
[163] Heng-Li Yang,et al. Motivations of Wikipedia content contributors , 2010, Comput. Hum. Behav..
[165] Rada Mihalcea,et al. Linking Documents to Encyclopedic Knowledge , 2008, IEEE Intelligent Systems.
[166] Udo Kruschwitz,et al. Phrase Detectives: A Web-based collaborative annotation game , 2008 .
[167] Gabriella Kazai,et al. Towards methods for the collective gathering and quality control of relevance assessments , 2009, SIGIR.
[168] Oren Etzioni,et al. Open Information Extraction from the Web , 2007, CACM.
[169] Jon Chamberlain. Groupsourcing: Problem Solving, Social Learning and Knowledge Discovery on Social Networks , 2014, HCOMP.
[170] Jeffrey C. Carver,et al. Building reputation in StackOverflow: An empirical investigation , 2013, 2013 10th Working Conference on Mining Software Repositories (MSR).
[171] Nagiza F. Samatova,et al. PackPlay: Mining Semantic Data in Collaborative Games , 2010, Linguistic Annotation Workshop.
[172] Daniel J. Veit,et al. More than fun and money. Worker Motivation in Crowdsourcing - A Study on Mechanical Turk , 2011, AMCIS.
[173] Stefan Siersdorfer,et al. Groupsourcing: Team Competition Designs for Crowdsourcing , 2015, WWW.
[174] Karl Aberer,et al. An Evaluation of Aggregation Techniques in Crowdsourcing , 2013, WISE.
[175] Burr Settles,et al. Active Learning Literature Survey , 2009 .
[176] Brendan T. O'Connor,et al. Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks , 2008, EMNLP.
[177] Irwin King,et al. A Survey of Human Computation Systems , 2009, 2009 International Conference on Computational Science and Engineering.
[178] J. Gutt,et al. Semi-Automated Image Analysis for the Assessment of Megafaunal Densities at the Arctic Deep-Sea Observatory HAUSGARTEN , 2012, PloS one.
[179] Lars Chittka,et al. Speed-accuracy tradeoffs in animal decision making. , 2009, Trends in ecology & evolution.
[180] C. Lintott,et al. Galaxy Zoo: Motivations of Citizen Scientists , 2008, 1303.6886.
[181] Gilad Mishne,et al. Finding high-quality content in social media , 2008, WSDM '08.
[182] Omar Alonso,et al. Crowdsourcing for relevance evaluation , 2008, SIGIR Forum.
[183] Bill Tomlinson,et al. Who are the crowdworkers?: shifting demographics in mechanical turk , 2010, CHI Extended Abstracts.
[184] Hans C. Boas. 8. Using FrameNet for the semantic analysis of German: Annotation, representation, and automation , 2009 .
[185] Jacco van Ossenbruggen,et al. Do you need experts in the crowd?: a case study in image annotation for marine biology , 2013, OAIR.
[186] Nancy Ide,et al. Anveshan: A Framework for Analysis of Multiple Annotators’ Labeling Behavior , 2010, Linguistic Annotation Workshop.
[187] Pietro Perona,et al. The Multidimensional Wisdom of Crowds , 2010, NIPS.
[188] Lin Tingji Jovian,et al. OCR Correction via Human Computational Game , 2011, 2011 44th Hawaii International Conference on System Sciences.
[189] Chris Callison-Burch,et al. Cheap, Fast and Good Enough: Automatic Speech Recognition with Non-Expert Transcription , 2010, NAACL.
[190] C. Revenga,et al. Pilot analysis of global ecosystems : coastal ecosystems , 2000 .