Introduction to the Minitrack on Crowd-enhanced Technologies for Improving Reasoning and Solving Complex Problems

Crowdsourcing has reached a state of maturity, with scholars, organizations, and industry all experimenting with and examining ways to harness groups of people to support distributed work. One dimension of crowdsourcing that has not been deeply explored is its use for reasoning and decision-making. The four papers in this minitrack bring together researchers from across disciplines who are designing applications and studying the effectiveness of crowdsourced analysis to determine how well crowds can support reasoning and decision-making tasks.

The first paper presents the Smartly-assembled Wiki-style Argument Marshaling (SWARM) project, which commenced in 2017 as part of the US Intelligence Advanced Research Projects Activity (IARPA) funded Crowdsourcing Evidence, Argumentation, Thinking and Evaluation (CREATE) Program. The paper summarizes the core requirements and rationale that have driven the SWARM platform implementation, describes the technical architecture, and introduces the core capabilities designed to encourage user interaction and social acceptance of the platform by the crowd.

The second paper explores how divergent evaluation criteria might introduce bias into collective judgments in the context of crowdsourcing. Recent experiments have shown that crowd estimates can be swayed by social influence. This may be an unanticipated effect of media literacy training: encouraging readers to critically evaluate information falls short when their judgment criteria are unclear and vary among social groups. In this exploratory study, the authors investigate the criteria crowd workers use when reasoning through a task. They crowdsourced the evaluation of a variety of information sources, identifying multiple factors that affect individuals' judgments as well as the accuracy of aggregated crowd estimates. Using a multi-method approach, they identify relationships between individual information assessment practices and analytical outcomes in crowds, and propose two analytic criteria, relevance and credibility, to optimize collective judgment in complex analytical tasks.

The third paper studies the challenges of identifying promising ideas in large innovation contests. Evaluators do not perform well when selecting the best ideas from large idea pools because their information-processing capabilities are limited. It therefore seems reasonable to let crowds evaluate subsets of ideas, distributing the effort among many people. One meaningful approach to subset creation is to group ideas into subsets according to their similarity, yet whether evaluation based on subsets of similar ideas is better than evaluation based on subsets of random ideas is unclear. The authors conduct an experiment with 66 crowd workers to explore the effects of idea similarity on evaluation performance and cognitive demand. Their study contributes to the understanding of idea selection by providing empirical evidence that crowd workers presented with subsets of similar ideas experience lower cognitive effort and achieve higher elimination accuracy than crowd workers presented with subsets of random ideas. Implications for research and practice are discussed.

The fourth paper argues that crowdsourcing has become a frequently adopted approach to solving a variety of tasks, from conducting surveys to designing products. In the field of reasoning support, however, crowdsourcing research and applications have not been extensively developed. Reasoning support is essential in intelligence analysis to help analysts mitigate cognitive biases, enhance deliberation, and improve report writing. The authors propose a novel approach to designing a crowdsourcing platform that facilitates stigmergic coordination, awareness, and communication for intelligence analysis. They have realized their work in a crowdsourcing system that supports intelligence analysis: TRACE (Trackable Reasoning and Analysis for Collaboration and Evaluation). They introduce several stigmergic approaches integrated into TRACE and discuss potential experiments with these approaches.