Need for Collective Decision When Divergent Thinking Arises in Collaborative Tasks of a Community of Practice

Based on previous work (Assimakopoulos et al., Proceedings of KICSS'2013 [5]), where we introduced HelpMe, a tool that automatically selects a group of people according to rules and metrics, we now address inconsistency arising from divergent rankings of answers during collaborative tasks among experts. In the HelpMe tool, users collaboratively create a knowledge base about a subject and evaluate user opinions in order to ensure the quality of the knowledge. The basic assumption of the tool is that "knowledge comes from experts." This is enforced through collective evaluation, by voting and discussion at every stage (Task) of the discussion (Activity). Inconsistency appears when a set of sentences cannot be true at the same time (Adrian et al., Proceedings of KICSS'2013 [3]). During collaborative tasks in Communities of Practice, inconsistency may inspire new associations and lead to more interesting solutions. However, in cases such as medical or legal issues, contradicting views are not helpful, since a final decision has to be made.

This paper focuses on such cases and examines the options and the methods that must be implemented for a final decision to be reached. When a wide spread in the evaluations is observed (divergent voting), the community is informed so that it can re-evaluate the existing answer with an "up vote" or a "down vote"; no other option is available. In the case of inconsistency, we introduce a loop procedure that notifies the experts so that they re-evaluate the task flagged as inconsistent. If the inconsistency remains, or the number of evaluators is too small, the process continues until it meets the criteria introduced in this paper. In parallel, the system should be able to search its knowledge base for related conversations and examine the decisions previously made on the same or related subjects.
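The divergence check and the re-evaluation loop described above can be made concrete. The following is a minimal Python sketch, not the actual HelpMe implementation: the Task structure, the 1-5 vote scale, the spread threshold, the minimum number of evaluators, and the `collect_round` hook are all illustrative assumptions.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Task:
    """One stage of an Activity, holding expert votes on a proposed answer."""
    answer: str
    votes: List[int] = field(default_factory=list)  # assumed 1-5 rating scale

def is_divergent(task: Task, max_spread: int = 2) -> bool:
    """Flag the task as divergent when the vote spread exceeds a threshold."""
    if not task.votes:
        return False
    return (max(task.votes) - min(task.votes)) > max_spread

def reevaluation_loop(task: Task,
                      collect_round: Callable[[Task], List[int]],
                      min_evaluators: int = 5,
                      max_rounds: int = 10) -> bool:
    """Repeat evaluation rounds until the task converges or rounds run out.

    `collect_round` is a hypothetical hook that notifies the experts and
    returns the votes gathered in one round. Returns True on convergence:
    the spread is small enough and enough experts have evaluated the task.
    """
    for _ in range(max_rounds):
        if not is_divergent(task) and len(task.votes) >= min_evaluators:
            return True
        task.votes = collect_round(task)  # experts re-evaluate the flagged task
    return False
```

For example, `reevaluation_loop(task, notify_experts)` would keep returning the flagged task to the community until the vote spread narrows and enough experts have weighed in, mirroring the stopping criteria the paper introduces.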

[1] Kamel Aouiche, et al. Collaborative OLAP with Tag Clouds - Web 2.0 OLAP Formalism and Experimental Evaluation, 2007, WEBIST.

[2] Simon Buckingham Shum, et al. Cohere: Towards Web 2.0 Argumentation, 2008, COMMA.

[3] Leonard N. Foner, et al. Yenta: a multi-agent, referral-based matchmaking system, 1997, AGENTS '97.

[4] Lynn A. Streeter, et al. Who Knows: A System Based on Automatic Representation of Semantic Structure, 1988, RIAO Conference.

[5] Trevor J. M. Bench-Capon, et al. PARMENIDES: Facilitating Deliberation in Democracies, 2006, Artificial Intelligence and Law.

[6] Lior Rokach, et al. Introduction to Recommender Systems Handbook, 2011, Recommender Systems Handbook.

[7] Simon Buckingham Shum, et al. Knowledge Cartography for Open Sensemaking Communities, 2008.

[8] Mark S. Ackerman, et al. Expertise networks in online communities: structure and algorithms, 2007, WWW '07.

[9] Volker Wulf, et al. Sharing Expertise: Beyond Knowledge Management, 2002.

[10] Bart Selman, et al. Referral Web: combining social networks and collaborative filtering, 1997, CACM.

[11] Bruce Krulwich, et al. The ContactFinder Agent: Answering Bulletin Board Questions with Referrals, 1996, AAAI/IAAI, Vol. 1.

[12] Mark S. Ackerman, et al. Answer Garden 2: merging organizational memory with collaborative help, 1996, CSCW '96.

[13] Siegfried Handschuh, et al. Adding Provenance and Evolution Information to Modularized Argumentation Models, 2008, IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology.

[14] Bernardo A. Huberman, et al. Usage patterns of collaborative tagging systems, 2006, J. Inf. Sci.

[15] Nikos Karousos, et al. From 'Collecting' to 'Deciding': Facilitating the Emergence of Decisions in Argumentative Collaboration, 2010.