Crowdlicit: A System for Conducting Distributed End-User Elicitation and Identification Studies

End-user elicitation studies are a popular design method. Currently, such studies are usually confined to a lab, limiting the number and diversity of participants and, therefore, the representativeness of their results. Furthermore, there is generally no formal means of evaluating the quality of the results such studies produce. In this paper, we address some of these limitations through the creation of the Crowdlicit system and the introduction of end-user identification studies, which are the reverse of elicitation studies. Crowdlicit is a new web-based system that enables researchers to conduct elicitation and identification studies online or in the lab. We used Crowdlicit to run a crowd-powered elicitation study based on Morris's "Web on the Wall" study (2012) with 78 participants, arriving at a set of symbols that included six new symbols different from Morris's. We evaluated the effectiveness of 49 symbols (43 from Morris and six from Crowdlicit) by conducting a crowd-powered identification study. We show that the Crowdlicit elicitation study resulted in a set of symbols that was significantly more identifiable than Morris's.

[1] Kraig Finstad et al. The Usability Metric for User Experience, 2010, Interact. Comput.

[2] Meredith Ringel Morris et al. Understanding users' preferences for surface gestures, 2010, Graphics Interface.

[3] Theophanis Tsandilas. Fallacies of Agreement: A Critical Review of Consensus Assessment Methods for Gesture Elicitation, 2018, TOCHI.

[4] Meredith Ringel Morris et al. Web on the wall: insights from a multimodal interaction elicitation study, 2012, ITS.

[5] Radu-Daniel Vatavu et al. Formalizing Agreement Analysis for Elicitation Studies: New Measures, Significance Test, and Toolkit, 2015, CHI.

[6] Meredith Ringel Morris et al. User-defined gestures for surface computing, 2009, CHI.

[7] Mohammad Obaid et al. User-Defined Body Gestures for Navigational Control of a Humanoid Robot, 2012, ICSR.

[8] Barbara Leporini et al. Exploring Visually Impaired People's Gesture Preferences for Smartphones, 2015, CHItaly.

[9] Sayan Sarcar et al. Designing Mid-Air TV Gestures for Blind People Using User- and Choice-Based Elicitation Approaches, 2016, DIS.

[10] Steven Dow et al. Improving Crowd Innovation with Expert Facilitation, 2016, CSCW.

[11] Yujin Zhang et al. SketchExpress: Remixing Animations for More Effective Crowd-Powered Prototyping of Interactive Interfaces, 2017, UIST.

[12] James A. Landay et al. Drone & me: an exploration into natural human-drone interaction, 2015, UbiComp.

[13] Stephen A. Brewster et al. Investigating touchscreen accessibility for people with visual impairments, 2008, NordiCHI.

[14] Meredith Ringel Morris et al. Accessible Crowdwork? Understanding the Value in and Challenge of Microtask Employment for People with Disabilities, 2015, CSCW.

[15] Brad A. Myers et al. Maximizing the guessability of symbolic input, 2005, CHI Extended Abstracts.

[16] Carmelo Ardito et al. Empowering End Users to Customize their Smart Environments, 2017.

[17] Radu-Daniel Vatavu et al. Between-Subjects Elicitation Studies: Formalization and Tool Support, 2016, CHI.

[18] Sebastian Möller et al. I'm home: Defining and evaluating a gesture set for smart-home control, 2011, Int. J. Hum. Comput. Stud.

[19] Michael Nebeling et al. User-Driven Design Principles for Gesture Representations, 2018, CHI.

[20] Andy Cockburn et al. User-defined gestures for augmented reality, 2013, INTERACT.

[21] Dennis R. Wixon et al. Building a user-derived interface, 1984, CACM.

[22] Michael Nebeling et al. XDBrowser 2.0: Semi-Automatic Generation of Cross-Device Interfaces, 2017, CHI.

[23] Katharina Reinecke et al. LabintheWild: Conducting Large-Scale Online Experiments With Uncompensated Samples, 2015, CSCW.

[24] Azrul Hazri Jantan et al. A User-Defined Gesture Set for Music Interaction in Immersive Virtual Environment, 2017, INTERACT.

[25] David Ott et al. Kinect analysis: a system for recording, analysing and sharing multimodal interaction elicitation studies, 2015, EICS.

[26] Krista Casler et al. Separate but equal? A comparison of participants and data gathered via Amazon's MTurk, social media, and face-to-face behavioral testing, 2013, Comput. Hum. Behav.

[27] David Ott et al. Web on the Wall Reloaded: Implementation, Replication and Refinement of User-Defined Interaction Sets, 2014, ITS.

[28] Kraig Finstad et al. Response to commentaries on 'The Usability Metric for User Experience', 2013, Interact. Comput.

[29] Aniket Kittur et al. Crowdsourcing user studies with Mechanical Turk, 2008, CHI.

[30] Eva Hornecker et al. Modifying Gesture Elicitation: Do Kinaesthetic Priming and Increased Production Reduce Legacy Bias?, 2016, Tangible and Embedded Interaction.

[31] Jacob O. Wobbrock et al. Beyond QWERTY: augmenting touch screen keyboards with multi-touch gestures for non-alphanumeric input, 2012, CHI.

[32] Anne Marie Piper et al. A Wizard-of-Oz elicitation study examining child-defined gestures with a whole-body interface, 2013, IDC.

[33] Per Ola Kristensson et al. Memorability of pre-designed and user-defined gesture sets, 2013, CHI.

[34] Meredith Ringel Morris et al. Crowdsourcing Similarity Judgments for Agreement Analysis in End-User Elicitation Studies, 2018, UIST.

[35] Richard E. Ladner et al. Usable gestures for blind people: understanding preference and performance, 2011, CHI.

[36] Rob Miller et al. VizWiz: nearly real-time answers to visual questions, 2010, UIST.

[37] Bongshin Lee et al. Reducing legacy bias in gesture elicitation studies, 2014, Interactions.

[38] Bruce N. Walker et al. Designing an In-Vehicle Air Gesture Set Using Elicitation Methods, 2017, AutomotiveUI.

[39] Michael S. Bernstein et al. Soylent: a word processor with a crowd inside, 2010, UIST.

[40] Laura A. Dabbish et al. Labeling images with a computer game, 2004, AAAI Spring Symposium: Knowledge Collection from Volunteer Contributors.

[41] Radu-Daniel Vatavu et al. User-defined gestures for free-hand TV control, 2012, EuroITV.

[42] Anne Köpsel et al. Benefiting from legacy bias, 2015, Interactions.