Mimic: visual analytics of online micro-interactions

We present Mimic, an input capture and visual analytics system that records online user behavior to facilitate the discovery of micro-interactions that may affect problem understanding and decision making. Because aggregate statistics and visualizations can mask important behaviors, Mimic helps interaction designers improve the usability of their designs by going beyond aggregates to examine many individual user sessions in detail. To test Mimic, we replicate a recent crowdsourcing experiment to better understand why participants consistently perform poorly on a canonical conditional probability question known as the Mammography Problem. To analyze the micro-interactions, we use the Mimic web application to play back user sessions collected through remote logging of client-side events. We use Mimic to demonstrate the value of advanced visual interfaces for interactively studying interaction data. In the Mammography Problem, we found evidence of user confusion, low confidence, and divided attention: participants changed their answers, scrolled repeatedly, and overestimated the base rate. Mimic shows how valuable detailed observational data can be, and how important the careful design of micro-interactions is in helping users understand a problem, find a solution, and achieve their goals.
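For context on the study task: in the commonly cited formulation of the Mammography Problem (base rate 1%, test sensitivity 80%, false-positive rate 9.6%), Bayes' rule gives P(cancer | positive) = (0.80 × 0.01) / (0.80 × 0.01 + 0.096 × 0.99) ≈ 7.8%, whereas participants' answers are typically an order of magnitude too high.

The capture mechanism the abstract alludes to, remote logging of client-side events for later playback, can be sketched as follows. This is a minimal illustration under our own assumptions (the event set, the batching threshold, and the "/log" endpoint are hypothetical), not Mimic's actual implementation:

```typescript
// Sketch of client-side micro-interaction logging for session playback.
// Real recorders typically throttle mousemove sampling; omitted here.

interface InteractionEvent {
  type: string;      // e.g. "mousemove", "scroll", "click"
  time: number;      // ms since session start, so playback preserves timing
  x?: number;        // cursor position, when applicable
  y?: number;
  scrollY?: number;  // viewport scroll offset, when applicable
  target?: string;   // id of the element the event fired on, if any
}

const sessionStart = performance.now();
const buffer: InteractionEvent[] = [];

function record(e: InteractionEvent): void {
  buffer.push(e);
  if (buffer.length >= 200) flush(); // batch uploads to limit network chatter
}

function flush(): void {
  if (buffer.length === 0) return;
  const batch = JSON.stringify(buffer.splice(0, buffer.length));
  // sendBeacon survives page unload; "/log" is a hypothetical endpoint.
  navigator.sendBeacon("/log", batch);
}

document.addEventListener("mousemove", (e) => {
  record({
    type: "mousemove",
    time: performance.now() - sessionStart,
    x: e.clientX,
    y: e.clientY,
  });
});

window.addEventListener("scroll", () => {
  record({
    type: "scroll",
    time: performance.now() - sessionStart,
    scrollY: window.scrollY,
  });
});

document.addEventListener("click", (e) => {
  record({
    type: "click",
    time: performance.now() - sessionStart,
    x: e.clientX,
    y: e.clientY,
    target: (e.target as HTMLElement)?.id || undefined,
  });
});

// Flush whatever remains when the participant leaves the page.
window.addEventListener("pagehide", flush);
```

Replaying a session then amounts to sorting the logged events by their timestamps and re-rendering cursor position and scroll offset over the original page, which is what makes behaviors like repeated scrolling or answer changes visible to an analyst.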
