What's an Expert? Using Learning Analytics to Identify Emergent Markers of Expertise through Automated Speech, Sentiment and Sketch Analysis

Assessing student learning across a variety of environments and tasks remains a crucial educational concern. This task is particularly difficult in non-traditional learning environments where students design their own projects and engage in hands-on educational experiences. To improve our ability to recognize learning in these constructionist environments, this paper reports an exploratory analysis of learning through multiple modalities: speech, sentiment, and drawing. A rich set of features is automatically extracted from the data and used to identify emergent markers of expertise. Among the most prominent markers are user certainty, the ability to describe things efficiently, and a disinclination to use unnecessary descriptors or qualifiers. Experts also displayed better organization and used less detail in their drawings. While many of these are qualities one would expect of an expert, there were areas in which experts looked very similar to novices. To explain this, we report on learning theories that can reconcile these seemingly counterintuitive findings and discuss how these domain-independent markers can be useful for identifying student learning over a series of activities.
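The abstract does not enumerate the extracted feature set. As a minimal, illustrative sketch only (assuming transcribed utterances as plain strings and hypothetical hedge/qualifier word lists, not the paper's actual features), lexical markers of the kind described, such as certainty-related hedging, use of qualifiers, and descriptive efficiency, could be approximated from transcripts like this:

```python
# Illustrative sketch: the paper does not publish its feature set.
# HEDGES and QUALIFIERS are hypothetical stand-in lexicons; real prosodic,
# sentiment, and sketch features would require audio and drawing data.
HEDGES = {"maybe", "probably", "perhaps", "guess", "think", "sort", "kind"}
QUALIFIERS = {"very", "really", "quite", "basically", "actually", "just"}

def lexical_markers(utterances):
    """Compute simple lexical markers loosely related to certainty and
    descriptive efficiency from a list of transcribed utterances."""
    words = [w.lower().strip(".,!?") for u in utterances for w in u.split()]
    total = len(words) or 1
    return {
        # Lower hedge rate may suggest greater certainty.
        "hedge_rate": sum(w in HEDGES for w in words) / total,
        # High qualifier rate approximates "unnecessary descriptors or qualifiers".
        "qualifier_rate": sum(w in QUALIFIERS for w in words) / total,
        # Shorter utterances for the same content suggest descriptive efficiency.
        "mean_utterance_length": total / max(len(utterances), 1),
        # Lexical variety as a crude proxy for vocabulary use.
        "type_token_ratio": len(set(words)) / total,
    }

print(lexical_markers(["I think it's maybe a pulley system",
                       "The gear ratio is two to one"]))
```

Features like these could then be compared across expert and novice transcripts; the paper's actual pipeline also draws on speech, sentiment, and sketch analysis beyond this lexical sketch.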
