Teaching Recommender Systems at Large Scale

In the fall of 2013, we offered an open online Introduction to Recommender Systems course through Coursera, while simultaneously offering a for-credit version of the course on campus using the Coursera platform and a flipped-classroom instruction model. Because the goal of offering this course was to experiment with this type of instruction, we performed an extensive evaluation, including surveys of demographics, self-assessed skills, and learning intent; we also designed a knowledge-assessment tool specifically for the subject matter of this course, administering it before and after the course to measure learning, and again five months later to measure retention. We also tracked students through the course, separating students enrolled for credit from those enrolled only in the free, open course. Students had significant knowledge gains across all levels of prior knowledge and across all demographic categories. The main predictor of knowledge gain was the effort a student expended in the course. Students also had significant knowledge retention after the course. Both of these results are limited to the sample of students who chose to complete our knowledge tests. Student completion of the course was hard to predict, with few factors contributing predictive power; the main predictor of completion was intent to complete. Students who chose a concepts-only track with hand exercises achieved the same level of knowledge of recommender systems concepts as those who chose a programming track with its added assignments, though the programming students gained additional programming knowledge. Based on the limited data we were able to gather, face-to-face students performed as well as or better than the online-only students; they preferred this format to traditional lecture for reasons ranging from pure convenience to the desire to watch videos at a different pace (slower for English-language learners; faster for some native English speakers). This article also includes our qualitative observations, lessons learned, and future directions.
