Big data and cloud computing collectively offer a paradigm shift in the way businesses now acquire, use, and manage information technology. This creates the need for every CS and IT student to be equipped with foundational knowledge of this collective paradigm and to possess some hands-on experience in deploying and managing big data applications in the cloud. We argue that, for substantial coverage of big data and cloud computing concepts and skills, the relevant topics need to be integrated into multiple core courses of the CS/IT curriculum rather than delivered through additional standalone core or elective courses. Our approach is to develop learning modules and to suggest specific core courses in which their coverage might find an appropriate context. In this paper, two such modules are discussed and our classroom experiences during these interventions are documented. Specifically, we discuss the learning outcomes, module contents, their design and implementation, student assessment results, and the lessons learned. Our objective is to share our experience with instructors who aim to incorporate similar pedagogy to enhance student knowledge of this collective paradigm.