Big data and cloud computing collectively represent a paradigm shift in the way businesses acquire, use, and manage information technology. This shift creates the need for every CS student to be equipped with foundational knowledge of this combined paradigm and to gain hands-on experience deploying and managing big data applications in the cloud. We argue that, to achieve substantial coverage of big data and cloud computing concepts and skills, the relevant topics should be integrated into multiple core courses across the undergraduate CS curriculum rather than introduced through additional standalone core or elective courses that would require a major overhaul of the curriculum. Our approach is to develop autonomous learning modules for specific core courses in which these topics find an appropriate context. In this paper, we discuss three such modules and document our classroom experiences during these interventions. So far, we have achieved reasonable success in attaining student learning outcomes and in enhancing student engagement and interest. Our objective is to share our experience with academics who aim to incorporate similar pedagogy and to receive feedback on our approach.
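To make the hands-on flavor of such a module concrete, the following is a minimal sketch of the kind of exercise a big-data learning module might include: a word count expressed in the MapReduce map/shuffle/reduce pattern, written in plain Python. The function names and the two-document corpus are illustrative assumptions, not taken from the modules described in the paper.

```python
from collections import defaultdict

def map_phase(document):
    # Mapper: emit a (word, 1) pair for every word in the document.
    return [(word.lower(), 1) for word in document.split()]

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key, as the framework would.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts collected for each word.
    return {key: sum(values) for key, values in groups.items()}

# Hypothetical two-document corpus for illustration.
docs = ["big data in the cloud", "big data and cloud computing"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle_phase(pairs))
# e.g. counts["big"] == 2 and counts["cloud"] == 2
```

Running the same three phases on a cluster framework such as Hadoop distributes the map and reduce steps across machines, which is the conceptual leap such a module would then make.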