Validating a criteria set for an online learning environment

This paper presents a continuing effort to gain a better sense of what constitutes quality in an online course. The work was initially presented at FIE 2002 and expanded in a Journal of Engineering Education article accepted as part of the "Best of FIE" edition (April 2004). Our research began with a broad view that entailed an initial brainstorming activity, followed by a series of ranking and rating processes and a factor analysis, to create a set of quality indicators from the student perspective. The current research extends that process by examining the validity of the factors. There are different approaches to performing validation; we chose the nominal group technique (NGT), which has been used successfully to generate group consensus for over 30 years. Thirty-two students were divided into five groups, one for each of the factors generated from the earlier research efforts. Each group was tasked with developing a detailed definition of the content and construct underlying its assigned factor. Online educators are often unsure how to structure their courses, since few have any experience as students in the online environment and little experience teaching in it. By following a clear and reproducible methodology, using multiple constituencies, and reevaluating at different steps in the process, this study provides online educators a meaningful framework upon which to structure course activities.