A Series Of Studies Examining The Florida Board Of Regents' Course Evaluation Instrument

This research examined the psychometric properties (e.g., factor structure, reliability) of the Florida Board of Regents Student Assessment of Instruction instrument and the relation between various factors (adaptations for distance education, initial expectations, time, noninstructional factors, and response scale format) and students' course evaluations. Data were collected from 631 students in an undergraduate course in educational assessment and in graduate courses in educational technology, language arts, and library science at various times during the semester. Results for the course evaluations reflected a one-factor model and internal consistency reliabilities greater than .90. No significant differences in students' course evaluation ratings emerged across time during the semester, between students' first-day and last-day ratings of a course, across noninstructional factors (excluding hours employed), or across response scale formats.

The Board of Regents (BOR) of the State University System of Florida mandated that each state university in Florida use the State University System Student Assessment of Instruction (SUSSAI) instrument beginning in the spring of 1996. With limited exceptions, all undergraduate and graduate courses taught by faculty members, adjuncts, and graduate assistants were to be assessed using this instrument. The BOR also mandated that summary results be made available to students or members of the public to facilitate student selection of courses and that results be used in the evaluation of faculty instruction (State University System of Florida, 1995). The mandated introductory statement and eight items may be supplemented with other assessment items used by a university, college, or department.
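As a point of reference (not stated in this form in the original abstract), the internal consistency index reported above is presumably coefficient (Cronbach's) alpha, which for the k = 8 mandated SUSSAI items would be computed as

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k} \sigma^2_{i}}{\sigma^2_{X}}\right),

where \sigma^2_{i} is the variance of the ith item and \sigma^2_{X} is the variance of the total score. Under this reading, values greater than .90 indicate that the eight items function essentially as a single homogeneous scale, consistent with the one-factor structure reported.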
