Developer intent versus instructor delivery in program implementation

Observation of the field testing of an instructional program for undergraduate engineering students revealed that the instructor did not collect or grade the student homework exercises as recommended in the instructor guide. Therefore, an experimental study was designed to investigate whether instructor delivery of the program as recommended would yield higher student achievement. The study was conducted for six class periods with 19 students in the Fall semester and replicated with 14 students and a second instructor in the Spring. Homework exercises were collected, scored, and returned for the experimental group, as recommended in the instructor guide. The exercises were not collected for the control group, thus reflecting the procedure used by the instructor in the field test. Posttest scores and grades were significantly higher for the experimental group. The test scores and grades by treatment were also consistent across the two semesters and two instructors. Self-report data revealed that experimental subjects spent significantly more time on the exercises. Questioning of the instructor revealed that he did not collect or grade the exercises because of the amount of time it required. The instructional designer placed more responsibility for student achievement on herself and the instructor, whereas the instructor placed more responsibility on the students. The differing opinions of the instructional designer and the course instructor are discussed as they relate to the design and delivery of instructional programs.
