Program-level assessment combines contributions from several units within a program at designated time intervals. Course-level assessment is one of those units; it is usually carried out within a single course through tests, homework, projects, presentations, and similar instruments, without examining the connected courses and their learning outcomes. Although course-level assessment consumes most of the resources devoted to assessment and is considered an essential feedback path for curricular change, very little effort has been devoted to reliable measures of student learning as students progress through a sequence of connected courses. This study focuses on assessment across the Electrical and Computer Engineering curriculum in the signals and systems area and examines how much of the core knowledge has been retained.

Introduction:

Faculty members who serve on course and curriculum committees are quite familiar with the complaints of instructors who teach lower-level undergraduate courses. The most common one, offered after proper acknowledgement of the instructor who taught the prerequisite course, is that "students don't have the prerequisite knowledge!" This feedback usually lacks any quantitative measure, yet it spreads like wildfire and soon finds its way onto the agenda of the course and curriculum committee. At that point, the assessment coordinator makes a simple request: evidence is needed before any action is taken on the problem.

Engineering programs went through transformations after EC2000, designed processes to facilitate the continuous improvement of their programs, and placed the curriculum at the center of their operations. The Electrical and Computer Engineering Department at North Carolina State University was one of the leaders of this movement and adopted a two-tier curriculum after a year of intense work involving all of its constituents. The contributions of courses in the ECE department to the ABET program outcomes are shown in Figure 1, with the essential core and intermediate electives marked to show the sampling done across the curriculum.

Figure 1: ECE curriculum with core (Core 1 and Core 2), intermediate, and specialization electives and their contributions to program outcomes at levels 3 (Major), 2 (Intermediate), and 1 (Basic). (For the A-K program outcomes, see www.abet.org.) The courses used in the experiments are framed with bold lines.

We looked at the three courses marked in Figure 1:

1. ECE 220 Analytical Foundations of Electrical and Computer Engineering (sophomore level)
2. ECE 301 Linear Systems (junior level; ECE 220 is the prerequisite)
3. ECE 402 Communications Engineering (senior level; ECE 301 is the prerequisite)

From 2004 to 2006, we followed a group of students through these courses and obtained core knowledge test results in all three courses. We asked four questions:

1) Do students come in with the prerequisite knowledge?
2) Are course grades and retention of knowledge correlated? (A brief analysis sketch follows at the end of this section.)
3) Do students continue to learn the basics of a curriculum, even if those basics are not specifically taught in a course?
4) Who learns the most?

The two-tier curriculum design allows us to repeat this experiment in two more areas: digital systems and electronics. Sampling across the curriculum in three major areas has the potential to give us valuable data for assessing the effectiveness of the curriculum in place.
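Question 2 asks whether course grades and retention of core knowledge move together. A minimal sketch of one way such data could be examined is shown below; the student records, grade values, and test scores are hypothetical placeholders rather than data from this study, and the Pearson correlation is only one of several reasonable measures.

# Hypothetical sketch: correlating course grades with core-knowledge test scores.
# All values below are made-up placeholders, not results from the study.
from scipy.stats import pearsonr

# Final course grades on a 4.0 scale for a hypothetical group of students.
grades = [3.7, 3.3, 2.7, 4.0, 2.0, 3.0, 3.3, 2.3]

# Core-knowledge test scores (percent correct) for the same students,
# measured at the start of the follow-on course.
retention_scores = [88, 75, 62, 95, 55, 70, 80, 58]

# Pearson correlation coefficient and its two-sided p-value.
r, p_value = pearsonr(grades, retention_scores)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

A correlation computed this way would only indicate association between grades and retained knowledge, not whether one causes the other.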