In order to gain accreditation, engineering programs must define goals and objectives, assess whether their graduates are meeting these objectives, and “close the loop” by using the assessment data to inform continuous improvement of the program. In ABET’s jargon, program “objectives” describe capabilities that graduates are expected to possess, e.g., “Graduates of the Chemical Engineering program at Rowan University will be able to....” Thus, the true success of the program in meeting its objectives is reflected in the first few years of graduates’ careers. Practically speaking, a program cannot be expected to assess directly the performance of graduates with respect to these objectives, at least not in a comprehensive way. Consequently, programs are expected to define and assess measurable “outcomes” which fit within the undergraduate curriculum and which ensure, to the best degree possible, that graduates will meet the program objectives.

A variety of assessment instruments are in common use, and the merits and shortcomings of each have been discussed in the open literature. For example, surveys and exit interviews are commonly used, but they are subjective, rely on self-assessment, and may oversimplify the questions under examination. This paper focuses on tools for direct measurement of student performance through objective evaluation of work product. Numerous authors have outlined the assessment strategy of constructing rubrics for measuring student achievement of learning outcomes and applying them to portfolios of student work. Other authors have outlined the use of rubrics for evaluation and grading of individual assignments and projects. This paper describes the use of a consolidated rubric for evaluating final reports in the capstone Chemical Plant Design course. Instead of grading each report and then having some or all of the reports evaluated through a separate process for programmatic assessment purposes, the instructor evaluates each report once using the rubric, and the same raw data are used both for grading and for programmatic assessment.

Background

Since 2000, ABET has required that, in order to be accredited, engineering programs demonstrate evidence of continuous assessment and continuous improvement. Components of a good assessment strategy include:

1) Establish goals and desired educational outcomes for the degree program, which must include the 11 outcomes (designated “A-K”) identified by ABET as essential for all engineering programs.
2) Measure whether graduates of the program are attaining the goals and outcomes. This process is required by ABET Criterion 3.
3) Use the data collected in step 2 to identify opportunities for improvement, and modify the program accordingly.
4) “Close the loop” by assessing whether the changes led to improved attainment of the desired outcomes.

Approximately 35% of recently evaluated programs were cited with shortcomings in Criterion 3. Two potential pitfalls identified in recent literature are not creating a sustained, continuous assessment plan, and not articulating expectations in a manner specific enough to be useful. This section expands upon these two potential problems, and the remainder of the paper describes the approach to program outcomes assessment adopted in the Chemical Engineering program at Rowan University.

Continuous Assessment and Continuous Improvement

ABET evaluations are scheduled to occur every six years.
Shryock and Reed note that “some programs treat the six-year time lag between visits with the following timeline: Year 1 – Celebrate success of previous ABET visit. Years 2-4 – Feel that ABET is a long time away. Year 5 – Begin to worry about ABET visit the following year, and survey every class imaginable to be ready for year 6 with the ABET visit.” Limiting assessment to a “snapshot” of data collection once every six years undermines the intent of the ABET criteria: continuous assessment and continuous improvement. Significantly, ABET recently separated what was Criterion 3 into two distinct accreditation criteria, “Criterion 3: Program Outcomes” and “Criterion 4: Continuous Improvement.” This change was presumably motivated by the need to emphasize the importance of assessment as a continuous, ongoing activity.

A more subtle point raised by Shryock and Reed’s description is the strategy of “survey every class imaginable.” Dr. Gloria Rogers, ABET’s Managing Director of Professional Services, calls attention to the fact that collecting large amounts of data from “every class imaginable” is not merely inefficient, but likely misleading and counter-productive. Program objectives are summative in nature; they concern not the capabilities of students in specific courses, but the capabilities of graduates. Thus, Dr. Rogers writes, “Why do we collect data in lower level courses and average them with the data taken in upper level courses and pretend like we know what they mean? Are we really saying that all courses are equal in how they contribute to cumulative learning and that the complexity and depth/breadth at which students are to perform is the same in all courses for any given outcome? Why not only collect ‘evidence’ of student learning in the course where students have a culminating experience related to the outcome.” (emphasis added)

In sum, the six-year cycle described by Shryock and Reed is contrary to the intent of the ABET criteria, for multiple reasons. Nonetheless, with all the demands that exist on faculty time, even well-intentioned departments could easily fall into the trap of approaching assessment and accreditation as Shryock and Reed describe. A sustainable assessment plan is one that makes efficient use of faculty time. This paper examines ways of conducting program assessment by leveraging activities that are already occurring and information that is already available, rather than creating new data-gathering tasks that serve no purpose beyond program assessment.

Strategies for Assessing Program Outcomes

Instruments for assessing achievement of program outcomes can broadly be subdivided into direct and indirect instruments. Surveys of students, alumni and/or employers are common indirect instruments. This paper focuses on direct instruments, in which actual student work product is evaluated to determine how well students met programmatic outcomes. An outcome is a broad statement such as “The Chemical Engineering Program at Rowan University will produce graduates who demonstrate an ability to apply knowledge of mathematics, science, and engineering,” which mirrors ABET Outcome A. According to Dr. Gloria Rogers, the most difficult part of the assessment process, and one which most engineering programs do not do well, is “identification of a limited number of performance indicators for each outcome.”
Dr. Rogers notes that programs “...tend to go from broad outcomes to data collection without articulating specifically what students need to demonstrate...” In 2003, Felder outlined a strategy for bridging the gap between broad outcomes and clear, specific indicators of success. At the heart of the approach is the development of assessment rubrics. An example of a rubric, published previously in Chemical Engineering Education, is shown in Table 1. For each outcome, 3-6 indicators are identified, and these appear in the leftmost column. For each indicator, precise descriptions of four different levels of achievement are provided. When reviewing a sample of work product (exam, lab report, etc.), the evaluator simply moves from left to right until he/she finds the descriptor that accurately characterizes the student’s work. The Chemical Engineering department at Rowan University also conducted a study demonstrating that these rubrics provide excellent consistency among different raters evaluating the same exam or report. This result highlights one significant merit of the indicators: such inter-rater reliability would presumably not be present if the evaluator were making a single, holistic determination of whether a particular student “demonstrates an ability to apply knowledge of mathematics, science and engineering,” or if the evaluator were rating work on a scale from 1-4 with no specific description of what each number meant. Thus, a rubric like the one in Table 1 fills the need identified by Dr. Rogers; it can be used to assess how well students have achieved programmatic outcomes in an objective and quantitative way.

A drawback of the assessment rubric shown in Table 1 is that it is time-intensive; each sample of student work must be read and individually evaluated with the rubric. A more time-efficient strategy is to use information that is already available. The most obvious “direct” assessment instrument available is student grades. Assigning grades is a routine task, and tracking the fraction of students who earn A, B, and C in a course, or calculating the average score on a particular assignment, are data collection tasks that require essentially no “extra” effort on the part of faculty. However, ABET cautions against using grades as an assessment metric, because a grade is a holistic evaluation of whether a student has met all of the instructor’s expectations; a class of students with one very specific and widespread shortcoming may still earn good grades. Several recent examples exist of programs that address this concern by identifying tasks, such as individual homework problems or individual questions on exams, that are specific enough to reflect single outcomes, and tracking scores on these. Shryock and Reed call these “embedded indicators” and note that “it is important for the score of the activity to directly correlate to a specific outcome.” The assessment tool described here combines assessment rubrics with embedded indicators; a sketch of this combined approach appears below. Recent ASEE publications include several examples of rubrics used for programmatic assessment. Other recent ASEE publ
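To make the combined approach concrete, the sketch below models a consolidated rubric in which each embedded indicator is tied to one program outcome and scored on the four-level scale described above; a single scoring pass then yields both a report grade and outcome-level aggregates for programmatic assessment. This is a minimal illustration only: the indicator names, outcome assignments, weights, and grade mapping are hypothetical, not the actual rubric used in the Rowan University course.

```python
# Minimal sketch of a consolidated rubric: one scoring pass per report feeds
# both grading and programmatic assessment. The indicator names, weights, and
# grade mapping below are hypothetical, for illustration only.

from statistics import mean

# Each embedded indicator is tied to one program outcome and is scored on a
# 1-4 scale corresponding to the four achievement levels in the rubric.
RUBRIC = {
    "applies material and energy balances": {"outcome": "A", "weight": 0.4},
    "interprets economic analysis":         {"outcome": "A", "weight": 0.3},
    "communicates design decisions":        {"outcome": "G", "weight": 0.3},
}

def grade_report(scores: dict[str, int]) -> float:
    """Map one report's rubric scores (1-4 per indicator) to a 0-100 grade."""
    weighted = sum(RUBRIC[ind]["weight"] * level for ind, level in scores.items())
    return 100.0 * weighted / (4.0 * sum(r["weight"] for r in RUBRIC.values()))

def assess_outcomes(all_scores: list[dict[str, int]]) -> dict[str, float]:
    """Aggregate the same raw scores by program outcome for assessment."""
    by_outcome: dict[str, list[int]] = {}
    for scores in all_scores:
        for ind, level in scores.items():
            by_outcome.setdefault(RUBRIC[ind]["outcome"], []).append(level)
    return {outcome: round(mean(levels), 2) for outcome, levels in by_outcome.items()}

# Example: two capstone reports, each evaluated once with the rubric.
reports = [
    {"applies material and energy balances": 4,
     "interprets economic analysis": 3,
     "communicates design decisions": 3},
    {"applies material and energy balances": 2,
     "interprets economic analysis": 3,
     "communicates design decisions": 4},
]
print([round(grade_report(r), 1) for r in reports])  # grades for the gradebook
print(assess_outcomes(reports))                      # mean level per outcome
```

In this arrangement the weights affect only the grade computation; the outcome-level aggregation works directly from the raw 1-4 levels, so the same evaluation serves both purposes without any separate data-gathering step.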
References

[1] Shonda Bernadin, et al., “An Outcomes Driven Approach for Assessment: A Continuous Improvement Process,” 2010.
[2] Kevin Dahm, et al., “Rubric Development and Inter-Rater Reliability Issues in Assessing Learning Outcomes,” 2002.
[3] Mark Steiner, et al., “A Holistic Approach for Student Assessment in Project Based Multidisciplinary Engineering Capstone Design,” 2010.
[4] John Irwin, et al., “The Electrical Engineering Technology Program Outcomes Assessment Process: Closing the Loop,” 2009.
[5] Nidal Al-Masoud, et al., “Development and Implementation of an Integrated Outcomes Based Assessment Plan for a New Engineering Program,” 2009.
[6] Kevin Dahm, et al., “Rubric Development for Assessment of Undergraduate Research: Evaluating Multidisciplinary Team Projects,” 2004.
[7] Bruce Murray, et al., “Improving an ABET Course Assessment Process That Involves Marker Problems and Projects,” 2009.
[8] Rebecca Brent, et al., “Designing and Teaching Courses to Satisfy the ABET Engineering Criteria,” 2003.
[9] Kathleen Ossman, “An Assessment and Data Collection Process for Evaluating Student Progress on ‘A-K’ ABET Educational Outcomes,” 2010.
[10] Ronald Welch, “Assessment of ABET 3 A-K in an Open-Ended Capstone?” (AC 2010-1448), 2010.
[11] Kristi J. Shryock, et al., “ABET Accreditation: Best Practices for Assessment,” 2009.
[12] Lisa R. Lattuca, et al., “Engineering Change: A Study of the Impact of EC2000,” 2004.
[13] Kevin Dahm, “Practical, Efficient Strategies for Assessment of Engineering Projects and Engineering Programs,” 2010.
[14] Min-Sung Koh, et al., “Development of Course Assessment Metrics to Measure Program Outcomes Against ABET Criteria in a Digital Circuits Class,” 2009.
[15] Amir Jokar, et al., “Assessment of Program Outcomes for ABET Accreditation,” 2009.
[16] Massood Towhidnejad, et al., “An Assessment Strategy for a Capstone Course in Software and Computer Engineering,” 2009.