Technical writing is an important skill in all engineering disciplines. Many first-year engineering programs (FYEPs) include technical writing as a core course component in order to instill its importance, and to begin developing the skill, early in aspiring engineers. To assess student learning and provide feedback on technical writing, proper grading of these assignments is essential. This paper presents the preliminary assessment results of a new grading training program for teaching assistants (TAs) in a FYEP at a large land-grant institution, situating the findings in past research on grading writing assignments and preparing TAs. We summarize the approach used to improve TA grading and supplement this discussion with quantitative and qualitative assessment results. Finally, we provide recommendations, or “Tricks of the Trade,” for TAs who may be grading technical writing and for those responsible for training TAs in this type of grading.

Introduction

Engineering programs throughout the country are increasing the amount of technical writing in their curricula [1] because technical communication is recognized as a highly valued skill for engineers of all disciplines. While collaboration with both university and departmental writing centers can be valuable, the inclusion of technical writing in the context of engineering classes is essential [2]. As the amount of technical writing in the classroom increases, it is important to train those responsible for providing feedback, whether faculty or TAs. Despite the recognized need, a lack of training, along with TAs or faculty feeling ill-equipped to provide constructive feedback, can be a barrier to including effective technical writing in the classroom [3].

Graduate teaching assistant (GTA) training is important in general [4], as GTAs are expected to fill many roles in the classroom and carry many responsibilities, including providing feedback and evaluating student work [5,6]. However, there is no consistent method for training GTAs across universities [5-7], and only 17% of universities spend more than one day on formal training activities [7]. Broadly speaking, TA training efforts should be expanded to undergraduate teaching assistants (UTAs), should extend beyond a single day, and should include opportunities to practice teaching skills [8] such as lecturing, grading, providing feedback, and classroom management.

At our large, land-grant university, the FYEP conducts courses for over 2000 students each year. All engineering students are required to take one of two tracks: Fundamentals of Engineering (FE) or Fundamentals of Engineering for Honors (FEH). Each track teaches students problem solving, computer programming, technical graphics (visualization and sketching), CAD, and design. In addition to these topics, students are exposed to many hands-on labs and taught fundamentals of technical writing and communication. To grade the technical writing assignments, approximately 50 GTAs and 150 UTAs are employed in the program each year. Grading lab reports, memos, executive summaries, abstracts, and design reports is a difficult, time-consuming task. Additionally, consistency between sections of a course is of utmost importance: because these sections are intended to be as identical as possible, consistent grading (in both scores and feedback) is crucial for the program to succeed.
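One simple way to quantify this between-section consistency (our notation for illustration, not a formula from the training materials) is the standard deviation of the per-section average scores on a given assignment. With S course sections and $\bar{x}_s$ the average score in section s:

\[
\bar{x} = \frac{1}{S}\sum_{s=1}^{S}\bar{x}_s,
\qquad
\sigma_{\text{between}} = \sqrt{\frac{1}{S-1}\sum_{s=1}^{S}\left(\bar{x}_s-\bar{x}\right)^2}
\]

A smaller $\sigma_{\text{between}}$ for the same assignment in a later year would indicate that sections are being graded more uniformly.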
Therefore, there is a need for substantial technical writing grading training for the GTAs and UTAs in both FYEP tracks. The purpose of this paper is, first, to describe the technical writing grading training implemented in our program, highlighting the changes to the training across three years. Second, we aim to provide an early evaluation of the impact of this grading training program through both quantitative and qualitative assessment.

To collect quantitative data, we examined two technical writing assignments, one lab memo and one lab report, that were unchanged across this three-year period. For each assignment, we analyzed both the average score and the standard deviation of scores among all course sections in order to assess the consistency of our grading. The qualitative element of our assessment consisted of focus groups with the TAs about the grading training process. These focus groups, conducted for both GTAs and UTAs, asked for feedback concerning what went well, what could be improved, what was most useful, and what additional topics or resources would be worth implementing going forward. We feel this feedback is critical to developing a successful grading training program, as we strive to ensure that the time we ask of our TAs for training is seen as beneficial, important, and worthwhile by both the TAs and the program. Both the quantitative and qualitative assessment data are essential for understanding the impact of the training. Finally, we present a “Tricks of the Trade” section that outlines major themes from our focus group discussions with TAs, provided to help others who are interested in implementing similar TA training programs at their institutions.

Background

Our FYEP has used various techniques to train TAs in grading technical writing. In the following sections, we describe the process used in past years and the new approach implemented in Autumn 2014; this information gives context for the results presented later in the paper. It should be noted that the training began in FEH but was later adapted for FE as well.

Previous Methods of TA Training for Grading Technical Writing

Prior to Autumn 2013, the grading training for FEH typically consisted of a single, two-hour session at the beginning of the semester in which the TAs worked through the grading of a sample lab report in groups. The lead GTAs facilitated the training and answered questions. After this session, TAs were only approached again, with a follow-up email or meeting, if major anomalies were noted in their grading.

Beginning in Autumn 2013, a new grading training effort, based on the TA training performed at Purdue [9] for grading model-eliciting activities (MEAs), was piloted in FEH. The first step in the process was to collect sample reports representing a range of overall grades. A team of GTAs and faculty members graded the reports independently and then met to discuss the scoring on the detailed grading rubric. Through this discussion, a baseline score and an acceptable range were set for each grade category on each of the collected reports. For the pilot grading training session, TAs were given instructions on grading for the semester and then worked with other TAs to grade a sample lab report. Afterwards, they were provided with a copy of the same sample report, marked up with feedback, to use as an example.
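To make the calibration mechanics concrete, the following is a minimal sketch (with hypothetical rubric categories, scores, and tolerances; not the program's actual tooling) of how a TA's scores on a training report could be checked against the baseline score and acceptable range set for each grade category:

```python
# Minimal sketch of baseline calibration checking (hypothetical data).
# Each rubric category has a baseline score and an acceptable +/- range
# agreed upon by the GTA/faculty grading team for a given sample report.

baseline = {
    "Technical Content": (18.0, 2.0),  # (baseline score, acceptable +/- range)
    "Organization":      (8.5, 1.0),
    "Grammar & Style":   (9.0, 1.0),
}

# A hypothetical TA's scores on the same sample report.
ta_scores = {"Technical Content": 14.5, "Organization": 8.0, "Grammar & Style": 10.5}

def flag_categories(ta_scores, baseline):
    """Label each rubric category relative to the baseline's acceptable range."""
    flags = {}
    for category, score in ta_scores.items():
        target, tolerance = baseline[category]
        if score < target - tolerance:
            flags[category] = "too harsh"
        elif score > target + tolerance:
            flags[category] = "too lenient"
        else:
            flags[category] = "within range"
    return flags

for category, verdict in flag_categories(ta_scores, baseline).items():
    print(f"{category}: {verdict}")
```

Aggregated across the training assignments, flags like these could feed the per-TA feedback notes described next.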
The TAs then received four writing assignments to grade as part of this training, grading them independently and returning them to the lead GTA. Their scores were compared to the baseline scores. Additionally, the feedback written on each report for the student was examined and judged on both quantity and quality. After this information was collected, each TA was given a copy of their scoring spreadsheet along with a feedback note about potential improvements to focus on in future grading. The TAs graded the four writing assignments in sets of two, receiving feedback in between, with instructions to apply that feedback when grading the final two assignments. Details about this feedback process, which remained unchanged, are provided in the “Implementation of New Grading Training Program” section.

Moving Towards New Training Methods for Grading Technical Writing

While this effort was a respectable first step toward more comprehensive technical writing grading training, one missing component was a follow-up intervention for those who were still identified as grading either “too harshly” or “too leniently” after the last set of grading assignments was returned. Beyond the feedback provided (an email with their scores compared to the baseline and a short note on their grading), no additional measures were taken to help TAs further calibrate their scores and learn from the feedback. This missing piece could be one reason the initial pilot did not result in a lower standard deviation in average report and memo grades between sections.

Additionally, this version of the training was implemented only in FEH. A further enhancement to the pilot was to expand the training into the second course in the sequence. While the initial pilot training was intended to impact both semesters, many employment and grading responsibilities change between semesters; thus, it was possible for a number of TAs to enter the spring semester without any training in grading technical writing. Although the second course has fewer lab writing assignments, it adds a significant technical writing component in the grading of the design project report. Therefore, it made sense to design the training program to span both terms.

While this pilot was implemented in FEH, the TA training for the FE track consisted of a short lecture during orientation about technical communication and encouragement to attend a writing workshop conducted by Engineering Communications within the College of Engineering. However, these workshops have had limited success due to low attendance and TA turnover. Therefore, this training improvement project sought to address these shortcomings and to establish a more consistent training procedure for TAs across the FYEP.

Implementation of New Grading Training Program

The updated and expanded pilot training program began in Autumn 2014. This project consisted of three phases.
References

[1] Gillian H. Roehrig et al., "Growing a Garden without Water: Graduate Teaching Assistants in Introductory Science Laboratories at a Doctoral/Research University," 2004.
[2] Mike Ekoniak et al., "Improving student writing through multiple peer feedback," 2013 IEEE Frontiers in Education Conference (FIE), 2013.
[3] Daytona Beach et al., "A Teaching Assistant Training Protocol for Improving Feedback on Open-Ended Engineering Problems in Large Classes," 2013.
[4] Gillian H. Roehrig et al., "Graduate Teaching Assistants and Inquiry-Based Instruction: Implications for Graduate Teaching Assistant Training," 2003.
[5] Janet Mancini Billson et al., "Focus Groups: A Practical Guide for Applied Research," 1989.
[6] David M. Shannon et al., "TA Teaching Effectiveness: The Impact of Training and Teaching Experience," 1998.
[7] Paul D. Travers, "Better Training for Teaching Assistants," 1989.
[8] Kristin Walker et al., "Integrating Writing Instruction into Engineering Courses: A Writing Center Model," 2000.
[9] Linda Ann Riley et al., "Integrating Communication and Engineering Education: A Look at Curricula, Courses, and Support Systems," 2003.
[10] Heidi A. Diefes-Dux et al., "Socialization Experiences Resulting from Doctoral Engineering Teaching Assistantships," 2013.