Work in Progress: Introduction of Failure Analysis to a First-year Robotics Course

This work-in-progress paper describes the first implementation of a failure analysis component added to an existing first-year cornerstone project course. The first-year engineering program at The Ohio State University provides honors students with the opportunity to engage in an intensive design-and-build robotics project. The primary educational objective of this course is to give students a realistic engineering experience, so that by the end of their first year they can make an educated decision about whether engineering is the profession they want for themselves and, if so, which particular engineering discipline they want as a major. To that end, the project includes many aspects of real-world engineering, including teamwork, budgeting, planning a project schedule, communicating orally and in writing, documenting, programming a microcontroller, constructing and wiring a device, and, of course, designing, testing, and refining a product. The robot project was first conceived over two decades ago, and it continues to evolve both technically and pedagogically. One refinement, made in the spring of 2017, was the addition of a failure analysis component to the course. Teams have always been required to participate in performance tests at several points during the term to determine whether their product is progressing according to schedule and executing as intended. The new element required any team that scored fifty percent or less on a performance test to engage in a post-performance test analysis: the team was to identify the causes of the robot’s failure to achieve the goals of the test, along with likely strategies for remedying the problems identified. In the first offering of the course with this requirement, there were four performance tests, and about half of the participating teams engaged in one or more failure analyses. This paper describes the common causes students identified for their failures, as well as the range of solutions they proposed for fixing them. Additionally, a question on the course-end survey solicited feedback from the students regarding the educational value of the post-performance test failure analysis. Student responses were mixed but suggested refinements to the assignment for future offerings of the course.

Background and Rationale

Mistakes are part of the learning process. Educators recognize this, but students often struggle with the notion, and as a result many of them miss out on valuable learning opportunities. In one midterm exam study, a survey given to 285 freshman engineering majors found that only 25% of students reported trying to learn from their mistakes while the material was fresh in their minds. The majority of the students put the test away and often never looked at it again. In another anonymous survey of 456 first-year engineering students, only 21% reported that they would use the exam again later, and many specified that they would do so only if the final were cumulative [1]. These findings prompted the First-Year Engineering Honors Program at The Ohio State University (OSU) to implement exam corrections as a mandatory assignment for any student scoring less than 90% on an exam. It has been used for the last seven years, and the format is very similar to that described by Henderson and Harper in their introductory physics courses [2]. Instead of the instructor going over the test when it is returned, students have a few days to correct any mistakes, explain their errors, and submit this work as part of a homework assignment.
This assignment is graded largely based on effort, and the entire class reviews the exam, if necessary, only after submitting the exam corrections assignment. The goal is to help students use their exams in a formative way and learn from their mistakes. Henderson and Harper describe several small investigations that consistently show educational value in such an activity [2].

The same logic behind the exam corrections assignment was applied to another form of assessment in the First-Year Engineering Honors robotics course at OSU during the 2017 spring semester. In this class, students work in teams to design and build an autonomous robot to complete various tasks on a robotics course [3]. The design schedule for this project includes regular robot performance tests to help prepare students for both the individual and head-to-head competitions near the end of the semester.

The (Dreaded) Performance Tests

Throughout this design, build, and test robot project, teams are evaluated regularly on their robot’s performance. At several points during the project, teams are asked to complete one assigned task, defined in the robot scenario, on the course during a performance test. The main goal of these performance tests is to ensure continual progress throughout the project, especially on some of the more challenging tasks. By the time students approach the end of the project, they will have completed four different performance tests that cover all of the objectives, which helps ensure that each team has demonstrated at least a baseline of performance on each task. An overview of the performance tests for the 2017 robot competition is provided in Appendix A. The scenario that year was that robots were assisting with tasks required for scientific research near a volcano.

Performance tests focus primarily on a single task each week. Students can choose to develop new robot control code each week or build on the code from previous tests. Teams that make their code more modular tend to have an easier time programming each new performance test, and they also have an easier time when all of the tasks must be combined later in the project (a sketch of one possible organization follows below). As the design of the robot progresses, new mechanical, electrical, or sensor changes are often made each week in addition to the software changes. Thus, each performance test often involves new or modified software and hardware.

The performance tests are a graded component of the class. Each test is broken into sub-components with associated points. Each team is allowed up to five official attempts per performance test, and the highest score attained counts toward the course grade. To be considered official, an attempt must be observed and scored by a designated member of the instructional team and may only be completed during a class period. While a performance test may be completed in a class prior to the deadline, the deadline is firm on the day of the test.

Additionally, there are two bonus opportunities: a stretch bonus and an early completion bonus. Both are provided to encourage teams to move beyond the weekly expectations and complete the assigned tasks in a timely fashion. The early completion bonus provides an incentive for teams to get ahead of schedule. The stretch bonus is intended to encourage teams to look beyond that week’s test and think about how the various elements of the challenge fit together; as such, it often involves navigating to the location of another task on the course.
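Because the modularity point above is central to how teams cope with weekly software and hardware changes, a brief code sketch may help make it concrete. The C++ below shows one possible modular organization; the drive and sensor helpers (driveForward, turnDegrees, waitForStartLight) and the specific tasks are hypothetical stand-ins for whatever microcontroller library and scenario tasks a team actually uses, not functions from the course materials.

```cpp
#include <iostream>

// Hypothetical low-level helpers, stubbed for illustration; a real robot
// would call into its motor and sensor library here instead of printing.
void driveForward(double inches)  { std::cout << "drive " << inches << " in\n"; }
void turnDegrees(double degrees)  { std::cout << "turn " << degrees << " deg\n"; }
void waitForStartLight()          { std::cout << "waiting for start light\n"; }

// One function per scenario task, reusable from week to week.
void navigateToTaskSite() {
    driveForward(12.0);
    turnDegrees(90.0);
    driveForward(6.0);
}

void performTask() {
    driveForward(2.0);   // e.g., nudge into a button or lever
    driveForward(-2.0);  // back away
}

// Each week's performance test is then just a composition of task functions.
void performanceTest1() {
    waitForStartLight();
    navigateToTaskSite();
    performTask();
}

// The final competition run reuses the same building blocks.
void fullCompetitionRun() {
    waitForStartLight();
    navigateToTaskSite();
    performTask();
    // ...additional tasks are appended here as they are completed...
}

int main() {
    performanceTest1();  // swap in fullCompetitionRun() for competition day
    return 0;
}
```

The design choice illustrated here is that each scenario task lives in its own function, so a new performance test, or the eventual full competition run, is assembled by composing existing building blocks rather than by copying and editing a monolithic script.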
The instructional team is able to evaluate how each team is doing and devote specific attention to teams that may be struggling. This helps ensure that most teams see success by the time they reach the individual competition, the first point at which they are evaluated on whether their robots can complete the entire course.

Each performance test is graded out of twenty points and is typically broken down into four subtasks at five points apiece. The two bonus opportunities can add a total of five additional points to the score. The specific point breakdown for the performance test that is the focus of the analysis in this paper is shared in the discussion that follows. Each test contributes 1.5% to the final course grade; a team scoring 18 of the 20 base points, for example, would earn 0.9 × 1.5% = 1.35% toward its final grade.

Even with a relatively small impact on grades, students take the performance tests very seriously. In recent years, instructors observed that students placed a great deal of pressure on themselves to execute each performance test perfectly, yet when a given performance test did not go as planned, students rarely spent time discussing with their group why their robot had failed. Lacking that analysis, teams would instead focus on the next performance test while staying with the failed design, or they would change their design without carefully thinking the change through. Some teams approached each performance test with such tunnel vision that they made major alterations to their robot each week without considering how they would eventually need to perform all of the tasks together. These observations prompted the creation of the post-performance test assignment during the spring semester of 2017. As with the exam corrections, this assignment was constructed to help students use the performance test in a formative way.

Failure is a natural part of the engineering design process, and it is important for students to view it that way rather than as something to be dreaded and feared. As Lottero-Perdue and Parry state, “Practicing engineers acknowledge failure as a normal and expected outcome as part of the iterative nature of designing solutions to problems, although the end goal is that the solution ... is not intended to fail” [4]. It is also important to note that students must learn to approach iteration as something more intentional than blind trial and error: failures must be analyzed before adjustments are made to the design.

Details of the Failure Analysis Assignment and Overall Impressions

The failure analysis assignment is presented in Figure 1 below. Each post-performance test reflection is worth ten points, or slightly less than 1% of a student’s final grade. This number of points is consistent with other assignments throughout the semester that require a similar amount of work. Generally speaking, these second-semester honors students tend to take