Serving as a reader and grader for Informatics 115 gave me a rigorous, close-range view of how students apply formal software testing principles in practice. I evaluated over 150 student implementations, focusing not only on functional correctness but also on edge-case handling, assertion quality, and overall test design. Reviewing a wide range of submissions exposed recurring misconceptions in how students reason about coverage and failure modes, which strengthened my ability to analyze code critically and systematically.
A key part of my role involved writing detailed feedback that explained why certain tests were insufficient and how they could be improved. I emphasized reasoning about boundary conditions, input partitioning, and meaningful assertions rather than surface-level fixes. This required translating abstract testing concepts into clear, actionable guidance that students could apply in future assignments. Through this work, I developed a stronger appreciation for how evaluation criteria shape students' understanding of software quality.
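To make that concrete, the short sketch below illustrates the style of test design I encouraged. The function under test is a hypothetical example of my own, not an actual course assignment: each input partition gets a representative case, both sides of every boundary are exercised, and assertions check specific expected outcomes rather than the mere absence of errors.

    import unittest

    def classify_score(score: int) -> str:
        """Hypothetical function under test: maps a 0-100 score to a pass/fail label."""
        if not 0 <= score <= 100:
            raise ValueError("score must be between 0 and 100")
        return "pass" if score >= 70 else "fail"

    class TestClassifyScore(unittest.TestCase):
        def test_failing_partition(self):
            # Representative value from the 0-69 partition.
            self.assertEqual(classify_score(50), "fail")

        def test_passing_partition(self):
            # Representative value from the 70-100 partition.
            self.assertEqual(classify_score(85), "pass")

        def test_boundary_between_partitions(self):
            # 69/70 is where behavior changes; both sides need coverage.
            self.assertEqual(classify_score(69), "fail")
            self.assertEqual(classify_score(70), "pass")

        def test_domain_boundaries(self):
            # Lowest and highest legal inputs.
            self.assertEqual(classify_score(0), "fail")
            self.assertEqual(classify_score(100), "pass")

        def test_out_of_range_inputs_rejected(self):
            # Assert on the specific failure mode, not just that something went wrong.
            with self.assertRaises(ValueError):
                classify_score(-1)
            with self.assertRaises(ValueError):
                classify_score(101)

    if __name__ == "__main__":
        unittest.main()

Tests like these were the standard I pointed students toward: small, named after the partition or boundary they cover, and failing with a message that identifies exactly which behavior broke.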
In addition to grading, I held weekly office hours and midterm review sessions where I worked directly with students to debug failing tests and interpret autograder results. These interactions reinforced the importance of clarity and consistency in automated assessment tools. I also provided feedback on autograder behavior and course materials, which gave me insight into the challenges of designing scalable and fair evaluation systems for large technical courses.