Reliability in the Assessment of Program Quality by Teaching Assistants During Code Reviews

Scott, Michael and Ghinea, Gheorghita (2015) Reliability in the Assessment of Program Quality by Teaching Assistants During Code Reviews. In: Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education, July 6-8, 2015, Vilnius, Lithuania.

sig-alternate-iticse15.pdf - Accepted Version
Available under License Creative Commons Attribution Share Alike.

Download (135kB)

Abstract / Summary

It is of paramount importance that formative feedback is meaningful in order to drive student learning. Achieving this, however, relies upon a clear and constructively aligned model of quality being applied consistently across submissions. This poster presentation raises concerns about the inter-rater reliability of code reviews conducted by teaching assistants in the absence of such a model. Five teaching assistants each reviewed 12 purposely selected programs submitted by introductory programming students. An analysis of their reliability revealed that while teaching assistants were self-consistent, they each assessed code quality in different ways. This suggests a need for standard models of program quality, alongside supporting rubrics and other tools, to be used during code reviews to improve the reliability of formative feedback.
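The record does not specify which statistics the authors used. As a minimal, hypothetical sketch of the kind of agreement check the abstract alludes to, the snippet below computes mean pairwise correlation between raters for a 5-assistant, 12-program rating matrix. The scores, the 1-5 scale, and the choice of pairwise Pearson correlation are illustrative assumptions only, not data or methods from the study.

import numpy as np

# Hypothetical ratings: 5 teaching assistants (rows) each scoring the same
# 12 student programs (columns) on a 1-5 quality scale. Values are invented
# purely for illustration; they are not data from the study.
ratings = np.array([
    [3, 4, 2, 5, 1, 3, 4, 2, 5, 3, 2, 4],
    [4, 4, 3, 5, 2, 2, 5, 1, 4, 3, 3, 5],
    [2, 3, 2, 4, 1, 4, 3, 3, 5, 2, 1, 3],
    [5, 5, 4, 5, 3, 3, 5, 2, 4, 4, 3, 5],
    [3, 2, 1, 4, 2, 5, 3, 4, 5, 1, 2, 2],
])

# Pairwise Pearson correlations between raters: a crude gauge of whether
# the assistants rank program quality in the same way.
corr = np.corrcoef(ratings)

# Average the off-diagonal entries to obtain a single agreement figure.
n = corr.shape[0]
mean_pairwise = (corr.sum() - n) / (n * (n - 1))
print(f"Mean pairwise inter-rater correlation: {mean_pairwise:.2f}")

A low mean pairwise correlation, even when each rater is internally consistent, would mirror the pattern reported above: assistants applying different implicit models of program quality.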

Item Type: Conference or Workshop Item (Poster)
Subjects: Computer Science, Information & General Works
Courses by Department: The School of Film & Television > Games and Animation
Depositing User: Michael Scott
Date Deposited: 06 Oct 2015 14:40
Last Modified: 11 Nov 2022 16:33
