Scott, Michael (ORCID: https://orcid.org/0000-0002-6803-1490) and Ghinea, Gheorghita (2015) Reliability in the Assessment of Program Quality by Teaching Assistants During Code Reviews. In: Proceedings of the 2015 ACM Conference on Innovation and Technology in Computer Science Education, July 6-8, 2015, Vilnius, Lithuania.
Text: sig-alternate-iticse15.pdf (Accepted Version). Available under License: Creative Commons Attribution Share Alike. Download (135kB).
Abstract / Summary
It is of paramount importance that formative feedback is meaningful in order to drive student learning. Achieving this, however, relies upon a clear and constructively aligned model of quality being applied consistently across submissions. This poster presentation raises concerns about the inter-rater reliability of code reviews conducted by teaching assistants in the absence of such a model. Five teaching assistants each reviewed 12 purposely selected programs submitted by introductory programming students. An analysis of their reliability revealed that while teaching assistants were self-consistent, they each assessed code quality in different ways. This suggests a need for standard models of program quality, alongside supporting rubrics and other tools, to be used during code reviews to improve the reliability of formative feedback.
| Item Type: | Conference or Workshop Item (Poster) | 
|---|---|
| Subjects: | Computing & Data Science Education |
| Department: | School of Film & Television | 
| Depositing User: | Michael Scott | 
| Date Deposited: | 06 Oct 2015 14:40 | 
| Last Modified: | 08 Aug 2024 10:00 | 
| URI: | https://repository.falmouth.ac.uk/id/eprint/1633 | 