- Authors: Umair Z. Ahmed, Shubham Sahai, Ben Leong, and Amey Karkare
- Published: February 2025
- Source: Proceedings of the 56th ACM Technical Symposium on Computer Science Education (SIGCSE TS 2025)
- Location: Pittsburgh, PA, USA
- Document Link: https://sigcse2025.sigcse.org/details/sigcse-ts-2025-Papers/87/Feasibility-Study-of-Augmenting-Teaching-Assistants-with-AI-for-CS1-Teaching
This study explores a hybrid model in which human Teaching Assistants (TAs) review and modify AI-generated feedback on programming exercises. Although students perceived the feedback as higher quality, the hybrid approach did not consistently improve student performance and exposed a risk of TA complacency.

Conclusion: Augmenting human tutors with AI is not a "silver bullet"; it requires careful design and TA training to ensure tutors remain critical and engaged in the feedback loop.
- The Hybrid Model: Rather than replacing humans with AI, this research tests a "human-in-the-loop" system in which TAs use AI-generated drafts to deliver faster and more accurate feedback to students.
- Mixed Results: A large-scale randomized intervention with 185 students showed that while students viewed AI-augmented feedback positively, it did not necessarily lead to higher test scores or better learning outcomes.
- The Problem of Complacency: A significant finding was "TA complacency": some TAs over-relied on the AI, failing to catch and correct subtle inaccuracies in the AI-generated guidance.
- Efficiency vs. Accuracy: Although AI can speed up the feedback process, the study suggests that without rigorous oversight, the quality of pedagogical guidance may actually suffer because of human over-reliance on machine output.