DOI: 10.1177/00986283241282696 ISSN: 0098-6283

Grading the Graders: Comparing Generative AI and Human Assessment in Essay Evaluation

Elizabeth L. Wetzler, Kenneth S. Cassidy, Margaret J. Jones, Chelsea R. Frazier, Nickalous A. Korbut, Chelsea M. Sims, Shari S. Bowen, Michael Wood

Background

Generative artificial intelligence (AI) represents a potentially powerful, time-saving tool for grading student essays. However, little is known about how AI-generated essay scores compare to human instructor scores.

Objective

The purpose of this study was to compare the essay grading scores produced by AI with those of human instructors to explore similarities and differences.

Method

Eight human instructors and two versions of OpenAI's ChatGPT (3.5 and 4o) independently graded 186 deidentified student essays from an introductory psychology course using a detailed rubric. Scoring consistency was analyzed using Bland-Altman and regression analyses.
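The Bland-Altman approach compares two raters by plotting score differences against score means: the mean difference estimates overall bias, the limits of agreement bound typical disagreement, and regressing differences on means detects proportional bias. A minimal sketch of these computations, using fabricated illustrative scores (not the study's data):

```python
import numpy as np

def bland_altman(human, ai):
    """Bland-Altman agreement statistics for two sets of scores.

    Returns the mean difference (bias), the 95% limits of agreement,
    and the slope from regressing differences on means. A nonzero
    slope indicates proportional bias: e.g., an AI grader that is
    lenient on weak essays and strict on strong ones produces a
    negative slope for (ai - human) differences.
    """
    human = np.asarray(human, dtype=float)
    ai = np.asarray(ai, dtype=float)
    diffs = ai - human
    means = (ai + human) / 2
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    slope, _intercept = np.polyfit(means, diffs, 1)
    return bias, loa, slope

# Fabricated scores for illustration only: the AI inflates low scores
# and deflates high ones, the pattern the study describes.
human = np.array([50, 60, 70, 80, 90, 100], dtype=float)
ai = np.array([60, 67, 73, 79, 86, 92], dtype=float)
bias, loa, slope = bland_altman(human, ai)
```

Here the negative slope captures the lenient-at-low, strict-at-high pattern reported in the Results.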

Results

AI scores from ChatGPT 3.5 were, on average, higher than human scores, whereas average scores from ChatGPT 4o were more similar to human scores. Notably, both AI versions graded more leniently than human instructors at lower performance levels and more strictly at higher levels, reflecting proportional bias.

Conclusion

Although AI may offer potential for supporting grading processes, the pattern of results suggests that AI and human instructors differ in how they score using the same rubric.

Teaching Implications

Results suggest that educators should be aware that AI grading of psychology writing assignments that require reflection or critical thinking may differ markedly from scores generated by human instructors.
