Analysis of student understanding in short‐answer explanations to concept questions using a human‐centered AI approach
Harpreet Auby, Namrata Shivagunde, Vijeta Deshpande, Anna Rumshisky, Milo D. Koretsky

Abstract
Background
Analyzing students' short‐answer written justifications to conceptually challenging questions has proven helpful for understanding student thinking and improving conceptual understanding. However, such qualitative analyses are limited by the burden of coding large amounts of text.
Purpose
We apply dense and sparse Large Language Models (LLMs) to code students' short‐answer justifications to concept questions.
Design/Method
We first identify the cognitive resources students use through human coding of responses to seven questions. We then compare the performance of four dense and sparse LLMs against the human coding.
Findings
In a sample question, we analyze 904 responses to identify 48 unique cognitive resources, which we then organize into six themes. In contrast to recommendations in the literature, students who activated molecular resources were less likely to answer correctly. This example illustrates the usefulness of qualitatively analyzing large datasets. Of the models compared, the sparse open‐source model Mixtral performed best relative to the human coding.
Conclusions
Open‐source models like Mixtral have the potential to perform well when coding short‐answer justifications to challenging concept questions. However, further fine‐tuning is needed to make them robust enough for use with a resources‐based framing.