AI and the advent of the cyborg behavioral scientist
Geoff Tomaino, Alan D. J. Cooke, Jim Hoover

Abstract
Large Language Models have been incorporated into an astounding breadth of professional domains. Given their capabilities, many intellectual laborers naturally question the extent to which these AI models will be able to usurp their own jobs. As behavioral scientists, we undertook an effort to examine the extent to which an AI can perform our roles. To achieve this, we used commercially available AIs (e.g., ChatGPT 4) to perform each step of the research process, culminating in an AI‐written manuscript. We attempted to intervene as little as possible in the AI‐led idea generation, empirical testing, analysis, and reporting. This allowed us to assess the limits of AIs in a behavioral research context and to propose guidelines for behavioral researchers wanting to use AI. We found that the AIs were adept at some parts of the process and wholly inadequate at others. Our overall recommendation is that behavioral researchers use AIs judiciously and carefully monitor the outputs for quality and coherence. We additionally draw implications for editorial teams, doctoral student training, and the broader research ecosystem.