As Good as a Coin Toss: Human Detection of AI-Generated Content
Di Cooke, Abigail Edwards, Sophia Barkoff, Kathryn Kelly

One of today's principal defenses against weaponized synthetic media remains the ability of targeted individuals to visually or auditorily recognize AI-generated content when they encounter it. However, as the realism of synthetic media continues to improve rapidly, it is vital to have an accurate understanding of just how susceptible people currently are to being misled by convincing but false AI-generated content. To that end, we conducted a perceptual study with 1,276 participants to assess how well people could distinguish between authentic and synthetic images, audio, video, and audiovisual media. Because AI-generated content is proliferating across online platforms in particular, the surveys were designed to emulate some of the ecological conditions typical of such platforms. We find that, on average, people struggled to distinguish between synthetic and authentic media, with mean detection performance close to chance (50%). Accuracy rates worsen when stimuli contain any degree of synthetic content, when they feature a foreign language, and when the media is a single modality. People are also less accurate at identifying synthetic images that feature human faces, and at judging audiovisual stimuli of heterogeneous authenticity. Finally, we find that a higher degree of prior knowledge about synthetic media does not significantly affect detection accuracy, but age does, with older individuals performing worse than their younger counterparts. Collectively, these results highlight that it is no longer feasible to rely on people's perceptual capabilities to protect themselves against the growing threat of weaponized synthetic media, and that the need for alternative countermeasures is more critical than ever.