DOI: 10.1287/mnsc.2022.03316 ISSN: 0025-1909

Digital Lyrebirds: Experimental Evidence That Voice-Based Deep Fakes Influence Trust

Scott Schanke, Gordon Burtch, Gautam Ray

We consider the pairing of audio chatbot technologies with voice-based deep fakes, that is, voice clones, examining the potential of this combination to induce consumer trust. We report on a set of controlled experiments based on the investment game, evaluating how voice cloning and chatbot disclosure jointly affect participants’ trust, reflected by their willingness to play with an autonomous, AI-enabled partner. We observe evidence that voice-based agents garner significantly greater trust from subjects when imbued with a clone of the subject’s voice. Recognizing that these technologies present not only opportunities but also the potential for misuse, we further consider the moderating impact of AI disclosure, a recent regulatory proposal advocated by some policymakers. We find no evidence that AI disclosure attenuates the trust-inducing effect of voice clones. Finally, we explore underlying mechanisms and contextual moderators for the trust-inducing effects, with an eye toward informing future efforts to manage and regulate voice-cloning applications. We find that a voice clone’s effects operate, at least in part, by inducing a perception of homophily and that the effects are increasing in the clarity and quality of generated audio. Implications of these results for consumers, policymakers, and society are discussed.

This paper was accepted by D. J. Wu for the Special Issue on the Human-Algorithm Connection.

Funding: This work was supported by funding from the University of Wisconsin-Milwaukee Research Assistance Fund.

Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2022.03316.
