DOI: 10.1002/aaai.12118

An analysis of Watson vs. BARD vs. ChatGPT: The Jeopardy! Challenge

Daniel E. O'Leary

Abstract

The recently released BARD and ChatGPT have generated substantial interest from a range of researchers and institutions concerned about their impact on education, medicine, law, and more. This paper uses questions from the Watson Jeopardy! Challenge to compare BARD, ChatGPT, and Watson. Using those Jeopardy! questions, we find that on high-confidence Watson questions the three systems perform with similar accuracy. We also find that both BARD and ChatGPT perform with the accuracy of a human expert and that the sets of their correct answers are highly similar, as rated by a Tanimoto similarity score. However, we also find that both systems can change their solutions to the same input information on subsequent uses: when given the same Jeopardy! category and question multiple times, both BARD and ChatGPT can generate different and conflicting answers. Accordingly, the paper examines the characteristics of some of the questions that produce different answers to the same inputs. Finally, the paper discusses some of the implications of these differing answers and the impact of this lack of reproducibility on testing such systems.
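For readers unfamiliar with the Tanimoto measure mentioned above, the following is a minimal sketch of how such a similarity score can be computed for two sets of correctly answered questions; the question identifiers and the `tanimoto` helper are hypothetical illustrations and are not taken from the paper.

```python
# Illustrative sketch (not the paper's code): Tanimoto similarity between
# the sets of questions each system answered correctly.
# For sets, the Tanimoto score equals |A intersect B| / |A union B|.

def tanimoto(a: set, b: set) -> float:
    """Return the Tanimoto (Jaccard) similarity of two sets."""
    if not a and not b:
        return 1.0  # treat two empty sets as identical
    return len(a & b) / len(a | b)

# Hypothetical example: question IDs answered correctly by each system.
bard_correct = {"Q1", "Q2", "Q3", "Q5", "Q8"}
chatgpt_correct = {"Q1", "Q2", "Q3", "Q6", "Q8"}

print(f"Tanimoto similarity: {tanimoto(bard_correct, chatgpt_correct):.2f}")
# 4 shared correct answers out of 6 distinct questions -> 0.67
```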
