DOI: 10.1145/963770.963772

Evaluating collaborative filtering recommender systems

Jonathan L. Herlocker, Joseph A. Konstan, Loren G. Terveen, John T. Riedl

Recommender systems have been evaluated in many, often incomparable, ways. In this article, we review the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole. In addition to reviewing the evaluation strategies used by prior researchers, we present empirical results from the analysis of various accuracy metrics on one content domain, where all the tested metrics collapsed roughly into three equivalence classes. Metrics within each equivalence class were strongly correlated, while metrics from different equivalence classes were uncorrelated.
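The correlation analysis summarized above can be illustrated with a minimal sketch. This is not the authors' actual procedure or data: the recommender "variants", the synthetic ratings, and the choice of MAE, RMSE, and Pearson correlation as the compared metrics are all assumptions made purely for illustration. The idea is that metrics whose scores correlate strongly across systems rank those systems the same way and would fall into one equivalence class.

```python
# Hedged sketch: comparing accuracy metrics across hypothetical recommender
# variants. The data below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

def mae(pred, actual):
    """Mean absolute error between predicted and actual ratings."""
    return np.mean(np.abs(pred - actual))

def rmse(pred, actual):
    """Root mean squared error between predicted and actual ratings."""
    return np.sqrt(np.mean((pred - actual) ** 2))

def pearson(pred, actual):
    """Pearson correlation of predictions with actual ratings."""
    return np.corrcoef(pred, actual)[0, 1]

metrics = {"MAE": mae, "RMSE": rmse, "Pearson": pearson}

# Simulate several recommender variants with increasing prediction noise.
actual = rng.integers(1, 6, size=500).astype(float)
variants = {f"variant_{i}": actual + rng.normal(0, 0.3 + 0.2 * i, size=500)
            for i in range(6)}

# Score every variant with every metric.
scores = {name: np.array([fn(pred, actual) for pred in variants.values()])
          for name, fn in metrics.items()}

# Correlate metric scores across variants: metric pairs with |r| near 1
# rank the variants the same way and would share an equivalence class.
names = list(scores)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        r = np.corrcoef(scores[a], scores[b])[0, 1]
        print(f"{a} vs {b}: r = {r:.2f}")
```

In this toy setup MAE and RMSE track each other closely while the Pearson metric can diverge, loosely mirroring the kind of grouping the article reports, though the actual equivalence classes depend on the metrics and dataset studied there.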
