DOI: 10.3390/app142311002 ISSN: 2076-3417

Time-Varying Preference Bandits for Robot Behavior Personalization

Chanwoo Kim, Joonhyeok Lee, Eunwoo Kim, Kyungjae Lee

Robots are increasingly employed in diverse services, from room cleaning to coffee preparation, making an accurate understanding of user preferences essential. Traditional preference-based learning allows robots to learn these preferences through iterative queries about desired behaviors. However, these methods typically assume that human preferences are static. In this paper, we challenge this assumption by accounting for the dynamic nature of human preferences and introduce the discounted preference bandit method to handle such changes. The algorithm adapts to evolving human preferences and supports seamless human–robot interaction through effective query selection. Our approach outperforms existing methods in time-varying scenarios across three key performance metrics.
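To make the core idea concrete, the sketch below shows one generic way to combine discounting with preference (dueling) bandits: past pairwise comparisons are exponentially forgotten so the estimate can track a drifting user, and the next query is the pair whose outcome is most uncertain. This is a hypothetical illustration of the discounting principle, not the paper's exact algorithm; the class name, the discount factor `gamma`, and the uncertainty-based query rule are all assumptions.

```python
class DiscountedPreferenceBandit:
    """Illustrative dueling bandit with exponential forgetting of old
    feedback (a generic sketch, not the paper's exact method)."""

    def __init__(self, n_arms, gamma=0.95):
        self.n_arms = n_arms
        self.gamma = gamma  # discount factor in (0, 1]; smaller = faster forgetting
        # wins[i][j]: discounted count of comparisons where arm i beat arm j
        self.wins = [[0.0] * n_arms for _ in range(n_arms)]

    def update(self, winner, loser):
        # Exponentially down-weight all past comparisons, then add the new one,
        # so recent feedback dominates when the user's preference drifts.
        for i in range(self.n_arms):
            for j in range(self.n_arms):
                self.wins[i][j] *= self.gamma
        self.wins[winner][loser] += 1.0

    def pref_prob(self, i, j):
        # Discounted empirical probability that arm i is preferred over arm j.
        total = self.wins[i][j] + self.wins[j][i]
        return 0.5 if total == 0.0 else self.wins[i][j] / total

    def select_query(self):
        # Query the pair whose preference estimate is most uncertain
        # (probability closest to 0.5), a simple stand-in for the
        # paper's query-selection criterion.
        pairs = [(i, j) for i in range(self.n_arms)
                 for j in range(i + 1, self.n_arms)]
        return min(pairs, key=lambda p: abs(self.pref_prob(*p) - 0.5))
```

For example, if a simulated user prefers behavior 0 for a while and then switches to behavior 1, the discounted estimate flips to favor the new preference after only a few comparisons, whereas an undiscounted count would be dominated by the stale history.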
