DOI: 10.1145/3715099 · ISSN: 1046-8188

LLMCDSR: Enhancing Cross-Domain Sequential Recommendation with Large Language Models

Haoran Xin, Ying Sun, Chao Wang, Hui Xiong

Cross-Domain Sequential Recommendation (CDSR) aims to predict users’ preferences from their historical sequential interactions across multiple domains. Existing works focus on overlapped users, who interact in multiple domains, to capture cross-domain correlations. These methods often underperform in practical scenarios featuring both overlapped and non-overlapped users, owing to the limited cross-domain interactions and to knowledge-transfer misalignment for non-overlapped users. To address this, we leverage Large Language Models (LLMs) to facilitate CDSR by fully exploiting single-domain interactions. However, LLMs exhibit inherent limitations in handling extensive item repositories and sequential collaborative signals. Moreover, their generation reliability is compromised by the hallucination problem, which can yield noisy and unstable outputs. To this end, we propose a novel LLMCDSR framework, which employs LLMs to predict unobserved cross-domain interactions, termed pseudo items, within single-domain interaction sequences. Specifically, we first prompt LLMs to perform the Candidate-Free Cross-Domain Interaction Generation task. We then devise a Collaborative-Textual Contrastive Pre-Training strategy that learns to infuse collaborative information into textual features. Afterwards, we present a novel Relevance-Aware Meta Recall Network (RMRN) to selectively identify and retrieve high-quality pseudo items, whose parameters are optimized in a meta-learning manner. Finally, extensive experiments on two public datasets validate the effectiveness of LLMCDSR in enhancing CDSR.
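The abstract describes the Collaborative-Textual Contrastive Pre-Training strategy only at a high level. As an illustration, the sketch below shows one plausible instantiation under assumed design choices: a symmetric InfoNCE objective that aligns learnable item ID embeddings (the collaborative view) with projected text embeddings of the same items, such as features extracted from LLM-generated item descriptions. The class name, layer choices, dimensions, and temperature are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CollaborativeTextualContrastive(nn.Module):
    """Hypothetical sketch of collaborative-textual contrastive pre-training:
    item ID embeddings (collaborative view) are pulled toward the text
    embeddings of the same items and pushed away from the other items in
    the batch via a symmetric InfoNCE loss."""

    def __init__(self, num_items: int, text_dim: int, emb_dim: int = 64,
                 temperature: float = 0.1):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, emb_dim)  # collaborative view
        self.text_proj = nn.Linear(text_dim, emb_dim)     # projects frozen text features
        self.temperature = temperature

    def forward(self, item_ids: torch.Tensor, text_feats: torch.Tensor) -> torch.Tensor:
        # item_ids: (B,) item indices; text_feats: (B, text_dim) precomputed text embeddings
        z_collab = F.normalize(self.item_emb(item_ids), dim=-1)    # (B, D)
        z_text = F.normalize(self.text_proj(text_feats), dim=-1)   # (B, D)
        logits = z_collab @ z_text.t() / self.temperature          # (B, B) similarities
        labels = torch.arange(item_ids.size(0), device=item_ids.device)
        # Symmetric InfoNCE: match collaborative -> text and text -> collaborative.
        return 0.5 * (F.cross_entropy(logits, labels)
                      + F.cross_entropy(logits.t(), labels))

# Toy usage with random stand-ins for real data.
model = CollaborativeTextualContrastive(num_items=1000, text_dim=768)
ids = torch.randint(0, 1000, (32,))
feats = torch.randn(32, 768)  # e.g., embeddings of LLM-generated item texts
loss = model(ids, feats)
loss.backward()
```

After such pre-training, the text-derived representations carry collaborative signal, which is one way the framework could ground LLM-generated pseudo items in the recommendation space before RMRN filters them.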
