Article overview
LLMRec: Large Language Models with Graph Augmentation for Recommendation

Authors: Wei Wei; Xubin Ren; Jiabin Tang; Qinyong Wang; Lixin Su; Suqi Cheng; Junfeng Wang; Dawei Yin; Chao Huang

Date: 1 Nov 2023

Abstract: The problem of data sparsity has long been a challenge in recommendation
systems, and previous studies have attempted to address this issue by
incorporating side information. However, this approach often introduces side
effects such as noise, availability issues, and low data quality, which in turn
hinder the accurate modeling of user preferences and adversely impact
recommendation performance. In light of the recent advancements in large
language models (LLMs), which possess extensive knowledge bases and strong
reasoning capabilities, we propose a novel framework called LLMRec that
enhances recommender systems by employing three simple yet effective LLM-based
graph augmentation strategies. Our approach leverages the rich content
available within online platforms (e.g., Netflix, MovieLens) to augment the
interaction graph in three ways: (i) reinforcing user-item interaction edges,
(ii) enhancing the understanding of item node attributes, and (iii) conducting
user node profiling, all intuitively from a natural language perspective. By
employing these strategies, we address the challenges posed by sparse implicit
feedback and low-quality side information in recommenders. In addition, to ensure
the quality of the augmentation, we develop a denoised data robustification
mechanism that includes techniques of noisy implicit feedback pruning and
MAE-based feature enhancement that help refine the augmented data and improve
its reliability. Furthermore, we provide theoretical analysis to support the
effectiveness of LLMRec and clarify the benefits of our method in facilitating
model optimization. Experimental results on benchmark datasets demonstrate the
superiority of our LLM-based augmentation approach over state-of-the-art
techniques. To ensure reproducibility, we have made our code and augmented data
publicly available at: this https URL

Source: arXiv, 2311.00423
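The edge-level strategy described above (reinforcing user-item interaction edges, plus pruning noisy implicit feedback) can be illustrated with a minimal sketch. This is not the paper's released implementation: the LLM is stubbed out by a simple keyword-overlap scorer, and names such as `score_candidate` and `augment_edges` are illustrative only.

```python
def score_candidate(user_profile, item_desc):
    """Stand-in for an LLM judging how well an item matches a user profile.
    Here: plain keyword overlap, purely for illustration."""
    overlap = set(user_profile.lower().split()) & set(item_desc.lower().split())
    return len(overlap) / max(len(item_desc.split()), 1)

def augment_edges(interactions, user_profiles, item_descs, threshold=0.3):
    """Add user-item edges whose score passes a threshold (edge reinforcement),
    and drop observed edges scoring zero (noisy implicit-feedback pruning)."""
    augmented = set(interactions)
    for user, profile in user_profiles.items():
        for item, desc in item_descs.items():
            s = score_candidate(profile, desc)
            if (user, item) not in augmented and s >= threshold:
                augmented.add((user, item))      # reinforce a likely edge
            elif (user, item) in augmented and s == 0.0:
                augmented.discard((user, item))  # prune likely noise
    return augmented

# Usage with toy data
interactions = {("u1", "i1"), ("u1", "i3")}
profiles = {"u1": "enjoys sci-fi space movies"}
items = {"i1": "space opera sci-fi", "i2": "sci-fi thriller", "i3": "romantic comedy"}
print(sorted(augment_edges(interactions, profiles, items)))
# → [('u1', 'i1'), ('u1', 'i2')]
```

The noisy observed edge (u1, i3) is pruned and a plausible new edge (u1, i2) is added; in the actual framework an LLM's world knowledge, rather than word overlap, supplies the scores.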