Learning implicit recommenders from massive unobserved feedback

In this thesis we investigate implicit feedback techniques for real-world recommender systems. Learning a recommender system from implicit feedback is challenging, primarily because explicit negative feedback is not available. A common strategy is to treat unobserved feedback (i.e., missing data) as a source of negative signal, but two technical difficulties cannot be overlooked: (1) in practice the ratio of positive to negative feedback is highly imbalanced, and (2) learning from all unobserved feedback (which easily scales to billions of entries or more) is computationally expensive.

To learn recommender models from implicit feedback both effectively and efficiently, two types of methods are presented: negative sampling based stochastic gradient descent (NS-SGD) and whole-sample based batch gradient descent (WS-BGD). For the NS-SGD methods, we investigate how to sample informative negative examples so as to improve recommendation algorithms, and describe three learning models: Lambda Factorization Machines (LambdaFM), Boosting Factorization Machines (BoostFM) and Geographical Bayesian Personalized Ranking (GeoBPR). For the WS-BGD method, we study how to efficiently learn from all unobserved implicit feedback rather than resorting to negative sampling, and propose a fast BGD learning algorithm that applies to both basic collaborative filtering and content/context-aware recommendation settings.

The last research work addresses session-based item recommendation, which is also an implicit feedback scenario. Unlike the four works above, which are based on shallow embedding models, it applies a deep learning based sequence-to-sequence model to directly generate the probability distribution over the next item. The proposed generative model can be applied to various sequential recommendation scenarios.

To support the main arguments, extensive experiments are carried out on real-world recommendation datasets. The proposed recommendation algorithms achieve significant improvements over strong benchmark models, and they can also serve as generic solutions and solid baselines for future implicit feedback recommendation problems.
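To make the negative sampling idea behind the NS-SGD methods concrete, the following is a minimal sketch of a single BPR-style pairwise SGD step with a uniformly sampled negative item, written against a plain matrix factorization model. It is an illustration only, not the thesis implementation of LambdaFM, BoostFM or GeoBPR (which sample informative rather than uniform negatives); all names here (user_emb, item_emb, bpr_sgd_step, lr, reg) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 100, 500, 16
user_emb = 0.01 * rng.standard_normal((n_users, dim))
item_emb = 0.01 * rng.standard_normal((n_items, dim))
lr, reg = 0.05, 1e-4  # learning rate and L2 regularisation weight (illustrative values)

def bpr_sgd_step(u, pos_i, observed_items):
    """One pairwise SGD step: treat a sampled unobserved item as the negative."""
    neg_j = int(rng.integers(n_items))
    while neg_j in observed_items:          # uniform negative sampling over unobserved items
        neg_j = int(rng.integers(n_items))

    u_f, i_f, j_f = user_emb[u].copy(), item_emb[pos_i].copy(), item_emb[neg_j].copy()
    x_uij = u_f @ (i_f - j_f)               # preference-difference score
    sig = 1.0 / (1.0 + np.exp(x_uij))       # gradient multiplier of the log-sigmoid objective

    # Ascend the pairwise objective ln sigma(x_uij) with L2 regularisation.
    user_emb[u]     += lr * (sig * (i_f - j_f) - reg * u_f)
    item_emb[pos_i] += lr * (sig * u_f - reg * i_f)
    item_emb[neg_j] += lr * (-sig * u_f - reg * j_f)

# Hypothetical usage: user 3 has interacted with items {10, 42}; item 10 is the positive.
bpr_sgd_step(3, 10, observed_items={10, 42})
```

The WS-BGD methods described above avoid this per-example sampling loop altogether by deriving a batch update that covers all unobserved feedback at once.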