Thesis Details
Axiomatic analysis of smoothing methods in language models for pseudo-relevance feedback
Hazimeh, Hussein; Zhai, ChengXiang
Keywords: Search Engines; Text Retrieval; Relevance Feedback; Pseudo-Relevance Feedback; Implicit Feedback; Blind Feedback
Others: https://www.ideals.illinois.edu/bitstream/handle/2142/92709/HAZIMEH-THESIS-2016.pdf?sequence=1&isAllowed=y
United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship
【 Abstract 】

Pseudo-Relevance Feedback (PRF) is an important general technique for improving retrieval effectiveness without requiring any user effort. Several state-of-the-art PRF models are based on the language modeling approach, where a query language model is learned from feedback documents. In all of these models, feedback documents are represented with unigram language models smoothed with a collection language model. While collection language model-based smoothing has proven both effective and necessary when using language models for retrieval, we use axiomatic analysis to show that this smoothing scheme inherently causes the feedback model to favor frequent terms and thus violates the IDF constraint needed to ensure the selection of discriminative feedback terms. To address this problem, we propose replacing collection language model-based smoothing in the feedback stage with additive smoothing, which is analytically shown to select more discriminative terms. Empirical evaluation further confirms that additive smoothing significantly outperforms collection-based smoothing methods in multiple language model-based PRF models.
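To make the contrast concrete, the following is a minimal sketch of the smoothing schemes the abstract refers to, written in standard language-modeling notation; the symbols used here (c(w,D) for the count of term w in document D, |D| for document length, |V| for vocabulary size, p(w|C) for the collection language model, and the parameters λ, μ, δ) are standard conventions assumed for illustration rather than taken from the thesis record itself.

% Collection-based smoothing, Jelinek-Mercer form
\[
p_{\mathrm{JM}}(w \mid \theta_D) = (1-\lambda)\,\frac{c(w,D)}{|D|} + \lambda\, p(w \mid C)
\]
% Collection-based smoothing, Dirichlet-prior form
\[
p_{\mathrm{Dir}}(w \mid \theta_D) = \frac{c(w,D) + \mu\, p(w \mid C)}{|D| + \mu}
\]
% Additive (Laplace/Lidstone) smoothing, as proposed for the feedback stage
\[
p_{\mathrm{Add}}(w \mid \theta_D) = \frac{c(w,D) + \delta}{|D| + \delta\,|V|}
\]

Under either collection-based form, a term's smoothed probability grows with its collection probability p(w|C), so frequent, low-IDF terms are boosted in every feedback document; this appears to be the effect the abstract's axiomatic analysis targets. Additive smoothing adds the same constant δ to all terms, so the relative ordering of candidate feedback terms within a document is driven by their in-document counts, which favors more discriminative terms.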

【 Preview 】
Attachments
File: Axiomatic analysis of smoothing methods in language models for pseudo-relevance feedback (577 KB, PDF)
Document Metrics
Downloads: 10; Views: 18