Dissertation Details
Robust adaptation of natural language processing for language variation
Author: Yang, Yi
Advisor: Eisenstein, Jacob
Committee: Rehg, James; Boots, Byron; Chau, Duen Horng (Polo); Daumé III, Hal; Eisenstein, Jacob
University: Georgia Institute of Technology
Department: Computer Science
Keywords: Natural language processing; Machine learning
Full text: https://smartech.gatech.edu/bitstream/1853/58201/1/YANG-DISSERTATION-2017.pdf
United States | English
Source: SMARTech Repository
【 Abstract 】

Natural language processing (NLP) technology has been applied in domains ranging from social media and digital humanities to public health. Unfortunately, existing NLP techniques often perform poorly when adopted in these areas. The language of new datasets and settings can differ significantly from standard NLP training corpora, and modern NLP techniques are usually vulnerable to lexical, syntactic, and semantic variation in non-standard language. Previous approaches to this problem suffer from three major weaknesses. First, they often employ supervised methods that require expensive annotations and quickly become outdated given the dynamic nature of language. Second, they usually fail to leverage the valuable metadata associated with the target languages of these areas. Third, they treat language as uniform, ignoring differences in language use across individuals. In this thesis, we propose several novel techniques to overcome these weaknesses and build NLP systems that are robust to language variation. These approaches are driven by co-occurrence statistics and rich metadata, require no costly annotation, and can easily adapt to new settings.

First, we can transform lexical variation into text that better matches standard datasets. We present a unified unsupervised statistical model for text normalization, in which the relationship between standard and non-standard tokens is characterized by a log-linear model permitting arbitrary features. Text normalization tackles variation at the lexical level, thereby improving underlying NLP tasks.

Second, we can overcome language variation by adapting standard NLP tools directly to text that exhibits variation. We propose a novel yet simple feature embedding approach that learns joint feature representations for domain adaptation by exploiting the feature template structure commonly used in NLP problems. We also show how to incorporate metadata attributes into feature embeddings, which helps to distill the domain-invariant properties of each feature across multiple related domains. Domain adaptation can handle the full range of linguistic phenomena, so it often yields better performance than text normalization.

Finally, a subtle challenge posed by variation is that language is not uniformly distributed among individuals, yet traditional NLP systems usually treat texts from different authors identically. Both text normalization and domain adaptation follow standard NLP settings and fail to handle this problem. We propose to address the difficulty by exploiting the sociological theory of homophily, the tendency of socially linked individuals to behave similarly, to build models that account for language variation at the individual or social-community level. We investigate both label homophily and linguistic homophily to build socially adapted information extraction and sentiment analysis systems. Our work delivers state-of-the-art NLP systems for social media and historical texts on a variety of standard benchmark datasets.
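The abstract characterizes normalization with a log-linear model over standard and non-standard token pairs. As a rough illustration (the exact parameterization and feature set below are assumptions, not quoted from the thesis), the probability of a standard form s given an observed non-standard token t can be written as:

    p(s \mid t) = \frac{\exp\left(\theta^{\top} f(s,t)\right)}{\sum_{s'} \exp\left(\theta^{\top} f(s',t)\right)}

Here f(s, t) is a vector of arbitrary features, for instance string-edit or character n-gram overlap indicators, and the weights θ are estimated without labeled token pairs; the unsupervised training procedure itself is described in the thesis.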
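The feature embedding idea can be sketched in code. The following is a minimal illustration of one plausible reading of the abstract: features are organized into templates (e.g., current-word, previous-word, suffix), and each feature receives a dense embedding learned from which features of the other templates it co-occurs with. The skip-gram-style objective with negative sampling, and all names and hyperparameters, are assumptions for illustration, not the thesis's exact procedure.

    # Sketch: template-structured feature embeddings for domain adaptation.
    # Objective (assumed): predict co-occurring features of other templates,
    # skip-gram style with one sampled negative per positive pair.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train_feature_embeddings(instances, dim=50, lr=0.05, epochs=5, seed=0):
        """instances: list of dicts mapping template name -> active feature.
        Returns a dict: feature -> dim-dimensional embedding."""
        rng = np.random.default_rng(seed)
        vocab = sorted({f for inst in instances for f in inst.values()})
        emb = {f: rng.normal(scale=0.1, size=dim) for f in vocab}  # input vectors
        ctx = {f: rng.normal(scale=0.1, size=dim) for f in vocab}  # output vectors
        for _ in range(epochs):
            for inst in instances:
                active = list(inst.items())
                for tmpl_i, f in active:
                    for tmpl_j, g in active:
                        if tmpl_i == tmpl_j:
                            continue
                        # one positive pair (f, g) plus one sampled negative
                        neg = vocab[rng.integers(len(vocab))]
                        for target, c in ((1.0, g), (0.0, neg)):
                            e, o = emb[f], ctx[c]
                            grad = sigmoid(e @ o) - target  # logistic-loss gradient
                            emb[f] = e - lr * grad * o
                            ctx[c] = o - lr * grad * e
        return emb

    # Toy usage: tagging-style instances; a test instance's sparse features
    # are then augmented with the learned dense embeddings.
    instances = [
        {"cur_word": "cur=goooal", "prev_word": "prev=the", "suffix": "suf=al"},
        {"cur_word": "cur=goal",   "prev_word": "prev=the", "suffix": "suf=al"},
    ]
    emb = train_feature_embeddings(instances, dim=8, epochs=20)
    dense = np.concatenate([emb[f] for f in instances[0].values()])

For the metadata attributes mentioned in the abstract, one natural extension (again an assumption) is to learn a separate embedding component per attribute value and sum it with a shared component, so that the shared part distills the domain-invariant properties of each feature.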
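For the homophily-based adaptation, the abstract gives only the sociological motivation. One plausible way to make "socially linked individuals behave similarly" concrete, offered purely as an illustrative assumption rather than the thesis's actual model, is to give each author u a personal weight vector w_u and penalize disagreement across the edges E of the social graph:

    \min_{\{w_u\}} \; \sum_{i} \ell\left(y_i,\; w_{u(i)}^{\top} x_i\right) \;+\; \lambda \sum_{(u,v)\in E} \left\lVert w_u - w_v \right\rVert_2^2

Here u(i) is the author of text i and λ controls how strongly linked authors are tied together; this captures linguistic homophily on the parameters, while label homophily would play the analogous role on the outputs y.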

【 Preview 】
Attachment List
File: Robust adaptation of natural language processing for language variation
Size: 2154 KB
Format: PDF
Document Metrics
Downloads: 8   Views: 11