Thesis Details
A translation framework for discovering word-like units from visual scenes and spoken descriptions
Authors: Wang, Liming; Hasegawa-Johnson, Mark A.
Keywords: multimodal learning; low-resource speech technology; machine translation
Full text: https://www.ideals.illinois.edu/bitstream/handle/2142/108055/WANG-THESIS-2020.pdf?sequence=1&isAllowed=y
Country/Language: United States | English
Source: The Illinois Digital Environment for Access to Learning and Scholarship (IDEALS)
【 Abstract 】

In the absence of dictionaries, translators, or grammars, it is still possible to learn some of the words of a new language by listening to spoken descriptions of images. If several images, each containing a particular visually salient object, co-occur with a particular sequence of speech sounds, we can infer that those speech sounds form a word whose definition is the visible object. A multimodal word discovery system accepts, as input, a database of spoken descriptions of images (or a set of corresponding phone transcriptions) and learns a mapping from waveform segments (or phone strings) to their associated image concepts. In this thesis, we propose a novel framework for multimodal word discovery systems based on statistical machine translation (SMT) and neural machine translation (NMT). We extend existing theoretical frameworks for unsupervised word discovery and demonstrate a class of effective models for end-to-end word discovery from image regions and spoken descriptions. Finally, we provide a careful ablation study on the components of our system and present some of the challenges in multimodal spoken word discovery.
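To make the SMT framing concrete, the core idea of treating word discovery as translation can be sketched with classic IBM Model 1 EM training, which learns translation probabilities t(phone | concept) from paired phone strings and image-concept sets. This is an illustrative toy sketch of the alignment idea only, not the thesis's actual model; the phone/concept data below are invented, and a NULL concept absorbs phones with no visual referent.

```python
from collections import defaultdict

def ibm_model1(pairs, iterations=10):
    """Toy IBM Model 1: EM estimation of t(phone | concept).

    pairs: list of (phone_list, concept_list) co-occurrence pairs,
    e.g. a spoken caption's phone string and its image's concepts.
    A <NULL> concept is appended to each pair so that phones with
    no visual referent have somewhere to align.
    """
    NULL = "<NULL>"
    phones = {p for phone_seq, _ in pairs for p in phone_seq}
    # Uniform initialization over the phone vocabulary.
    t = defaultdict(lambda: 1.0 / len(phones))
    for _ in range(iterations):
        count = defaultdict(float)   # expected co-occurrence counts
        total = defaultdict(float)   # per-concept normalizers
        # E-step: fractional alignment counts under current t.
        for phone_seq, concepts in pairs:
            concepts = concepts + [NULL]
            for f in phone_seq:
                z = sum(t[(f, e)] for e in concepts)
                for e in concepts:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        # M-step: renormalize counts into probabilities.
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return dict(t)
```

Run on toy pairs, the model concentrates probability on the phones that consistently co-occur with each concept, which is exactly the "word discovery as alignment" intuition:

```python
pairs = [(["k", "a", "t"], ["CAT"]),
         (["d", "o", "g"], ["DOG"]),
         (["k", "a", "t", "d", "o", "g"], ["CAT", "DOG"])]
t = ibm_model1(pairs)
# t[("k", "CAT")] ends up much larger than t[("k", "DOG")]
```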

【 Preview 】
Attachments

File: A translation framework for discovering word-like units from visual scenes and spoken descriptions
Size: 1909 KB
Format: PDF (download)
Metrics
Downloads: 22; Views: 16