In the absence of dictionaries, translators, or grammars, it is still possible to learn some of the words of a new language by listening to spoken descriptions of images. If several images, each containing a particular visually salient object, co-occur with a particular sequence of speech sounds, we can infer that the sequence is a word whose meaning is the visible object. A multimodal word discovery system accepts, as input, a database of spoken descriptions of images (or a set of corresponding phone transcriptions) and learns a mapping from waveform segments (or phone strings) to their associated image concepts. In this thesis, we propose a novel framework for multimodal word discovery based on statistical machine translation (SMT) and neural machine translation (NMT). We extend existing theoretical frameworks for unsupervised word discovery and demonstrate a class of effective models for end-to-end word discovery from image regions and spoken descriptions. Finally, we provide a careful ablation study of the components of our system and discuss some of the remaining challenges in multimodal spoken word discovery.
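As a rough sketch of the translation-style alignment idea described above, the following Python toy estimates p(phone | concept) with IBM Model 1-style EM from paired phone transcriptions and image concept labels. It is illustrative only: the corpus, the concept labels (CAT, DOG, SIT, RUN), the iteration count, and the choice of Model 1 specifically are assumptions made for this example, not the thesis's actual model.

```python
from collections import defaultdict

# Toy parallel corpus of (phone transcription, image concepts) pairs.
# All phone strings and concept labels here are hypothetical.
corpus = [
    ("k ae t s ih t s".split(), ["CAT", "SIT"]),
    ("dh ax k ae t r ah n z".split(), ["CAT", "RUN"]),
    ("d ao g z r ah n".split(), ["DOG", "RUN"]),
]

# IBM Model 1-style lexical table p(phone | concept), with a NULL
# concept to absorb phones tied to no visible object.
NULL = "NULL"
concepts = {NULL} | {c for _, cs in corpus for c in cs}
phones = {p for ps, _ in corpus for p in ps}
t = {c: {p: 1.0 / len(phones) for p in phones} for c in concepts}

for _ in range(20):  # EM iterations
    counts = defaultdict(lambda: defaultdict(float))
    # E-step: fractional alignment counts under the current table.
    for ps, cs in corpus:
        cands = cs + [NULL]
        for p in ps:
            z = sum(t[c][p] for c in cands)
            for c in cands:
                counts[c][p] += t[c][p] / z
    # M-step: renormalize the expected counts into probabilities.
    for c, cp in counts.items():
        total = sum(cp.values())
        t[c] = {p: v / total for p, v in cp.items()}

# Decoding: align each phone to its most likely candidate concept.
for ps, cs in corpus:
    print([(p, max(cs + [NULL], key=lambda c: t[c][p])) for p in ps])
```

Maximal runs of consecutive phones aligned to the same non-NULL concept then serve as candidate word-like units, which is the sense in which a translation model can perform word discovery.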
Attachments:
A translation framework for discovering word-like units from visual scenes and spoken descriptions