Applied Sciences
Better Word Representation Vectors Using Syllabic Alphabet: A Case Study of Swahili
Casper S. Shikali [1], Refuoe Mokhosi [1], Zhou Sijie [1], Liu Qihe [1]
[1] School of Information and Software Engineering, University of Electronic Science and Technology of China, Xiyuan Ave, West Hi-Tech Zone, Chengdu 611731, China
Keywords: syllabic alphabet; word representation vectors; deep learning; syllable-aware language model; perplexity; word analogy
DOI: 10.3390/app9183648
Source: DOAJ
【 Abstract 】
Deep learning has been used extensively in natural language processing, with sub-word representation vectors playing a critical role. However, this cannot be said of Swahili, a low-resource yet widely spoken language in East and Central Africa. This study proposed novel word embeddings from syllable embeddings (WEFSE) for Swahili to address the concern of word representation for agglutinative, syllabic languages. Inspired by the methodology used to teach Swahili in beginner classes, we encoded the syllables of words instead of characters, character n-grams or morphemes, and generated quality word embeddings using a convolutional neural network. The quality of WEFSE was demonstrated by state-of-the-art results of the syllable-aware language model on both the small dataset (31.229 perplexity) and the medium dataset (45.859 perplexity), outperforming character-aware language models. We further evaluated the word embeddings using a word analogy task. To the best of our knowledge, syllabic alphabets have not previously been used to compose word representation vectors. Therefore, the main contributions of the study are a syllabic alphabet, WEFSE, a syllable-aware language model and a word analogy dataset for Swahili.
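The abstract describes composing a word vector from the embeddings of its syllables with a convolutional neural network, in the spirit of character-aware language models. The following minimal PyTorch sketch illustrates that general idea only; the class name, layer sizes, syllable vocabulary and max-over-time pooling are illustrative assumptions, not the authors' exact WEFSE architecture.

import torch
import torch.nn as nn

class SyllableCNNWordEmbedder(nn.Module):
    """Compose a word vector from its syllable embeddings with a 1-D CNN
    (a hedged sketch of a syllable-aware alternative to character n-grams)."""

    def __init__(self, num_syllables, syl_dim=50, num_filters=100, kernel_size=3):
        super().__init__()
        # Index 0 is reserved for padding short words.
        self.syllable_embedding = nn.Embedding(num_syllables, syl_dim, padding_idx=0)
        self.conv = nn.Conv1d(syl_dim, num_filters, kernel_size, padding=1)

    def forward(self, syllable_ids):
        # syllable_ids: (batch, max_syllables_per_word)
        x = self.syllable_embedding(syllable_ids)   # (batch, syllables, syl_dim)
        x = x.transpose(1, 2)                       # (batch, syl_dim, syllables)
        x = torch.tanh(self.conv(x))                # (batch, num_filters, syllables)
        # Max-over-time pooling yields a single vector per word.
        word_vec, _ = x.max(dim=2)                  # (batch, num_filters)
        return word_vec

# Example: the Swahili word "watoto" split into syllables wa-to-to,
# with a hypothetical syllable vocabulary.
syllable_vocab = {"<pad>": 0, "wa": 1, "to": 2}
model = SyllableCNNWordEmbedder(num_syllables=len(syllable_vocab))
ids = torch.tensor([[syllable_vocab["wa"], syllable_vocab["to"], syllable_vocab["to"]]])
print(model(ids).shape)  # torch.Size([1, 100])

The resulting word vectors could then feed a downstream language model, which is how such sub-word compositions are typically evaluated with perplexity.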
【 License 】
Unknown