Open Computer Science
Word2Vec: Optimal hyperparameters and their impact on natural language processing downstream tasks
Foteini Liwicki [1], Tosin Adewumi [1], Marcus Liwicki [1]
[1] ML Group, EISLAB, Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
Keywords: word2vec; hyperparameters; embeddings; named entity recognition; sentiment analysis
DOI: 10.1515/comp-2022-0236
Source: DOAJ
【 Abstract 】
Word2Vec is a prominent model for natural language processing (NLP) tasks, and similar inspiration is found in the distributed embeddings (word vectors) of recent state-of-the-art deep neural networks. However, the wrong combination of hyperparameters can produce embeddings of poor quality. The objective of this work is to show empirically that an optimal combination of Word2Vec hyperparameters exists, and to evaluate various combinations, comparing them against the publicly released, original Word2Vec embeddings. Both intrinsic and extrinsic (downstream) evaluations are carried out, including named entity recognition and sentiment analysis. Our main contributions include showing that the best model is usually task-specific, that high analogy scores do not necessarily correlate positively with F1 scores, and that performance does not depend on data size alone. If ethical considerations of saving time, energy, and the environment are taken into account, relatively smaller corpora may do just as well, or even better, in some cases. Increasing the embedding dimension beyond a certain point leads to poor quality or performance. In addition, using a relatively small corpus, we obtain better WordSim scores and corresponding Spearman correlations, as well as better downstream performance (with significance tests), compared with the original model, which was trained on a 100-billion-word corpus.
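To make the described setup concrete, below is a minimal sketch, not the authors' exact code, of how a Word2Vec hyperparameter sweep with intrinsic evaluation can be run using the gensim library. The corpus file name and the grid values are illustrative assumptions; the Google analogy and WordSim-353 evaluation sets used here ship with gensim's test data.

```python
# Minimal sketch (assumed setup, not the authors' exact code) of a
# Word2Vec hyperparameter sweep with intrinsic evaluation in gensim 4.x.
from itertools import product

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence
from gensim.test.utils import datapath

# "corpus.txt" is a placeholder: one pre-tokenized sentence per line.
sentences = LineSentence("corpus.txt")

# Hyperparameter axes of the kind varied in the paper: architecture
# (CBOW vs skip-gram), training objective (negative sampling vs
# hierarchical softmax), window size, and embedding dimension.
# The specific grid values here are illustrative.
grid = product(
    [0, 1],            # sg: 0 = CBOW, 1 = skip-gram
    [(5, 0), (0, 1)],  # (negative, hs): negative sampling or hierarchical softmax
    [4, 8],            # context window size
    [100, 300],        # embedding dimension
)

for sg, (negative, hs), window, dim in grid:
    model = Word2Vec(sentences, sg=sg, negative=negative, hs=hs,
                     window=window, vector_size=dim,
                     min_count=5, workers=4, epochs=5)
    # Intrinsic evaluation: Google analogy accuracy and WordSim-353
    # Spearman correlation, both bundled with gensim's test data.
    analogy_score, _ = model.wv.evaluate_word_analogies(
        datapath("questions-words.txt"))
    _, spearman, _ = model.wv.evaluate_word_pairs(
        datapath("wordsim353.tsv"))
    print(f"sg={sg} neg={negative} hs={hs} win={window} dim={dim}: "
          f"analogy={analogy_score:.3f} spearman={spearman.correlation:.3f}")
```

Each grid point yields a separate model whose analogy accuracy and WordSim Spearman correlation can then be compared across combinations, mirroring the intrinsic evaluations reported in the paper before moving on to the downstream (NER and sentiment analysis) tasks.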
【 License 】
Unknown