Journal Article Details
Machine Learning and Knowledge Extraction
Benefits from Variational Regularization in Language Models
Stefan Wegenkittl [1]; Cornelia Ferner [1]
[1] Information Technology and Systems Management, Salzburg University of Applied Sciences, Urstein Sued 1, 5412 Puch/Hallein, Austria;
Keywords: language models; regularization; isotropy; generalizability; semantic reasoning
DOI: 10.3390/make4020025
Source: DOAJ
【 Abstract 】

Representations from common pre-trained language models have been shown to suffer from the degeneration problem, i.e., they occupy a narrow cone in latent space. This problem can be addressed by enforcing isotropy in latent space. By analogy with variational autoencoders, we suggest applying a token-level variational loss to a Transformer architecture and optimizing the standard deviation of the prior distribution in the loss function as a model parameter to increase isotropy. The resulting latent space is complete and interpretable: any given point is a valid embedding and can be decoded into text again. This allows for text manipulations such as paraphrase generation directly in latent space. Surprisingly, features extracted at the sentence level also show competitive results on benchmark classification tasks.
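To make the idea of a token-level variational loss with a learnable prior standard deviation concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation; the module name, layer choices, and hyperparameters are illustrative assumptions that merely follow the description in the abstract (per-token Gaussian posterior, reparameterized sampling, and a KL term against a prior N(0, σ²I) whose σ is optimized jointly with the model).

```python
# Hedged sketch of a token-level variational head for a Transformer encoder.
# All names and shapes are assumptions for illustration, not the paper's code.
import torch
import torch.nn as nn


class TokenLevelVariationalHead(nn.Module):
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Project hidden states to a posterior mean and log-variance per token.
        self.to_mu = nn.Linear(hidden_dim, hidden_dim)
        self.to_logvar = nn.Linear(hidden_dim, hidden_dim)
        # Learnable log standard deviation of the prior (a model parameter,
        # as suggested in the abstract).
        self.prior_log_sigma = nn.Parameter(torch.zeros(()))

    def forward(self, hidden_states: torch.Tensor):
        # hidden_states: (batch, seq_len, hidden_dim) from a Transformer encoder.
        mu = self.to_mu(hidden_states)
        logvar = self.to_logvar(hidden_states)
        # Reparameterization trick: sample latent token embeddings.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # KL( N(mu, sigma^2) || N(0, sigma_prior^2) ), summed over dimensions
        # and averaged over tokens, pushing embeddings toward an isotropic prior.
        prior_var = torch.exp(2.0 * self.prior_log_sigma)
        kl = 0.5 * (
            (logvar.exp() + mu.pow(2)) / prior_var
            - 1.0
            - logvar
            + 2.0 * self.prior_log_sigma
        )
        kl_loss = kl.sum(dim=-1).mean()
        return z, kl_loss


# Usage: the (weighted) kl_loss would be added to the language-modeling loss.
head = TokenLevelVariationalHead(hidden_dim=768)
dummy_states = torch.randn(2, 16, 768)  # stand-in for Transformer outputs
z, kl_loss = head(dummy_states)
```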

【 License 】

Unknown
