Journal Article Details
IEEE Access, Volume 8
NEWLSTM: An Optimized Long Short-Term Memory Language Model for Sequence Prediction
Jia-Qiang Wang [1], Han-Bing Qu [1], Rong-Qun Peng [2], Qing Wang [2], Zhi Li [3]
[1] Key Laboratory of Artificial Intelligence and Data Analysis, Beijing Academy of Science and Technology, Beijing, China;
[2] School of Computer Science and Technology, Shandong University of Technology, Zibo, China;
[3] School of Economics and Management, University of Chinese Academy of Sciences, Beijing, China;
Keywords: gate fusion; exploding gradient; long short-term memory; recurrent neural network
DOI: 10.1109/ACCESS.2020.2985418
Source: DOAJ
【 Abstract 】

The long short-term memory (LSTM) model, trained on the general language modeling task, overcomes the vanishing gradient bottleneck of the traditional recurrent neural network (RNN) and performs well on many natural language processing tasks. Although LSTM effectively alleviates the vanishing gradient problem of the RNN, much information is still lost over long-distance transmission, and the model has practical limitations. In this paper, we propose NEWLSTM, an improved LSTM model that mitigates both the excessive parameter count of LSTM and the vanishing gradient. NEWLSTM correlates the cell state directly with the current information, fuses the traditional LSTM's input gate and forget gate, and removes some components; this reduces the number of parameters and the computational complexity and shortens iteration time. A neural network model is used to capture the relationships among input sequences in order to predict the language sequence. Experimental results on multiple test sets show that the improved model is simpler than traditional LSTM models and LSTM variants; NEWLSTM also has better overall stability and better handles the sparse-word problem.
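
The abstract does not give the exact NEWLSTM equations, so the sketch below only illustrates the two ideas it names: fusing the input and forget gates into a single gate, and feeding the previous cell state into the gate computation so the cell state is correlated directly with the current information. The class name FusedGateLSTMCell and the specific coupling (input gate taken as 1 minus the fused gate) are assumptions for illustration, not the authors' formulation. A minimal PyTorch sketch under those assumptions:

import torch
import torch.nn as nn

class FusedGateLSTMCell(nn.Module):
    """Illustrative LSTM cell with a single fused input/forget gate."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        # Two weight blocks plus an output gate instead of the four
        # blocks (input, forget, output, candidate) of a standard LSTM.
        self.gate = nn.Linear(input_size + 2 * hidden_size, hidden_size)
        self.cand = nn.Linear(input_size + hidden_size, hidden_size)
        self.out = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x, state):
        h, c = state
        # Fused gate: the previous cell state c is part of the gate input,
        # tying the cell state to the current information.
        f = torch.sigmoid(self.gate(torch.cat([x, h, c], dim=-1)))
        g = torch.tanh(self.cand(torch.cat([x, h], dim=-1)))
        # Coupled gates: the retained and written fractions sum to one.
        c_new = f * c + (1.0 - f) * g
        o = torch.sigmoid(self.out(torch.cat([x, h], dim=-1)))
        h_new = o * torch.tanh(c_new)
        return h_new, (h_new, c_new)

# Example step over a batch of 4 inputs with hidden size 32 (hypothetical sizes):
# cell = FusedGateLSTMCell(16, 32)
# x = torch.randn(4, 16)
# h, c = torch.zeros(4, 32), torch.zeros(4, 32)
# y, (h, c) = cell(x, (h, c))

Merging the input and forget gates in this way removes one full weight block per step, which is the kind of parameter and computation saving the abstract attributes to NEWLSTM.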

【 License 】

Unknown   
