Frontiers in Neurorobotics
A novel approach to attention mechanism using kernel functions: Kerformer
Neuroscience
Yanyun Fu [1], Yao Gan [2], Yongming Li [2], Deyong Wang [3]
[1] Beijing Academy of Science and Technology, Beijing, China; [2] Information Science and Engineering Department, Xinjiang University, Ürümqi, China; [3] Key Laboratory of Big Data of Xinjiang Social Security Risk Prevention and Control, Xinjiang Lianhai INA-INT Information Technology Ltd., Ürümqi, Xinjiang, China
Keywords: linear attention; kernel method; transformer; SE Block; self-attention
DOI: 10.3389/fnbot.2023.1214203
Received: 2023-04-29; Accepted: 2023-08-01; Published: 2023
Source: Frontiers
【 Abstract 】
Artificial Intelligence (AI) is driving advancements across various fields by simulating and enhancing human intelligence. In Natural Language Processing (NLP), transformer models have achieved considerable success. However, the traditional attention mechanism in these models has a computational cost that is quadratic in the input sequence length, which limits efficiency on tasks involving long sequences. To address this, we propose Kerformer, a linear transformer based on a kernel approach. Kerformer introduces a nonlinear reweighting mechanism that transforms softmax attention into feature-based dot-product attention. By exploiting the non-negativity and nonlinear weighting properties of the softmax computation, separate non-negativity operations are applied to the Query (Q) and Key (K) computations. The inclusion of an SE Block further improves model performance. Kerformer reduces the time complexity of the attention matrix from O(N²) to O(N), where N is the sequence length, yielding substantial gains in efficiency and scalability, especially for long-sequence tasks. Experimental results demonstrate Kerformer's superiority in time and memory consumption, achieving a higher average accuracy (83.39%) across NLP and vision tasks. On long-sequence tasks, Kerformer attains an average accuracy of 58.94%, and it exhibits superior efficiency and convergence speed on visual tasks. The model thus offers a promising solution to the limitations of conventional attention mechanisms in handling long sequences.
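The kernel-based linear attention summarized above can be illustrated with a short sketch. The snippet below is not the paper's implementation: it assumes elu(x)+1 as the non-negative feature map applied separately to Q and K, and uses randomly initialized weights in a Squeeze-and-Excitation-style gate; the names `phi`, `linear_attention`, and `se_block` are hypothetical. It shows how computing φ(Q)(φ(K)ᵀV) instead of softmax(QKᵀ)V avoids materializing the N×N attention matrix, which is why the cost drops from O(N²) to O(N) in the sequence length.

```python
# Minimal NumPy sketch of kernelized linear attention in the spirit of Kerformer.
# The feature map and the SE-style gate are illustrative assumptions, not the
# paper's exact formulation.
import numpy as np

def phi(x):
    """Non-negative feature map applied separately to Q and K (assumed: elu(x) + 1)."""
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """Kernelized dot-product attention with O(N) cost in sequence length N.

    Q, K, V: arrays of shape (N, d). Instead of forming the N x N matrix
    softmax(Q K^T), compute phi(Q) (phi(K)^T V) and normalize by
    phi(Q) (phi(K)^T 1), so time and memory scale linearly with N.
    """
    Qp, Kp = phi(Q), phi(K)            # non-negativity enforced separately for Q and K
    KV = Kp.T @ V                      # (d, d) summary, independent of N
    Z = Qp @ Kp.sum(axis=0)            # (N,) normalizer
    return (Qp @ KV) / (Z[:, None] + eps)

def se_block(X, reduction=4, seed=0):
    """Squeeze-and-Excitation style channel gating (illustrative sketch).

    X: (N, d). Global average over the sequence, a two-layer bottleneck with
    random weights here (a real model would learn W1, W2), then sigmoid gating.
    """
    N, d = X.shape
    rng = np.random.default_rng(seed)
    W1 = rng.standard_normal((d, d // reduction)) / np.sqrt(d)
    W2 = rng.standard_normal((d // reduction, d)) / np.sqrt(d // reduction)
    s = X.mean(axis=0)                                          # squeeze: (d,)
    g = 1.0 / (1.0 + np.exp(-(np.maximum(s @ W1, 0.0) @ W2)))   # excite: (d,)
    return X * g                                                # reweight channels

# Usage: attention output for a length-1024 sequence without an N x N matrix.
N, d = 1024, 64
Q, K, V = (np.random.randn(N, d) for _ in range(3))
out = se_block(linear_attention(Q, K, V))
print(out.shape)  # (1024, 64)
```

Because the (d, d) summary φ(K)ᵀV is independent of N, the per-token cost stays constant as the sequence grows, which is the source of the efficiency gains reported for long-sequence tasks.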
【 License 】
Unknown
Copyright © 2023 Gan, Fu, Wang and Li.
【 Preview 】
Files | Size | Format | View
---|---|---|---
RO202310101624989ZK.pdf | 1223 KB | PDF | download