Journal of Xidian University ›› 2020, Vol. 47 ›› Issue (1): 104-110.doi: 10.19665/j.issn1001-2400.2020.01.015


Speech enhancement method based on the multi-head self-attention mechanism

CHANG Xinxu,ZHANG Yang,YANG Lin,KOU Jinqiao,WANG Xin,XU Dongdong   

  1. Beijing Institute of Computer Technology and Application, Beijing 100854, China
  • Received:2019-09-28 Online:2020-02-20 Published:2020-03-19

Abstract:

The human ear can accept only one sound signal at a time: the signal with the highest energy masks other, lower-energy signals. Based on this masking principle, this paper combines self-attention with multi-head attention to propose a speech enhancement method built on the multi-head self-attention mechanism. By applying multi-head self-attention to the input noisy speech features, the clean speech and noise components of those features can be clearly distinguished, enabling subsequent processing to suppress noise more effectively. Experimental results show that the proposed method significantly outperforms a recurrent-neural-network-based method in terms of both speech quality and intelligibility.
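The core operation the abstract describes, multi-head self-attention over a sequence of noisy speech feature frames, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the projection weights are randomly initialized here purely for demonstration (in the actual model they would be learned), and the feature dimensions are assumed.

```python
import numpy as np

def multi_head_self_attention(x, num_heads, rng=None):
    """Apply multi-head self-attention to a feature sequence.

    x: (seq_len, d_model) array of (e.g. noisy-speech) feature frames.
    Weights are randomly initialized here purely for illustration;
    in a trained model they are learned parameters.
    """
    seq_len, d_model = x.shape
    assert d_model % num_heads == 0, "d_model must be divisible by num_heads"
    d_k = d_model // num_heads
    rng = np.random.default_rng(0) if rng is None else rng

    # Illustrative random projections for queries, keys, and values.
    w_q = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    w_k = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    w_v = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)

    q, k, v = x @ w_q, x @ w_k, x @ w_v

    # Split each projection into heads: (num_heads, seq_len, d_k).
    def split(t):
        return t.reshape(seq_len, num_heads, d_k).transpose(1, 0, 2)

    q, k, v = split(q), split(k), split(v)

    # Scaled dot-product attention, computed independently per head.
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    heads = weights @ v                             # (num_heads, seq_len, d_k)

    # Concatenate the heads back to (seq_len, d_model).
    return heads.transpose(1, 0, 2).reshape(seq_len, d_model)

# Example: 10 frames of 64-dimensional spectral features, 8 heads.
frames = np.random.default_rng(1).standard_normal((10, 64))
enhanced = multi_head_self_attention(frames, num_heads=8)
print(enhanced.shape)  # (10, 64)
```

Because each head attends over the whole frame sequence with its own projection, different heads can specialize in different time-frequency patterns, which is the property the paper exploits to separate speech-dominated features from noise-dominated ones.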

Key words: speech enhancement, deep neural network, self attention, multi-head attention, gated recurrent unit

CLC Number: 

  • TN912.35
