T-Gsa: Transformer With Gaussian-Weighted Self-Attention For Speech Enhancement


Transformer neural networks (TNNs) have demonstrated state-of-the-art performance on many natural language processing (NLP) tasks, replacing recurrent neural networks (RNNs) such as LSTMs and GRUs. However, TNNs did not perform well in speech enhancement, whose contextual nature differs from that of NLP tasks such as machine translation.
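The idea named in the title, Gaussian-weighted self-attention, can be illustrated with a minimal sketch: standard scaled dot-product attention scores are attenuated by a Gaussian of the distance between frame indices, so distant frames contribute less. This is an illustrative NumPy sketch, not the authors' implementation; the function name and the `sigma` hyperparameter are assumptions.

```python
import numpy as np

def gaussian_weighted_attention(Q, K, V, sigma=10.0):
    """Scaled dot-product attention with scores attenuated by a
    Gaussian of the frame-index distance (illustrative sketch;
    `sigma` controls how quickly distant frames are down-weighted)."""
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)              # (T, T) similarity scores
    idx = np.arange(T)
    dist = idx[:, None] - idx[None, :]         # signed frame-index distance
    G = np.exp(-dist**2 / (2.0 * sigma**2))    # Gaussian weighting matrix
    weighted = G * scores                      # attenuate distant frames
    # numerically stable row-wise softmax
    w = np.exp(weighted - weighted.max(axis=-1, keepdims=True))
    attn = w / w.sum(axis=-1, keepdims=True)
    return attn @ V

rng = np.random.default_rng(0)
T, d = 6, 4
out = gaussian_weighted_attention(rng.normal(size=(T, d)),
                                  rng.normal(size=(T, d)),
                                  rng.normal(size=(T, d)))
print(out.shape)  # (6, 4)
```

With a small `sigma` the attention becomes nearly local (each frame attends mostly to its neighbours), while a large `sigma` recovers ordinary global self-attention.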