Learning Recurrent Neural Network Language Models With Context-Sensitive Label Smoothing For Automatic Speech Recognition
Recurrent neural network language models (RNNLMs) have become very successful in many natural language processing tasks. However, RNNLMs trained with a cross entropy loss function and hard output targets are prone to overfitting, which weakens the language model's generalization ability.
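
The paper's context-sensitive smoothing scheme is not spelled out in this excerpt, but the underlying idea it builds on, replacing hard one-hot targets with a softened target distribution inside the cross-entropy loss, can be sketched. Below is a minimal Python sketch of plain uniform label smoothing; the function name `smoothed_cross_entropy`, the smoothing weight `epsilon`, and the toy vocabulary are illustrative choices, not details from the paper.

```python
import numpy as np

def smoothed_cross_entropy(logits, target, epsilon=0.1):
    """Cross entropy against a label-smoothed target distribution.

    Uniform label smoothing mixes the hard one-hot target with a uniform
    distribution over the vocabulary, so the model is penalized for
    becoming over-confident on the single observed next word.
    """
    vocab_size = logits.shape[-1]
    # Log-softmax with max subtraction for numerical stability.
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    # Smoothed target: (1 - epsilon) on the true word,
    # epsilon spread uniformly over the whole vocabulary.
    q = np.full(vocab_size, epsilon / vocab_size)
    q[target] += 1.0 - epsilon
    return -(q * log_probs).sum()

# Toy example: 5-word vocabulary, true next word has index 2.
logits = np.array([1.0, 0.5, 3.0, -1.0, 0.2])
print(smoothed_cross_entropy(logits, target=2, epsilon=0.1))
```

With `epsilon=0` this reduces to ordinary cross entropy against the hard target; a context-sensitive variant would let the softened distribution depend on the preceding words rather than being uniform.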