Unsupervised Style And Content Separation By Minimizing Mutual Information For Speech Synthesis

We present a method to generate speech from input text and a style vector that is extracted from a reference speech signal in an unsupervised manner, i.e., no style annotation, such as speaker information, is required. Existing unsupervised methods, durin
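The abstract is truncated, so the paper's actual mutual-information estimator is not described here. As a toy illustration only, the sketch below shows why minimizing mutual information enforces style/content separation: under a joint-Gaussian assumption, I(X;Y) = -0.5 ln(1 - rho^2), so a style embedding that leaks content has high MI with it, while an independent style embedding has MI near zero. The `gaussian_mi` helper and the synthetic embeddings are hypothetical, not from the paper.

```python
import numpy as np

def gaussian_mi(x, y):
    """Estimate mutual information between two 1-D variables under a
    joint-Gaussian assumption: I(X;Y) = -0.5 * ln(1 - rho^2),
    where rho is the sample Pearson correlation."""
    rho = np.corrcoef(x, y)[0, 1]
    rho = np.clip(rho, -0.999999, 0.999999)  # guard the log
    return -0.5 * np.log(1.0 - rho ** 2)

rng = np.random.default_rng(0)
n = 10_000
content = rng.normal(size=n)

# A "bad" style vector that leaks content information (entangled) ...
style_entangled = 0.8 * content + 0.6 * rng.normal(size=n)
# ... versus a "good" style vector that is independent of content.
style_disentangled = rng.normal(size=n)

mi_entangled = gaussian_mi(content, style_entangled)       # high MI
mi_disentangled = gaussian_mi(content, style_disentangled) # near-zero MI
print(mi_entangled, mi_disentangled)
```

In a training loop, such an MI estimate (in practice a neural estimator, since the variables are high-dimensional embeddings) would be added to the synthesis loss as a penalty, pushing the style encoder toward the disentangled case.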