GTC 2020 S21465
Presenter: Peter Huang, NVIDIA
Based on collaboration with customers, we'll go through the key phases of deploying text-to-speech services, including use-case survey, model selection, data preparation, model training, and, most importantly, optimizing model inference on Tesla GPU products. After introducing the background, related models, and tricks for training them, we'll take a deep dive into our TensorRT-based Tacotron and WaveGlow acceleration work, then touch on methods for accelerating other vocoders, such as WaveRNN, and on BERT's potential uses for TTS.