Why is the Tacotron2 model separated into 3 parts?

I am reading the TensorRT source code (TensorRT/demo/Tacotron2/tensorrt at main · NVIDIA/TensorRT · GitHub).
The Tacotron2 model has been split into three parts: Encoder, Decoder, and Postnet, and each part is converted into ONNX and then a TensorRT engine separately.
Why not convert the whole model into a single engine? Is there a reason to separate it into three parts?

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below:

check_model.py

import sys
import onnx

# pass the path to your ONNX model on the command line
filename = sys.argv[1]
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing the issue, request you to share the trtexec “--verbose” log for further debugging.
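A typical invocation looks like the following (the model filename is a placeholder; `--onnx` and `--verbose` are standard trtexec flags):

```shell
# Build an engine from the ONNX model and print a verbose log for debugging
trtexec --onnx=your_model.onnx --verbose
```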
Thanks!

Hi,

The ONNX model works.
I am just wondering why the official sample converts the Encoder, Postnet, and Decoder separately (see the TensorRT GitHub repository, TensorRT/convert_tacotron22onnx.py at main · NVIDIA/TensorRT · GitHub).
Why didn’t they convert Tacotron2 into one engine?

Hi,

The number of iterations that the decoder runs for is data-dependent. This causes the input shape to the Postnet to become data-dependent, which TRT does not support as of now.
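To illustrate the point, here is a toy Python sketch (not the actual demo code; all functions and values are made up) of the three-stage pipeline. The decoder loops until its stop-gate fires, so the number of mel frames handed to the postnet is only known at runtime:

```python
# Toy sketch of the Tacotron2-style three-stage pipeline.
# Everything here is illustrative; only the control flow mirrors the real model.

def encoder(text):
    # Fixed-shape work: one pass over the input symbols produces a "memory".
    return [ord(c) % 7 for c in text]

def decoder_step(memory, prev_frame):
    # Emits one mel frame plus a stop decision (a toy stand-in for the
    # real model's stop-token gate).
    frame = (sum(memory) + prev_frame) % 97
    stop = frame < 5
    return frame, stop

def postnet(mel):
    # Input length == number of decoder iterations, unknown until runtime.
    return [f + 1 for f in mel]

def infer(text, max_steps=100):
    memory = encoder(text)
    mel, frame = [], 0
    for _ in range(max_steps):      # loop count depends on the input data
        frame, stop = decoder_step(memory, frame)
        mel.append(frame)
        if stop:
            break
    return postnet(mel)
```

Running `infer` on two different inputs generally produces mel sequences of different lengths, which is exactly the data-dependent shape that forces the split into separate engines with a Python-side loop driving the decoder.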

Thank you.