Converting an ASR Citrinet (NeMo-based) ONNX model to TensorRT

Dear Nvidia Team,

We are using the Citrinet model for ASR. We converted the .nemo checkpoint to ONNX format, but how do we convert the ONNX model to TensorRT (with trtexec or any other way)?

Here we have an audio signal as input, so how do we pass the input shape when using trtexec?

Looking forward to your reply.

Thanks in advance,
Darshan C G

Hi,
Request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Alongside, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec
In case you are still facing issues, request you to share the trtexec --verbose log for further debugging.
Thanks!

Hi @NVES,

Thanks for your reply.

Here is the ONNX model: citrinet_256.onnx - Google Drive

How do I convert this model to TRT with trtexec? So far I have only converted vision-based models, and I am not sure how it works for speech models where the input is an audio signal.

Thanks,
Darshan

Hi @darshancganji12,

We can build the engine using trtexec with dynamic shapes, the same as for a CNN model, using trtexec --minShapes=xx --optShapes=xx --maxShapes=xx.
Please refer
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec#example-4-running-an-onnx-model-with-full-dimensions-and-dynamic-shapes
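As a concrete sketch, assuming the exported Citrinet graph has an input tensor named audio_signal with shape (batch, mel_features, time) — verify the actual input name, feature count, and sensible time ranges for your audio before using these values, and add shape ranges for a length input too if your export has one:

```shell
# Sketch: build a TensorRT engine with dynamic batch and time dims.
# The input name "audio_signal" and the shape ranges below are
# assumptions -- check your model's real inputs first.
trtexec --onnx=citrinet_256.onnx \
        --minShapes=audio_signal:1x80x64 \
        --optShapes=audio_signal:4x80x512 \
        --maxShapes=audio_signal:8x80x2048 \
        --saveEngine=citrinet_256.engine \
        --verbose
```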

Or refer to the TensorRT developer guide to write your own engine-building application.

Thank you.