TensorRT with onnx model

Description

I’ve modified the SampleOnnxMNIST C++ project to load a custom model (and changed the input and output node names accordingly), but the binary fails in ConstructNetwork with the following error.

Environment

TensorRT Version: 8.0.1.6
GPU Type: 1080Ti
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version: 8.2.2
Operating System + Version: Windows 10
Python Version (if applicable):
TensorFlow Version (if applicable): 2.5
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

https://drive.google.com/drive/folders/1KRKFzi-geTLgRyKa4Du8haCqd4CtyTa9?usp=sharing

Here are the model file and my slightly modified sampleOnnxMNIST.cpp file.

I’ve tried the following two approaches to produce the ONNX model file from TensorFlow 2.5 using tf2onnx:

  1. TF checkpoint → pb → onnx
  2. TF checkpoint → SavedModel → onnx

With both approaches, I run into the same error when loading the resulting ONNX model. I am clueless as to where I am going wrong. Any help would be greatly appreciated.
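For reference, the last step of approach 2 can also be done with tf2onnx’s command-line converter. A minimal sketch, assuming tf2onnx is installed; ‘saved_model’ and ‘model.onnx’ are placeholder paths, not the actual ones used here:

```python
# Sketch of approach 2's final step: convert a SavedModel directory to ONNX
# via the tf2onnx command-line converter. "saved_model" and "model.onnx"
# are placeholder paths.
import subprocess

cmd = [
    "python", "-m", "tf2onnx.convert",
    "--saved-model", "saved_model",  # directory written by tf.saved_model.save
    "--output", "model.onnx",
    "--opset", "13",
]
print(" ".join(cmd))
# when ready: subprocess.run(cmd, check=True)
```

Running the conversion both ways (Python API and CLI) is a quick way to rule out a problem in the export script itself.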

Steps To Reproduce

This C++ file replaces the one at ‘TensorRT-8.0.1.6\samples\sampleOnnxMNIST’

and the model.onnx file is expected to be in ‘TensorRT-8.0.1.6\data’

This project was built with Visual Studio 2017. After a successful build, ‘sample_onnx_mnist.exe’ is generated in the bin folder and can be run from the command line (no arguments required).

Hi,
Could you please share the ONNX model and the script (if not already shared) so that we can assist you better?
Meanwhile, you can try a few things:
https://docs.nvidia.com/deeplearning/tensorrt/quick-start-guide/index.html#onnx-export

  1. Validate your model with the snippet below.

check_model.py

import onnx

# replace with the path to your ONNX model
filename = "yourONNXmodel"
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hi, thanks for your response.

import tensorflow as tf
import tf2onnx
import onnx

# model is the Keras model built earlier; load_regressor_path points to the
# trained weights
model.load_weights(load_regressor_path)
print('loading: {}'.format(load_regressor_path))
print(model.inputs)
print(model.name)

input_signature = [tf.TensorSpec([1, None, None, 1], tf.float32, name='x')]
onnx_model, _ = tf2onnx.convert.from_keras(model, input_signature, opset=13)
onnx.save(onnx_model, "model_11.onnx")

I have used the above snippet to get the onnx model.

Here is the trtexec output of the same model
model_11_trtexec_output.txt (266.7 KB)

To reproduce error:
I’ve uploaded the required binaries to run the executable to the link below.

https://drive.google.com/drive/folders/1S410S169_0_LhMXqa7CNOe176F0BA9sC?usp=sharing

Run ‘sample_onnx_mnist.exe’ from ‘bin’ folder

Hi @ashwin.kannan3,

We could reproduce the issue. It looks like the model has a dynamic channel size for the convolution layers, which TensorRT does not support yet. Please allow us some time to work on this.

Thank you.

Hi @spolisetty,

Thanks for looking into this issue. Is there a time estimate you can provide?

Hi @ashwin.kannan3,

Currently we do not have an estimate. Please stay tuned to the developer forum for updates.

Thank you.

Hi @ashwin.kannan3,
The Google Drive links are no longer accessible, so we cannot reach your model and code.
Could you please grant us access so we can help further?

Thanks

Hi,
here are the links from the respective posts:
Post 1
https://drive.google.com/drive/folders/1KRKFzi-geTLgRyKa4Du8haCqd4CtyTa9?usp=sharing

Post 2
https://drive.google.com/drive/folders/1S410S169_0_LhMXqa7CNOe176F0BA9sC?usp=sharing

Let me know if you are able to access the files.