Modifying the TensorRT sample code to convert our own ONNX models and run inference

Description

Hi all, I got an error when I tried converting an ONNX model to TensorRT. The ONNX model "model.onnx" is one that I created from a TensorFlow model.
The error message is
ModelImporter.cpp:726: ERROR: ModelImporter.cpp:162 In function parseGraph:
[6] Invalid Node - StatefulPartitionedCall/sequential/flatten/Reshape

The sample converts only the prebuilt ONNX model "mnist.onnx" that ships with TensorRT.
I want to know how the sample code in the TensorRT samples directory can be used to convert our own ONNX models to the TensorRT format and run inference successfully.

Below is the error log I got

[08/19/2023-06:33:09] [I] Building and running a GPU inference engine for Onnx MNIST
[08/19/2023-06:33:10] [I] [TRT] [MemUsageChange] Init CUDA: CPU +353, GPU +0, now: CPU 371, GPU 5391 (MiB)
[08/19/2023-06:33:10] [I] [TRT] ----------------------------------------------------------------
[08/19/2023-06:33:10] [I] [TRT] Input filename: ../../data/mnist/model.onnx
[08/19/2023-06:33:10] [I] [TRT] ONNX IR version: 0.0.8
[08/19/2023-06:33:10] [I] [TRT] Opset version: 15
[08/19/2023-06:33:10] [I] [TRT] Producer name: tf2onnx
[08/19/2023-06:33:10] [I] [TRT] Producer version: 1.14.0 8f8d49
[08/19/2023-06:33:10] [I] [TRT] Domain:
[08/19/2023-06:33:10] [I] [TRT] Model version: 0
[08/19/2023-06:33:10] [I] [TRT] Doc string:
[08/19/2023-06:33:10] [I] [TRT] ----------------------------------------------------------------
[08/19/2023-06:33:10] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/19/2023-06:33:10] [E] [TRT] ModelImporter.cpp:720: While parsing node number 0 [Reshape -> "StatefulPartitionedCall/sequential/flatten/Reshape:0"]:
[08/19/2023-06:33:10] [E] [TRT] ModelImporter.cpp:721: --- Begin node ---
[08/19/2023-06:33:10] [E] [TRT] ModelImporter.cpp:722: input: "flatten_input"
input: "const_fold_opt__7"
output: "StatefulPartitionedCall/sequential/flatten/Reshape:0"
name: "StatefulPartitionedCall/sequential/flatten/Reshape"
op_type: "Reshape"

[08/19/2023-06:33:10] [E] [TRT] ModelImporter.cpp:723: --- End node ---
[08/19/2023-06:33:10] [E] [TRT] ModelImporter.cpp:726: ERROR: ModelImporter.cpp:162 In function parseGraph:
[6] Invalid Node - StatefulPartitionedCall/sequential/flatten/Reshape

Environment

TensorRT Version: 8.0.1.6-1
GPU Type: NVIDIA Maxwell architecture with 128 NVIDIA CUDA® cores
Nvidia Driver Version: -NA-
CUDA Version: 10.2
CUDNN Version: - NA-
Operating System + Version: Linux ubuntu 4.9.253-tegra #2 SMP PREEMPT Tue Nov 29 18:32:41 IST 2022 aarch64 aarch64 aarch64 GNU/Linux
Python Version (if applicable): -NA-
TensorFlow Version (if applicable): -NA-
PyTorch Version (if applicable): -NA-
Baremetal or Container (if container which image + tag): -NA-

Relevant Files

This is the link to the code I used to convert a TensorFlow MNIST model into an ONNX model, which I named "model.onnx":
(https://github.com/onnx/keras-onnx/blob/master/tutorial/TensorFlow_Keras_MNIST.ipynb)

Steps To Reproduce

  1. From the samples directory in the installed TensorRT tree, modify the file samples/sampleOnnxMNIST/sampleOnnxMNIST.cpp.
    The method to be modified is samplesCommon::OnnxSampleParams initializeSampleParams(); the line

params.onnxFileName = "mnist.onnx";

has to be changed to

params.onnxFileName = "model.onnx";

    (a sketch of the modified method is shown after these steps)

  2. Execute the make command; it will create a binary named sample_onnx_mnist.

  3. Run that binary and the error above will be reproduced.
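
For reference, here is a minimal sketch of how the modified method could look. Note that the sample also hardcodes the MNIST tensor names; for a model exported with tf2onnx these must be changed to match your graph (the error log above shows the input is "flatten_input"; the output name below is a placeholder, so check your model, e.g. in Netron, for the real one):

samplesCommon::OnnxSampleParams initializeSampleParams(const samplesCommon::Args& args)
{
    samplesCommon::OnnxSampleParams params;
    if (args.dataDirs.empty()) // use the default data directories
    {
        params.dataDirs.push_back("data/mnist/");
        params.dataDirs.push_back("data/samples/mnist/");
    }
    else
    {
        params.dataDirs = args.dataDirs;
    }
    params.onnxFileName = "model.onnx";                 // was "mnist.onnx"
    params.inputTensorNames.push_back("flatten_input"); // input name taken from the error log
    params.outputTensorNames.push_back("dense_1");      // placeholder: use your model's actual output name
    params.dlaCore = args.useDLACore;
    params.int8 = args.runInInt8;
    params.fp16 = args.runInFp16;
    return params;
}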


Hi,
We request you to share the ONNX model and the script, if not shared already, so that we can assist you better.
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

# Load the model and check that it conforms to the ONNX spec;
# check_model raises an exception if the model is invalid.
filename = "model.onnx"
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.

In case you are still facing the issue, we request you to share the trtexec "--verbose" log for further debugging.
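
For example (assuming the standard JetPack layout, where trtexec lives under /usr/src/tensorrt/bin):

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --verbose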
Thanks!

Hi Aakansha, thank you for your response. I am attaching the script file and the ONNX model file.
The script I used is the one present at
/usr/src/tensorrt/samples/sampleOnnxMNIST/sampleOnnxMNIST.cpp, which I modified to read the model.onnx file instead of mnist.onnx.
I have attached the script file, the Makefile, and the ONNX model file for your reference.

Thanks and Regards

Nagaraj Trivedi
Makefile (243 Bytes)
model.onnx (26.7 KB)
sampleOnnxMNIST.cpp (15.1 KB)

Hi,

We recommend you use the latest TensorRT version 8.6.1.
Using the latest version, we are able to build the engine successfully.

[08/29/2023-10:11:34] [I]
&&&& PASSED TensorRT.trtexec [TensorRT v8601] # trtexec --onnx=model.onnx --verbose

Thank you.

Thank you. Please share the link to download and install TensorRT version 8.6.1.

Also, I am facing an issue while executing the statement import onnx. Please provide the correct link for installing onnx on the Jetson Nano.
Below is the error message:
import onnx
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'onnx'
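
(For reference, the onnx Python package can usually be installed with pip, assuming Python 3 and pip3 are available on the Nano; on aarch64 this may build from source:

pip3 install onnx
)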

Please provide the link to download the TensorRT version 8.6.1

I have executed the following commands:

import onnx

# load the model (no exception means the file parsed correctly)
filename = "model.onnx"
model = onnx.load(filename)

and found there are no errors. But then why does the trtexec command give an error? Please look into it.

As we mentioned earlier, we could successfully build the TensorRT engine on the latest version, 8.6.1.
Note that onnx.checker only validates the model against the ONNX specification; the parsing failure comes from the older TensorRT ONNX parser, which is why upgrading resolves it.
We recommend that you please make sure you are using the latest version.

https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tensorrt

You can use the above link to download the TensorRT NGC container to skip installation.
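
For example, a container can be pulled with the following command (the tag here is illustrative; pick the release that matches your setup):

docker pull nvcr.io/nvidia/tensorrt:23.08-py3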

If you wish to install TensorRT normally, we are moving this post to the Jetson Nano forum so you can get better help with the installation steps.
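
As a quick check before installing, the TensorRT version that JetPack already provides can be listed with dpkg (assuming a standard JetPack image):

dpkg -l | grep -i tensorrt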

Thank you.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.