SSD Inception v2 model ONNX to TensorRT error

How can I change the UINT8 input to INT32 in the TensorFlow SSD Inception v2 model?
Is there any workaround for this model?
Here is the model: http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz

I used the tf2onnx command to convert the saved model to ONNX, and it worked.
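Roughly this command (the saved-model path and the opset value here are just placeholders, not necessarily what I passed):

python -m tf2onnx.convert --saved-model saved_model --output inception.onnx --opset 11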

Then I got the error below when I ran this command: ./trtexec --onnx=inception.onnx

The error is the same for both the custom-trained and the standard SSD Inception v2 model:

WARNING: ONNX model has a newer ir_version (0.0.5) than this parser was built against (0.0.3).
Unsupported ONNX data type: UINT8 (2)
ERROR: ModelImporter.cpp:54 In function importInput:
[8] Assertion failed: convert_dtype(onnx_tensor_type.elem_type(), &trt_dtype)

Any suggestions for this??
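Would patching the declared input dtype directly in the exported ONNX file be an acceptable workaround? Something like this with the onnx Python API is what I have in mind (the input index, the target dtype, and whether a Cast node after the input also needs adjusting are all just my assumptions):

import onnx

model = onnx.load("inception.onnx")
inp = model.graph.input[0]  # the uint8 image input that the parser rejects
print("input:", inp.name, "elem_type:", inp.type.tensor_type.elem_type)

# Rewrite the declared element type so the TensorRT ONNX parser can import it.
# If the graph has a Cast node right after the input, it may need to be
# adjusted or removed as well.
inp.type.tensor_type.elem_type = onnx.TensorProto.INT32  # or onnx.TensorProto.FLOAT
onnx.save(model, "inception_int32_input.onnx")
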
I am using a Jetson Nano with the specifications below.

sudo apt-cache show nvidia-jetpack
Package: nvidia-jetpack
Version: 4.3-b134
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194

Depends: nvidia-container-csv-cuda (= 10.0.326-1), libopencv-python (= 4.1.1-2-gd5a58aa75), libvisionworks-sfm-dev (= 0.90.4), libvisionworks-dev (= 1.6.0.500n), libvisionworks-samples (= 1.6.0.500n), libnvparsers6 (= 6.0.1-1+cuda10.0), libcudnn7-doc (= 7.6.3.28-1+cuda10.0), libcudnn7-dev (= 7.6.3.28-1+cuda10.0), libnvinfer-samples (= 6.0.1-1+cuda10.0), libnvinfer-bin (= 6.0.1-1+cuda10.0), nvidia-container-csv-cudnn (= 7.6.3.28-1+cuda10.0), libvisionworks-tracking-dev (= 0.88.2), vpi-samples (= 0.1.0), tensorrt (= 6.0.1.10-1+cuda10.0), libopencv (= 4.1.1-2-gd5a58aa75), libnvinfer-doc (= 6.0.1-1+cuda10.0), libnvparsers-dev (= 6.0.1-1+cuda10.0), libcudnn7 (= 7.6.3.28-1+cuda10.0), libnvidia-container0 (= 0.9.0~beta.1), cuda-toolkit-10-0 (= 10.0.326-1), nvidia-container-csv-visionworks (= 1.6.0.500n), graphsurgeon-tf (= 6.0.1-1+cuda10.0), libopencv-samples (= 4.1.1-2-gd5a58aa75), python-libnvinfer-dev (= 6.0.1-1+cuda10.0), libnvinfer-plugin-dev (= 6.0.1-1+cuda10.0), libnvinfer-plugin6 (= 6.0.1-1+cuda10.0), nvidia-container-toolkit (= 1.0.1-1), libnvinfer-dev (= 6.0.1-1+cuda10.0), libvisionworks (= 1.6.0.500n), libopencv-dev (= 4.1.1-2-gd5a58aa75), nvidia-l4t-jetson-multimedia-api (= 32.3.1-20191209225816), vpi-dev (= 0.1.0), vpi (= 0.1.0), python3-libnvinfer (= 6.0.1-1+cuda10.0), python3-libnvinfer-dev (= 6.0.1-1+cuda10.0), opencv-licenses (= 4.1.1-2-gd5a58aa75), nvidia-container-csv-tensorrt (= 6.0.1.10-1+cuda10.0), libnvinfer6 (= 6.0.1-1+cuda10.0), libnvonnxparsers-dev (= 6.0.1-1+cuda10.0), libnvonnxparsers6 (= 6.0.1-1+cuda10.0), uff-converter-tf (= 6.0.1-1+cuda10.0), nvidia-docker2 (= 2.2.0-1), libvisionworks-sfm (= 0.90.4), libnvidia-container-tools (= 0.9.0~beta.1), nvidia-container-runtime (= 3.1.0-1), python-libnvinfer (= 6.0.1-1+cuda10.0), libvisionworks-tracking (= 0.88.2)
Homepage: Autonomous Machines | NVIDIA Developer
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.3-b134_arm64.deb
Size: 29742
SHA256: 1fd73e258509822b928b274f61a413038a29c3705ee8eef351a914b9b1b060ce
SHA1: a7c4ab8b241ab1d2016d2c42f183c295e66d67fe
MD5sum: de856bb9607db87fd298faf7f7cc320f
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Hi @god_ra,
I tried the same and was able to re-produce the issue.
Our team will be working to resolve this issue.
Thank you for your patience.

Okay, let me know as soon as you have the solution.

Thank you

I am still waiting for your response.
@AakankshaS

Hi @god_ra,
The Engineering team is working on the issue.
We will keep you posted on the solution.
Thanks!

Hi @AakankshaS
It has been almost a month since I posted.

I want to use a TensorFlow model in TensorRT using the C++ sample code.
Standard Inception models work fine with the tf -> uff -> TensorRT flow (Python and C++).
But custom-trained models do not work with the C++ sample code.
They work well with the Python code.

At least solve the problem with the UFF conversion if the ONNX fix is getting delayed.

I am literally waiting every day for your solution on this.

I have posted several times, but there has been no response from NVIDIA or the TensorRT community.

Support is not as good as we think, I suppose.

Just solve my problem with tf -> uff -> TensorRT (C++) or tf -> onnx -> TensorRT (C++) for custom-trained SSD Inception v2 models.

@AakankshaS,

Any update regarding this??

I tried it with TensorRT 7 as well; the same problem persists.

Moving this to the Jetson Nano forum for resolution.

Hi,

I was just assigned to this topic today, so please let me know if I don’t understand your question well.

It looks like you are facing an issue when using the default standard SSD Inception v2 model with the TensorRT C++ interface.
Actually, we do have a sample for the SSD Inception v2 model, and it works well.

Have you tried it yet? The flow should be frozen pb -> uff -> TensorRT.

/usr/src/tensorrt/samples/sampleUffSSD
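For a custom-trained checkpoint, the frozen pb -> uff step is usually done with the convert-to-uff tool and the config.py preprocessing script that ships with the sample, for example (the output file name here is just an example, config.py may need the class count adjusted for a custom model, and exact flags depend on the uff version installed):

convert-to-uff frozen_inference_graph.pb -o sample_ssd.uff -O NMS -p config.py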

Thanks.

The standard model works fine. No problem.
The custom-trained model works with the Python version but not with the C++ version.
In C++ I get no detections, and no errors.

Thanks for the clarification.
It looks like you have shared the model in topic Custom trained SSD inception model in tensorRT c++ version - #3 by god_ra.

So let’s track the status there and close this issue.