I am trying to convert the ONNX SSD MobileNet v2 model into a TensorRT engine and am getting the error below

Hi team,

I converted the TF SSD MobileNet v2 frozen graph into an ONNX model on the Jetson Xavier. That step works well, but when I try to convert the ONNX model into a TensorRT engine, I get the error below.


Building an engine.  This would take a while...
(Use "-v" or "--verbose" to enable verbose logging.)
[TensorRT] WARNING: onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[TensorRT] WARNING: onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[TensorRT] WARNING: onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
ERROR: Failed to parse the ONNX file.
In node -1 (importResize): UNSUPPORTED_NODE: Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
ERROR: failed to build the TensorRT engine!

I used this script for conversion.
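
For anyone hitting the same thing, a typical tf2onnx command for a TF Object Detection API frozen graph looks roughly like the one below; the input/output tensor names are the standard Object Detection API ones and are only an assumption here, since my conversion script may have used different ones:

python3 -m tf2onnx.convert --graphdef frozen_inference_graph.pb --output outputmodel.onnx --inputs image_tensor:0 --outputs detection_boxes:0,detection_scores:0,detection_classes:0,num_detections:0 --opset 10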

Hi,

Which TensorRT/JetPack version are you using?
Also, since we support ONNX models natively, could you try trtexec to see if it works?

/usr/src/tensorrt/bin/trtexec --onnx=[model]
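
If parsing succeeds, trtexec builds and times the engine in memory by default; to keep the engine, you can also pass --saveEngine with an output path, for example:

/usr/src/tensorrt/bin/trtexec --onnx=[model] --saveEngine=[engine output path]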

Thanks.

Hi

Package: nvidia-jetpack
Version: 4.4.1-b50
Architecture: arm64
Maintainer: NVIDIA Corporation
Installed-Size: 194
Depends: nvidia-cuda (= 4.4.1-b50), nvidia-opencv (= 4.4.1-b50), nvidia-cudnn8 (= 4.4.1-b50), nvidia-tensorrt (= 4.4.1-b50), nvidia-visionworks (= 4.4.1-b50), nvidia-container (= 4.4.1-b50), nvidia-vpi (= 4.4.1-b50), nvidia-l4t-jetson-multimedia-api (>> 32.4-0), nvidia-l4t-jetson-multimedia-api (<< 32.5-0)
Homepage: Autonomous Machines | NVIDIA Developer
Priority: standard
Section: metapackages
Filename: pool/main/n/nvidia-jetpack/nvidia-jetpack_4.4.1-b50_arm64.deb
Size: 29412
SHA256: ec502e1e3672c059d8dd49e5673c5b2d8c606584d4173ee514bbc4376547a171
SHA1: 75a405f1ad533bfcd04280d1f9b237b880c39be5
MD5sum: 1267b31d8b8419d9847b0ec4961b15a4
Description: NVIDIA Jetpack Meta Package
Description-md5: ad1462289bdbc54909ae109d1d32c0a8

Thanks.

Could you tell me exactly how to call trtexec, and where the output engine will be saved?

Hi,

When I used trtexec, I got the error below.

./trtexec --onnx=/home/rachel2/Desktop/TRT_object_detection-master/1.1.0/outputmodelNewDataType.onnx --saveEngine=/home/rachel2/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen.engine
&&&& RUNNING TensorRT.trtexec # ./trtexec --onnx=/home/rachel2/Desktop/TRT_object_detection-master/1.1.0/outputmodelNewDataType.onnx --saveEngine=/home/rachel2/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen.engine
[10/22/2021-17:24:09] [I] === Model Options ===
[10/22/2021-17:24:09] [I] Format: ONNX
[10/22/2021-17:24:09] [I] Model: /home/rachel2/Desktop/TRT_object_detection-master/1.1.0/outputmodelNewDataType.onnx
[10/22/2021-17:24:09] [I] Output:
[10/22/2021-17:24:09] [I] === Build Options ===
[10/22/2021-17:24:09] [I] Max batch: 1
[10/22/2021-17:24:09] [I] Workspace: 16 MB
[10/22/2021-17:24:09] [I] minTiming: 1
[10/22/2021-17:24:09] [I] avgTiming: 8
[10/22/2021-17:24:09] [I] Precision: FP32
[10/22/2021-17:24:09] [I] Calibration: 
[10/22/2021-17:24:09] [I] Safe mode: Disabled
[10/22/2021-17:24:09] [I] Save engine: /home/rachel2/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen.engine
[10/22/2021-17:24:09] [I] Load engine: 
[10/22/2021-17:24:09] [I] Builder Cache: Enabled
[10/22/2021-17:24:09] [I] NVTX verbosity: 0
[10/22/2021-17:24:09] [I] Inputs format: fp32:CHW
[10/22/2021-17:24:09] [I] Outputs format: fp32:CHW
[10/22/2021-17:24:09] [I] Input build shapes: model
[10/22/2021-17:24:09] [I] Input calibration shapes: model
[10/22/2021-17:24:09] [I] === System Options ===
[10/22/2021-17:24:09] [I] Device: 0
[10/22/2021-17:24:09] [I] DLACore: 
[10/22/2021-17:24:09] [I] Plugins:
[10/22/2021-17:24:09] [I] === Inference Options ===
[10/22/2021-17:24:09] [I] Batch: 1
[10/22/2021-17:24:09] [I] Input inference shapes: model
[10/22/2021-17:24:09] [I] Iterations: 10
[10/22/2021-17:24:09] [I] Duration: 3s (+ 200ms warm up)
[10/22/2021-17:24:09] [I] Sleep time: 0ms
[10/22/2021-17:24:09] [I] Streams: 1
[10/22/2021-17:24:09] [I] ExposeDMA: Disabled
[10/22/2021-17:24:09] [I] Spin-wait: Disabled
[10/22/2021-17:24:09] [I] Multithreading: Disabled
[10/22/2021-17:24:09] [I] CUDA Graph: Disabled
[10/22/2021-17:24:09] [I] Skip inference: Disabled
[10/22/2021-17:24:09] [I] Inputs:
[10/22/2021-17:24:09] [I] === Reporting Options ===
[10/22/2021-17:24:09] [I] Verbose: Disabled
[10/22/2021-17:24:09] [I] Averages: 10 inferences
[10/22/2021-17:24:09] [I] Percentile: 99
[10/22/2021-17:24:09] [I] Dump output: Disabled
[10/22/2021-17:24:09] [I] Profile: Disabled
[10/22/2021-17:24:09] [I] Export timing to JSON file: 
[10/22/2021-17:24:09] [I] Export output to JSON file: 
[10/22/2021-17:24:09] [I] Export profile to JSON file: 
[10/22/2021-17:24:09] [I] 
----------------------------------------------------------------
Input filename:   /home/rachel2/Desktop/TRT_object_detection-master/1.1.0/outputmodelNewDataType.onnx
ONNX IR version:  0.0.5
Opset version:    10
Producer name:    
Producer version: 
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[10/22/2021-17:24:11] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[10/22/2021-17:24:11] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
[10/22/2021-17:24:11] [W] [TRT] onnx2trt_utils.cpp:246: One or more weights outside the range of INT32 was clamped
ERROR: builtin_op_importers.cpp:2549 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
[10/22/2021-17:24:11] [E] Failed to parse onnx file
[10/22/2021-17:24:11] [E] Parsing model failed
[10/22/2021-17:24:11] [E] Engine creation failed
[10/22/2021-17:24:11] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # ./trtexec --onnx=/home/rachel2/Desktop/TRT_object_detection-master/1.1.0/outputmodelNewDataType.onnx --saveEngine=/home/rachel2/Desktop/TRT_object_detection-master/1.1.0/MODEL_frozen.engine

I also tried converting the TF model to UFF. I am able to convert the TF frozen graph into UFF, but the UFF-to-TensorRT conversion (roughly the flow sketched below) is not working.
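
For context, the UFF-to-TensorRT step I am attempting follows roughly the legacy TensorRT 7 Python flow sketched here (the registered input/output node names are placeholders taken from NVIDIA's SSD sample, not necessarily what my graph uses):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Legacy implicit-batch UFF path (TensorRT 7.x); the UFF parser is deprecated in newer releases.
with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.UffParser() as parser:
    builder.max_workspace_size = 1 << 28   # 256 MB of builder scratch space
    builder.max_batch_size = 1
    # Placeholder node names from the SSD sample graph; adjust to the actual graph.
    parser.register_input("Input", (3, 300, 300))
    parser.register_output("MarkOutput_0")
    if not parser.parse("ssd_mobilenet.uff", network):
        raise RuntimeError("UFF parsing failed")
    engine = builder.build_cuda_engine(network)  # returns None if the build fails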

I saw somewhere on the NVIDIA forum that UFF-to-TensorRT conversion won't work on the Jetson Xavier.

Hi,

ERROR: builtin_op_importers.cpp:2549 In function importResize:
[8] Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"

The error is related to an unsupported layer.
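
Concretely, the assertion means the Resize node's scales input is computed by upstream nodes at runtime instead of being stored as a constant initializer, which this parser version cannot handle. A minimal check with the onnx Python package (the model path below is just an example) would be:

import onnx

model = onnx.load("outputmodelNewDataType.onnx")
graph = model.graph

# Names the parser can treat as constants: initializers and Constant node outputs.
const_names = {init.name for init in graph.initializer}
const_names.update(out for node in graph.node if node.op_type == "Constant" for out in node.output)

for node in graph.node:
    if node.op_type == "Resize":
        # Opset 10 Resize inputs: (X, scales); opset 11+: (X, roi, scales, sizes)
        scales_input = node.input[1] if len(node.input) == 2 else node.input[2]
        status = "constant" if scales_input in const_names else "computed at runtime"
        print(node.name or "<unnamed Resize>", "- scales:", status)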

Would you mind upgrading your device to JetPack 4.6?
Since we keep adding new operator support to TensorRT, it is best to try the latest release first.
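
After the upgrade, you can confirm the installed JetPack and TensorRT versions with, for example:

apt-cache show nvidia-jetpack
dpkg -l | grep -i tensorrt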

Thanks.

I upgraded JetPack to 4.6, but it is still not working. Also, I mentioned the wrong SSD MobileNet version earlier: I am actually trying to convert SSD MobileNet v3 into a TensorRT engine.
