Converting an ONNX model to a TRT engine with onnx2trt on AGX | JetPack v4.4

Dear all,

Recently I upgraded JetPack from v4.3 to v4.4 on my AGX (actually I flashed the OS from scratch), and I tried to reproduce my project on the new version.

I had previously succeeded in converting this model from ONNX to a TensorRT engine on AGX (JetPack v4.3), TX2 (JetPack v4.3), and Linux (Ubuntu 18.04 with TRT 7.0) using the onnx2trt tool.

However, on JetPack v4.4 the same model cannot be converted to a TRT engine, and I get the error below. I don't know why.

$ onnx2trt model.onnx -o model.trt -b 1                         

----------------------------------------------------------------
Input filename:   model.onnx
ONNX IR version:  0.0.4
Opset version:    10
Producer name:    pytorch
Producer version: 1.3
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
Parsing model
[2020-06-11 01:15:19 WARNING] /home/nvidia/ssd256/github/onnx-tensorrt/onnx2trt_utils.cpp:235: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Building TensorRT engine, FP16 available:1
    Max batch size:     1
    Max workspace size: 1024 MiB
[2020-06-11 01:15:24   ERROR] ../builder/cudnnBuilderGraphOptimizer.cpp (3118) - Assertion Error in mergeDAG: 0 (d1Inputs.size() == n1.inputs.size())
terminate called after throwing an instance of 'std::runtime_error'
  what():  Failed to create object
[1]    10236 abort (core dumped)  onnx2trt model.onnx -o model.trt -b 1 

I have not encountered this error before.

Besides the above, I noticed something odd: my AGX has PyTorch v1.5.0 installed, so when I convert other models with onnx2trt on this device, they show Producer version: 1.5. For this model, however, it shows Producer version: 1.3. (This ONNX model was not exported on the AGX; I copied it from my PC.)

Has anyone else run into this error, or does anyone have an idea what causes it?

Thank you.

Best regards,
Chieh


Environment

TensorRT Version : 7.1 with Jetpack 4.4
GPU Type : (Jetson AGX Xavier)
Nvidia Driver Version : Jetpack 4.4
CUDA Version : 10.2
CUDNN Version : 8.0
Operating System + Version : Jetpack 4.4 (Customized Ubuntu 18.04)
Python Version (if applicable) : 3.6
PyTorch Version (if applicable) : 1.5.0
cmake version : 3.13.0
opencv version : 4.1.1
onnx version : 1.7.0

Hi,

To figure out whether this issue comes from onnx2trt or from TensorRT itself, could you try this command:

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx

If the error persists, please share your ONNX model via private message.
Thanks.
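For more detail, a couple of trtexec variants are sometimes useful when narrowing down a builder assertion like this one (flags as in trtexec from TensorRT 7.x; `model.onnx` is a placeholder path):

```shell
# Same check with verbose logging, so the last layer processed before
# the mergeDAG assertion is visible in the output.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --verbose

# Optionally also vary builder settings (FP16, workspace size in MB)
# to see whether the failure depends on them.
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --fp16 --workspace=2048
```

If trtexec fails at the same point, the problem is in the TensorRT builder rather than in the onnx2trt front end.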

I also encountered a similar problem. What is the solution?

The command I executed
/usr/src/tensorrt/bin/trtexec --onnx=./model/test.onnx

The summary:

Input filename: ./model/test.onnx
ONNX IR version: 0.0.6
Opset version: 10
Producer name: pytorch
Producer version: 1.6
Domain:
Model version: 0
Doc string:

The error message:

../builder/cudnnBuilderGraphOptimizer.cpp (3121) - Assertion Error in mergeDAG: 0 (d1Inputs.size() == n1.inputs.size())

My environment:

TensorRT Version : 7.1 with Jetpack 4.4
GPU Type : Jetson Nano
Nvidia Driver Version : Jetpack 4.4
CUDA Version : 10.2
CUDNN Version : 8.0
Operating System + Version : Jetpack 4.4
Python Version (if applicable) : 3.6
PyTorch Version (if applicable) : 1.6.0

Hi sysu.zeh,

Please open a new topic for your issue. Thanks.