Error converting ONNX model to TensorRT on Jetson NX

I tried to deploy an ONNX model exported from PyTorch using TensorRT.
On the PC everything worked fine.
On the Xavier NX I couldn't get past the optimization phase.
I got an exception when calling the buildEngineWithConfig API:
“Assertion Error in mergeDAG: 0 (d2Inputs.size() == n2.inputs.size())”
I also tried the onnx2trt tool from TensorRT OSS,
but it produced the same error:
Assertion Error in mergeDAG: 0 (d2Inputs.size() == n2.inputs.size())
(this time via the buildCudaEngine API)

What is the root cause of this issue?
I would be grateful for any help.

Best Regards
Shlomi Peer

Environment 1 - Good

Windows 10
Visual Studio 2019
CUDA Driver Version / Runtime Version - 10.2 / 10.2
CUDNN Version - 8.0.1
TensorRT - 7.2.1.6
GPU 1: NVIDIA Quadro M2000M
Display driver R451.77

Environment 2 - Error

Jetson XAVIER NX
Jetpack 4.4 DP [L4T 32.4.2]

Relevant Files

original_d2net.onnx

Steps To Reproduce

/usr/src/tensorrt/bin/trtexec --onnx=MyModel.onnx

Or

TensorRT OSS tool - run onnx2trt on the Jetson NX:
set the full path to the attached ONNX file (original_d2net.onnx, 29.1 MB) as param1,
and the full path for the exported serialized engine file as param2.

Hi,

There are some fixes between TensorRT 7.1.3 (JetPack 4.4.1) and TensorRT 7.1.0 (JetPack 4.4 DP).
Could you reflash your Xavier NX with our latest JetPack 4.4.1 and try again?

Thanks.

Hi Aasta,
thanks for your answer.

First, I figured out that the error was caused by the Sub operator.
I replaced it with Add on the negated input (a + (-b)) and it worked.
The model then optimized successfully and could also run inference.
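Roughly, the rewrite looks like this (a schematic sketch in plain Python over a toy node list, with no onnx dependency; the node and tensor names are made up for illustration, not taken from the actual model):

```python
# Schematic sketch of the workaround: rewrite each Sub(a, b) node as
# Neg(b) followed by Add(a, neg_b), which is numerically equivalent.
# Toy dict-based graph representation; names are illustrative only.

def rewrite_sub_as_add(nodes):
    """Replace every Sub node with a Neg + Add pair."""
    patched = []
    for i, node in enumerate(nodes):
        if node["op"] == "Sub":
            a, b = node["inputs"]
            neg_out = f"{node['output']}_neg{i}"
            patched.append({"op": "Neg", "inputs": [b], "output": neg_out})
            patched.append({"op": "Add", "inputs": [a, neg_out], "output": node["output"]})
        else:
            patched.append(node)
    return patched

graph = [
    {"op": "Sub", "inputs": ["x", "mean"], "output": "centered"},
    {"op": "Relu", "inputs": ["centered"], "output": "act"},
]
patched = rewrite_sub_as_add(graph)
# patched ops: ["Neg", "Add", "Relu"]
```

On the real model I applied the equivalent change at export time rather than on a toy structure like this.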

Second,
we had a problem reflashing our NX with JetPack 4.4.1 [using the latest SDK Manager].
We also couldn't reflash the TX2i & TX with JetPack 4.4.1,
only with JetPack 4.4 DP.

Shlomi

Hi,

1.
Good to know you have a workaround for the operation.

Based on the onnx2trt support matrix, the Sub operation is supported.
We guess the error may be caused by some variance in the layer definition.

2.
Are you using a custom carrier board?
If not, you should be able to reflash the device with a newer JetPack release.
You can file a separate topic for the flashing issue, so we can provide some help for you.

Thanks.

Hi,

  1. What do you mean by some variance in the layer definition?
    Such a variance works with the Add operator, so I can’t understand.
    Can you be more specific? An example might help.
  2. No, it is an NVIDIA carrier board (NVIDIA EVB).

Also:
We could upgrade to 4.4.1 using the Package Management tool method instead of SDK Manager:

https://docs.nvidia.com/jetson/jetpack/install-jetpack/index.html (section 1.3.2)
It works!
I will examine the model and update this topic soon.
Thanks!

Hi,

The v7.1 parser uses almost the same function for Add and Sub:
https://github.com/onnx/onnx-tensorrt/blob/7.1/builtin_op_importers.cpp

DEFINE_BUILTIN_OP_IMPORTER(Add)
{
    return elementwiseHelper(ctx, node, inputs, nvinfer1::ElementWiseOperation::kSUM);
}
DEFINE_BUILTIN_OP_IMPORTER(Sub)
{
    return elementwiseHelper(ctx, node, inputs, nvinfer1::ElementWiseOperation::kSUB);
}
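Both importers delegate to the same elementwise helper, and your Add-with-negation workaround is numerically equivalent in any case, since a - b == a + (-b) per element. A quick plain-Python check (illustrative values only, no TensorRT dependency):

```python
# Quick check that Sub and Add-with-negated-input agree elementwise.
# Illustrative values only; no TensorRT dependency.

def elementwise(op, xs, ys):
    return [op(x, y) for x, y in zip(xs, ys)]

a = [1.0, 2.5, -3.0]
b = [0.5, -1.5, 4.0]

sub_out = elementwise(lambda x, y: x - y, a, b)        # mimics kSUB
addneg_out = elementwise(lambda x, y: x + (-y), a, b)  # mimics kSUM on negated input

assert sub_out == addneg_out  # [0.5, 4.0, -7.0]
```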

So the parsing should work on TensorRT 7.1.
Please let us know the result for JetPack 4.4.1.

Thanks.