Error converting ONNX model to TensorRT on Jetson NX

original_d2net.onnx (29.1 MB)

I am trying to deploy an ONNX model exported from PyTorch using TensorRT.
On a PC everything worked well.
On the Xavier NX I cannot get past the optimization phase.
Calling the buildEngineWithConfig API throws this exception:
“Assertion Error in mergeDAG: 0 (d2Inputs.size() == n2.inputs.size())”
I have also tried the onnx2trt tool from TensorRT OSS (which uses the buildCudaEngine API),
but it fails with the same error:
Assertion Error in mergeDAG: 0 (d2Inputs.size() == n2.inputs.size())
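For context, a minimal sketch of the build path that fails — parse the ONNX file and call buildEngineWithConfig. This assumes the TensorRT 7 C++ API shipped with JetPack 4.4; the file name is the attached model, and the workspace size is an arbitrary placeholder:

```cpp
#include "NvInfer.h"
#include "NvOnnxParser.h"
#include <cstdint>
#include <iostream>

// Minimal logger required by the TensorRT API.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main() {
    Logger logger;
    auto* builder = nvinfer1::createInferBuilder(logger);

    // The ONNX parser requires an explicit-batch network in TensorRT 7.
    const auto flags = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto* network = builder->createNetworkV2(flags);
    auto* parser = nvonnxparser::createParser(*network, logger);

    if (!parser->parseFromFile("original_d2net.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
        std::cerr << "Failed to parse the ONNX model" << std::endl;
        return 1;
    }

    auto* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);  // 256 MB, placeholder value

    // On the Xavier NX this call raises the mergeDAG assertion.
    auto* engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine) {
        std::cerr << "Engine build failed" << std::endl;
        return 1;
    }
    return 0;
}
```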

What is the root cause of this issue?
I would be grateful for any help.

Best Regards
Shlomi Peer

Environment 1 - Good

Visual studio 2019
CUDA Driver Version / Runtime Version - 10.2 / 10.2
CUDNN Version - 8.0.1
TensorRT -
GPU 1: NVIDIA Quadro M2000M
Display driver R451.77

Environment 2 - Error

Jetpack 4.4 DP [L4T 32.4.2]

Relevant Files


Steps To Reproduce

TensorRT OSS tool - run onnx2trt on the Jetson NX:
pass the full path of the attached ONNX file as param1,
and the full path for the exported serialized engine file as param2.
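The steps above correspond to an invocation like the following (the /path/to/ locations are placeholders for the actual file paths). The same conversion can also be attempted with trtexec, which ships with JetPack and exercises the same build path:

```shell
# Parse the attached ONNX model and serialize a TensorRT engine.
onnx2trt /path/to/original_d2net.onnx -o /path/to/d2net.trt

# Equivalent attempt with trtexec (bundled with JetPack):
/usr/src/tensorrt/bin/trtexec --onnx=/path/to/original_d2net.onnx \
                              --saveEngine=/path/to/d2net.engine
```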

This topic was moved to
Autonomous Machines => Jetson & Embedded Systems => Jetson Xavier NX

Hi @Shalomshlomi.Peer,
This looks like a Jetson Xavier issue.
We recommend raising it on the platform below.