DRIVE AGX: ONNX parsing error. Unsupported operation

Please provide the following info (check/uncheck the boxes after clicking “+ Create Topic”):
Software Version
[*] DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
[*] Linux
QNX
other

Hardware Platform
[*] NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
[*] 1.5.0.7774
other

Host Machine Version
[*] native Ubuntu 18.04
other

Dear support team!

Within the scope of my project, I've been developing with an ONNX YOLOv4 model using the TensorRT-7.2.3.4 package on the host machine.
No issues were observed while parsing the ONNX model or during the subsequent steps.
The C++ code was compiled for aarch64 and launched on the target DRIVE AGX platform.
As a result, I got the following error:

[06/01/2021-18:36:05] [W] [TRT] ModelImporter.cpp:140: No importer registered for op: Equal. Attempting to import as plugin.
[06/01/2021-18:36:05] [I] [TRT] builtin_op_importers.cpp:2191: Searching for plugin: Equal, plugin_version: 1, plugin_namespace:
While parsing node number 375 [Equal]:
ERROR: builtin_op_importers.cpp:2193 In function importFallbackPluginImporter:
[8] Assertion failed: creator && “Plugin not found”
FailEqualOpAgx.log (10.9 KB)

If I understand correctly, the TensorRT 6.3.1 package on the target platform doesn't support the Equal op.
In other words, the ONNX parsing library on the target platform can't support this operation:
/usr/lib/aarch64-linux-gnu/libnvonnxparser.so.6.3.1
On the other hand, libnvonnxparser.so.7.2.1 already has this support, which would explain the difference in results.
Please correct me if I'm wrong.
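For reference, this is the minimal standalone check I would run on both machines to show which TensorRT library is actually linked and which nodes the parser rejects. It is only a sketch using the standard nvinfer/nvonnxparser C++ API; the model path is passed as an argument:

#include <iostream>
#include <NvInfer.h>
#include <NvOnnxParser.h>

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
};

int main(int argc, char** argv)
{
    if (argc != 2)
        return 1;

    Logger logger;
    // Report which TensorRT library is actually loaded (e.g. 6301 vs 7231).
    std::cout << "Linked TensorRT version: " << getInferLibVersion() << std::endl;

    auto builder = nvinfer1::createInferBuilder(logger);
    auto network = builder->createNetworkV2(
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH));
    auto parser = nvonnxparser::createParser(*network, logger);

    // parseFromFile logs every node the parser cannot import.
    bool ok = parser->parseFromFile(argv[1], static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));
    if (!ok)
    {
        for (int i = 0; i < parser->getNbErrors(); ++i)
            std::cerr << "Parser error: " << parser->getError(i)->desc() << std::endl;
    }
    else
    {
        std::cout << "Model parsed successfully" << std::endl;
    }

    parser->destroy();
    network->destroy();
    builder->destroy();
    return ok ? 0 : 1;
}

Building it with something like g++ -std=c++11 check.cpp -lnvinfer -lnvonnxparser (library names as shipped in /usr/lib/aarch64-linux-gnu) and running it against the same .onnx file on host and target shows the version gap directly.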

Could you please help me understand what the fastest and most correct solution would be without writing a custom plugin:

  1. DRIVE OS Linux 5.2.0 has TensorRT 6.3.1; does NVIDIA have any plans to provide the latest TensorRT package in the future?
  2. Is it possible to compile libnvonnxparser.so.7.2.1 locally on the host for aarch64, or directly on the target platform, and replace it?

I appreciate your help in any case.

Dear @anton.nesterenko,
Yes. The next DRIVE release will have the next TensorRT release. As a WAR (workaround), you may try getting the TRT 7.x libs and other needed libs from the Jetson platform and using them on DRIVE. But this is not officially supported and you may notice issues.

On the DRIVE platform, with the TRT 6.3 libs, you need to write a custom plugin to get it to work.
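For context: the ONNX parser's fallback importer looks the op up by type name in the global plugin registry (that is the "Searching for plugin: Equal, plugin_version: 1" line in your log). Once a custom plugin creator named "Equal" is registered, that same lookup succeeds. A quick sanity check, as a sketch:

#include <iostream>
#include <NvInfer.h>

int main()
{
    // Returns nullptr when no creator named "Equal" (version "1") is registered,
    // which is exactly the case the parser error reports.
    auto* creator = getPluginRegistry()->getPluginCreator("Equal", "1", "");
    std::cout << (creator ? "Equal plugin found" : "Equal plugin NOT found") << std::endl;
    return 0;
}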

Dear @SivaRamaKrishnaNV,

I tried replacing the TRT libs on DRIVE with the Jetson versions, but ran into a lot of conflicts/dependencies…
So I decided to follow the scenario below.

The main idea is to port the operation's code from the latest branch to TRT 6.3 without writing a custom plugin.

  1. I found the code for the missing operation in the master branch:
    GitHub - NVIDIA/TensorRT: TensorRT is a C++ library for high performance inference on NVIDIA GPUs and deep learning accelerators.
    onnx-tensorrt/builtin_op_importers.cpp at a80015b29668722bc4d8e226f64c3fe8edf60179 · onnx/onnx-tensorrt · GitHub
    onnx-tensorrt/onnx2trt_utils.cpp at a80015b29668722bc4d8e226f64c3fe8edf60179 · onnx/onnx-tensorrt · GitHub
  2. The AGX platform has TRT v6.3, so I cloned the release/6.0 source code onto the AGX:
    GitHub - NVIDIA/TensorRT at release/6.0
  3. Compiled directly on the AGX for aarch64:
    make nvonnxparser
    Result: libnvonnxparser.so.6.0.1
    Copied it to /usr/lib/aarch64-linux-gnu/ and updated the symlink so this libnvonnxparser is used.
  4. Ran my project on the AGX and got a new error.
    Please see the attached log file:
    OnnxIRerror.txt (1.8 KB)

--- End node ---
ERROR: /home/nvidia/TensoRT_C++/REMOVE/TensorRT/parsers/onnx/builtin_op_importers.cpp:757 In function importConv:
[8] Assertion failed: (nbSpatialDims == 2 && kernel_weights.shape.nbDims == 4) || (nbSpatialDims == 3 && kernel_weights.shape.nbDims == 5)
nvidia@tegra-ubuntu:~/TensoRT_C++/ForAGX/Aarch64/bin$

Looks like the release/6.0 code doesn't support the current model: the importConv assertion above expects 4D kernel weights for a 2D convolution (or 5D for a 3D one).
  5. The next step would be to port the code for this op as well, but it doesn't make sense.

  1. Could you please comment on my actions: is this a correct approach?
  2. Could you please provide me a link to a repo for building the TRT 6.3 lib, the same version that is preinstalled on the AGX?
  3. If I have to write a custom plugin for the AGX board, could you please give me detailed directions on how to do it?

Thank you for your support,

Dear @anton.nesterenko,
Could you please provide me a link to a repo for building the TRT 6.3 lib, the same version that is preinstalled on the AGX?

The TensorRT package on DRIVE AGX comes along with DRIVE OS. There is no separate repo/package.

You need to implement custom plugins for the unsupported layers. Please check Developer Guide :: NVIDIA Deep Learning TensorRT Documentation for more details.
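To give you a starting point, below is a minimal sketch of the boilerplate the fallback importer needs on TRT 6.x: an IPluginV2Ext implementation whose type name matches the ONNX op ("Equal"), plus an IPluginCreator registered via REGISTER_TENSORRT_PLUGIN. This is an illustration, not a validated plugin: launchEqualKernel is a placeholder for a CUDA kernel you would implement yourself, and the sketch assumes float inputs with the result encoded as 0.0/1.0 (TRT 6.3 has no bool tensors):

#include <cstring>
#include <string>
#include <cuda_runtime_api.h>
#include <NvInfer.h>

using namespace nvinfer1;

// Placeholder: element-wise comparison kernel, implemented in a separate .cu file.
void launchEqualKernel(const float* a, const float* b, float* out, int count, cudaStream_t stream);

class EqualPlugin : public IPluginV2Ext
{
public:
    EqualPlugin() = default;
    EqualPlugin(const void* data, size_t) { std::memcpy(&mCount, data, sizeof(mCount)); }

    // The type name must match the ONNX op so the parser's fallback importer finds it.
    const char* getPluginType() const override { return "Equal"; }
    const char* getPluginVersion() const override { return "1"; }
    int getNbOutputs() const override { return 1; }
    Dims getOutputDimensions(int, const Dims* inputs, int) override { return inputs[0]; }
    bool supportsFormat(DataType type, PluginFormat format) const override
    {
        return type == DataType::kFLOAT && format == PluginFormat::kNCHW;
    }
    DataType getOutputDataType(int, const DataType*, int) const override { return DataType::kFLOAT; }
    bool isOutputBroadcastAcrossBatch(int, const bool*, int) const override { return false; }
    bool canBroadcastInputAcrossBatch(int) const override { return false; }

    void configurePlugin(const Dims* inputDims, int, const Dims*, int, const DataType*,
                         const DataType*, const bool*, const bool*, PluginFormat, int) override
    {
        mCount = 1; // elements per batch item
        for (int d = 0; d < inputDims[0].nbDims; ++d) mCount *= inputDims[0].d[d];
    }

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs, void*, cudaStream_t stream) override
    {
        launchEqualKernel(static_cast<const float*>(inputs[0]), static_cast<const float*>(inputs[1]),
                          static_cast<float*>(outputs[0]), batchSize * mCount, stream);
        return 0;
    }

    size_t getSerializationSize() const override { return sizeof(mCount); }
    void serialize(void* buffer) const override { std::memcpy(buffer, &mCount, sizeof(mCount)); }
    void destroy() override { delete this; }
    IPluginV2Ext* clone() const override { return new EqualPlugin(*this); }
    void setPluginNamespace(const char* ns) override { mNamespace = ns; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    int mCount{0};
    std::string mNamespace;
};

class EqualPluginCreator : public IPluginCreator
{
public:
    const char* getPluginName() const override { return "Equal"; }
    const char* getPluginVersion() const override { return "1"; }
    const PluginFieldCollection* getFieldNames() override { return &mFC; }
    IPluginV2* createPlugin(const char*, const PluginFieldCollection*) override { return new EqualPlugin(); }
    IPluginV2* deserializePlugin(const char*, const void* data, size_t length) override
    {
        return new EqualPlugin(data, length);
    }
    void setPluginNamespace(const char* ns) override { mNamespace = ns; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    PluginFieldCollection mFC{};
    std::string mNamespace;
};

// Static registration: after this, the parser's "Searching for plugin: Equal" lookup succeeds.
REGISTER_TENSORRT_PLUGIN(EqualPluginCreator);

The creator needs to be compiled into your application (or into a shared library loaded before parsing); an op that carries ONNX attributes would additionally expose them through the PluginFieldCollection.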

Dear @SivaRamaKrishnaNV,

I understand your point and will develop a custom plugin.

Could you please tell me (approximately) when the next DRIVE OS release will be, and can we expect that a future release will contain the TRT 7 libs?

Thanks for the support.

Dear @anton.nesterenko,
The next DRIVE OS release is scheduled for this month, but it does not have the TRT 7 libs.