Software Version: DRIVE OS Linux 5.2.0
In the scope of my project, I used an ONNX YOLOv4 model for development with the TensorRT 7.2.3.4 package on the host machine.
No issues were observed while parsing the ONNX model or during the further steps.
The C++ code was then compiled for aarch64 and launched on the target DRIVE AGX platform.
As a result, I got this error:
[06/01/2021-18:36:05] [W] [TRT] ModelImporter.cpp:140: No importer registered for op: Equal. Attempting to import as plugin.
[06/01/2021-18:36:05] [I] [TRT] builtin_op_importers.cpp:2191: Searching for plugin: Equal, plugin_version: 1, plugin_namespace:
While parsing node number 375 [Equal]:
ERROR: builtin_op_importers.cpp:2193 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found"
Full log attached: FailEqualOpAgx.log (10.9 KB)
If I understand correctly, the TensorRT 6.3.1 package on the target platform doesn't support the Equal op.
In other words, the ONNX parser library on the target platform cannot handle this operation:
/usr/lib/aarch64-linux-gnu/libnvonnxparser.so.6.3.1
On the other hand, libnvonnxparser.so.7.2.1 already supports it, which would explain the difference in results.
Please correct me if I’m wrong.
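For reference, here is a minimal sketch of the parsing step where the failure occurs, assuming the standard TensorRT 6/7 C++ API; the model filename and the logger are placeholders:

```cpp
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <iostream>

// Minimal logger forwarding TensorRT messages to stderr.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cerr << msg << std::endl;
    }
} gLogger;

int main()
{
    auto* builder = nvinfer1::createInferBuilder(gLogger);
    const auto explicitBatch =
        1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
    auto* network = builder->createNetworkV2(explicitBatch);
    auto* parser = nvonnxparser::createParser(*network, gLogger);

    // "yolov4.onnx" is a placeholder for the actual model file. With
    // libnvonnxparser.so.6.3.1 this is where the Equal op fails; with
    // 7.2.x the same call succeeds.
    if (!parser->parseFromFile("yolov4.onnx",
            static_cast<int>(nvinfer1::ILogger::Severity::kVERBOSE)))
    {
        for (int i = 0; i < parser->getNbErrors(); ++i)
            std::cerr << parser->getError(i)->desc() << std::endl;
        return 1;
    }
    return 0;
}
```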
Could you please help me understand the fastest correct solution that avoids writing a custom plugin:
DRIVE OS Linux 5.2.0 ships with TensorRT 6.3.1; does NVIDIA have any plans to provide the latest TensorRT package in the future?
Is it possible to compile libnvonnxparser.so.7.2.1 for aarch64 locally on the host, or directly on the target platform, and replace the preinstalled one?
Dear @anton.nesterenko,
Yes, the next DRIVE release will include the next TensorRT release. As a workaround (WAR), you may try taking the TRT 7.x libs and the other needed libs from a Jetson platform and using them on DRIVE, but this is not officially supported and you may notice issues.
On the DRIVE platform, with the TRT 6.3 libs, you need to write a custom plugin to get it to work.
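As a quick sanity check before writing one, you can ask the plugin registry whether a creator for the op exists at all; this is the same name/version lookup the parser's fallback importer performs ("Searching for plugin: Equal, plugin_version: 1" in the log above). A minimal sketch, assuming the TRT 6/7 C++ API:

```cpp
#include <NvInfer.h>
#include <NvInferPlugin.h>
#include <iostream>

// Minimal logger; only needed to initialize the built-in plugin library.
class Logger : public nvinfer1::ILogger
{
    void log(Severity, const char* msg) override { std::cerr << msg << std::endl; }
} gLogger;

int main()
{
    // Register TensorRT's built-in plugins, then query the registry with the
    // same name/version the ONNX parser's fallback importer uses.
    initLibNvInferPlugins(&gLogger, "");
    auto* creator = getPluginRegistry()->getPluginCreator("Equal", "1");
    std::cout << (creator ? "Equal plugin registered"
                          : "No Equal plugin registered; parsing will fail on this op")
              << std::endl;
    return creator ? 0 : 1;
}
```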
1. Performed the compilation directly on the AGX for aarch64:
make nvonnxparser
2. Result: libnvonnxparser.so.6.0.1
3. Copied it to /usr/lib/aarch64-linux-gnu/ and updated the symlink to use this libnvonnxparser.
4. Ran my project on the AGX and got a new error; please see the attached log file: OnnxIRerror.txt (1.8 KB)
— End node —
ERROR: /home/nvidia/TensoRT_C++/REMOVE/TensorRT/parsers/onnx/builtin_op_importers.cpp:757 In function importConv:
[8] Assertion failed: (nbSpatialDims == 2 && kernel_weights.shape.nbDims == 4) || (nbSpatialDims == 3 && kernel_weights.shape.nbDims == 5)
nvidia@tegra-ubuntu:~/TensoRT_C++/ForAGX/Aarch64/bin$
It looks like the 6.0.1 code doesn't support the current model.
5. The next step would be to add code for , but that doesn't make sense.
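For context, the failed assertion above is the parser's dimensionality check on the convolution weights; paraphrased (this mirrors the assert text, not the exact onnx-tensorrt source), it amounts to:

```cpp
#include <NvInfer.h>

// Paraphrase of the importConv check: a 2-D convolution needs 4-D kernel
// weights (K, C, H, W), and a 3-D convolution needs 5-D weights.
bool kernelDimsOk(int nbSpatialDims, const nvinfer1::Dims& kernelWeightsShape)
{
    return (nbSpatialDims == 2 && kernelWeightsShape.nbDims == 4)
        || (nbSpatialDims == 3 && kernelWeightsShape.nbDims == 5);
}
```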
Could you please review my actions; is this a correct approach?
Could you please provide a link to the repo for building the TRT 6.3 lib, the same version as the one preinstalled on the AGX?
If I have to write a custom plugin for the AGX board, could you please give me detailed directions on how to do it?
To add this plugin, I followed some examples and overrode the necessary methods of IPluginV2DynamicExt.
JFYI, the plugin crashed when built on the IPluginV2Ext class instead.
I didn't develop the full source code, only enough to get past the plugin error.
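For comparison, here is a minimal sketch of what such a plugin can look like against the TRT 6 API, compiled with nvcc. It assumes float inputs and reports an INT32 output, because TRT 6 has no kBOOL data type (kBOOL only arrived with TRT 7), so a supported type has to stand in for the ONNX BOOL result. EqualPlugin, EqualPluginCreator, and the kernel are hypothetical illustrations, not the actual code used here:

```cpp
#include <NvInfer.h>
#include <cuda_runtime.h>
#include <string>

// Hypothetical element-wise comparison kernel (assumes float inputs).
__global__ void equalKernel(const float* a, const float* b, int32_t* out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = (a[i] == b[i]) ? 1 : 0;
}

class EqualPlugin : public nvinfer1::IPluginV2DynamicExt
{
public:
    // --- IPluginV2DynamicExt ---
    nvinfer1::IPluginV2DynamicExt* clone() const override { return new EqualPlugin(); }
    nvinfer1::DimsExprs getOutputDimensions(int, const nvinfer1::DimsExprs* inputs,
        int, nvinfer1::IExprBuilder&) override
    {
        return inputs[0]; // output shape follows input 0 (broadcasting not handled)
    }
    bool supportsFormatCombination(int pos, const nvinfer1::PluginTensorDesc* inOut,
        int, int) override
    {
        // Two FLOAT inputs, one INT32 output, all in linear format.
        const auto type = (pos < 2) ? nvinfer1::DataType::kFLOAT : nvinfer1::DataType::kINT32;
        return inOut[pos].type == type && inOut[pos].format == nvinfer1::TensorFormat::kLINEAR;
    }
    void configurePlugin(const nvinfer1::DynamicPluginTensorDesc*, int,
        const nvinfer1::DynamicPluginTensorDesc*, int) override {}
    size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc*, int,
        const nvinfer1::PluginTensorDesc*, int) const override { return 0; }
    int enqueue(const nvinfer1::PluginTensorDesc* inputDesc, const nvinfer1::PluginTensorDesc*,
        const void* const* inputs, void* const* outputs, void*, cudaStream_t stream) override
    {
        int n = 1;
        for (int i = 0; i < inputDesc[0].dims.nbDims; ++i)
            n *= inputDesc[0].dims.d[i];
        equalKernel<<<(n + 255) / 256, 256, 0, stream>>>(
            static_cast<const float*>(inputs[0]), static_cast<const float*>(inputs[1]),
            static_cast<int32_t*>(outputs[0]), n);
        return cudaPeekAtLastError() == cudaSuccess ? 0 : -1;
    }
    // --- IPluginV2Ext ---
    nvinfer1::DataType getOutputDataType(int, const nvinfer1::DataType*, int) const override
    {
        return nvinfer1::DataType::kINT32; // TRT 6 cannot report BOOL here
    }
    // --- IPluginV2 ---
    const char* getPluginType() const override { return "Equal"; }
    const char* getPluginVersion() const override { return "1"; }
    int getNbOutputs() const override { return 1; }
    int initialize() override { return 0; }
    void terminate() override {}
    size_t getSerializationSize() const override { return 0; }
    void serialize(void*) const override {}
    void destroy() override { delete this; }
    void setPluginNamespace(const char* ns) override { mNamespace = ns; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    std::string mNamespace;
};

// Creator whose name/version match what the parser searches for.
class EqualPluginCreator : public nvinfer1::IPluginCreator
{
public:
    const char* getPluginName() const override { return "Equal"; }
    const char* getPluginVersion() const override { return "1"; }
    const nvinfer1::PluginFieldCollection* getFieldNames() override { return &mFC; }
    nvinfer1::IPluginV2* createPlugin(const char*, const nvinfer1::PluginFieldCollection*) override
    {
        return new EqualPlugin();
    }
    nvinfer1::IPluginV2* deserializePlugin(const char*, const void*, size_t) override
    {
        return new EqualPlugin();
    }
    void setPluginNamespace(const char* ns) override { mNamespace = ns; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    nvinfer1::PluginFieldCollection mFC{};
    std::string mNamespace;
};

REGISTER_TENSORRT_PLUGIN(EqualPluginCreator);
```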
It looks like the plugin was found and applied; please see the attached FailCast.log (300.0 KB).
However, I got this error:
[06/23/2021-15:50:40] [V] [TRT] ModelImporter.cpp:186: Equal_375 [Equal] outputs: [1111 → (6)],
[06/23/2021-15:50:40] [V] [TRT] ModelImporter.cpp:108: Parsing node: Cast_376 [Cast]
[06/23/2021-15:50:40] [V] [TRT] ModelImporter.cpp:124: Searching for input: 1111
[06/23/2021-15:50:40] [V] [TRT] ModelImporter.cpp:130: Cast_376 [Cast] inputs: [1111 → (6)],
Unsupported ONNX data type: BOOL (9)
While parsing node number 376 [Cast → “1112”]:
— Begin node —
input: “1111”
output: “1112”
name: “Cast_376”
op_type: “Cast”
attribute {
name: “to”
i: 9
type: INT
}
— End node —
ERROR: builtin_op_importers.cpp:308 In function importCast:
[4] Assertion failed: static_cast(dtype) != -1
destroy()
This error occurs on the next node, "Cast_376".
(Screenshot of the Cast node attached: CastNode.png)
DRIVE AGX uses libnvonnxparser.so.6.3.1
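For what it's worth, the importCast assertion fires because the parser maps the Cast node's "to" attribute (9, i.e. ONNX BOOL) to an nvinfer1::DataType, and TRT 6 has no kBOOL to map it to (kBOOL was only added in TRT 7). A hypothetical paraphrase of that mapping (not the exact onnx-tensorrt code):

```cpp
#include <NvInfer.h>

// ONNX TensorProto data-type codes vs. TRT 6 types; BOOL (9) has no TRT 6
// equivalent, so the lookup yields -1 and importCast's assertion fails.
int toTrt6Dtype(int onnxDtype)
{
    switch (onnxDtype)
    {
    case 1:  return static_cast<int>(nvinfer1::DataType::kFLOAT); // FLOAT
    case 3:  return static_cast<int>(nvinfer1::DataType::kINT8);  // INT8
    case 6:  return static_cast<int>(nvinfer1::DataType::kINT32); // INT32
    case 10: return static_cast<int>(nvinfer1::DataType::kHALF);  // FLOAT16
    default: return -1; // BOOL (9) and others land here on TRT 6
    }
}
```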
Is it possible that the new issue is related to the older TRT 6 version rather than TRT 7, so that I can't fix it with my current ONNX model?
Or do I have this new issue because the plugin wasn't developed correctly?
Maybe I need to somehow cast the output tensor to a supported data type?
However, I think that won't help, because the model already contains the node with the expected data type 9.
Do I need to develop a new plugin, or rewrite the already registered one, for this case?