TensorRT, Drive AGX, Jetson and the .onnx format

Software Version: NVIDIA DRIVE™ Software 10.0 (Linux)
Target Operating System: Linux
Hardware Platform: NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
SDK Manager Version: 1.6.0.8170
Host Machine Version: native Ubuntu 18.04

Hello,

I'm trying to run a re-trained model in .onnx format, but I'm having problems running it, which I think are caused by the TensorRT version in NVIDIA DRIVE Software 10.0.
I'm following the steps in this GitHub guide: jetson-inference/pytorch-ssd.md at master · dusty-nv/jetson-inference · GitHub
I was able to run the classification and detection networks, but I run into problems when I re-train a network and save it in .onnx.

This is the error:
— End node —
ERROR: ModelImporter.cpp:288 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[TRT] failed to parse ONNX model '…/ssd-mobilenet.onnx'
[TRT] device GPU, failed to load …/ssd-mobilenet.onnx
[TRT] detectNet -- failed to initialize.
detectnet: failed to load detectNet model

I saw some possible solutions on the web that involve modifying ModelImporter.cpp, but I can't find this file.
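
In case it helps, a quick way to inspect the exported model's opset and tensor names (assuming the `onnx` Python package is installed; the model path is a placeholder) would be something like:

```python
import onnx

# Placeholder path to the exported model.
model = onnx.load("ssd-mobilenet.onnx")

# Check the model is structurally valid and report which opset it uses.
onnx.checker.check_model(model)
print("opset:", model.opset_import[0].version)

# List the graph inputs and outputs, to compare against the tensor names
# the TensorRT ONNX parser complains about (tensors.count(input_name)).
print("inputs: ", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])
```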

First question: the TensorRT version in my DriveWorks installation is 5.1. Is it not possible to upgrade it? I think I need version 7.1.
Second question: are there other options to run this .onnx model? Is it possible to avoid the TensorRT optimization?
Third question: can NVIDIA DRIVE™ Software 10.0 and JetPack work together?

Thanks in advance

Dear @daniel.yustegalvez,
The Jetson and DRIVE releases are different.

  1. The latest DRIVE release (DRIVE OS 5.2.0) has TensorRT 6.3.
  2. If you want a TensorRT-optimized model, you need to use it.
  3. No. They have different releases.

What is the difference between TensorRT 6.3 and TensorRT 6.0? The desktop version seems to only have TensorRT 6.0. If I can build the engine using TensorRT 6.0 on my desktop, does that mean the model can also be built using TensorRT 6.3 on the Drive AGX?

Dear @xu.yan,
The releases for x86 and DRIVE are different. The TensorRT releases for DRIVE are not available publicly as separate packages; you need to flash the DRIVE release. If the model can be built on 6.0, it is expected to build on 6.3. Do you see any issue with your model?
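
For reference, one quick way to confirm which TensorRT version is actually installed on your desktop or on the flashed DRIVE target (assuming the Python bindings are available) is:

```python
import tensorrt as trt

# Prints the installed TensorRT version, e.g. a 6.0.x build on the
# desktop install or a 6.3.x build on a flashed DRIVE release.
print(trt.__version__)
```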

Hi

In node 2 (convert_axis): UNSUPPORTED_NODE: Assertion failed: axis >= 0 && axis < nbDims
2
[TensorRT] ERROR: Network must have at least one output

I built an engine with TensorRT 6.0, resulting in the error above.

The same model can be built successfully with TensorRT 7.0.

Both versions are using CUDA 10.2 and cuDNN 7.6.

Does TensorRT 6 require additional steps to build properly?
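
For reference, this is roughly how the engine is being built (a simplified sketch of the TensorRT 6/7 Python API; the model path and workspace size are placeholders). Printing the parser errors also shows whether the "Network must have at least one output" message is just a consequence of the failed parse:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
# The ONNX parser requires an explicit-batch network in TensorRT 6/7.
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network(EXPLICIT_BATCH) as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    builder.max_workspace_size = 1 << 30  # placeholder: 1 GiB

    with open("model.onnx", "rb") as f:  # placeholder path
        if not parser.parse(f.read()):
            # If parsing stops at an unsupported node (e.g. convert_axis),
            # the network ends up with no outputs, which is what triggers
            # the "Network must have at least one output" error at build time.
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            raise SystemExit("ONNX parse failed")

    engine = builder.build_cuda_engine(network)
    assert engine is not None
```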

Thanks

Dear @xu.yan,
As per the error report, the node is not supported in 6.0.
Could you tell us what you are planning to do? Which TensorRT version do you want to use?
Could you please file a separate topic for your question?