Software Version
NVIDIA DRIVE™ Software 10.0 (Linux)
Target Operating System
Linux
Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
SDK Manager Version
1.6.0.8170
Host Machine Version
native Ubuntu 18.04
Hello,
I’m trying to run a re-trained model in .onnx format, but I’m having trouble running it; I suspect the cause is the TensorRT version shipped with NVIDIA DRIVE Software 10.0.
I’m following the steps in this GitHub repo: jetson-inference/pytorch-ssd.md at master · dusty-nv/jetson-inference · GitHub
I was able to run the classification and detection networks, but I run into problems when I re-train a network and save it as .onnx.
This is the error:
--- End node ---
ERROR: ModelImporter.cpp:288 In function importModel:
[5] Assertion failed: tensors.count(input_name)
[TRT] failed to parse ONNX model '…/ssd-mobilenet.onnx'
[TRT] device GPU, failed to load …/ssd-mobilenet.onnx
[TRT] detectNet -- failed to initialize.
detectnet: failed to load detectNet model
I saw some possible solutions on the web that consist of modifying ModelImporter.cpp, but I can’t find this file.
First Question: the TensorRT version in my DriveWorks is 5.1. Is it possible to upgrade it? I think I need version 7.1.
Second Question: are there other options for running this .onnx model? Is it possible to avoid the TensorRT optimization?
Third Question: can NVIDIA DRIVE™ Software 10.0 and JetPack work together?
Thanks in advance