Failed to load ONNX model in ros_deep_learning

Hi, I'm using a Jetson NX with a detectnet_v2 ResNet-18 model trained with TAO. I get the following error when I point ros_deep_learning at the ONNX file:
[TRT] device GPU, building CUDA engine (this may take a few minutes the first time a network is loaded)
[TRT] 4: [network.cpp::validate::3062] Error Code 4: Internal Error (Network has dynamic or shape inputs, but no optimization profile has been defined.)
[TRT] device GPU, failed to build CUDA engine
[TRT] device GPU, failed to load /home/weld-1/Octo_crawler/src/ros_deep_learning/model/resnet18_detector.onnx
[TRT] detectNet -- failed to initialize.
[ERROR] [1706880753.979044542]: failed to load detectNet model

I am stuck and have no idea how to fix this. Any help is appreciated.

@nil2434 have you tried getting this working with detectnet/detectnet.py from jetson-inference first? That's what ros_deep_learning uses underneath. Also, I have only used the ETLT format from TAO in jetson-inference, and the ONNX path in jetson-inference for detectNet expects the SSD-Mobilenet architecture exported from PyTorch.
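
For reference on that build error: it means your ONNX export has a dynamic input (the TAO 5 exports appear to use a dynamic batch dimension), and TensorRT won't build an engine for a dynamic input unless an optimization profile is defined, which the engine builder in jetson-inference doesn't do for this model. Here is a minimal sketch of what defining one looks like with the TensorRT C++ API; the input tensor name (input_1) and the 3x544x960 dimensions are illustrative guesses, so check your model in something like Netron first:

```cpp
// Minimal sketch (not the actual jetson-inference builder): parse a
// TAO-exported ONNX model and give its dynamic input an optimization
// profile before building the engine. Input name/dims are assumptions.
#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

using namespace nvinfer1;

class Logger : public ILogger
{
	void log( Severity severity, const char* msg ) noexcept override
	{
		if( severity <= Severity::kWARNING )
			std::cout << msg << std::endl;
	}
} gLogger;

int main()
{
	auto builder = std::unique_ptr<IBuilder>(createInferBuilder(gLogger));
	auto network = std::unique_ptr<INetworkDefinition>(builder->createNetworkV2(
		1U << uint32_t(NetworkDefinitionCreationFlag::kEXPLICIT_BATCH)));
	auto parser = std::unique_ptr<nvonnxparser::IParser>(
		nvonnxparser::createParser(*network, gLogger));

	if( !parser->parseFromFile("resnet18_detector.onnx", int(ILogger::Severity::kWARNING)) )
		return 1;

	auto config = std::unique_ptr<IBuilderConfig>(builder->createBuilderConfig());

	// this is what the "no optimization profile has been defined" error is
	// asking for: min/opt/max shapes for the dynamic input ("input_1" and
	// 1x3x544x960 are assumptions -- verify them against your export)
	IOptimizationProfile* profile = builder->createOptimizationProfile();
	profile->setDimensions("input_1", OptProfileSelector::kMIN, Dims4(1, 3, 544, 960));
	profile->setDimensions("input_1", OptProfileSelector::kOPT, Dims4(1, 3, 544, 960));
	profile->setDimensions("input_1", OptProfileSelector::kMAX, Dims4(1, 3, 544, 960));
	config->addOptimizationProfile(profile);

	auto serialized = std::unique_ptr<IHostMemory>(
		builder->buildSerializedNetwork(*network, *config));

	if( !serialized )
		return 1;

	// save the serialized engine so it can be deserialized at runtime
	std::ofstream engineFile("resnet18_detector.engine", std::ios::binary);
	engineFile.write((const char*)serialized->data(), serialized->size());
	return 0;
}
```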

For more current support in ROS, I would recommend looking into the Isaac ROS detection node, which supports detectnet_v2 models from TAO.

Hey dusty, thanks for the reply. I am using ROS1 but will look more into Isaac ROS detection. Would an ETLT model outside of the ones you suggest in jetson-inference work? I tried loading in a .trt file, but that doesn't seem to work. I tried detectnet.py, and that doesn't work either.
Will try to think of something.

@nil2434 are you able to run these examples with detectnet/detectnet.py from jetson-inference that load ETLT models from TAO and convert them to TensorRT?

Yes, I am able to run the samples. I also saw the tao-converter tool. Will it accept an INT8 .trt engine as input, since I'm using TAO Toolkit v5 to train my detectnet model? Is there any way for me to convert my ONNX to ETLT? Thanks for the reply.

I'm not intimately familiar with TAO, but my understanding is that it can export ETLT (or used to, anyway). Perhaps the tao-converter tool can take in ONNX now. You may also be able to reconfigure the jetson-inference code in c/detectNet.cpp to expect the detectnet_v2 architecture when it sees ONNX instead of SSD-Mobilenet.

Basically, you could change those if( IsModelType(MODEL_ONNX) ) blocks in detectNet.cpp to call detectnet_v2 pre/post-processing routines instead of the SSD routines.
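
I haven't ported that code myself, so treat this as a hypothetical sketch of what the detectnet_v2 post-processing does rather than the actual routines: detectnet_v2 outputs a coverage grid (output_cov/Sigmoid) and a bbox grid (output_bbox/BiasAdd), and each grid cell maps back to the input image with a stride of 16. The tensor layout, stride, threshold, and the omitted bbox normalization below are assumptions to verify against your export:

```cpp
// Hypothetical sketch of a detectnet_v2 grid decoder -- NOT the actual
// jetson-inference routines. Tensor layout, stride, and threshold follow
// the usual TAO defaults, but verify them for your model.
#include <cstdint>
#include <vector>

struct Detection
{
	uint32_t classID;
	float left, top, right, bottom, confidence;
};

// cov:  [numClasses, gridH, gridW]     coverage/confidence per grid cell
// bbox: [numClasses * 4, gridH, gridW] x1/y1/x2/y2 offsets per grid cell
std::vector<Detection> decodeDetectNetV2( const float* cov, const float* bbox,
                                          uint32_t numClasses,
                                          uint32_t gridW, uint32_t gridH,
                                          float stride = 16.0f,
                                          float threshold = 0.5f )
{
	std::vector<Detection> detections;
	const uint32_t gridSize = gridW * gridH;

	for( uint32_t c = 0; c < numClasses; c++ )
	{
		for( uint32_t y = 0; y < gridH; y++ )
		{
			for( uint32_t x = 0; x < gridW; x++ )
			{
				const float conf = cov[c * gridSize + y * gridW + x];

				if( conf < threshold )
					continue;

				// center of this grid cell in input-image coordinates
				const float cx = (x + 0.5f) * stride;
				const float cy = (y + 0.5f) * stride;

				// the four bbox channels for class c hold offsets from the
				// cell center; TAO also applies a bbox normalization scale
				// (commonly 35.0) that a real decoder would multiply in here
				const float* b = bbox + c * 4 * gridSize + y * gridW + x;

				Detection det;
				det.classID    = c;
				det.confidence = conf;
				det.left       = cx - b[0 * gridSize];
				det.top        = cy - b[1 * gridSize];
				det.right      = cx + b[2 * gridSize];
				det.bottom     = cy + b[3 * gridSize];

				detections.push_back(det);
			}
		}
	}

	// a real implementation would also cluster/NMS the overlapping
	// boxes that neighboring grid cells produce for the same object
	return detections;
}
```

The SSD branch in detectNet.cpp ultimately produces the same kind of detection list, so the main wiring work would be binding the output_cov/output_bbox tensors when the model loads and calling a decoder like this in place of the SSD one, plus a clustering step to merge the overlapping grid hits.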