Hi @user119268, is TensorRT able to load the model with the trtexec tool (found under /usr/src/tensorrt/bin)? That will tell you whether the ONNX model can be used with TensorRT at all. It appears there is some plugin layer missing.
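For example, a check along these lines (where model.onnx is just a placeholder for your exported file) will show whether TensorRT can parse and build the network, and the verbose log will point at any unsupported layer:

```bash
# try to build a TensorRT engine directly from the ONNX file
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --verbose
```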
Also, the pre/post-processing for the detection network in jetson-inference is set up for SSD-Mobilenet ONNX, not Faster-RCNN. You may need to modify the pre/post-processing in jetson-inference/c/detectNet.cpp for different models.
Hi @dusty_nv, no, TensorRT is not able to load the model; it shows the same error as in the question.
You mentioned that jetson-inference is set up for SSD-Mobilenet; does this mean we cannot use a Faster-RCNN model? Can you share a list of models we can run other than the SSD-Mobilenet models?
I was under the impression that we could use any model for custom-dataset training (using PyTorch on a desktop machine) and then convert it to ONNX to use it on the Jetson Nano. Is this not correct?
In order for it to work, TensorRT needs to be able to load the ONNX model, and the jetson-inference detectNet.cpp source code needs to be set up with the right pre/post-processing. What I have tested it with, and what it's currently set up for, is SSD-Mobilenet trained with train_ssd.py from jetson-inference. That's not to say you can't adapt it to work with other models, it's just what I've tested and validated.
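Roughly, the workflow I've validated looks like this (the dataset and model paths below are just examples, see the Hello AI World tutorial for the exact steps for your dataset):

```bash
# train SSD-Mobilenet on a custom dataset with PyTorch
cd jetson-inference/python/training/detection/ssd
python3 train_ssd.py --dataset-type=voc --data=data/my_dataset --model-dir=models/my_model

# export the trained checkpoint to ONNX
python3 onnx_export.py --model-dir=models/my_model

# run the exported ONNX model with detectNet (TensorRT) on a camera stream
detectnet --model=models/my_model/ssd-mobilenet.onnx --labels=models/my_model/labels.txt \
          --input-blob=input_0 --output-cvg=scores --output-bbox=boxes \
          /dev/video0
```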
How can we download some other models, let's say vgg16-ssd, because I think this is supported? Moreover, if you have any links to articles where another user has run Faster-RCNN or any other model on the Jetson Nano, please share them.
vgg16-ssd is in train_ssd.py in PyTorch, but I haven't tested the whole deployment pipeline for it (the ONNX export / import part). train_ssd.py is a fork that I use for training SSD-Mobilenet with PyTorch, so it has some extra models that don't get used with TensorRT.
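If you want to experiment anyway, the upstream train_ssd.py selects the architecture with a --net argument; assuming your copy of the script still accepts vgg16-ssd, the training step would look something like this (the export/TensorRT side of it is untested on my end):

```bash
# select the network architecture with --net (example dataset/model paths)
python3 train_ssd.py --net=vgg16-ssd --dataset-type=voc --data=data/my_dataset --model-dir=models/my_vgg_model
```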
There is a Faster-RCNN sample that comes with TensorRT, found under /usr/src/tensorrt/samples/sampleFasterRCNN.
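You can build and run it like the other TensorRT samples, something along these lines (the exact binary name can vary by TensorRT version, and the sample's README explains the pre-trained model data it needs downloaded first):

```bash
# build the sample (may need sudo to write under /usr/src/tensorrt)
cd /usr/src/tensorrt/samples/sampleFasterRCNN
sudo make

# the sample binaries are placed in /usr/src/tensorrt/bin
cd /usr/src/tensorrt/bin
./sample_fasterRCNN
```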
There are also examples of deploying YOLO with TensorRT here: