Unable to run model after conversion from PyTorch to ONNX

Hi

We have trained a Faster R-CNN model with PyTorch. Later, we converted the .pth model to .onnx following this link.

While running the model with detectnet using the command below:

detectnet --model=test/deploy.onnx --labels=test/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes video.mp4

we are getting the following output:

[TRT]    Importing initializer: 1322
[TRT]    Parsing node: node_of_552 [AliasWithName]
[TRT]    Searching for input: data
[TRT]    node_of_552 [AliasWithName] inputs: [data -> (1, 3, 1152, 800)[FLOAT]], 
[TRT]    No importer registered for op: AliasWithName. Attempting to import as plugin.
[TRT]    Searching for plugin: AliasWithName, plugin_version: 1, plugin_namespace: 
[TRT]    3: getPluginCreator could not find plugin: AliasWithName version: 1
[TRT]    ModelImporter.cpp:720: While parsing node number 0 [AliasWithName -> "552"]:
[TRT]    ModelImporter.cpp:721: --- Begin node ---
[TRT]    ModelImporter.cpp:722: input: "data"
output: "552"
op_type: "AliasWithName"
attribute {
  name: "name"
  s: "data"
  type: STRING
}
attribute {
  name: "is_backward"
  i: 0
  type: INT
}
domain: "org.pytorch._caffe2"

[TRT]    ModelImporter.cpp:723: --- End node ---
[TRT]    ModelImporter.cpp:726: ERROR: builtin_op_importers.cpp:4643 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
[TRT]    failed to parse ONNX model 'test/deploy.onnx'
[TRT]    device GPU, failed to load test/deploy.onnx
[TRT]    detectNet -- failed to initialize.
detectnet:  failed to load detectNet model

We also verified our ONNX model by running onnx.checker.check_model(model) as described in the link above. Can anyone please suggest what the issue might be?
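
For reference, the check we ran was roughly the following minimal snippet (the model path is the same one passed to detectnet above):

import onnx

# load the exported model and run the ONNX structural checker
model = onnx.load("test/deploy.onnx")
onnx.checker.check_model(model)
print("check_model passed")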

Thanks

Hi @user119268, is TensorRT able to load the model using the trtexec tool (found under /usr/src/tensorrt/bin)? This will tell you whether the ONNX model can be used with TensorRT or not. It appears some plugin layer is missing.
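
For example, something along these lines, using the model path from your command (adjust the path as needed):

$ /usr/src/tensorrt/bin/trtexec --onnx=test/deploy.onnx

If trtexec hits the same AliasWithName error, the problem is in the ONNX export itself rather than in jetson-inference.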

Also, the pre/post-processing for detection networks in jetson-inference is set up for SSD-Mobilenet ONNX, not Faster R-CNN. You may need to modify the pre/post-processing in jetson-inference/c/detectNet.cpp for different models.

Hi @dusty_nv, no, TensorRT is not able to load the model; it shows the same error as in the question.

You mentioned that jetson-inference is set up for SSD-Mobilenet; does this mean we cannot use a Faster R-CNN model? Can you share a list of models we can run other than the SSD-Mobilenet models?

I was under the impression that we could use any model for custom-dataset training (using PyTorch on a desktop machine) and then convert it to ONNX to be able to use it on the Jetson Nano. Is this not correct?

In order for it to work, TensorRT needs to be able to load the ONNX model, and the jetson-inference detectNet.cpp source code needs to be set up with the right pre/post-processing. What I have tested it with, and what it's currently set up for, is SSD-Mobilenet trained with train_ssd.py from jetson-inference. That's not to say you can't adapt it to work with other models; it's just what I've tested/validated.
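
For reference, the flow I've tested looks roughly like this (the dataset and model directory names are placeholders for your own):

$ cd jetson-inference/python/training/detection/ssd
$ python3 train_ssd.py --dataset-type=voc --data=data/<your-dataset> --model-dir=models/<your-model>
$ python3 onnx_export.py --model-dir=models/<your-model>
$ detectnet --model=models/<your-model>/ssd-mobilenet.onnx --labels=models/<your-model>/labels.txt --input-blob=input_0 --output-cvg=scores --output-bbox=boxes video.mp4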

@dusty_nv Okay, I understand that jetson-inference is tested with SSD-Mobilenet.

At this step, we are downloading the SSD-Mobilenet V1 model:

$ cd jetson-inference/python/training/detection/ssd
$ wget https://nvidia.box.com/shared/static/djf5w54rjvpqocsiztzaandq1m3avr7c.pth -O models/mobilenet-v1-ssd-mp-0_675.pth
$ pip3 install -v -r requirements.txt

How can we download some other models, let's say vgg16-ssd, because I think it is supported? Moreover, if you have any links to articles where another user has run Faster R-CNN or any other model on the Jetson Nano, please share them.

Thanks

vgg16-ssd is in train_ssd.py in PyTorch, but the whole deployment pipeline (the ONNX export/import part) hasn't been tested with it. train_ssd.py is from a fork that I use for training SSD-Mobilenet with PyTorch, so it has some extra models that don't get used with TensorRT.

There is a Faster-RCNN sample that comes with TensorRT, found under /usr/src/tensorrt/samples/sampleFasterRCNN.
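
Building and running it follows the usual TensorRT samples layout, roughly like this (assuming the sample's data has been downloaded per its README; the exact binary name can vary between TensorRT versions):

$ cd /usr/src/tensorrt/samples/sampleFasterRCNN
$ sudo make
$ cd /usr/src/tensorrt/bin
$ ./sample_fasterRCNN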

There are also examples of deploying YOLO with TensorRT here:
