getPluginCreator could not find plugin ResizeNearest version 001 namespace

Please provide the following info (check/uncheck the boxes after creating this topic):
Software Version
DRIVE OS Linux 5.2.6
DRIVE OS Linux 5.2.6 and DriveWorks 4.0
DRIVE OS Linux 5.2.0
DRIVE OS Linux 5.2.0 and DriveWorks 3.5
NVIDIA DRIVE™ Software 10.0 (Linux)
NVIDIA DRIVE™ Software 9.0 (Linux)
other DRIVE OS version
other

Target Operating System
Linux
QNX
other

Hardware Platform
NVIDIA DRIVE™ AGX Xavier DevKit (E3550)
NVIDIA DRIVE™ AGX Pegasus DevKit (E3550)
other

SDK Manager Version
1.8.0.10363
other

Host Machine Version
native Ubuntu 18.04
other

Referencing these other topics:

Is there an implementation of the ResizeNearest plugin as a .so file?

I’m having a hard time trying to convert all the TensorRT plugin code into the format that follows the FC_dnn_plugin_example.

Is there any progress on the TensorRT 6.0 integration with Drive SW 10?

Or do you have any ideas how to fix the error I get when trying to load a ".bin" model that was optimized through the SW stack?

Dear @eolson,
Is there any progress on the TensorRT 6.0 integration with Drive SW 10?

No. If you want to use TensorRT 6.x, we recommend using the DRIVE OS 5.2.6 + DW 4.0 release to test your model integration.

Do you have any examples besides the FC_dnn_plugin_example, or a library of plugins for various DNN layers, integrated with Drive SW 10?

Dear @eolson,
No, we have only one sample demonstrating plugin implementation. Is it possible for you to upgrade to the latest DRIVE OS + DW release? That would allow you to integrate your custom DNNs, but you cannot use the DRIVE DNN modules in the DRIVE OS + DW release.

No, we cannot; our solution relies on the DRIVE AV stack, which is why we are centered on Drive SW 10.

I’ve tried recompiling and linking libnvonnxparser_runtime.so from onnx-tensorrt, still with the same error of:
getPluginCreator could not find plugin ResizeNearest version 001 namespace

I’ve also tried re-writing the ResizeNearest plugin and loading it using FC_dnn_plugin_example as a guide, still with the same error.
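
The error text matches the lookup that the TensorRT deserializer performs against the global plugin registry: it asks for a creator with exactly the name "ResizeNearest", version "001", and an empty namespace. A minimal, stand-alone diagnostic along these lines (only a sketch, assuming TensorRT 5.x/6.x headers; it is not part of the attached code) lists the creators registered in the current process and checks for that exact triple:

// Sketch: enumerate the TensorRT plugin registry and look up the exact
// (name, version, namespace) triple reported as missing. Assumes TensorRT
// 5.x/6.x; link against nvinfer and nvinfer_plugin.
#include <NvInfer.h>
#include <NvInferPlugin.h>
#include <cstdio>

class Logger : public nvinfer1::ILogger
{
    void log(Severity, const char* msg) override { std::printf("[TRT] %s\n", msg); }
} gLogger;

int main()
{
    // Registers TensorRT's built-in plugins. A custom ResizeNearest creator has to
    // be registered separately (e.g. via REGISTER_TENSORRT_PLUGIN inside the plugin
    // library), and that library has to be loaded into this process first.
    initLibNvInferPlugins(&gLogger, "");

    int numCreators = 0;
    nvinfer1::IPluginCreator* const* creators =
        getPluginRegistry()->getPluginCreatorList(&numCreators);
    for (int i = 0; i < numCreators; ++i)
    {
        std::printf("registered: %s version %s namespace '%s'\n",
                    creators[i]->getPluginName(),
                    creators[i]->getPluginVersion(),
                    creators[i]->getPluginNamespace());
    }

    nvinfer1::IPluginCreator* creator =
        getPluginRegistry()->getPluginCreator("ResizeNearest", "001", "");
    std::printf("ResizeNearest 001: %s\n", creator ? "found" : "NOT found");
    return 0;
}

If "ResizeNearest" version "001" never shows up in that listing, the creator was never registered in the process, which is a different problem from the creator existing but not matching the serialized name/version.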

A couple of questions:

  1. When converting an ONNX model to .bin, does the tool actually use the plugin.json file passed as a command-line argument? I see no feedback indicating that it does anything.

  2. What else do I need to change in my code to support ResizeNearest in the upsample layer? Am I calling something wrong? It just produces the same error output with no other context.

I’ve attached the code files here for review.
retinanet_dnn_plugin.zip (3.9 KB)

Hi @eolson,
Are you still seeing the below error with your plugin?

getPluginCreator could not find plugin ResizeNearest version 001 namespace

Please share your detailed steps so that we can reproduce the issue.

Yes, I'm still seeing the error below.

Commands run:
./tensorRT_optimization --modelType=onnx --onnxFile="resnet18.onnx"

./sample_object_detector_tracker --tensorRT_model="optimized.bin"

The model itself was trained on a separate machine using the "GitHub - NVIDIA/retinanet-examples at TRT5" pipeline. I also use that library to convert the output .pth file to .onnx, and I believe that step adds the Upsample layer.

Do you have a direct email where I can transfer the model over?

Please share the detailed steps for reproducing the issue with your plugin.

When optimizing the model, reference the plugin in the settings:

  • ./tensorRT_optimization --modelType=onnx --onnxFile="resnet18.onnx" --pluginConfig=plugin.json

  • Go through the DriveWorks dwx_dnn_plugins "Advanced Tutorial" and link it to dwDNN_initializeTensorRTFromFile
    Plugin.json (87 Bytes)

Please share the complete steps and the files you modified; that will make it easier for us to reproduce the issue. Thanks.

I've included the modified object_detector_tracker sample main.cpp.

  • Install out of the box Drive Software 10 on host machine
  • Train neural net using GitHub - NVIDIA/retinanet-examples at TRT5
  • Convert .pth → ONNX using that same library
  • Write ResizeNearest plugin using onnx-tensorrt/ResizeNearest.hpp at 5.1 · onnx/onnx-tensorrt · GitHub and FCN_Plugin tutorial as guidelines.
  • Convert the ONNX model to a .bin model using the DriveWorks tensorRT_optimization tool.
    – ./tensorRT_optimization --modelType=onnx --onnxFile="resnet18.onnx" --pluginConfig=plugin.json
  • Modify the object detector tracker sample to use the custom plugin
  • Run the object detector tracker sample
    – ./sample_object_detector_tracker --tensorRT_model="optimized.bin"
  • Error: getPluginCreator could not find plugin ResizeNearest version 001 namespace
    main.cpp (38.7 KB)

One thing that's interesting, too: if you set up TensorRT directly in Python in the same working environment, it successfully reads the plugin. So I'm a bit confused why the DriveWorks DNN side isn't utilizing the same TensorRT features.

Example Code and Output:
import tensorrt as trt
import pycuda.driver as cuda
print(trt.__version__)
weight_paths = "trt5_rn18.trt"
trt_logger = trt.Logger(trt.Logger.VERBOSE)
with open(weight_paths, 'rb') as f, trt.Runtime(trt_logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

[TensorRT] INFO: Glob Size is 342814864 bytes.
[TensorRT] INFO: Added linear block of size 104857600
[TensorRT] INFO: Added linear block of size 26214400
[TensorRT] INFO: Added linear block of size 26214400
[TensorRT] INFO: Added linear block of size 13107200
[TensorRT] INFO: Added linear block of size 6553600
[TensorRT] INFO: Added linear block of size 1638400
[TensorRT] INFO: Added linear block of size 409600
[TensorRT] INFO: Added linear block of size 230400
[TensorRT] INFO: Added linear block of size 102400
[TensorRT] INFO: Added linear block of size 57856
[TensorRT] INFO: Added linear block of size 14848
[TensorRT] INFO: Added linear block of size 4096
[TensorRT] INFO: Found Creator ResizeNearest
[TensorRT] INFO: Found Creator ResizeNearest
[TensorRT] INFO: Deserialize required 1559591 microseconds.
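
A C++ counterpart of that Python check can help narrow down whether this is a TensorRT-version difference or simply a question of which process has the plugin creator registered. The following is only a sketch under assumptions (TensorRT 5.x/6.x; the plugin library name libresize_nearest_plugin.so is hypothetical): it loads the plugin .so first, so that its static REGISTER_TENSORRT_PLUGIN registration runs, and then deserializes the engine with the plain TensorRT C++ runtime.

// Sketch: deserialize an engine with the TensorRT C++ runtime after explicitly
// loading the custom plugin library. Assumes TensorRT 5.x/6.x; the library name
// below is hypothetical. Link with -lnvinfer -lnvinfer_plugin -ldl.
#include <NvInfer.h>
#include <NvInferPlugin.h>
#include <dlfcn.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

class Logger : public nvinfer1::ILogger
{
    void log(Severity, const char* msg) override { std::printf("[TRT] %s\n", msg); }
} gLogger;

int main()
{
    // Loading the plugin library runs its static initializers, which is what
    // places the ResizeNearest creator into the global plugin registry.
    if (!dlopen("libresize_nearest_plugin.so", RTLD_NOW)) // hypothetical name
        std::printf("dlopen failed: %s\n", dlerror());
    initLibNvInferPlugins(&gLogger, "");

    std::ifstream file("optimized.bin", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());

    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    std::printf("engine: %s\n", engine ? "deserialized OK" : "failed to deserialize");

    if (engine) engine->destroy();
    runtime->destroy();
    return 0;
}

If this also fails with the same getPluginCreator error, the creator is simply not registered in that process, which points at how the plugin library gets loaded on the DriveWorks side rather than at the TensorRT features themselves.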

Dear @eolson,
Could you please file an NVBug for this issue with all the details? Please log in to NVIDIA DRIVE - Autonomous Vehicle Development Platforms | NVIDIA Developer with your credentials, then go to My Account → My Bugs → Submit a new bug to file the bug, and share the bug ID so we can follow up via the separate bug.

Submitted: Bug ID 3665436