Please provide the following info (check/uncheck the boxes after creating this topic):

Software Version
- DRIVE OS Linux 5.2.6
- DRIVE OS Linux 5.2.6 and DriveWorks 4.0
- DRIVE OS Linux 5.2.0
- DRIVE OS Linux 5.2.0 and DriveWorks 3.5
- NVIDIA DRIVE™ Software 10.0 (Linux)
- NVIDIA DRIVE™ Software 9.0 (Linux)
- other DRIVE OS version
- other
Dear @eolson,
No, we have only one sample demonstrating plugin implementation. Is it possible for you to upgrade to the latest DRIVE OS + DriveWorks release? That should allow you to integrate your custom DNNs, but note that you cannot use the DRIVE DNN modules in the DRIVE OS + DW release.
I've tried recompiling and linking libnvonnxparser_runtime.so from onnx-tensorrt, but I still get the same error:
getPluginCreator could not find plugin ResizeNearest version 001 namespace
I've also tried rewriting the ResizeNearest plugin and loading it using the FC_dnn_plugin_example as a guide, still with the same error.
A couple of questions:
1. When converting an ONNX model to .bin, is the plugin JSON file actually taken into account when passed as a command-line argument? I see no feedback indicating it does anything.
2. What else do I need to change in my code to support ResizeNearest in the upsample layer? Am I calling something wrong? I just get the same error output with no other context.
The model itself was trained using the "GitHub - NVIDIA/retinanet-examples at TRT5" pipeline on a separate machine. I also use that pipeline to convert the output .pth file to .onnx, and I believe that conversion is what adds the Upsample layer.
Do you have a direct email where I can transfer the model over?
1. Convert the ONNX model to a .bin model using the DriveWorks tensorRT_optimization tool:
   ./tensorRT_optimization --modelType=onnx --onnxFile="resnet18.onnx" --pluginConfig=plugin.json
2. Modify the object detection tracker sample to use the built-in plugin.
3. Run the object detection tracker:
   ./sample_object_detector_tracker --tensorRT_model="optimized.bin"
Error: getPluginCreator could not find plugin ResizeNearest version 001 namespace
main.cpp (38.7 KB)
One interesting thing: if I set up TensorRT directly in Python in the same working environment, it successfully finds the plugin. So I'm a bit confused why the DriveWorks DNN side isn't using the same TensorRT features.
Example Code and Output:
import tensorrt as trt
import pycuda.driver as cuda

print(trt.__version__)

weight_paths = "trt5_rn18.trt"
trt_logger = trt.Logger(trt.Logger.VERBOSE)
with open(weight_paths, "rb") as f, trt.Runtime(trt_logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
[TensorRT] INFO: Glob Size is 342814864 bytes.
[TensorRT] INFO: Added linear block of size 104857600
[TensorRT] INFO: Added linear block of size 26214400
[TensorRT] INFO: Added linear block of size 26214400
[TensorRT] INFO: Added linear block of size 13107200
[TensorRT] INFO: Added linear block of size 6553600
[TensorRT] INFO: Added linear block of size 1638400
[TensorRT] INFO: Added linear block of size 409600
[TensorRT] INFO: Added linear block of size 230400
[TensorRT] INFO: Added linear block of size 102400
[TensorRT] INFO: Added linear block of size 57856
[TensorRT] INFO: Added linear block of size 14848
[TensorRT] INFO: Added linear block of size 4096
[TensorRT] INFO: Found Creator ResizeNearest
[TensorRT] INFO: Found Creator ResizeNearest
[TensorRT] INFO: Deserialize required 1559591 microseconds.
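The likely reason the Python run finds the creator is that the plugin shared library gets loaded (and self-registers via REGISTER_TENSORRT_PLUGIN) in that environment before deserialization. A sketch of making that registration explicit, assuming a hypothetical library name (libretinanet_plugins.so is a placeholder, not the actual build output), which requires a TensorRT runtime to execute:

```python
import ctypes
import tensorrt as trt

# Load the compiled plugin library so its REGISTER_TENSORRT_PLUGIN
# static initializers run and add ResizeNearest to the registry.
# "libretinanet_plugins.so" is a placeholder; use your actual .so path.
ctypes.CDLL("libretinanet_plugins.so")

logger = trt.Logger(trt.Logger.VERBOSE)
# Also register TensorRT's built-in plugins in the default namespace.
trt.init_libnvinfer_plugins(logger, "")

# ResizeNearest should now appear among the registered creators,
# and deserialization of the engine should succeed.
creators = [c.name for c in trt.get_plugin_registry().plugin_creator_list]
print("ResizeNearest" in creators)
```

If DriveWorks' DNN loader runs in a process where this library was never loaded, the registry lookup fails with exactly the "could not find plugin ResizeNearest" error above, even though the same engine deserializes fine in Python.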