Description
Hi,
I downloaded ssd.onnx and ran it through the ONNX parser to generate a .trt file, but I hit this error:
While parsing node number 464 [NonMaxSuppression]:
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: NonMaxSuppression
&&&& FAILED TensorRT.sample_onnx_mnist # ./test_onnx
It seems that no plugin for NonMaxSuppression is available. I then tried to implement a plugin for this custom layer, following the nmsPlugin example to create a NonMaxSuppressionPlugin class:
- Make a directory NonMaxSuppressionPlugin in the plugin folder
- Copy nmsPlugin.cpp and nmsPlugin.h into the NonMaxSuppressionPlugin folder
- Rename nmsPlugin and nmsPluginCreator to NonMaxSuppressionPlugin and NonMaxSuppressionPluginCreator
- Add the new files to the CMake list
- Add DEFINE_BUILTIN_OP_IMPORTER(NonMaxSuppression) in parsers/onnx/builtin_op_importers.cpp
- Add initializePlugin<nvinfer1::plugin::NonMaxSuppressionPluginCreator>(logger, libNamespace) in InferPlugin.cpp
- Regenerate in the build folder with cmake .. -DTRT_LIB_DIR=~/TensorRT/lib -DTRT_BIN_DIR=`pwd`/out -DBUILD_PLUGINS=ON -DBUILD_PARSERS=ON
- Run make in the build folder
- Run make install
After rebuilding, I used onnx2trt ssd-10.onnx -o ssd_trt.out to convert the model.
It gives this output:
Input filename: ../../../samples/data/ssd-10.onnx
ONNX IR version: 0.0.4
Opset version: 10
Producer name: pytorch
Producer version: 1.1
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
Parsing model
[2020-12-16 06:36:24 WARNING] /home/u5393118/TensorRT/parsers/onnx/onnx2trt_utils.cpp:235: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
.....
[2020-12-16 06:36:24 WARNING] /home/u5393118/TensorRT/parsers/onnx/onnx2trt_utils.cpp:261: One or more weights outside the range of INT32 was clamped
[2020-12-16 06:36:24 ERROR] INVALID_ARGUMENT: getPluginCreator could not find plugin NonMaxSuppressionONNXTRT_NAMESPACE version 001
I'm not sure whether I missed a step when registering the NonMaxSuppressionPlugin with the ONNX parser.
Is there an example of implementing a plugin for the ONNX parser?
Environment
TensorRT Version : 7.0.0-1
GPU Type : Tesla V100
Nvidia Driver Version : 450.51.05
CUDA Version : 11.0
CUDNN Version :
Operating System + Version : ubuntu 18.04
Python Version (if applicable) : 3.6.9
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) :
Baremetal or Container (if container which image + tag) :
Relevant Files
The ONNX model was downloaded from https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/ssd
Steps To Reproduce
- Make a directory NonMaxSuppressionPlugin in the plugin folder
- Copy nmsPlugin.cpp and nmsPlugin.h into the NonMaxSuppressionPlugin folder
- Rename nmsPlugin and nmsPluginCreator to NonMaxSuppressionPlugin and NonMaxSuppressionPluginCreator
- Add the new files to the CMake list
- Add DEFINE_BUILTIN_OP_IMPORTER(NonMaxSuppression) in parsers/onnx/builtin_op_importers.cpp
- Add initializePlugin<nvinfer1::plugin::NonMaxSuppressionPluginCreator>(logger, libNamespace) in InferPlugin.cpp
- Regenerate in the build folder with cmake .. -DTRT_LIB_DIR=~/TensorRT/lib -DTRT_BIN_DIR=`pwd`/out -DBUILD_PLUGINS=ON -DBUILD_PARSERS=ON
- Run make in the build folder
- Run make install
- In build/parsers/onnx/, run onnx2trt ssd-10.onnx -o ssd.trt