Description
I’m trying to add a BatchedNMSPlugin after a YOLOv4-tiny model. My workflow is: convert the darknet model to ONNX, then use onnx-graphsurgeon to add the BatchedNMSPlugin, and finally use trtexec to convert the ONNX model to a TensorRT engine. However, trtexec fails with an error.
Environment
TensorRT Version : 7.1.3
GPU Type : Jetson Xavier NX
Nvidia Driver Version :
CUDA Version : 10.2
CUDNN Version : 8
Operating System + Version : Ubuntu 18.04
Relevant Files
https://drive.google.com/file/d/17ce5rRwxaB3NxNBQLFQHC76hFAkO9Rxh/view?usp=sharing
Steps To Reproduce
sudo ./trtexec --onnx=modified.onnx
Hi @jack_gao ,
Kindly allow access to your files.
Thanks!
Hi @jack_gao ,
It looks like the issue is with your custom plugin implementation or registration.
The links below might help:
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/samplePlugin
Thanks!
Hi @AakankshaS ,
I was following the advice in How to use NMS with Pytorch model (that was converted to ONNX -> TensorRT) · Issue #795 · NVIDIA/TensorRT · GitHub.
Using onnx-graphsurgeon like
#!/usr/bin/env python3
#
# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import onnx_graphsurgeon as gs
import numpy as np
import onnx
I also referred to Estimating Depth with ONNX Models and Custom Layers Using NVIDIA TensorRT | NVIDIA Technical Blog.
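For reference, the node-insertion step described in that issue can be sketched roughly as below. This is a minimal sketch, not the full script: the file names, the choice of `graph.outputs` as the boxes/scores tensors, and the attribute values are assumptions that must match the actual YOLOv4-tiny graph.

```python
# Sketch: append a BatchedNMS_TRT node to an ONNX graph with onnx-graphsurgeon.
# The attribute names follow the BatchedNMSPlugin's documented fields.
nms_attrs = {
    "shareLocation": True,
    "backgroundLabelId": -1,
    "numClasses": 80,        # assumption: COCO-style class count
    "topK": 1000,
    "keepTopK": 100,
    "scoreThreshold": 0.25,
    "iouThreshold": 0.45,
    "isNormalized": True,
    "clipBoxes": True,
}

try:
    import numpy as np
    import onnx
    import onnx_graphsurgeon as gs

    graph = gs.import_onnx(onnx.load("yolov4_tiny.onnx"))  # assumed input file

    # Assumption: the graph already exposes decoded box/score tensors as its
    # two outputs, with the shapes BatchedNMS_TRT expects.
    boxes, scores = graph.outputs[0], graph.outputs[1]

    outputs = [
        gs.Variable("num_detections", dtype=np.int32),
        gs.Variable("nmsed_boxes", dtype=np.float32),
        gs.Variable("nmsed_scores", dtype=np.float32),
        gs.Variable("nmsed_classes", dtype=np.float32),
    ]
    graph.nodes.append(
        gs.Node(op="BatchedNMS_TRT", attrs=nms_attrs,
                inputs=[boxes, scores], outputs=outputs)
    )
    graph.outputs = outputs
    graph.cleanup().toposort()
    onnx.save(gs.export_onnx(graph), "modified.onnx")
except ImportError:
    pass  # onnx / onnx-graphsurgeon not installed in this environment
```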
It looks simple and easy, but now I get the error and I don’t know which step went wrong. Shouldn’t the plugin be registered automatically from libnvinfer_plugin.so?
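To confirm whether BatchedNMS_TRT is actually registered in your environment, you could dump the plugin registry from Python (a sketch; the helper function name is mine, and `init_libnvinfer_plugins` is what loads the creators shipped in libnvinfer_plugin.so):

```python
def find_plugin(name, creator_names):
    """Return True if `name` appears in a list of plugin creator names."""
    return name in creator_names

try:
    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    # Registers all plugins shipped in libnvinfer_plugin.so
    # (BatchedNMS_TRT among them).
    trt.init_libnvinfer_plugins(logger, "")
    creators = [c.name for c in trt.get_plugin_registry().plugin_creator_list]
    print("BatchedNMS_TRT registered:", find_plugin("BatchedNMS_TRT", creators))
except ImportError:
    pass  # tensorrt not installed on this machine
```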
Thanks!
Hi @jack_gao ,
Apologies for the delayed response. Are you still facing the issue?