Loading engine with custom plugin in Python

Description

Hi, I’ve built an engine file using a custom plugin with trtexec:

trtexec --onnx=model.onnx \
--plugins=build/dcn_plugin.so \
--workspace=10000 \
--saveEngine=model.engine \
--tacticSources=-cublasLt,+cublas \
--warmUp=1000 \
--shapes=curr_frame:1x3x736x1280,pre_frame:1x3x736x1280,pre_hm:1x1x736x1280 \
--explicitBatch \
--device=0 \
--fp16

I have a plugin build directory with contents:
dcn_plugin.so
DCNv2Plugin.o
DCNv2Plugin.cu.o

I tried loading the engine file with:

import tensorrt as trt

trt_runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
with open(engine_path, 'rb') as f:
    engine_data = f.read()
engine = trt_runtime.deserialize_cuda_engine(engine_data)

But I got an unsupported plugin error during deserialization. How can I point the Python TensorRT runtime at my plugin so that the engine loads properly?

Environment

TensorRT Version: 7.2.2.3
GPU Type: V100
Nvidia Driver Version: 450.80.02
CUDA Version: 10.2
CUDNN Version: 8.0
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.7.1
Baremetal or Container (if container which image + tag):

Relevant Files

Hi, please check the reference links below for a custom plugin implementation.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleOnnxMnistCoordConvAC

Thanks!

Hi, I don’t believe either of these links has an example of Python inference.

Hi @austinmw89,

Before loading the engine, you can do:

import ctypes
ctypes.CDLL("build/dcn_plugin.so")
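
For completeness, a fuller sketch of the whole load sequence, assuming the plugin registers its creator via the REGISTER_TENSORRT_PLUGIN macro when the .so is loaded (the paths are the ones from your post):

import ctypes
import tensorrt as trt

# Load the custom plugin library first; REGISTER_TENSORRT_PLUGIN
# registers the plugin creator as a side effect of loading.
ctypes.CDLL("build/dcn_plugin.so")

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Register TensorRT's built-in plugin creators (and pick up any
# creators added by the loaded library) with the plugin registry.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

trt_runtime = trt.Runtime(TRT_LOGGER)
with open("model.engine", "rb") as f:
    engine = trt_runtime.deserialize_cuda_engine(f.read())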

Thank you.

I ended up needing to load the nvinfer library with ctypes before doing that.
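
In case it helps others, a sketch of that load order, assuming the TensorRT core library on this system is named libnvinfer.so.7 (adjust to your install):

import ctypes

# Preload TensorRT's core library with global symbol visibility so the
# plugin's undefined symbols (e.g. plugin-registry entry points) resolve.
ctypes.CDLL("libnvinfer.so.7", mode=ctypes.RTLD_GLOBAL)

# Only then load the custom plugin library.
ctypes.CDLL("build/dcn_plugin.so")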
