How to install TensorRT in a Python virtual environment with a custom plugin?

Description

Hi, I have built TensorRT from the NVIDIA/TensorRT repo (https://github.com/NVIDIA/TensorRT, branch 7.0) with a custom plugin (DCNv2).
After the build, I copied all files from the build/out folder into the TensorRT-7.0.0.10 folder extracted from the tar package.
After that, I built onnx-tensorrt (https://github.com/onnx/onnx-tensorrt, branch 7.0) and exported a TensorRT engine.
But how can I load this engine in Python inside a virtual environment?
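Loading the engine from Python would look roughly like this (a minimal sketch; model.trt stands in for my exported engine file), and the errors below are raised by deserialize_cuda_engine:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine exported with onnx-tensorrt
with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())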
When I installed tensorrt from the *.whl file in TensorRT-7.0.0.10/python/, I hit the following issue:

[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin DCNv2 version 1
[TensorRT] ERROR: safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.

The error suggests that the tensorrt package I installed is not the one built with my plugin.
The question is: how do I install TensorRT in a Python virtual environment after building it with a custom plugin?

Thank you!

Environment

TensorRT Version:
GPU Type:
Nvidia Driver Version:
CUDA Version:
CUDNN Version:
Operating System + Version:
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)

Steps To Reproduce

Please include:

  • Exact steps/commands to build your repro
  • Exact steps/commands to run your repro
  • Full traceback of errors encountered

@nguyenquyem99dt I moved "How to install TensorRT in a Python virtual environment after building with a custom plugin?" here.

Hi,
Please refer to the links below related to custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.

Thanks!

Thanks all!
I got past this issue by adding the following line before deserializing the engine: trt.init_libnvinfer_plugins(TRT_LOGGER, '')
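In context, a minimal sketch of the fix (assuming the rebuilt libnvinfer_plugin.so that contains DCNv2 is the one found on the loader path, e.g. via LD_LIBRARY_PATH; model.trt is a placeholder for the engine file):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Register all creators from libnvinfer_plugin (including the custom
# DCNv2 creator compiled into the rebuilt library) with the plugin registry.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with open("model.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())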