Hi, I have built TensorRT from the NVIDIA/TensorRT GitHub repo (branch 7.0) with a custom plugin (DCNv2).
After the build, I copied all the files from build/out into the TensorRT-18.104.22.168 folder extracted from the downloaded tar file.
After that, I built onnx-tensorrt (onnx/onnx-tensorrt on GitHub, branch 7.0) and exported a TensorRT engine.
But how can I load this engine in Python inside a virtual environment?
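The loading step I have in mind looks roughly like this (a sketch, not working code: the engine path and the plugin-library argument are my own placeholders, and my guess is that the custom plugin .so must be preloaded with ctypes so its IPluginCreator self-registers before deserialization):

```python
import ctypes


def read_engine(path):
    """Read a serialized TensorRT engine file as raw bytes."""
    with open(path, "rb") as f:
        return f.read()


def load_engine(engine_path, plugin_lib=None):
    """Deserialize an engine; preload a custom plugin .so first if given."""
    import tensorrt as trt  # the wheel shipped under <tar dir>/python/
    if plugin_lib:
        # Loading the shared library lets its plugin creator self-register.
        ctypes.CDLL(plugin_lib)
    logger = trt.Logger(trt.Logger.WARNING)
    # Register built-in (and any already loaded) plugin creators.
    trt.init_libnvinfer_plugins(logger, "")
    runtime = trt.Runtime(logger)
    return runtime.deserialize_cuda_engine(read_engine(engine_path))
```

As far as I understand, without the ctypes.CDLL step the registry only knows the stock creators, which would match the getPluginCreator error below.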
When I installed tensorrt from the .whl file in TensorRT-22.214.171.124/python/, I hit this issue:
[TensorRT] ERROR: INVALID_ARGUMENT: getPluginCreator could not find plugin DCNv2 version 1
[TensorRT] ERROR: safeDeserializationUtils.cpp (293) - Serialization Error in load: 0 (Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
These errors suggest that the tensorrt package I installed does not include (or cannot find) my custom plugin.
The question is: how do I install TensorRT into a Python virtual environment after building it with a custom plugin?
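For reference, this is how I would check whether the DCNv2 creator is visible after installing the wheel in the venv (again a sketch: the plugin-library path passed to `registered_plugins` is hypothetical and depends on where the custom build put the .so):

```python
import ctypes


def registered_plugins(plugin_lib=None):
    """List the plugin creator names TensorRT currently knows about."""
    import tensorrt as trt
    if plugin_lib:
        ctypes.CDLL(plugin_lib)  # e.g. the DCNv2 .so from the custom build
    logger = trt.Logger(trt.Logger.WARNING)
    trt.init_libnvinfer_plugins(logger, "")
    return [c.name for c in trt.get_plugin_registry().plugin_creator_list]


def has_creator(names, target):
    """True if a creator named `target` appears in the registry listing."""
    return target in names
```

If `has_creator(registered_plugins(...), "DCNv2")` came back False, that would explain the deserialization failure above.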