Hi
Thank you for your fast reply.
I tested the serialize and deserialize functions from the TensorRT Python example, and they work with no problem,
e.g. the TensorRT example:
......
# Deserialize a saved engine (model-engine version).
with open("sample_uff.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
..........
# Build a TensorRT engine and serialize it to disk.
with build_engine_uff(uff_model_file) as engine:
    with open("sample_uff.engine", "wb") as f:
        f.write(engine.serialize())
However, I failed to make my Python script work with the model engines from objectDetector_Yolo (DeepStream).
I think the model-engine files in objectDetector_Yolo need a plugin whenever runtime.deserialize_cuda_engine runs.
I hope to use the model files from objectDetector_Yolo (DeepStream) together with the custom-layer plugin (libnvdsinfer_custom_impl_Yolo.so) in Python.
Could you give any advice, or a Python example that uses deserialize_cuda_engine with an IPluginFactory? A minimal sketch of what I tried is below.
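This is roughly what I tried (a sketch; the engine file name is a placeholder standing in for whichever engine file the objectDetector_Yolo sample generated):

import tensorrt as trt

TRT_LOGGER = trt.Logger()

# "model_b1_fp32.engine" is a placeholder for the engine file produced
# by the objectDetector_Yolo sample.
with open("model_b1_fp32.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    # This call fails, presumably because the engine references custom
    # layers that were created through IPluginFactory at build time.
    engine = runtime.deserialize_cuda_engine(f.read())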
This is exactly why you are seeing these errors. The answer provided in the thread above holds good for your question as well. For your Python implementation, you will also need to link TensorRT's leaky ReLU plugin (and YOLO's custom region layer, if required) through a plugin factory.
There may not be an existing sample that shows how to use that API, so you will have to implement your own. A rough sketch of the general pattern is below.
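As a rough sketch (not from an official sample): for plugins that register themselves with the TensorRT plugin registry (IPluginV2 via REGISTER_TENSORRT_PLUGIN), you can load the shared library and initialize the registry before deserializing. The library and engine file names below are hypothetical, and this pattern does not cover the older IPluginFactory interface.

import ctypes
import tensorrt as trt

TRT_LOGGER = trt.Logger()

# Hypothetical plugin library whose creators register themselves via
# REGISTER_TENSORRT_PLUGIN (IPluginV2); an IPluginFactory-based library
# such as the DeepStream Yolo one cannot be loaded this way.
ctypes.CDLL("./libmy_trt_plugins.so")

# Make TensorRT's built-in plugins (and any registered custom ones)
# visible to the deserializer.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with open("sample.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())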
I ask because I want to know how to connect the model engine with the IPlugin layer (*.so).
You cannot use the C++ custom library from the DeepStream Yolo sample, "libnvdsinfer_custom_impl_Yolo.so", in your Python code. Leaky ReLU and region are already available as plugin layers in TensorRT, so you can use them directly in your plugin factory. If you want to use yolov3 or yolov3-tiny, then you need to port the C++ implementation to Python and then use it. A sketch for checking which built-in plugins are available follows.
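As a quick way to confirm which built-in plugin creators your TensorRT build ships (a sketch; "LReLU_TRT" and "Region_TRT" are the names these plugins use in recent TensorRT releases):

import tensorrt as trt

TRT_LOGGER = trt.Logger()

# Register TensorRT's built-in plugins with the global registry.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Enumerate the registered creators; LReLU_TRT and Region_TRT should
# appear in this list if the plugin library loaded correctly.
registry = trt.get_plugin_registry()
for creator in registry.plugin_creator_list:
    print(creator.name, creator.plugin_version)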