Thank you for your quick reply.
I tested the serialize and deserialize functions in the TensorRT Python examples, and they work fine.
e.g. from the TensorRT examples:
# Deserialize a saved engine (model-engine version).
with open("sample_uff.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# Build a TensorRT engine and serialize it to disk.
with build_engine_uff(uff_model_file) as engine:
    with open("sample_uff.engine", "wb") as f:
        f.write(engine.serialize())
However, I failed to do the same in my own Python code using the model engines from objectDetector_yolo (DeepStream).
I reviewed the uff_custom_plugin TensorRT example and others, but that sample only uses a custom layer while converting from PB to UFF.
I can't find any examples that use deserialize_cuda_engine with IPluginFactory.
I think the model engine files in objectDetector_yolo need the plugin whenever runtime.deserialize_cuda_engine runs.
I hope to use the model engine files from objectDetector_yolo (DeepStream) together with the custom layer plugin (libnvdsinfer_custom_impl_Yolo.so) in Python.
Could you give me any advice or a Python example that uses deserialize_cuda_engine with IPluginFactory?
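For reference, here is a minimal sketch of what I am attempting. The helper name load_engine_with_plugins is mine, and I am assuming the custom layers register themselves through the plugin creator registry when the .so is loaded with ctypes (rather than going through IPluginFactory, which I could not find in the Python API); please correct me if that assumption is wrong.

```python
import ctypes


def load_engine_with_plugins(engine_path, plugin_lib_path):
    """Deserialize a TensorRT engine that depends on custom layer plugins.

    Assumption: loading the plugin library (e.g.
    libnvdsinfer_custom_impl_Yolo.so) with ctypes is enough for its plugin
    creators to register, so deserialization can find the custom layers.
    """
    import tensorrt as trt  # imported here so the sketch is self-contained

    # Load the DeepStream custom-layer library so its plugins register.
    ctypes.CDLL(plugin_lib_path, mode=ctypes.RTLD_GLOBAL)

    logger = trt.Logger(trt.Logger.WARNING)
    # Also register TensorRT's built-in plugins.
    trt.init_libnvinfer_plugins(logger, "")

    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())
```

I would call it as load_engine_with_plugins("model_b1_fp32.engine", "libnvdsinfer_custom_impl_Yolo.so"), but so far deserialization fails, which is why I suspect an IPluginFactory is required somewhere.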