about model-engine-file

Hi,

DeepStream generates its model-engine-file, e.g. model_b1_int8.engine, whenever we run any DeepStream app.

I have a question about these model-engine-files and am interested in this part:

Is it possible to reuse this model-engine file in TensorRT Python code?

for example,

I would like to use HostDeviceMem and do_inference in common.py:

/usr/src/tensorrt/samples/python/common.py

Hi,

YES.

The model-engine-file is the output of TensorRT.
You can deserialize it with this function:
https://github.com/AastaNV/TRT_object_detection/blob/master/main.py#L48

engine = runtime.deserialize_cuda_engine(buf)
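
Put together, a minimal sketch (the engine filename below is just a placeholder for whatever DeepStream generated, e.g. model_b1_int8.engine):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# open the engine file DeepStream wrote and deserialize it
with open("model_b1_int8.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())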

But please note that the engine file cannot be used across platforms or software versions.
Thanks.

Hi
Thank you for your fast reply,
I tested the serialize and deserialize functions in the TensorRT Python examples; they work fine, no problem.

e.g. the TensorRT example:

......
    # model-engine version: deserialize the saved engine
    with open("sample_uff.engine","rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
..........

    # Build a TensorRT engine.
    with build_engine_uff(uff_model_file) as engine:
        with open("sample_uff.engine","wb") as f:
            f.write(engine.serialize())
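
To run inference on the deserialized engine I use the helpers from /usr/src/tensorrt/samples/python/common.py mentioned above; a rough sketch (the allocate_buffers / do_inference signatures are the ones from that sample, and the input here is only dummy data):

import numpy as np
import tensorrt as trt
import common  # /usr/src/tensorrt/samples/python/common.py

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

with open("sample_uff.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

# allocate_buffers() returns HostDeviceMem pairs plus the bindings list and a CUDA stream
inputs, outputs, bindings, stream = common.allocate_buffers(engine)

with engine.create_execution_context() as context:
    # dummy input just to exercise the engine; replace with real preprocessed data
    np.copyto(inputs[0].host, np.random.random_sample(inputs[0].host.shape).astype(inputs[0].host.dtype))
    results = common.do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs, stream=stream)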

But I failed to get my Python code working with the model-engine files from objectDetector_yolo (DeepStream).

I reviewed uff_custom_plugin in the TensorRT examples and others, but that source only uses a custom layer while converting from PB to UFF:
/usr/src/tensorrt/samples/python/uff_custom_plugin/lenet5.py
https://github.com/AastaNV/TRT_object_detection/blob/master/main.py
https://github.com/AastaNV/TRT_object_detection/blob/master/model_ssd_inception_v2_coco_2017_11_17.py

I can't find any examples of using deserialize_cuda_engine with IPluginFactory:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/infer/Core/Runtime.html

I think the model-engine files in objectDetector_yolo need their plugin for runtime.deserialize_cuda_engine to work.
I hope to use the model files from objectDetector_yolo (DeepStream) with the custom layer plugin (libnvdsinfer_custom_impl_Yolo.so) in Python.

Could you give any advice or a Python example of using deserialize_cuda_engine with IPluginFactory?

Thanks

I checked the following:

  • yolo.cfg (shape and network)
  • config_infer_primary_yoloV2.txt (custom layer info and model-engine-file)
  • label file

Every time my Python code loads the model-engine file from objectDetector_yolo (DeepStream), it fails with the errors below:

[TensorRT] ERROR: getPluginCreator could not find plugin LReLU_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin LReLU_TRT
[TensorRT] ERROR: getPluginCreator could not find plugin LReLU_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin LReLU_TRT
[TensorRT] ERROR: getPluginCreator could not find plugin LReLU_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin LReLU_TRT
[TensorRT] ERROR: getPluginCreator could not find plugin LReLU_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin LReLU_TRT
[TensorRT] ERROR: getPluginCreator could not find plugin LReLU_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin LReLU_TRT
[TensorRT] ERROR: getPluginCreator could not find plugin LReLU_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin LReLU_TRT
[TensorRT] ERROR: getPluginCreator could not find plugin LReLU_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin LReLU_TRT
[TensorRT] ERROR: getPluginCreator could not find plugin LReLU_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin LReLU_TRT
[TensorRT] ERROR: getPluginCreator could not find plugin Region_TRT version 1 namespace 
[TensorRT] ERROR: Cannot deserialize plugin Region_TRT

I found the same error reported here:
How to load and deserialize the .engine file? - Jetson TX2 - NVIDIA Developer Forums

I read the TensorRT manual again:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#serial_model_c
https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#extending

Is it possible to reuse this model-engine file (objectDetector_yolo), together with its plugin, in TensorRT Python code at runtime?

Thanks

Hi,

The answer provided here: https://devtalk.nvidia.com/default/topic/1058409/jetson-tx2/how-to-load-and-deserialize-the-engine-file-/post/5367386/#5367386 is exactly why you are seeing these errors. The answer in that thread applies to your question as well. For your Python implementation, you will also need to link TRT's leaky ReLU (and yolo's custom region layer, if required) through a plugin factory.

Here is the link to the Python documentation: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/infer/Core/Runtime.html

Thanks for your advice,

I read your link but could not get it working, so I hope to see an example,

because I want to know how to connect the model engine with the IPlugin layer (*.so).

I hope to reuse the TensorRT engine from Python at runtime.

There may not be an existing sample which shows how to use that API, so you will have to implement your own.

because I want to know how to connect the model engine with the IPlugin layer (*.so).

You cannot use the C++ custom library from the DeepStream yolo sample, “libnvdsinfer_custom_impl_Yolo.so”, for your Python code. Leaky ReLU and Region are already available as plugin layers in TensorRT, so you can use them directly in your plugin factory. If you want to use yolov3 or yolov3-tiny, then you need to port the C++ implementation to Python and then use it.
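
As a rough sketch of the registry side in Python (assuming the engine only relies on TensorRT's built-in LReLU_TRT / Region_TRT creators, which init_libnvinfer_plugins registers; the engine filename is a placeholder):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# register TensorRT's built-in plugin creators (LReLU_TRT, Region_TRT, ...)
# BEFORE deserializing, otherwise getPluginCreator cannot find them
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

with open("model_b1_int8.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())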

Hi,

You can check whether this sample meets your requirement:
https://github.com/AastaNV/TRT_object_detection

Thanks.