Use .engine file in Python

I used the Transfer Learning Toolkit to train resnet18detectnetv2 on a custom dataset, then used tlt-converter to convert the .etlt model into an .engine file. I am able to deploy both the .etlt and the .engine file in DeepStream, and they work.
But now I need to deploy the model in Python, and I cannot find how to load the .engine model there. Any suggestions?

Hi,

To use the engine for inference, you simply deserialize it.
Please refer to the links below; a minimal sketch of the deserialization step follows them:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-601/tensorrt-developer-guide/index.html#serial_model_python
https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html#yolov3_onnx
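As a minimal sketch (assuming TensorRT 6.x/7.x, with "model.engine" as a placeholder path for the file produced by tlt-converter):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the serialized engine file into an ICudaEngine.
with open("model.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
```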

Thanks

@SunilJB @sharanssundar
For pre-processing and post-processing, what do I do in the Python code?

The pre- and post-processing steps depend on the particular application.
You can refer to the link below to get some ideas; it includes examples for streaming from a live camera feed and processing images.
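As a rough illustration only: the actual input size, channel order, and scaling come from your training spec, so every constant and the helper name below are assumptions. Pre-processing for a detectnet_v2-style network often looks like this:

```python
import cv2  # assumption: OpenCV is used for image I/O
import numpy as np

def preprocess(image_path, net_w, net_h):
    """Illustrative pre-processing. Confirm the input dimensions,
    channel order, and scaling against your training spec --
    everything here is an assumption, not the TLT-defined pipeline."""
    img = cv2.imread(image_path)                # HWC, BGR, uint8
    img = cv2.resize(img, (net_w, net_h))       # resize to network input size
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # BGR -> RGB
    img = img.astype(np.float32) / 255.0        # scale to [0, 1]
    img = img.transpose(2, 0, 1)                # HWC -> CHW (planar)
    return np.ascontiguousarray(img[None, ...]) # add batch dimension
```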

Thanks

I was able to deserialize the engine and set up inference without errors. However, running the inference produces the following error:
‘tensorrt.tensorrt.ICudaEngine’ object has no attribute ‘infer’
The properties of the engine object: [screenshot]

The line of code is: result = engine.infer(thisImage)

The .engine is built from the TLT sample program “resnet18detectnetv2”.

Any idea or direction on how to resolve this issue would be greatly appreciated. I'm running out of ideas to try.
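For reference, ICudaEngine has no infer() method in the TensorRT Python API; inference is driven through an execution context with explicitly allocated device buffers. A minimal sketch, assuming an implicit-batch engine as produced by tlt-converter for TensorRT 6.x/7.x and pycuda for memory management (the names do_inference and input_array are illustrative, not part of any API):

```python
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context on import
import tensorrt as trt

def do_inference(engine, input_array):
    """Minimal TensorRT inference loop for implicit-batch engines.
    `engine` is a deserialized trt.ICudaEngine; `input_array` is the
    preprocessed image as a contiguous float32 numpy array."""
    with engine.create_execution_context() as context:
        bindings, host_mem, dev_mem, outputs = [], [], [], []
        stream = cuda.Stream()
        for idx in range(engine.num_bindings):
            size = trt.volume(engine.get_binding_shape(idx)) * engine.max_batch_size
            dtype = trt.nptype(engine.get_binding_dtype(idx))
            host = cuda.pagelocked_empty(size, dtype)  # page-locked host buffer
            dev = cuda.mem_alloc(host.nbytes)          # matching device buffer
            bindings.append(int(dev))
            host_mem.append(host)
            dev_mem.append(dev)
            if engine.binding_is_input(idx):
                np.copyto(host, input_array.ravel())   # stage the input
                cuda.memcpy_htod_async(dev, host, stream)
            else:
                outputs.append(idx)
        context.execute_async(batch_size=1, bindings=bindings,
                              stream_handle=stream.handle)
        for idx in outputs:
            cuda.memcpy_dtoh_async(host_mem[idx], dev_mem[idx], stream)
        stream.synchronize()
        return [host_mem[idx] for idx in outputs]
```

Note that the returned arrays are the raw network outputs; they still need the detectnet_v2 post-processing discussed above before they become usable detections.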