How can I run any custom .pb model file on DeepStream 5.0?

We want to run pretrained models using a Python script. Please suggest how to do this. Thanks.

You can run PeopleNet with the TensorRT API directly.
For example: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#create_network_python
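As a rough illustration of that approach, here is a minimal sketch of building an engine with the TensorRT Python API (roughly the TensorRT 7.x API that ships with DeepStream 5.0). It assumes the model has already been exported to ONNX; the file name `peoplenet.onnx` and the workspace size are placeholders, not values from this thread.

```python
# Minimal sketch: build a TensorRT engine from an ONNX export (TensorRT 7.x API).
# "peoplenet.onnx" is a hypothetical path, not an official asset name.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path):
    builder = trt.Builder(TRT_LOGGER)
    # ONNX parsing in TensorRT 7 requires an explicit-batch network
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            for i in range(parser.num_errors):
                print(parser.get_error(i))
            return None

    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30  # 1 GiB of build workspace
    return builder.build_engine(network, config)

engine = build_engine("peoplenet.onnx")
```

The returned engine can then be serialized and used for inference with an execution context, as described in the linked developer guide.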

Thanks for helping,
In the Python binding application we use resnet10.caffemodel and run it from a .py file. In the same way, we want to use different pretrained models from Python scripts.

Hi,

The Python code is the same.
You only need to update the configuration file to read the TLT model, as shown below:

https://github.com/NVIDIA-AI-IOT/deepstream_tlt_apps/blob/master/configs/dssd_tlt/pgie_dssd_tlt_config.txt#L30
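The relevant part of such a configuration looks roughly like the sketch below; the model path, encoding key, label file, and class count are placeholders, so check the linked pgie_dssd_tlt_config.txt for the actual values.

```
[property]
gpu-id=0
net-scale-factor=1.0
# TLT models are loaded through these two keys instead of the
# model-file/proto-file pair used for Caffe models
tlt-encoded-model=./models/dssd/dssd_resnet18.etlt
tlt-model-key=nvidia_tlt
labelfile-path=./labels.txt
num-detected-classes=4
network-mode=0
```

In the Python app itself nothing changes except that the nvinfer element's config-file-path property points to this file, e.g. `pgie.set_property("config-file-path", "pgie_dssd_tlt_config.txt")`.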

Thanks.

I have a PyTorch YOLOv5 fire-detection model and I want to run it using DeepStream 5. Is that possible?

Thanks

@AastaLLL
The Gst plugins for the DeepStream SDK are provided in C++. Is it possible to access classes or variables of those plugins from the Python interface?

@AastaLLL
How can we run single-image inference (using C++ or the Python bindings) with a DeepStream model?