Step-wise procedure to deploy a custom TensorFlow 2.4 object detection model in DeepStream 5.1


I need a step-wise procedure to deploy a single-class, custom-trained TF 2.4 object detection SSD MobileNet V2 FPN 640x640 model in DeepStream 5.1. Since I am new to this, I will also need assistance in deploying it for a real-time stream.


I am using JetPack 4.5.1 on a Jetson Nano.

Relevant Files

I have successfully trained the model and exported the frozen graph.
I have also run inference from the .ckpt and .pb files, achieving 5 FPS on the Jetson Nano.
I need this to be faster for real-time use, so I need the steps to achieve that.

I kindly request that you provide all the relevant posts, forum discussions, etc. for this.

Thanks in advance.


There are some tutorials shared by our users.
For example, please check whether the page below meets your requirement:


Thanks @AastaLLL. I have looked into this, but it is for TF 1.14, and after going through some developer forum topics I learned that the procedure for TF 2.4 models is different. Could you kindly clarify this?


For TensorFlow 2.x, please convert the model into ONNX format and update the DeepStream config path accordingly.
You can use the converter below to generate an ONNX model:
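As a rough sketch, the conversion can be done with the tf2onnx tool. The model directory name, output file name, and opset number below are assumptions; adjust them to match your TF2 Object Detection API export.

```shell
# Install the TF-to-ONNX converter (version requirements may vary with
# your TensorFlow version; this is an assumption, adjust as needed).
pip3 install -U tf2onnx

# Convert the exported TF2 SavedModel to ONNX.
# "exported-model/saved_model" is a placeholder for your export directory;
# opset 11 is a common choice for TensorRT compatibility.
python3 -m tf2onnx.convert \
    --saved-model exported-model/saved_model \
    --output ssd_mobilenet_v2_fpn.onnx \
    --opset 11
```

If you only have a frozen .pb graph rather than a SavedModel directory, tf2onnx also accepts a graph definition via `--graphdef`, in which case the input and output tensor names must be supplied explicitly.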


Hi @AastaLLL. I was able to convert the custom TF 2.4 .pb model to .onnx as you suggested. I then followed the forum for the next steps, but the model has to be converted into a TensorRT .engine file and then deployed in DeepStream 5.1. Could you please help me with these steps, as I am having some difficulty understanding them? I need the steps from the TensorRT engine conversion through running it in DeepStream on a sample video.
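A minimal sketch of those two steps, assuming the ONNX file from the previous post. All file names, the label file, and the custom parser library are placeholders; SSD models exported from the TF Object Detection API normally need a custom bounding-box parser library compiled for their output tensor layout, so treat this config as a template rather than something that runs as-is.

```shell
# 1. Build a TensorRT engine from the ONNX model with trtexec
#    (trtexec ships with TensorRT in JetPack; FP16 mode is an assumption
#    that usually helps throughput on Jetson Nano).
/usr/src/tensorrt/bin/trtexec \
    --onnx=ssd_mobilenet_v2_fpn.onnx \
    --saveEngine=ssd_mobilenet_v2_fpn.engine \
    --fp16

# 2. Write a minimal Gst-nvinfer config for DeepStream.
#    Paths, normalization values, and the parser function/library names
#    below are placeholders to adapt to your model.
cat > config_infer_primary_ssd.txt <<'EOF'
[property]
gpu-id=0
net-scale-factor=0.0078431372
offsets=127.5;127.5;127.5
model-color-format=0
model-engine-file=ssd_mobilenet_v2_fpn.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=./libnvds_infercustomparser.so
EOF

# 3. Point a deepstream-app config's [primary-gie] section at the file
#    above, set its [source0] to a sample video, then run:
deepstream-app -c deepstream_app_config.txt
```

If `model-engine-file` is omitted and `onnx-file` is given instead, nvinfer can also build the engine itself on first run, at the cost of a long startup on the Nano.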