How to run a custom .pb model file on DeepStream 5.0?

I have created a custom object-detection model, saved in the object_detection.pb file format. How can I run this file on DeepStream 5.0?

I have a Jetson Nano with JetPack 4.4, CUDA 10.2, and DeepStream 5.0.
Please suggest the steps.

You need to try converting the .pb model to a .onnx model.
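One common way to do this is with the tf2onnx tool. A minimal sketch, assuming your .pb is a frozen graph and that the tensor names below (input_tensor:0, detection_boxes:0) are placeholders for your model's actual input and output names:

pip install tf2onnx
python -m tf2onnx.convert --graphdef object_detection.pb --inputs input_tensor:0 --outputs detection_boxes:0 --output object_detection.onnx --opset 11

If your model was exported as a TensorFlow SavedModel directory instead, tf2onnx also accepts --saved-model in place of --graphdef.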

After converting the .pb model to a .onnx model, what are the next steps to run the .onnx model on DeepStream?

Can we run any custom-trained model on DeepStream, or does it need to be trained in some different way?
If it needs to be trained differently, please suggest the steps.

Can DeepStream run on an RTX series card like the 2060?

Will DeepStream give good performance on an RTX 2060 card?

Hi,

1.
You can run the ONNX model with DeepStream as long as TensorRT supports all of the model's layers.
Here is the TensorRT support matrix for your reference.
You can confirm by running the model through trtexec like this:

/usr/src/tensorrt/bin/trtexec --onnx=[your/model]
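Once trtexec parses the model cleanly, you point the Gst-nvinfer plugin at the ONNX file from its config. A minimal sketch of an nvinfer config, where the file names and class count below are placeholders for your own:

[property]
gpu-id=0
onnx-file=object_detection.onnx
labelfile-path=labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 0=detector
network-type=0
num-detected-classes=4
gie-unique-id=1

For a custom detector you usually also need a bounding-box parser that matches your model's output layers, supplied through the parse-bbox-func-name and custom-lib-path properties. You can then reference this config from the primary-gie section of a deepstream-app config and run the pipeline with deepstream-app -c <your_app_config>.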

2.
And based on our document below, you can run the dGPU version of DeepStream on an RTX series card:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_Quickstart.html#dgpu-setup-for-ubuntu

Although we don't have profiling data for the 2060, here is one for the T4:
You should get good performance on a dGPU, since DeepStream is optimized for both Jetson and dGPU.
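If you want a first-order number on your own 2060, trtexec reports throughput and latency after building the engine. For example, assuming FP16 precision is acceptable for your model:

/usr/src/tensorrt/bin/trtexec --onnx=object_detection.onnx --fp16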

Thanks.