I want to run inference using a custom TensorFlow model with DeepStream. What are the steps I should follow?

I have a .pb file of a trained model. I figured that I have to convert it into UFF or ONNX format and use it with DeepStream.

Do I have any other options?

Hi,

Yes, that is correct.
TensorRT cannot load a .pb file directly, so please convert it to .uff or .onnx first.
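For example, a frozen .pb graph can usually be converted to ONNX with the tf2onnx tool. The tensor names below are only placeholders; replace them with your model's actual input and output nodes:

python3 -m tf2onnx.convert --graphdef frozen_model.pb --output model.onnx --inputs input_tensor:0 --outputs detection_boxes:0 --opset 11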
Here is an example for your reference:

/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD
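That sample shows how to build the custom parser and configure Gst-nvinfer for a converted model. As a rough sketch only (file names, class count, and precision mode are placeholders you would adapt to your model), the [property] section of the nvinfer config file would point at the converted model like this:

[property]
gpu-id=0
onnx-file=model.onnx
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
gie-unique-id=1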

Thanks.