Getting an engine file from TensorRT

I am trying to get the engine file produced by TensorRT while converting a TensorFlow frozen graph into a TensorRT-optimized frozen graph. I need this engine file for a DeepStream application: I want to point the Primary GIE of the DeepStream app directly at the engine. What is the best way to go about this?

Hi,

DeepStream includes a sample that converts an SSD detector from a TensorFlow model into a TensorRT engine and then runs inference on it with DeepStream.
You can follow the steps described in the sample's README for your own .pb file:
/opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_SSD/README
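Once an engine file exists (the sample builds and serializes one the first time it runs), you can point the Primary GIE at it directly through the nvinfer configuration file, so DeepStream loads the serialized engine instead of rebuilding it at startup. A minimal sketch of the relevant config fragment; all file names below are placeholders for your own files:

```ini
# Fragment of an nvinfer (PGIE) config file -- file names are hypothetical.
[property]
gpu-id=0
# Load this pre-built TensorRT engine directly instead of
# regenerating it from the original model at startup.
model-engine-file=my_model_b1_gpu0_fp32.engine
labelfile-path=labels.txt
batch-size=1
# network-mode: 0=FP32, 1=INT8, 2=FP16 -- must match how the engine was built.
network-mode=0
gie-unique-id=1
```

Note that an engine file is specific to the GPU, TensorRT version, batch size, and precision it was built with, so it generally needs to be regenerated when any of those change.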

Thanks.