Creating a DeepStream application with a TensorRT engine

I have a YoloV3 model as a TensorFlow frozen graph. After converting it to a TensorRT-optimized graph (TF-TRT, TF version 1.14), I saved the engine file. Now I want to use this engine file in a DeepStream application, where it will serve as the primary GIE. How do I go about this? Also, once I can run this inference engine, can I reuse the native bounding-box parsing code from the YoloV3 samples? If not, how do I write my own?

Can you try to test the saved engine file with trtexec?
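For example, a minimal check along these lines (the engine file name is a placeholder; depending on your TensorRT release the flag is --loadEngine or, in older versions, --engine):

```
trtexec --loadEngine=yolov3.engine
```

If trtexec can deserialize and run the engine, the file itself is valid and the remaining work is on the DeepStream side.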

You can refer to the deepstream-test1 sample, or to GitHub - NVIDIA-AI-IOT/deepstream_4.x_apps: deepstream 4.x samples to deploy TLT training models (in particular pgie_frcnn_uff_config.txt and nvdsinfer_customparser_frcnn_uff), to learn how to deploy an engine file and how to define a custom output parser.
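As a rough sketch, deploying a pre-built engine in the primary GIE mostly comes down to pointing the nvinfer config at the engine and at your parser library. The file and function names below are placeholder assumptions, not values from the samples:

```
[property]
gpu-id=0
# 1/255: scale pixel values to [0,1] as YOLO expects
net-scale-factor=0.0039215697906911373
# pre-built TensorRT engine (placeholder name)
model-engine-file=yolov3.engine
num-detected-classes=80
# custom output parser exported by your library (placeholder names)
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=libnvdsinfer_custom_impl_Yolo.so
```

The parser itself is a C++ function with the prototype declared in nvdsinfer_custom_impl.h. The body below is only a compilable stub; the actual YOLOv3 decoding (grid/anchor decode, thresholding) still has to be filled in:

```cpp
#include "nvdsinfer_custom_impl.h"

/* Stub custom bbox parser: receives the raw output layers of the engine
 * and must fill objectList with detections (classId, left, top, width,
 * height, detectionConfidence). Real YOLOv3 decoding goes here. */
extern "C" bool NvDsInferParseCustomYoloV3(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferParseObjectInfo> &objectList)
{
    /* TODO: decode outputLayersInfo into objectList. */
    return true;
}

/* Compile-time check that the function matches the prototype
 * nvinfer expects for a custom bbox parser. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomYoloV3);
```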