Basic DeepStream example/tutorial for object detection?

I’m interested in incorporating my custom-trained YOLOv3 model (Keras/TensorFlow) as an object-detection plugin within a DeepStream pipeline. Essentially, I want to take multiple RTSP video input streams and detect objects within them; when a detection is made on a stream, I will push a detection event onto an event queue or message bus. Is there a tutorial relevant to this use case? I am brand new to DeepStream, so I want to keep it simple at first. Thanks in advance for any suggestions; I really appreciate your help.
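For context, the event-queue side I have in mind looks roughly like the sketch below. It is plain Python with no DeepStream dependency, and all names (`publish_detection`, `event_queue`, the event fields) are illustrative, not part of any DeepStream API:

```python
import json
import queue
import time

# Shared queue standing in for a real message bus (Kafka, MQTT, etc.).
event_queue = queue.Queue()

def publish_detection(stream_id, label, confidence, bbox):
    """Package one detection as a JSON event and enqueue it."""
    event = {
        "stream_id": stream_id,
        "label": label,
        "confidence": confidence,
        "bbox": bbox,  # (left, top, width, height) in pixels
        "timestamp": time.time(),
    }
    event_queue.put(json.dumps(event))

# In the real pipeline this would be called from a callback attached to
# the inference element; here we just simulate a single detection.
publish_detection("rtsp-cam-0", "person", 0.91, (120, 40, 80, 200))
print(event_queue.qsize())  # 1
```

The open question for me is how to get from a DeepStream detection to that `publish_detection` call.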

I have managed to get the example inference application working with the ResNet10 Caffe model. This uses the model engine file /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b30_int8.engine.
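For reference, the engine file is selected in the sample's nvinfer config. A fragment like the following is what I edited (taken from the shipped config for the Primary_Detector; exact keys and relative paths may differ slightly between DeepStream releases):

```ini
[property]
# Fragment of the sample nvinfer config (e.g. config_infer_primary.txt).
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
model-file=../../models/Primary_Detector/resnet10.caffemodel
proto-file=../../models/Primary_Detector/resnet10.prototxt
labelfile-path=../../models/Primary_Detector/labels.txt
batch-size=30
network-mode=1          # 1 = INT8 precision
num-detected-classes=4  # Car, Bicycle, Person, Roadsign
```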

My assumption is that the next step is to replace this engine file with one built from my custom-trained YOLOv3 model. My model is built and trained using Keras and is loaded into Keras as an *.h5 file. How can I convert this model file into a model engine file for use within the DeepStream SDK?
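One route I have seen suggested is Keras → ONNX → TensorRT engine. A sketch of what I imagine that looks like (the tool choices and flags here are my assumptions, not verified against my model; YOLOv3's custom layers may additionally require TensorRT plugins):

```shell
# Sketch only: assumes tf2onnx and trtexec are available; flags can
# differ by version, and YOLO-specific layers may need extra handling.
python -m tf2onnx.convert --keras yolov3_custom.h5 --output yolov3_custom.onnx
trtexec --onnx=yolov3_custom.onnx --saveEngine=yolov3_custom.engine --fp16
```

I'd welcome corrections if there is a more standard path for YOLOv3 models.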

The documentation mentions an “objectDetector_YoloV3 sample application”, but searching for it has yielded nothing so far. I have found a few references to code on GitHub in other NVIDIA DevTalk forum posts (such as this and this), but they result in 404 errors. Can anyone point me to something I can follow as an example?

BTW the model I have developed and trained using a custom dataset is based upon keras-yolo3, in case this makes a difference or can guide the discussion.

Please see “sources/objectDetector_Yolo” for instructions on running YOLO models.
You can also look at the NVIDIA Metropolis Documentation, which shows how to use custom YOLO models in DeepStream.
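From the README in that directory, the build-and-run steps look roughly like this (paths and the CUDA_VER value correspond to a DeepStream 4.0 install and should be checked against the README on your system):

```shell
cd /opt/nvidia/deepstream/deepstream-4.0/sources/objectDetector_Yolo
./prebuild.sh                        # downloads the YOLO .cfg/.weights files
export CUDA_VER=10.1                 # match your installed CUDA version
make -C nvdsinfer_custom_impl_Yolo   # builds the custom bbox-parsing library
deepstream-app -c deepstream_app_config_yoloV3.txt
```

Note that this sample consumes Darknet .cfg/.weights files directly rather than a Keras .h5 file.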

Thank you for the guidance, @cshah. This was helpful. If you can point me to other material that explains how to use object-detection models within DeepStream, that would be much appreciated as well. My understanding is that I can pull object-detection models from NGC, train them with DIGITS, and then create corresponding custom nvinfer plugins that can be added into DeepStream pipelines. Is this an accurate description of the workflow?