Running inference with multiple deep learning models on the same camera source

Please provide complete information as applicable to your setup.

• Hardware Platform (Xavier)
• DeepStream Version 4.0.1
• JetPack Version (4.2)

Hello, I need to deploy multiple deep learning models on the same camera source input.
To be more specific, I need to run YOLOv3 and a TensorFlow checkpoint of a depth-estimation model on the same camera source simultaneously.
Can anyone specify the exact steps to implement this pipeline in the DeepStream SDK?

Can you share the detailed pipeline? Also, can the TF model be converted to ONNX or UFF so that TensorRT can parse and run it?
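For reference, a TensorFlow checkpoint can typically be converted to ONNX with the tf2onnx tool. The command below is a hedged sketch, not from this thread; the checkpoint path and the input/output tensor names are placeholders that depend on the actual depth-estimation model:

```shell
# Convert a TF checkpoint to ONNX (paths and tensor names are placeholders).
# tf2onnx reads the .meta graph next to the checkpoint files.
python3 -m tf2onnx.convert \
    --checkpoint model.ckpt.meta \
    --inputs input_image:0 \
    --outputs depth_output:0 \
    --output depth_model.onnx
```

The resulting ONNX file can then be pointed to from the nvinfer config (`onnx-file=depth_model.onnx`), letting TensorRT build the engine on first run.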

Hello mchi, the pipeline is the same as the YOLOv3 deployment example provided with DeepStream.
Regarding the conversion, I will check this.
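For running two models on one camera source, one possible pipeline shape is to tee the batched stream into two independent nvinfer branches. This is a rough gst-launch sketch, not from the thread; element properties, resolutions, and config file paths are placeholders:

```shell
# Sketch: one camera, two parallel inference branches (placeholder configs).
gst-launch-1.0 \
  nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! m.sink_0 \
  nvstreammux name=m batch-size=1 width=1280 height=720 ! tee name=t \
  t. ! queue ! nvinfer config-file-path=yolov3_config.txt ! nvvideoconvert ! nvdsosd ! nveglglessink \
  t. ! queue ! nvinfer config-file-path=depth_config.txt ! fakesink
```

Each branch needs its own queue after the tee so the two models can run at their own pace without stalling each other.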

Is there any option to use the checkpoint of the model directly?

If you want to use the TF model directly, DS Triton, introduced in DS 5.0, could be an option.
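With DS Triton, the TF model is served from a Triton model repository rather than converted to a TensorRT engine. Below is a minimal `config.pbtxt` sketch for a TensorFlow SavedModel; the model name, tensor names, and dimensions are placeholders, not taken from this thread:

```
name: "depth_estimation"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_image"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "depth_map"
    data_type: TYPE_FP32
    dims: [ 1, 224, 224 ]
  }
]
```

The nvinferserver plugin then points at this model repository entry instead of an engine file.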

DS Triton:

If I have DeepStream 4, is there any solution with a TF model? Or am I restricted to UFF and ONNX models?

Also, could you provide the steps required to deploy a TF model together with YOLOv3 in DeepStream 5.0?

To run a TF model, you need to upgrade to JP4.4 with DS5.0. Here is a sample that runs an SSD TF model; you can refer to it to deploy your YOLOv3 model.

  1. Install JP4.4DP with DeepStream 5.0 on Jetson Xavier
  2. Install DeepStream Python
    2.1 Download deepstream_python_v0.9.tbz2 from
    2.2 Install DS Python:
        tar xpf deepstream_python_v0.9.tbz2
        cd deepstream_python_v0.9/
        tar xpf ds_pybind_v0.9.tbz2 -C /opt/nvidia/deepstream/deepstream-5.0/sources/
    2.3 Install:
        cd /opt/nvidia/deepstream/deepstream-5.0/samples/
        ./
    2.4 Run the sample:
        cd /opt/nvidia/deepstream/deepstream/sources/python/apps/deepstream-ssd-parser
        $ python3 …/…/…/…/samples/streams/sample_720p.h264