Network with multiple branches

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 6

• Issue Type (questions, new requirements, bugs): questions

Hello,

I am new to DeepStream and I am planning to deploy a multitask model. My model has two branches: one branch is for semantic segmentation with 3 classes, and the other branch is for object detection with 2 classes. I have a couple of questions:

  1. Is it possible to deploy such a model in DeepStream?
  2. I was planning to set network-type to 100, as I saw in one of the tutorials. My idea is to use a tee element to split the pipeline into two branches, so that I can apply different post-processing to each branch’s output. Something like this (a Python sketch follows the list):
    … ! nvinfer ! tee name=t
    t. ! queue ! post-processing for object detection (plugin or probe) ! nvdsosd ! sink
    t. ! queue ! post-processing for semantic segmentation (plugin or probe) ! nvsegvisual ! sink
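
Here is roughly how that layout would look when built with the Python bindings. This is only a sketch: the source URI, stream-mux dimensions, config file name, and fakesink placeholders are assumptions, and the actual post-processing would go into the pad probes on the two queues.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst, GLib

    Gst.init(None)

    # tee duplicates the batched buffers (and the attached tensor meta),
    # so each branch sees the same raw inference output.
    pipeline = Gst.parse_launch(
        "nvstreammux name=m batch-size=1 width=1280 height=720 "
        "! nvinfer config-file-path=multitask_pgie.txt ! tee name=t "
        "uridecodebin uri=file:///path/to/video.mp4 ! m.sink_0 "
        # Branch 1: detection post-processing (probe on det_q), then OSD.
        "t. ! queue name=det_q ! nvvideoconvert ! nvdsosd ! fakesink "
        # Branch 2: segmentation post-processing (probe on seg_q), then overlay.
        "t. ! queue name=seg_q ! nvsegvisual ! fakesink"
    )

    def det_probe(pad, info, user_data):
        # decode the detection tensors here and attach NvDsObjectMeta
        return Gst.PadProbeReturn.OK

    def seg_probe(pad, info, user_data):
        # decode the segmentation tensor here and attach segmentation meta
        # (nvsegvisual downstream renders from that meta)
        return Gst.PadProbeReturn.OK

    pipeline.get_by_name("det_q").get_static_pad("src").add_probe(
        Gst.PadProbeType.BUFFER, det_probe, None)
    pipeline.get_by_name("seg_q").get_static_pad("src").add_probe(
        Gst.PadProbeType.BUFFER, seg_probe, None)

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()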

I would like to know if this approach is feasible.

Thank you in advance!

Could you attach a link to these tutorials? And could you share the input and output layers of your model?

Hi, thanks for your reply!

What I meant by “tutorial” is the C++ and Python samples provided in the DeepStream repository. I was referring to this config file:
deepstream_tao_apps/configs/nvinfer/peopleSemSegNet_tao/shuffle/pgie_peopleSemSegShuffleUnet_tao_config.txt at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub, which sets:
network-type=100
output-tensor-meta=1
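
Applied to my two-headed model, I assume the nvinfer config would look roughly like this (a sketch only; the model path and precision are placeholders, while the last two keys are the ones from the file above):

    [property]
    gpu-id=0
    # placeholder model path; the TensorRT engine is generated on first run
    onnx-file=multitask_model.onnx
    batch-size=1
    # 0=FP32, 1=INT8, 2=FP16
    network-mode=2
    # 100 = "other": skip nvinfer's built-in post-processing
    network-type=100
    # attach the raw output tensors to each frame as NvDsInferTensorMeta
    output-tensor-meta=1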

The semantic segmentation metadata is later handled in a probe in this file:

To give you a better idea of the model architecture, I’ve attached a sample image:

I would like to know whether this approach is feasible and whether a model with this kind of architecture can be passed to the inference plugin.

In theory, it’s fine as long as you do the post-processing yourself.
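
With the Python bindings, a pad probe can read the raw tensors that output-tensor-meta=1 attaches to each frame. The sketch below shows only the access pattern; the layer names det_output and seg_output and the element count are assumptions, so substitute your model’s actual output layers and sizes.

    import ctypes

    import numpy as np
    import pyds
    from gi.repository import Gst

    # Placeholders: use your model's real output layer names and sizes.
    DET_LAYER = "det_output"
    SEG_LAYER = "seg_output"
    NUM_ELEMENTS = 1000  # flattened element count of the layer being read

    def tensor_probe(pad, info, user_data):
        gst_buffer = info.get_buffer()
        if not gst_buffer:
            return Gst.PadProbeReturn.OK
        batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
        l_frame = batch_meta.frame_meta_list
        while l_frame is not None:
            frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
            l_user = frame_meta.frame_user_meta_list
            while l_user is not None:
                user_meta = pyds.NvDsUserMeta.cast(l_user.data)
                if user_meta.base_meta.meta_type == \
                        pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                    tmeta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                    for i in range(tmeta.num_output_layers):
                        layer = pyds.get_nvds_LayerInfo(tmeta, i)
                        # View the host copy of this output tensor as numpy.
                        ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                          ctypes.POINTER(ctypes.c_float))
                        tensor = np.ctypeslib.as_array(ptr, shape=(NUM_ELEMENTS,))
                        if layer.layerName == DET_LAYER:
                            pass  # decode boxes/scores, attach NvDsObjectMeta
                        elif layer.layerName == SEG_LAYER:
                            pass  # argmax over the 3 classes, attach seg meta
                l_user = l_user.next
            l_frame = l_frame.next
        return Gst.PadProbeReturn.OK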
