Single source with multiple inference models in parallel

Hi,
I can run models with deepstream-app, but I want to run two models on one video source. For example, feed the video and the audio to different models.
I noticed there is a sample with two models in series (primary and secondary GIE).
Is there any sample or codebase for implementing a pipeline with GIEs in parallel?

Hey Customer,

  1. Please share your setup with us.
  2. For your question: is it possible to implement your pipeline using two cascaded GIEs?

Hey bcao,
As I described, I want to process video and audio with different models, so cascaded GIEs may not be the right topology.
I just want to know whether parallel GIEs are possible in DeepStream.
As for the setup, do you mean the config file? I not only changed the config but also modified the deepstream-app code.

No, I mean the following info:

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)

OK, here is my setup. I am just asking a question rather than reporting a bug, so there is nothing to reproduce.
• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: the one shipped in the official DeepStream docker image
• NVIDIA GPU Driver Version (valid for GPU only): 450.80.02
• Issue Type (questions, new requirements, bugs): question

To make my question more specific: if I run video inference and audio inference in parallel on the same source:

  1. Does an issue in one of them block processing in the other?
  2. Video and audio require different streammux implementations; for the audio one I need to run “export USE_NEW_NVSTREAMMUX=yes”. Can both exist in one pipeline? If yes, where should I set the USE_NEW_NVSTREAMMUX switch?

We do not have sample code for this, but users can implement it themselves. Add a tee element after nvstreammux that feeds two nvinfer components in parallel. To synchronize the output of both nvinfer elements, attach a probe on the src pad of the nvinfer that is connected to the downstream component in the pipeline, and return from that probe function only once the GstBuffer reference count reaches 1.
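A minimal sketch of that topology, under some stated assumptions: gie1_config.txt and gie2_config.txt are placeholder config file names, videotestsrc and fakesink stand in for the real source and sinks, and the busy-wait in the probe is only illustrative (a real application would signal on a GCond instead).

```c
#include <gst/gst.h>

/* Return only once the shared GstBuffer's refcount is back to 1,
 * i.e. the parallel branch has finished with it. */
static GstPadProbeReturn
sync_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  while (GST_MINI_OBJECT_REFCOUNT_VALUE (buf) > 1)
    g_usleep (100);            /* illustrative busy-wait */
  return GST_PAD_PROBE_OK;
}

int
main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  GstElement *pipeline = gst_pipeline_new ("parallel-gie");
  GstElement *src   = gst_element_factory_make ("videotestsrc", NULL);
  GstElement *conv  = gst_element_factory_make ("nvvideoconvert", NULL);
  GstElement *mux   = gst_element_factory_make ("nvstreammux", NULL);
  GstElement *tee   = gst_element_factory_make ("tee", NULL);
  GstElement *q1    = gst_element_factory_make ("queue", NULL);
  GstElement *q2    = gst_element_factory_make ("queue", NULL);
  GstElement *gie1  = gst_element_factory_make ("nvinfer", NULL);
  GstElement *gie2  = gst_element_factory_make ("nvinfer", NULL);
  GstElement *sink1 = gst_element_factory_make ("fakesink", NULL);
  GstElement *sink2 = gst_element_factory_make ("fakesink", NULL);

  g_object_set (src, "num-buffers", 100, NULL);
  g_object_set (mux, "batch-size", 1, "width", 1280, "height", 720, NULL);
  g_object_set (gie1, "config-file-path", "gie1_config.txt", NULL);
  g_object_set (gie2, "config-file-path", "gie2_config.txt", NULL);

  gst_bin_add_many (GST_BIN (pipeline), src, conv, mux, tee, q1, q2,
                    gie1, gie2, sink1, sink2, NULL);

  /* source -> converter -> nvstreammux sink_0 (request pad) */
  gst_element_link (src, conv);
  GstPad *muxsink = gst_element_get_request_pad (mux, "sink_0");
  GstPad *convsrc = gst_element_get_static_pad (conv, "src");
  gst_pad_link (convsrc, muxsink);
  gst_object_unref (convsrc);
  gst_object_unref (muxsink);

  /* tee splits the batched stream into two parallel inference branches */
  gst_element_link (mux, tee);
  gst_element_link_many (tee, q1, gie1, sink1, NULL);
  gst_element_link_many (tee, q2, gie2, sink2, NULL);

  /* synchronize the branches on one nvinfer's src pad */
  GstPad *srcpad = gst_element_get_static_pad (gie1, "src");
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_BUFFER, sync_probe,
                     NULL, NULL);
  gst_object_unref (srcpad);

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  GstBus *bus = gst_element_get_bus (pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}
```

Build against gstreamer-1.0 (e.g. gcc parallel_gie.c $(pkg-config --cflags --libs gstreamer-1.0)) on a machine with the DeepStream plugins installed.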

Sorry, where did you see that audio needs to use the new streammux?

Here: Gst-nvstreammux New Alpha — DeepStream 5.1 Release documentation
Only the new streammux accepts audio as input. The deepstream-audio sample in the SDK also uses it.

Fusion of video and audio inference is a growing trend, and I am really eager to test it in my project. I know that creating separate end-to-end pipelines is a workaround, but it is not the proper approach and incurs the extra cost of decoding the source twice.

I understand that audio inference is an alpha feature and may not be fully supported. But please tell me whether the current SDK is capable of building my project. If not, I will not waste time tuning this code and will wait for a future release.

OK, we will check internally.

Hey Customer,
Yes, audio is only supported in the new streammux, and you cannot use both streammux implementations together, so enable the new one. For A/V sync issues, wait for the upcoming release, which contains a number of fixes.
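Regarding where to set the switch asked earlier: USE_NEW_NVSTREAMMUX is read from the process environment, so the usual place is the shell before launching the app. A sketch of doing it from inside the application instead, assuming the variable is checked when GStreamer loads the nvstreammux plugin, i.e. it must be set before gst_init():

```c
#include <glib.h>
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  /* equivalent to `export USE_NEW_NVSTREAMMUX=yes` in the shell;
   * must happen before gst_init() so the plugin sees it on load */
  g_setenv ("USE_NEW_NVSTREAMMUX", "yes", TRUE);

  gst_init (&argc, &argv);
  /* ... build the pipeline; nvstreammux now resolves to the new mux ... */
  return 0;
}
```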

Thanks for your reply. That is really helpful!
So, is there any release schedule? (Or even a rough release date?)

Hey,
We don’t have an exact release date; please wait for the official announcement.