About running both nvinfer and nvinferaudio in parallel on Jetson boards

Question 1: I found NVIDIA-AI-IOT/deepstream_parallel_inference_app on GitHub, a project demonstrating how to use nvmetamux to run multiple models in parallel. Its Main Features state only "Support multiple models inference with nvinfer(TensorRT) or nvinferserver(Triton) in parallel". So, can I run two models in parallel, one with nvinfer and one with nvinferaudio?
Question 2: I see the Video + Audio muxing use cases in the Gst-nvstreammux New — DeepStream 6.3 Release documentation, but I can't find any source code for this design. Where can I find it?
Thank you very much.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.1
• TensorRT Version 8.6
• Issue Type (questions, new requirements, bugs) questions

The new nvstreammux is not supported now.


Thank you very much for your answer.
For Question 2:

The new nvstreammux is not supported now.

The deepstream-audio sample app must use the new nvstreammux (you can see this in its source code). I want to build a single multi-threaded app that runs nvinfer (based on the deepstream-app sample) and nvinferaudio (based on the deepstream-audio sample) in parallel on Jetson boards. However, deepstream-app does not show the OSD with the new nvstreammux enabled (I exported USE_NEW_NVSTREAMMUX=yes). Are there any solutions for this?
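For reference, below is a minimal single-process sketch of what I mean, not an official sample: one pipeline runs nvinfer on video while a second pipeline runs nvinferaudio on audio. It assumes USE_NEW_NVSTREAMMUX=yes is exported so that nvstreammux accepts audio; since that switch is an environment variable, it applies to every nvstreammux in the process, including the video branch. The URIs and the pgie_config.txt / audio_config.txt paths are placeholders, and bus/error handling is trimmed.

```c
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* Video branch: batch -> nvinfer -> OSD -> display. The new
   * nvstreammux does no scaling, so width/height are not set. */
  GstElement *video = gst_parse_launch (
      "nvstreammux name=vmux batch-size=1 ! "
      "nvinfer config-file-path=pgie_config.txt ! "
      "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink "
      "uridecodebin uri=file:///path/to/video.mp4 ! vmux.sink_0",
      NULL);

  /* Audio branch: batch -> nvinferaudio -> sink. A capsfilter may be
   * needed to match the sample rate the audio model expects. */
  GstElement *audio = gst_parse_launch (
      "nvstreammux name=amux batch-size=1 ! "
      "nvinferaudio config-file-path=audio_config.txt ! fakesink "
      "uridecodebin uri=file:///path/to/audio.wav ! "
      "audioconvert ! audioresample ! amux.sink_0",
      NULL);

  /* Each GstPipeline creates its own streaming threads, so both
   * branches run in parallel once they are set to PLAYING. */
  gst_element_set_state (video, GST_STATE_PLAYING);
  gst_element_set_state (audio, GST_STATE_PLAYING);

  g_main_loop_run (g_main_loop_new (NULL, FALSE));
  return 0;
}
```

Compile with `gcc av_parallel.c $(pkg-config --cflags --libs gstreamer-1.0)` and run with USE_NEW_NVSTREAMMUX=yes exported; the video branch is where I hit the OSD problem described above.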

And for Question 1: can the parallel inference app combine nvinfer with nvinferaudio?

The parallel sample does not support the new nvstreammux now.

The sample is in the x86 dGPU DeepStream SDK; you need to port it to Jetson:

/opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-avsync
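As a rough illustration of the idea while you port it (this is not the deepstream-avsync source): a single GStreamer pipeline can play the audio and the video of one source in sync while nvinfer runs on the video branch, because both branches share the pipeline clock. The sketch below uses the default nvstreammux, and the URI and pgie.txt config path are placeholders.

```c
#include <gst/gst.h>

int main (int argc, char *argv[])
{
  gst_init (&argc, &argv);

  /* One source, two branches: video goes through nvstreammux ->
   * nvinfer -> OSD -> display, audio plays out directly. */
  GstElement *pipe = gst_parse_launch (
      "nvstreammux name=mux batch-size=1 width=1280 height=720 ! "
      "nvinfer config-file-path=pgie.txt ! "
      "nvvideoconvert ! nvdsosd ! nvegltransform ! nveglglessink "
      "uridecodebin name=src uri=file:///path/to/clip.mp4 "
      "src. ! queue ! mux.sink_0 "
      "src. ! queue ! audioconvert ! audioresample ! autoaudiosink",
      NULL);

  gst_element_set_state (pipe, GST_STATE_PLAYING);

  /* Block until EOS or an error is posted on the bus. */
  GstBus *bus = gst_element_get_bus (pipe);
  GstMessage *msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_EOS | GST_MESSAGE_ERROR);
  if (msg)
    gst_message_unref (msg);
  gst_object_unref (bus);

  gst_element_set_state (pipe, GST_STATE_NULL);
  return 0;
}
```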
