I found deepstream_parallel_inference_app, but it only supports running multiple models in parallel with nvinfer (TensorRT) or nvinferserver (Triton).
So I want to adapt its design/source code to run nvinfer and nvinferaudio in parallel, as sketched below. Is this possible? Do you have any advice for me?
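For reference, my rough understanding of the video-only pattern that app implements, sketched in Python. This is a simplified, untested illustration, not the app's actual code (the real app also demuxes streams per branch and merges results with nvdsmetamux); the URI and config-file paths are placeholders.

```python
#!/usr/bin/env python3
# Simplified, untested illustration of the video-only parallel pattern:
# one batched stream tee'd into two independent nvinfer branches.
# URI and config-file paths are placeholders, and any audio pads from
# uridecodebin would still need to be linked or discarded.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///path/to/video.mp4 ! nvvideoconvert ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1280 height=720 ! tee name=t "
    "t. ! queue ! nvinfer config-file-path=model_a_config.txt ! fakesink "
    "t. ! queue ! nvinfer config-file-path=model_b_config.txt ! fakesink"
)

pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
```

The tee duplicates the batched buffers so each nvinfer branch runs its own model independently; my question is whether the same branching idea can also carry an nvinferaudio branch.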
Thank you very much.
• Hardware Platform (Jetson / GPU): Jetson Xavier NX
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.2
• TensorRT Version: 8.5
• Issue Type (questions, new requirements, bugs): questions
I want to build a pipeline that runs both nvinfer (handling video) and nvinferaudio (handling audio) in parallel; the structure of the pipeline is as shown below. But I can't find any source code for such a pipeline to refer to. If you have any, please let me know.
(See the Gst-nvstreammux New page of the DeepStream 6.3 Release documentation on nvidia.com.)
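In code form, the pipeline shape I have in mind is roughly the following. This is an untested sketch: it assumes the new nvstreammux is enabled (export USE_NEW_NVSTREAMMUX=yes, since only the new mux batches audio), both config-file paths are placeholders, and nvinferaudio's audio-transform settings are omitted.

```python
#!/usr/bin/env python3
# Untested sketch: one source split into a video branch (nvstreammux -> nvinfer)
# and an audio branch (nvstreammux -> nvinferaudio) running in parallel.
# Assumes the new nvstreammux (export USE_NEW_NVSTREAMMUX=yes), which can batch
# audio as well as video. Both config-file paths are placeholders, and
# nvinferaudio's audio-transform configuration is omitted here.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "uridecodebin uri=file:///path/to/input.mp4 name=dec "
    # video branch: batch the decoded frames, then run the video model
    "dec. ! queue ! nvvideoconvert ! mux_v.sink_0 "
    "nvstreammux name=mux_v batch-size=1 ! "
    "nvinfer config-file-path=video_pgie_config.txt ! fakesink "
    # audio branch: batch the decoded samples, then run the audio model
    "dec. ! queue ! audioconvert ! audioresample ! mux_a.sink_0 "
    "nvstreammux name=mux_a batch-size=1 ! "
    "nvinferaudio config-file-path=audio_config.txt ! fakesink"
)

pipeline.set_state(Gst.State.PLAYING)
try:
    GLib.MainLoop().run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
```

Each media type gets its own muxer here so the video and audio batches can be tuned independently; I'm unsure whether this is the intended way to combine the two plugins, hence my question.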
There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Can you describe your specific usage scenario, not just the pipeline? What kind of scenario is your pipeline intended to process?