How to build a DeepStream pipeline using both nvinfer and nvinferaudio?

I want to build a single DeepStream pipeline that uses both nvinfer (for video inference) and nvinferaudio (for audio inference) and reads the same video/RTSP source (including its audio stream). Could you share a sample pipeline (e.g., one built with gst-launch) or documentation to refer to?
Thank you very much.

• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.1
• TensorRT Version 8.6
• Issue Type (questions, new requirements, bugs) questions

There is a sample for dGPU in the DeepStream SDK dGPU package: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-avsync
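For reference, a pipeline combining both plugins generally splits the decoded source into a video branch (batched by nvstreammux into nvinfer) and an audio branch (batched into nvinferaudio). The sketch below is a hypothetical gst-launch illustration only, not taken from the deepstream-avsync sample: the RTSP URI and the config file names (video_infer_config.txt, audio_infer_config.txt) are placeholders, and it assumes the new nvstreammux (USE_NEW_NVSTREAMMUX=yes) is used for audio batching; consult the sample and the plugin manual for the exact, supported pipeline.

```shell
# Hypothetical sketch: one source, two inference branches.
# Placeholders: the RTSP URI and both config-file-path values.
export USE_NEW_NVSTREAMMUX=yes  # assumed requirement for audio batching

gst-launch-1.0 \
  uridecodebin uri=rtsp://<your-source> name=src \
  src. ! queue ! nvvideoconvert ! mux_v.sink_0 \
  nvstreammux name=mux_v batch-size=1 ! \
    nvinfer config-file-path=video_infer_config.txt ! fakesink \
  src. ! queue ! audioconvert ! audioresample ! mux_a.sink_0 \
  nvstreammux name=mux_a batch-size=1 ! \
    nvinferaudio config-file-path=audio_infer_config.txt ! fakesink
```

The key design point is that uridecodebin exposes separate video and audio pads, so each branch can be muxed and inferred independently while still coming from the same source.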

Thank you very much,
Is it possible to implement this on Jetson?

We will consider adding a similar sample to the Jetson package.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.