I have successfully run the sound classification sample app (in the /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-audio folder) and the object detection sample app (in the /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-app folder). Now I want to integrate both into one pipeline so they run in parallel (configured via a config file, reading the same video source, and rendering to the same OSD). Is it possible to do that? If so, could you share a design or tutorial?
Thank you very much.
• Hardware Platform (Jetson / GPU) Jetson
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only) 5.1.1
• TensorRT Version 8.6
• Issue Type (questions, new requirements, bugs) questions
The sample /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-avsync demonstrates running audio inferencing and video inferencing in a single pipeline. Please read /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/deepstream-avsync/README carefully before you try the sample.
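To illustrate the general idea of branching one source into parallel audio and video inference paths, here is a rough gst-launch sketch (this is an assumption-laden illustration, not the actual avsync pipeline; the file URI, config file names, and the use of `nvinferaudio` for the audio branch are placeholders you would need to adapt to your setup):

```shell
# Hypothetical sketch of one decoded source feeding two inference branches.
# uridecodebin exposes both an audio pad and a video pad; caps negotiation
# routes the video pad to the nvstreammux branch and the audio pad to the
# audioconvert branch. All paths/configs below are placeholders.
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/input.mp4 name=src \
  nvstreammux name=mux batch-size=1 width=1280 height=720 ! \
    nvinfer config-file-path=pgie_config.txt ! \
    nvvideoconvert ! nvdsosd ! nveglglessink \
  src. ! queue ! mux.sink_0 \
  src. ! queue ! audioconvert ! audioresample ! \
    nvinferaudio config-file-path=audio_config.txt ! fakesink
```

The key point the avsync README elaborates on is that the two branches run concurrently off the same decoded stream, and synchronization between the audio and video results has to be handled explicitly; treat the above only as a map of the element topology, not as a runnable command.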
Thank you for your answer.
I looked for the deepstream-avsync app in the /opt/nvidia/deepstream/deepstream-6.2/sources/apps/sample_apps/ folder but cannot find it on either of my boards (Jetson AGX Xavier and Jetson Orin Nano). I flashed JetPack 4.6 and JetPack 5.1 on the two boards, respectively.
The sample is for x86 only. Please refer to the x86 package.
Can I deploy the deepstream-avsync app on Jetson boards? I want to use Jetson for deployment and x86 only for reference.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
No. The model is not supported on Jetson, but you can refer to the pipeline design.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.