New nvstreammux and deepstream-parallel-inference-app

Please provide complete information as applicable to your setup.

**• Hardware Platform**: Jetson
**• DeepStream Version**: 6.2
**• JetPack Version**: 5.1
**• TensorRT Version**: 8.5.2.2

I’m working with deepstream-parallel-inference-app and have successfully run the bodypose_yolo pipeline with my own model and configuration files; I can print detections from both video sources. I have two sources, one USB camera and one CSI camera, each with a different resolution. Currently both streams pass through nvstreammux and get resized to the resolution specified in the streammux section of the config file. From the documentation I understand that the new nvstreammux would let me run inference on the streams at their original sizes. However, when I try to use it in my app I run into a few errors. Please find attached my config file, the GST_DEBUG=2 log and pipeline.png; the steps I used to switch to the new nvstreammux are sketched after the CSV below.

debug_log.txt (15.5 KB)
source4_1080p_dec_parallel_infer.txt (8.3 KB)

My sources_4.csv:
enable,type,uri,camera-width,camera-height,camera-fps-n,camera-fps-d
1,1,usb:0,384,284,30,1
1,1,csi:0,3840,2160,30,1
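
For reference, this is roughly how I tried to switch to the new nvstreammux. This is only a sketch based on my reading of the DeepStream 6.2 docs; the key names and values below are my assumptions and may need correcting:

# enable the new nvstreammux before launching the app
export USE_NEW_NVSTREAMMUX=yes

# mux config file, passed to nvstreammux through its config-file-path property
# (no width/height keys here - the new mux does not scale the inputs)
[property]
adaptive-batching=1
batch-size=2
max-same-source-frames=1
overall-max-fps-n=30
overall-max-fps-d=1
overall-min-fps-n=5
overall-min-fps-d=1

I also removed the width/height keys from the [streammux] group of the app config, since (as I understand it) they only apply to the default mux.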

The new nvstreammux does not support batching sources with different resolutions. You can refer to the Work Around; a sketch of that approach is shown below.
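
A minimal sketch of that kind of workaround, assuming the usual approach of normalizing every source to one resolution before it reaches the mux (the element choices, device paths and the 1920x1080 target below are examples, not the exact steps from the linked FAQ): add an nvvideoconvert plus a capsfilter on each source branch so that all streams enter the new nvstreammux at the same size.

# scale both cameras to a common resolution before batching (illustrative only;
# if the USB camera outputs MJPEG, a JPEG decoder is needed before nvvideoconvert)
USE_NEW_NVSTREAMMUX=yes gst-launch-1.0 \
  v4l2src device=/dev/video0 ! nvvideoconvert ! \
    'video/x-raw(memory:NVMM),width=1920,height=1080' ! m.sink_0 \
  nvarguscamerasrc ! nvvideoconvert ! \
    'video/x-raw(memory:NVMM),width=1920,height=1080' ! m.sink_1 \
  nvstreammux name=m batch-size=2 ! fakesink

In the parallel inference app the equivalent change is to rescale the camera sources to one common resolution before they are batched, or to keep using the default nvstreammux, which scales internally.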

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.