DeepStream Python with multiple camera video feeds?

Hello,

I was able to use deepstream python with my hardware camera using the suggestions from this thread:
https://devtalk.nvidia.com/default/topic/1066912/deepstream-sdk/deepstream-now-supports-python-/

And then this thread says to look at deepstream_test_3.py for multiple streams:
https://devtalk.nvidia.com/default/topic/1069445/deepstream-sdk/deepstream-in-python-inquires-/

However, I need inference (specifically segmentation) on multiple camera streams, not multiple file streams, and it is not obvious to me how to add camera streams (from the first forum topic) to the multiple-file-streams example.

How would one change deepstream_test_3.py to support hardware cameras instead of video files?

Thank you!

You could refer to https://devtalk.nvidia.com/default/topic/1071110/deepstream-sdk/unable-to-run-inference-pipeline-using-csi-camera-source-in-deepstream-python-app/post/5435300/#5435300 for Python CSI camera input.
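
Roughly, the CSI source from that thread can be combined with the multi-source pattern of deepstream_test_3.py by building one camera bin per sensor and requesting one nvstreammux sink pad per bin. Below is a minimal, untested sketch of that idea; the sensor ids (0..N-1), the 1280x720@30 caps, and the fakesink placeholder (where the real app would attach nvinfer with a segmentation config, the tiler, and the display sink) are all assumptions for illustration:

```python
#!/usr/bin/env python3
# Sketch: feed several CSI cameras into nvstreammux, replacing the
# uridecodebin source bins that deepstream_test_3.py builds for file URIs.

import sys
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst, GLib

def create_camera_bin(index):
    """Wrap one nvarguscamerasrc in a bin exposing a single 'src' ghost pad,
    mirroring the create_source_bin() pattern in deepstream_test_3.py."""
    nbin = Gst.Bin.new("camera-bin-%02d" % index)

    src = Gst.ElementFactory.make("nvarguscamerasrc", "camera-source-%d" % index)
    src.set_property("sensor-id", index)          # assumes CSI sensors 0..N-1

    caps = Gst.ElementFactory.make("capsfilter", "camera-caps-%d" % index)
    caps.set_property("caps", Gst.Caps.from_string(
        "video/x-raw(memory:NVMM), width=1280, height=720, "
        "framerate=30/1, format=NV12"))

    nbin.add(src)
    nbin.add(caps)
    src.link(caps)

    # Expose the capsfilter's src pad on the bin so it can be linked to nvstreammux.
    nbin.add_pad(Gst.GhostPad.new("src", caps.get_static_pad("src")))
    return nbin

def main(num_cameras=2):
    Gst.init(None)
    pipeline = Gst.Pipeline()

    streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
    streammux.set_property("width", 1280)
    streammux.set_property("height", 720)
    streammux.set_property("batch-size", num_cameras)
    streammux.set_property("batched-push-timeout", 4000000)
    streammux.set_property("live-source", 1)      # cameras are live sources
    pipeline.add(streammux)

    for i in range(num_cameras):
        cam_bin = create_camera_bin(i)
        pipeline.add(cam_bin)
        sinkpad = streammux.get_request_pad("sink_%u" % i)
        cam_bin.get_static_pad("src").link(sinkpad)

    # In the real app this would continue as in deepstream_test_3.py:
    # streammux -> nvinfer (segmentation config) -> tiler -> converter -> sink.
    # A fakesink keeps this sketch self-contained.
    sink = Gst.ElementFactory.make("fakesink", "fake-sink")
    pipeline.add(sink)
    streammux.link(sink)

    pipeline.set_state(Gst.State.PLAYING)
    try:
        GLib.MainLoop().run()
    finally:
        pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main(num_cameras=int(sys.argv[1]) if len(sys.argv) > 1 else 2)
```

For USB cameras the same structure should apply, with the bin built around v4l2src plus the conversion elements shown in the USB-camera sample instead of nvarguscamerasrc.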

Thanks for the quick reply, but I already have input from one camera working. I am looking for help making multiple cameras work simultaneously. After I got one camera working, I wanted to modify deepstream_test_3.py to support cameras, but the code is too different and I don't know how to make it work.

Hey, were you able to get it working for multiple camera streams?

I did not get it working. I upgraded my system to JetPack 4.4 and DeepStream 5.0, and I am now trying to get it working with the Transfer Learning Toolkit.