Was able to get the Python DeepStream sample
"python3 deepstream_imagedata-multistream.py"
to run on a Jetson Nano with RTSP IP cameras by adding single quotes around each RTSP string. Example:
python3 deepstream_imagedata-multistream.py 'rtsp://172.16.2.160:554/user=admin&password=&channel=1&stream=0.sdp?' 'rtsp://172.16.2.159:554/user=admin&password=&channel=1&stream=0.sdp?' frames
Was able to get 4 RTSP IP cameras running in the Python sample, but found that going above 2 IP cameras starts to affect latency.
I modified the paths to point at the "Primary_Detector_Nano" model.
Works well. A couple of questions:
1. The more cameras I add, the longer the lag on the stream. The stream itself is running at about 24 fps, but with 4 IP cameras running the lag is about 4 seconds.
2. Why is the "frames" folder needed? Is that where the streams are stored for processing?
Hi adventuredaisy, thanks for using the DS Python apps!
Hope the info below helps clarify the "frames" folder and the deepstream-imagedata-multistream app:
The deepstream-imagedata-multistream app demonstrates accessing decoded images in the pipeline from a Python app. These images are saved to a folder specified by the user (e.g. "frames"). The saved images are not used by the pipeline for inference or any other processing. They are generated this way:
1. Get the decoded images in a probe function, to show how to get those images as numpy arrays.
2. Convert each numpy array to cv::Mat, to show how to use the images in OpenCV.
3. Use OpenCV to draw bounding boxes on a copy of the frame, then save the annotated frame to file. This shows using metadata along with the image data. It is only done on select frames, based on some filtering criteria. (A condensed sketch of this probe follows below.)
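For reference, here is a minimal sketch of that probe, modeled on the tiler_sink_pad_buffer_probe in the sample. It is simplified: the actual app also applies confidence/frame-count filtering before saving, and the exact variable and file names may differ:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import cv2
import numpy as np
import pyds

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    # Batch metadata attached upstream by nvstreammux.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # (1) Decoded RGBA frame exposed as a numpy array.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        # (2) Copy it into CPU memory that OpenCV can safely modify.
        frame_copy = np.array(n_frame, copy=True, order='C')
        frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
        # (3) Draw one box per detected object using the object metadata.
        l_obj = frame_meta.obj_meta_list
        while l_obj is not None:
            obj_meta = pyds.NvDsObjectMeta.cast(l_obj.data)
            r = obj_meta.rect_params
            left, top = int(r.left), int(r.top)
            cv2.rectangle(frame_copy, (left, top),
                          (left + int(r.width), top + int(r.height)),
                          (0, 0, 255, 0), 2)
            l_obj = l_obj.next
        # Save the annotated frame into the folder given on the command
        # line ("frames" above). The real app saves only frames that
        # pass its filtering criteria.
        cv2.imwrite("frames/stream_%d_frame_%d.jpg"
                    % (frame_meta.pad_index, frame_meta.frame_num),
                    frame_copy)
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The sample attaches this probe to the tiler's sink pad with tiler.get_static_pad("sink").add_probe(Gst.PadProbeType.BUFFER, tiler_sink_pad_buffer_probe, 0).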
For use cases that don't require processing the images in Python, the deepstream-test3 app is sufficient. The imagedata app has some additional latency due to the extra conversions and the use of unified memory so that the images are easily accessible in RGBA format on the CPU.
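Concretely, those extra pieces look roughly like this. This is a sketch of the relevant setup, not the full sample; element and variable names follow the sample, and the unified-memory properties are the dGPU path (on a Jetson like the Nano, the default surface memory is already CPU-mappable, so that part may not apply):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

Gst.init(None)

# Extra elements compared to deepstream-test3: a converter plus a
# capsfilter forcing RGBA, so probes see frames in a CPU-friendly format.
nvvidconv = Gst.ElementFactory.make("nvvideoconvert", "convertor")
capsfilter = Gst.ElementFactory.make("capsfilter", "filter")
capsfilter.set_property(
    "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))

# CUDA unified memory lets get_nvds_buf_surface() return a numpy view of
# the frame without a separate device-to-host copy. (dGPU path; see note
# above about Jetson boards like the Nano.)
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
mem_type = int(pyds.NVBUF_MEM_CUDA_UNIFIED)
streammux.set_property("nvbuf-memory-type", mem_type)
nvvidconv.set_property("nvbuf-memory-type", mem_type)
```

Dropping these (i.e. starting from deepstream-test3) removes that per-frame conversion cost.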