Deepstream 5 Python multiple RTSP streams from IP cams

This DeepStream sample runs fine with 4 RTSP IP cameras:

 deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt

Here is my source setup:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
#uri=file://…/…/streams/sample_1080p_h264.mp4
uri=rtsp://172.16.2.158:554/user=admin&password=&channel=1&stream=0.sdp?
gpu-id=0
#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
#uri=file://…/…/streams/sample_1080p_h264.mp4
uri=rtsp://172.16.2.159:554/user=admin&password=&channel=1&stream=0.sdp?
gpu-id=0
#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
#uri=file://…/…/streams/sample_1080p_h264.mp4
uri=rtsp://172.16.2.160:554/user=admin&password=&channel=1&stream=0.sdp?
gpu-id=0
#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=2
#uri=file://…/…/streams/sample_1080p_h264.mp4
uri=rtsp://172.16.2.157:554/user=admin&password=&channel=1&stream=0.sdp?
gpu-id=0
#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

When I run this Python DeepStream demo, trying to access the RTSP IP camera stream:

 python3 deepstream_imagedata-multistream.py rtsp://172.16.2.157:554/user=admin&password=&channel=1&stream=0.sdp? frames

It errors out because it can't locate the streams.
Any ideas?

Hi,
Please share the log and your platform (Jetson platform or desktop GPU) for reference. And could you also try an RTSP server through test-launch?

sudo apt-get install libgstrtspserver-1.0-dev libgstreamer1.0-dev
gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
[test-launch.c download from: https://github.com/GStreamer/gst-rtsp-server/blob/master/examples/test-launch.c]
./test-launch "filesrc location=sample_1080p_h264.mp4 ! qtdemux ! rtph264pay name=pay0 pt=96 "
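
By default, test-launch serves the stream at rtsp://127.0.0.1:8554/test, so you can point the Python sample at that local stream to help rule out the camera:

 python3 deepstream_imagedata-multistream.py rtsp://127.0.0.1:8554/test frames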

Would like to know if it is specific to the IP camera.

Was able to get the Python DeepStream sample
“python3 deepstream_imagedata-multistream.py”
to run on the Nano with RTSP IP cameras by adding single quotes around each RTSP string. Example:
python3 deepstream_imagedata-multistream.py 'rtsp://172.16.2.160:554/user=admin&password=&channel=1&stream=0.sdp?' 'rtsp://172.16.2.159:554/user=admin&password=&channel=1&stream=0.sdp?' frames

Was able to get 4 RTSP IP cameras to run in the Python sample,
but found that going over 2 IP cameras starts to affect latency.
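
For anyone hitting the same problem: without quotes, the shell treats each unquoted & as a background-job separator and splits the URL into several commands, so the script only ever receives rtsp://172.16.2.160:554/user=admin as its argument. A quick way to see the safely quoted form from Python (standard library only, just for illustration):

  import shlex

  uri = "rtsp://172.16.2.160:554/user=admin&password=&channel=1&stream=0.sdp?"
  # shlex.quote() wraps the string in single quotes because it contains
  # shell metacharacters (& and ?), making it safe to paste on a command line.
  print(shlex.quote(uri))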

Hi,
You would need to customize the sample for Jetson Nano. There is a post on customizing deepstream-test3:

FYR. You may also refer to
source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
It is the config file for Jetson Nano.

I modified the paths to point at the “Primary_Detector_Nano”.
Works well. Couple of questions:
1. The more cameras I add, the longer the lag on the stream. The stream itself is running at about 24 fps, but with 4 IP cameras running the lag is about 4 seconds.
2. Why is the “frames” folder needed? Is that where the streams are stored for processing?

Hi,

For multiple sources, please configure the interval property on nvinfer:

  interval            : Specifies number of consecutive batches to be skipped for inference
                        flags: readable, writable, changeable only in NULL or READY state

It is interval=4 in
source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt
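
If you are setting this from the Python app rather than a deepstream-app config file, the same property can be set on the nvinfer element before the pipeline starts. A minimal sketch (the config file name here follows the imagedata sample):

  import gi
  gi.require_version("Gst", "1.0")
  from gi.repository import Gst

  Gst.init(None)

  pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
  pgie.set_property("config-file-path", "dstest_imagedata_config.txt")
  # interval is changeable only in NULL or READY state, so set it before the
  # pipeline goes to PLAYING; the tracker carries boxes across skipped batches.
  pgie.set_property("interval", 4)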

And you can customize the code if you don’t need to save the result to disk.

Hi adventuredaisy, thanks for using the DS Python apps!

Hope the info below helps to clarify the “frames” folder and deepstream-imagedata-multistream app:

The deepstream-imagedata-multistream app demonstrates accessing decoded images in the pipeline from a Python app. These images are saved in a folder specified by the user (e.g. “frames”). The saved images are not used by the pipeline for inference or any other processing. They are generated this way:

  1. Get the decoded images in a probe function – to show how to get those images as numpy arrays.
  2. Convert each numpy array to cv::Mat – to show how to use the images in OpenCV.
  3. Use OpenCV to draw bounding boxes on a copy of the frame, and then save the annotated frame to file. This shows using metadata along with the image data. This is only done on select frames based on some filtering criteria.
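
Condensed, that probe logic looks roughly like this (paraphrased from the sample; the full app's setup, frame filtering, and some error handling are trimmed):

  import numpy as np
  import cv2
  import pyds
  import gi
  gi.require_version("Gst", "1.0")
  from gi.repository import Gst

  def tiler_sink_pad_buffer_probe(pad, info, u_data):
      gst_buffer = info.get_buffer()
      batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
      l_frame = batch_meta.frame_meta_list
      while l_frame is not None:
          frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
          # 1. Decoded image as a numpy array (RGBA, in unified memory)
          n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
          # 2. Copy it so OpenCV can work on it without touching the buffer
          #    still owned by the pipeline
          frame_copy = np.array(n_frame, copy=True, order='C')
          frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
          # 3. Draw detection boxes from the metadata, then save the
          #    annotated frame to the user-specified folder (e.g. "frames")
          l_obj = frame_meta.obj_meta_list
          while l_obj is not None:
              obj = pyds.NvDsObjectMeta.cast(l_obj.data)
              r = obj.rect_params
              cv2.rectangle(frame_copy, (int(r.left), int(r.top)),
                            (int(r.left + r.width), int(r.top + r.height)),
                            (0, 0, 255, 0), 2)
              try:
                  l_obj = l_obj.next
              except StopIteration:
                  break
          cv2.imwrite("frames/stream_%d_frame_%d.jpg"
                      % (frame_meta.pad_index, frame_meta.frame_num), frame_copy)
          try:
              l_frame = l_frame.next
          except StopIteration:
              break
      return Gst.PadProbeReturn.OK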

For use cases that don’t require processing the images in Python, the deepstream-test3 app is sufficient. The imagedata app has some additional latency due to the extra conversions and use of unified memory that make the images easily accessible in RGBA format on the CPU.
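
Those extra conversions come from elements the sample inserts so that frames land in CPU-mappable RGBA, along these lines (element names as in the sample):

  import gi
  gi.require_version("Gst", "1.0")
  from gi.repository import Gst

  Gst.init(None)

  # Convert the decoder's NV12 output to RGBA so the probe can map each
  # frame as a numpy array; this conversion is part of the added latency.
  nvvidconv1 = Gst.ElementFactory.make("nvvideoconvert", "convertor1")
  filter1 = Gst.ElementFactory.make("capsfilter", "filter1")
  filter1.set_property("caps",
                       Gst.Caps.from_string("video/x-raw(memory:NVMM), format=RGBA"))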