I use DeepStream to process 4 IP cameras and everything is fine: I can display or record the live 2-rows/2-columns tiled video. But when I set up the RTSP output sink, VLC and my home automation system cannot decode the stream properly: the live RTSP video is not complete and I only get a full frame from time to time.
It is not a network problem, as the live video from all my IP cameras is clear.
In fact the RTSP output is fine when I use a file as a multiple source, or only one RTSP camera, but as soon as I have more than one IP camera the output stream contains a lot of frame errors.
It seems there is some timing problem when muxing 2 or more RTSP cameras in my case?
I saw another user's video where it is working??
The cameras' resolution is 1280x720 at 10 fps.
My config file:
# Copyright (c) 2019 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=1
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=rtsp://admin:Gimxxxx@192.168.1.247/Streaming/Channels/1
uri=rtsp://admin:Gimxxxx@192.168.1.247/Streaming/Channels/3
#uri=file://../../streams/sample_1080p_h264.mp4
#num-sources=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:wifixxxx@192.168.1.248:554/1/h264major
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
#5
sync=0
#1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1
[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0
[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt
[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0
[tests]
file-loop=0
Hi,
Do you use the same IP cameras running in the same mode (720p, 10 fps)? It looks like the framerates of the sources do not match. Also, if you have not upgraded to DS 4.0.1 yet, please do the upgrade.
For my tests I only use different IP cameras from several brands (2 different TRENDnet models, 1 Wansview, 1 Reolink, 1 no-brand PTZ camera); all of them are handled well by DS 4.0.1 for inference.
Yes, I feel it is because the streams are not exactly the same. Can this affect the RTSP sink?
Saving the output mosaic to a file works fine.
What I can test is to set up the same camera as 2 separate RTSP inputs.
Hi,
It might be an issue with network bandwidth. It looks like the H.264 stream is not complete (maybe the bottom half of the frame is lost). The bandwidth may be good for a single source but insufficient for multiple sources.
Or do you run 'sudo jetson_clocks'? Maybe the load is high and performance is insufficient while the CPU is doing DFS (dynamic frequency scaling).
Hi,
I-frames have a larger size, so this looks more like an issue with bandwidth or buffering. You may try modifying deepstream-app to configure some properties of the rtspsrc or queue plugins.