It works! Thanks
Is it possible to use multiple RTSP sources?
Hi andrea_vighi,
Please share your config file so that other users can refer to it.
For multiple RTSP sources, how many do you run in your use case?
My config file for RTSP!
# Copyright (c) 2019 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=0
rows=1
columns=1
width=1920
height=1080
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtsp://192.168.30.10:554/live/ch0
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0
[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt
[tracker]
enable=0
tracker-width=640
tracker-height=368
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=1
[secondary-gie0]
enable=0
model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.engine
gpu-id=0
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_vehicletypes.txt
[secondary-gie1]
enable=0
model-engine-file=../../models/Secondary_CarColor/resnet18.caffemodel_b16_int8.engine
batch-size=16
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carcolor.txt
[secondary-gie2]
enable=0
model-engine-file=../../models/Secondary_CarMake/resnet18.caffemodel_b16_int8.engine
batch-size=16
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carmake.txt
[tests]
file-loop=0
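A side note on batched-push-timeout=40000 in the [streammux] section above: for live sources this timeout is typically set to about one frame period, so the muxer pushes a partial batch instead of stalling when a camera hiccups. A minimal sketch of that arithmetic (the 25 fps figure is an assumption that matches the 40000 µs value; it is not stated in the thread):

```python
def batched_push_timeout_usec(framerate_fps: float) -> int:
    """One frame period in microseconds, a common choice for
    [streammux] batched-push-timeout with live RTSP sources."""
    return round(1_000_000 / framerate_fps)

# A 25 fps camera gives 40000 usec, matching the config above;
# a 30 fps camera would give 33333 usec.
print(batched_push_timeout_usec(25))
```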
I would like to know whether it is possible to start only one deepstream-app process and set multiple RTSP sources in the config file (as is done with type=3 [MultiURI] .mp4 files).
Currently I launch multiple deepstream-app processes to handle multiple RTSP sources.
Thanks
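For what it's worth, the per-camera [sourceN] sections such a single-process config needs can also be generated programmatically, which helps keep many camera entries consistent. A minimal sketch using Python's standard configparser (section and key names follow the deepstream-app convention shown in the configs in this thread; the URIs are placeholders):

```python
import configparser
import io

def make_multi_rtsp_config(uris):
    """Build deepstream-app style [sourceN] sections, one per RTSP URI."""
    cfg = configparser.ConfigParser()
    for i, uri in enumerate(uris):
        cfg[f"source{i}"] = {
            "enable": "1",
            "type": "4",          # 4 = RTSP in deepstream-app source sections
            "uri": uri,
            "num-sources": "1",
            "gpu-id": "0",
            "cudadec-memtype": "0",
        }
    # [streammux] batch-size should match the number of enabled sources
    cfg["streammux"] = {
        "live-source": "1",
        "batch-size": str(len(uris)),
        "batched-push-timeout": "40000",
    }
    buf = io.StringIO()
    cfg.write(buf)
    return buf.getvalue()

text = make_multi_rtsp_config([
    "rtsp://192.168.30.10:554/live/ch0",   # placeholder camera URIs
    "rtsp://192.168.30.11:554/live/ch0",
])
```

deepstream-app itself just reads the resulting file; this only automates writing the repetitive sections.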
Hey,
I managed to have several RTSP sources running at the same time with only one process. I just added one [sourceN] section per RTSP stream and one sink (type 2) per source. This way there is only one deepstream-app running, but several windows show all your RTSP streams.
OK, but how can I view the RTSP sources in a grid, as with source type 3?
Thanks
Hi,
Please modify the uri, width, and height in the config file below to match your IP camera and try again:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=2
columns=1
width=640
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtsp://127.0.0.1:8554/test
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
#uri=file://../../streams/sample_1080p_h264.mp4
uri=rtsp://127.0.0.1:8554/test
num-sources=1
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1
[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0
[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_nano.txt
[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0
[tests]
file-loop=0
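In the config above, batch-size=2 in [streammux] matches the two enabled sources, and the [tiled-display] grid must satisfy rows × columns ≥ number of sources. A near-square grid can be picked with a small helper (a sketch; deepstream-app does not impose this exact rule, it only needs enough tiles):

```python
import math

def tiler_grid(num_sources):
    """Pick (rows, columns) so rows * columns >= num_sources,
    keeping the grid close to square."""
    rows = math.ceil(math.sqrt(num_sources))
    columns = math.ceil(num_sources / rows)
    return rows, columns

# For the two-source config above this yields rows=2, columns=1,
# matching the [tiled-display] settings shown.
```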
You can test with an RTSP server launched through test-mp4:
$ ./test-mp4 sample_1080p_h264.mp4
https://github.com/GStreamer/gst-rtsp-server/blob/master/examples/test-mp4.c
Please give it a try and see if the IP cameras can be launched successfully.
I tried with your config file in “deepstream-app” but I receive this error:
Creating LL OSD context new
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
ERROR from tiled_display_tiler: GstNvTiler: FATAL ERROR; NvTiler::Composite failed
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvtiler/gstnvtiler.cpp(665): gst_nvmultistreamtiler_transform (): /GstPipeline:pipeline/GstBin:tiled_display_bin/GstNvMultiStreamTiler:tiled_display_tiler
Quitting
0:00:23.081478570 9311 0x35f25190 WARN nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary_gie_classifier> error: Internal data stream error.
0:00:23.081561518 9311 0x35f25190 WARN nvinfer gstnvinfer.cpp:1830:gst_nvinfer_output_loop:<primary_gie_classifier> error: streaming stopped, reason error (-5)
ERROR from primary_gie_classifier: Internal data stream error.
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1830): gst_nvinfer_output_loop (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie_classifier:
streaming stopped, reason error (-5)
App run failed
Hi DaneLLL,
Thank you for the great support you bring to this forum.
I’m asking you to please flag this issue as a major one.
We’ve worked for almost 8 months with DS3.0EA and Jetson AGX and had a great experience with it.
We were waiting DS4.0 for getting into production.
The problem is that the standard DS4.0 pipeline (the new v4l2 decoder → muxer → inference → tiler → display) fails with IP cameras from major manufacturers (e.g., HikVision) due to a tiler crash, while the same setup worked perfectly on DS3.0EA.
We understand that this was introduced with the big architecture changes in DS4.0 (merged sdk for Tegra and dGPU).
Currently, from forum users' experience and ours:
Hi,
We have taken this as a priority issue and will post an update as soon as we have any findings.
Thank you for your responsiveness. We'll be waiting for this update.
Hi Guys,
Thanks for all your help. I have been trying to run deepstream-app with two RTSP streams from HIKVISION cameras on a Jetson Nano, and I finally got the app working by following the guidance in this thread. However, I have the following queries:
Why does the tracker not work with RTSP input but work with file-based input? Could someone please help me understand the reason?
Why does the tiled display have issues with RTSP streams? Since the tiled display depends only on the muxer output, it should be agnostic to the source, as far as I understand. Please correct me where my understanding goes wrong.
When I tried to run YOLOv3 (FP16), I could only manage < 1 fps per camera. Is this figure indicative of the maximum potential?
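One knob relevant to the throughput question is the interval property in [primary-gie] (left commented out as #interval=0 in the YOLOv3 config in this thread): nvinfer skips that many batches between inferences, trading detection freshness for throughput. The effective inference rate can be estimated with a back-of-the-envelope sketch (the frame rates below are illustrative, not measurements from this thread):

```python
def inference_fps(decode_fps, interval):
    """Frames actually sent to inference per second when nvinfer
    skips `interval` batches between inferred ones."""
    return decode_fps / (interval + 1)

# e.g. a 25 fps stream with interval=4 runs inference on ~5 frames/s,
# while interval=0 infers on every frame.
```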
In order to run yolov3 using 2 RTSP streams on Nano (FP-16), I changed “deepstream_app_config_yoloV3.txt” and “config_infer_primary_yoloV3.txt” in “sources/objectDetector_Yolo” to the following:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=0
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:edge1234@192.168.0.201:554/Streaming/Channels/1
num-sources=1
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
source-id=0
[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:edge1234@192.168.0.202:554/Streaming/Channels/1
num-sources=1
gpu-id=0
# (0): memtype_device - Memory type Device
# (1): memtype_pinned - Memory type Host Pinned
# (2): memtype_unified - Memory type Unified
cudadec-memtype=0
source-id=1
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=1
gpu-id=0
nvbuf-memory-type=0
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_int8.engine
labelfile-path=labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
#interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary_yoloV3.txt
[tests]
file-loop=0
[property]
gpu-id=0
net-scale-factor=1
#0=RGB, 1=BGR
model-color-format=0
custom-network-config=/home/edgetensor/deepstream_sdk_v4.0_jetson/sources/objectDetector_Yolo/yolov3.cfg
model-file=/home/edgetensor/deepstream_sdk_v4.0_jetson/sources/objectDetector_Yolo/yolov3.weights
#model-engine-file=model_b1_int8.engine
labelfile-path=labels.txt
int8-calib-file=yolov3-calibration.table.trt5.1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
num-detected-classes=80
gie-unique-id=1
is-classifier=0
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
Command used to run:
deepstream-app -c 'deepstream_app_config_yoloV3_nano_rtsp.txt'
In regard to the above, I have the following queries:
a) Since I do not see any flag for the tracker in the config settings, I assume the tracker is not enabled by default. Is that the right understanding?
b) Are the configurations right in general for a Jetson Nano? Since the achieved fps is quite low, I suspect I might be doing something wrong. Kindly let me know if anything is off.
Thanks
Hi,
We are checking the issue. Since we do not have the IP cameras you have, please help by running the attached app and sharing the printed line, for example:
bufferformat: NvBufferColorFormat_NV12
Please modify the RTSP location in the code and execute these steps:
$ export MMAPI_INCLUDE=/usr/src/tegra_multimedia_api/include
$ export MMAPI_CLASS=/usr/src/tegra_multimedia_api/samples/common/classes
$ export USR_LIB=/usr/lib/aarch64-linux-gnu
$ g++ -Wall -std=c++11 decode.cpp -o decode $(pkg-config --cflags --libs gstreamer-app-1.0) -I$MMAPI_INCLUDE $USR_LIB/tegra/libnvbuf_utils.so $MMAPI_CLASS/NvEglRenderer.o $MMAPI_CLASS/NvElement.o $MMAPI_CLASS/NvElementProfiler.o $MMAPI_CLASS/NvLogging.o $USR_LIB/libEGL.so $USR_LIB/libGLESv2.so $USR_LIB/libX11.so
$ export DISPLAY=:1    # or :0
$ ./decode
decode.zip (1.47 KB)
Hi DaneLLL,
Here are the results:
For Hikvision and Avigilon ip cameras crashing with tiler:
bufferformat: NvBufferColorFormat_NV12_709_ER
For Hikvision cameras working properly with tiler:
bufferformat: NvBufferColorFormat_NV12
It seems that you're on the right track!
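For context on why the two buffer formats differ: the decoder selects its output format from the stream's colorimetry signalling (the H.264 VUI fields), and BT.709 / full-range streams land on the _709_ER variant. Below is a rough, hypothetical illustration of that mapping; the real selection happens inside the NVIDIA decoder, not in user code, and the enum names are simply the ones printed by the decode test app:

```python
def nvbuf_format_for(colorimetry, full_range=False):
    """Illustrative mapping from stream colorimetry to the
    NvBufferColorFormat names printed by the decode test app."""
    base = "NvBufferColorFormat_NV12"
    if colorimetry == "bt709":
        base += "_709"
    if full_range:
        base += "_ER"   # ER = extended (full) range
    return base

# Cameras signalling BT.709 full-range map to NV12_709_ER,
# which the DS4.0 tiler mishandled; BT.601 limited-range maps
# to plain NV12, which worked.
```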
Hi DaneLLL,
I executed the commands. I got the following output:
bufferformat: NvBufferColorFormat_NV12_709_ER
Thanks
Hello and thanks for the good work!
The Trendnet TV-IP314PI also shows:
bufferformat: NvBufferColorFormat_NV12_709_ER
Hi andrea_vighi,
Are you able to share the model name of the HikVision and Dahua cameras?
Hi DaneLLL,
The model name of HikVision camera at my end is : DS-2CD202WF-I
Hi,
Please apply the attached prebuilt libs and try again.
/usr/lib/aarch64-linux-gnu/gstreamer-1.0/libgstnvegltransform.so
/usr/lib/aarch64-linux-gnu/tegra/libnvbufsurftransform.so.1.0.0
R32_2_DS_4_0_PREBUILT_LIB.zip (4.4 MB)
Hi DaneLLL,
I am still getting the following:
bufferformat: NvBufferColorFormat_NV12_709_ER
Hi DaneLLL,
We’ve tested the prebuilt libs with all the cameras we have and… problem solved!
The tiler now works correctly with every kind of IP camera we’ve tested, and with any nvstreammux input width and height.
Thanks a lot for your excellent work, NVIDIA team!