Please provide complete information as applicable to your setup.
• Jetson Nano 4 GB
• DeepStream 4.0.2
• JetPack Version 4.3-b134
• TensorRT Version 6.0.1.10-1+cuda10.0
• IP Camera: Dahua
Why can’t I get the video stream from my IP camera when I enable type=6 in the [sink] group?
I’m following this tutorial to connect the DeepStream SDK to the IoT Central cloud.
But it seems that the IP camera stream breaks when I enable MsgConvBroker in the config file. I have run the config file with [sink] type=5 and type=4, with the RTSP stream of my IP camera as the source, and it displays the video fine with the object detection inference running. But once I enable the type=6 sink, it doesn’t display anything and it doesn’t send the telemetry data to the cloud either.
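To be concrete, the only difference between the runs that work and the run that fails is enabling this sink section (excerpted from the full config below):

[sink1]
enable=1
type=6
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_azure_edge_proto.so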
This is my config file:
# Copyright (c) 2018 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
[application]
enable-perf-measurement=0
perf-measurement-interval-sec=5
gie-kitti-output-dir=/tmp/deepstream-detections-output-dir
[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
[source0]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
camera-v4l2-dev-node=0
[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://admin:12345678W@192.168.1.108:554/cam/realmonitor?channel=1/subtype=1
[source2]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=
[source3]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=Overlay 6=MsgConvBroker
type=5
sync=0
display-id=0
offset-x=0
offset-y=0
width=0
height=0
overlay-id=1
source-id=0
[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=Overlay 6=MsgConvBroker
type=6
msg-conv-config=./msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
msg-conv-payload-type=1
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_azure_edge_proto.so
topic=mytopic
#Optional:
#msg-broker-config=./cfg_azure.txt
[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0
[sink3]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[osd]
enable=1
border-width=2
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
[streammux]
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1280
height=720
# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
model-engine-file=../../models/Primary_Detector_Nano/resnet10.caffemodel_b8_fp16.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
config-file=config_infer_primary_nano.txt
[tracker]
enable=1
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0
[tests]
file-loop=0
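For completeness, the msgconv_sample_config.txt referenced by msg-conv-config above follows the format of the DeepStream 4.0 sample file; it looks roughly like this (the sensor id, description, location and coordinate values here are placeholders, not my real ones):

[sensor0]
enable=1
type=Camera
id=CAM_FRONT_DOOR
location=45.29;-75.83;48.15
description=Dahua entrance camera
coordinate=5.2;10.1;11.2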
IoT Edge shows that all four containerized modules are running, and when I run
$ iotedge logs -f NVIDIADeepStreamSDK
to see its logs, it shows this:
Error from src_elem0: could not read from resource.
Debug info: gstrtspsrc.c(5917): gst_rtsp_src_receive_response (): /GstPipeline:pipeline/GstBin:multi_src_bin/GstBin:src_sub_bin0/GstRTSPSrc:src_elem0: Could not receive message. (Timeout while waiting for server response)
and
watch_source_status (null)
Reset sources pipeline reset_source_pipeline 0x7f86c22080
You can also see the error messages in these screenshots:
I suspect that the problem is my IP camera (I hope not). Any suggestions to make this work?
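In case it helps with diagnosis, this is the kind of standalone GStreamer pipeline that can be used to sanity-check the RTSP URI outside of DeepStream (just a sketch using my camera’s URI; the latency value is arbitrary):

$ gst-launch-1.0 rtspsrc location="rtsp://admin:12345678W@192.168.1.108:554/cam/realmonitor?channel=1/subtype=1" latency=200 ! fakesink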