After solving my first issue and getting my stream working, I get lots of QoS messages on the bus and warnings about the stream being out of sync (“There may be a timestamping problem…”). Initially I blamed the source, but the same source seems to work fine, even long term, with gst-play-1.0, so I’m no longer so certain.
This works fine:
$ gst-play-1.0 rtsp://192.168.foo.bar:7447/5db24c36cac8a601c50871d8_0
Press 'k' to see a list of keyboard shortcuts.
Now playing rtsp://192.168.1.2:7447/5db24c36cac8a601c50871d8_0
Pipeline is live.
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Redistribute latency...
Prerolled.
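To see what those QoS messages actually contain, I'm considering adding a bus watch and logging the parsed values. A minimal sketch — `describe_qos` is my own helper, but its arguments mirror the tuple that `Gst.Message.parse_qos()` returns:

```python
def describe_qos(live: bool, running_time: int, stream_time: int,
                 timestamp: int, duration: int) -> str:
    """Format the values from Gst.Message.parse_qos() for logging.

    All times are in nanoseconds (GStreamer clock time).
    stream_time is accepted so the parse_qos() tuple can be
    splatted directly, but isn't used here.
    """
    kind = "live" if live else "non-live"
    return (f"QOS ({kind}) running_time={running_time / 1e9:.3f}s "
            f"timestamp={timestamp / 1e9:.3f}s "
            f"duration={duration / 1e9:.3f}s")

# In the app this would hang off a bus watch, e.g.:
#   bus.add_signal_watch()
#   bus.connect('message::qos', lambda bus, msg:
#               logger.debug(describe_qos(*msg.parse_qos())))
```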
I took a look at one of the reference apps, and there the sink properties “qos” and “sync” are set to false. I did the same in my app, and now it no longer hangs after a while, but the frames arrive in bursts rather than at a constant frame rate.
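For reference, the change amounts to this (a sketch; `quiet_sink` is my own helper name, and any Gst.Element exposing those properties would do):

```python
def quiet_sink(sink) -> None:
    """Set sync=False and qos=False on a sink element, like the
    reference apps do.  `sink` is expected to be a Gst.Element.

    With sync off, the sink renders buffers as fast as they arrive
    instead of pacing them against the pipeline clock — which may
    be exactly why I now see bursts instead of a steady 15fps.
    """
    sink.set_property('sync', False)
    sink.set_property('qos', False)
```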
My source (unifi-video) is 15fps, and with these new settings I get a half-second burst of frames on my sink every half second or so, at what looks like at least twice the frame rate. I took a look at the callback that sets properties on the decoder element and wrote this:
class DeepStreamApp(Gst.Pipeline):
    ...
    def _on_decode_bin_child_added(self, bin: Gst.Bin,
                                   element: Gst.Element):
        # logic borrowed from:
        # https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/runtime_source_add_delete/deepstream_test_rt_src_add_del.c
        # sets properties on the nvv4l2decoder elements
        logger.debug(f"{bin.name} child added: {element.name}")
        if element.name.startswith('decodebin'):
            # recurse: add this same callback to the sub-bin
            logger.debug(f'adding element-added callback to {element.name}')
            element.connect('element-added', self._on_decode_bin_child_added)
        elif element.name.startswith('nvv4l2decoder'):
            logger.debug(f'setting properties on decoder: {element.name}')
            element.set_property('enable-max-performance', True)
            element.set_property('bufapi-version', True)
            element.set_property('drop-frame-interval', 0)
            element.set_property('num-extra-surfaces', 0)
However, that doesn’t seem to help. The callback is added to the decodebin sub-bins and the properties do get set on nvv4l2decoder, but nothing changes. A file source works fine, so I’m not sure what’s wrong.
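To rule out the properties being silently reverted after I set them, I may add a read-back check. A sketch — `verify_decoder_props` is a hypothetical helper of my own, not DeepStream API; it only assumes a Gst.Element-style `get_property()`:

```python
def verify_decoder_props(decoder, expected: dict) -> dict:
    """Read properties back from an element and return any that do
    not hold the expected value, as {name: (wanted, got)}.

    `decoder` is any object with a Gst.Element-style get_property()
    (e.g. the nvv4l2decoder); returns {} when everything stuck.
    """
    mismatches = {}
    for name, want in expected.items():
        got = decoder.get_property(name)
        if got != want:
            mismatches[name] = (want, got)
    return mismatches
```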
I tried one of the reference apps (runtime_source_add_delete), and it doesn’t work at all with this source:
$ ./deepstream-test-rt-src-add-del rtsp://192.168.foo.bar:7447/5db24c36cac8a601c50871d8_0
creating uridecodebin for [rtsp://192.168.foo.bar:7447/5db24c36cac8a601c50871d8_0]
Using winsys: x11
Creating LL OSD context new
0:00:00.752864485 9229 0x55a542e390 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:useEngineFile(): Failed to read from model engine file
0:00:00.753001290 9229 0x55a542e390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:00:50.401255948 9229 0x55a542e390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine3> NvDsInferContext[UID 4]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.engine
0:00:50.451383238 9229 0x55a542e390 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary-nvinference-engine2> NvDsInferContext[UID 3]:useEngineFile(): Failed to read from model engine file
0:00:50.451470570 9229 0x55a542e390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine2> NvDsInferContext[UID 3]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:01:38.120324567 9229 0x55a542e390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine2> NvDsInferContext[UID 3]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarMake/resnet18.caffemodel_b16_int8.engine
0:01:38.172763100 9229 0x55a542e390 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:useEngineFile(): Failed to read from model engine file
0:01:38.172858625 9229 0x55a542e390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:initialize(): Trying to create engine from model files
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
0:02:24.917389008 9229 0x55a542e390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<secondary-nvinference-engine1> NvDsInferContext[UID 2]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Secondary_CarColor/resnet18.caffemodel_b16_int8.engine
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
0:02:25.111214332 9229 0x55a542e390 WARN nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:useEngineFile(): Failed to read from model engine file
0:02:25.111321345 9229 0x55a542e390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:03:13.017702286 9229 0x55a542e390 INFO nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /opt/nvidia/deepstream/deepstream-4.0/samples/models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
decodebin child added source
Now playing: rtsp://192.168.1.2:7447/5db24c36cac8a601c50871d8_0
Running...
decodebin child added decodebin0
decodebin child added decodebin1
decodebin child added rtpmp4gdepay0
decodebin child added rtph264depay0
decodebin child added aacparse0
decodebin child added h264parse0
decodebin child added capsfilter0
decodebin child added avdec_aac0
decodebin child added nvv4l2decoder0
decodebin new pad audio/x-raw
Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
decodebin new pad video/x-raw
Decodebin linked to pipeline
Creating LL OSD context new
Calling Start 3
creating uridecodebin for [rtsp://192.168.foo.bar:7447/5db24c36cac8a601c50871d8_0]
decodebin child added source
STATE CHANGE NO PREROLL
Calling Start 2
creating uridecodebin for [rtsp://192.168.foo.bar:7447/5db24c36cac8a601c50871d8_0]
decodebin child added source
STATE CHANGE NO PREROLL
Calling Start 1
creating uridecodebin for [rtsp://192.168.foo.bar:7447/5db24c36cac8a601c50871d8_0]
decodebin child added source
STATE CHANGE NO PREROLL
Calling Stop 3
STATE CHANGE SUCCESS
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:36.759: gst_pad_send_event: assertion 'GST_IS_PAD (pad)' failed
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:36.759: gst_element_release_request_pad: assertion 'GST_IS_PAD (pad)' failed
STATE CHANGE SUCCESS (nil)
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:36.759: gst_object_unref: assertion 'object != NULL' failed
Calling Stop 1
STATE CHANGE SUCCESS
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:46.758: gst_pad_send_event: assertion 'GST_IS_PAD (pad)' failed
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:46.758: gst_element_release_request_pad: assertion 'GST_IS_PAD (pad)' failed
STATE CHANGE SUCCESS (nil)
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:46.759: gst_object_unref: assertion 'object != NULL' failed
Calling Stop 2
STATE CHANGE SUCCESS
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:56.756: gst_pad_send_event: assertion 'GST_IS_PAD (pad)' failed
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:56.756: gst_element_release_request_pad: assertion 'GST_IS_PAD (pad)' failed
STATE CHANGE SUCCESS (nil)
(deepstream-test-rt-src-add-del:9229): GStreamer-CRITICAL **: 17:17:56.756: gst_object_unref: assertion 'object != NULL' failed
Calling Stop 0
STATE CHANGE SUCCESS
STATE CHANGE SUCCESS 0x7ec80281f0
All sources Stopped quitting
Returned, stopping playback
Deleting pipeline
I also tried deepstream_test_5 and hit the same problem as with my Python app before the modifications: the sync error messages and hanging after a while. I assume there are QoS messages on the bus as well, but I haven’t checked. It works for a few seconds, then stalls. Again, a file source works fine. Also, are there any public RTSP sources available that are known to work, so I can completely rule out any interaction with my particular camera?
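Failing that, one way I could rule out the camera entirely is to serve a local test stream using the test-launch example that ships with gst-rtsp-server (assuming it’s built on the system; the pipeline string below is just my guess at a minimal live H.264 source):

```shell
# Serve a synthetic H.264 stream at rtsp://127.0.0.1:8554/test
# using the gst-rtsp-server "test-launch" example binary.
./test-launch "( videotestsrc is-live=true ! x264enc tune=zerolatency ! rtph264pay name=pay0 pt=96 )"

# Then point the app (or gst-play-1.0) at it:
gst-play-1.0 rtsp://127.0.0.1:8554/test
```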
My config for deepstream_test_5 is:
# Copyright (c) 2018 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl
[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=rtsp://192.168.foo.bar:7447/5db24c36cac8a601c50871d8_0
num-sources=1
gpu-id=0
nvbuf-memory-type=0
#[source1]
#enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
#type=3
#uri=file://../../../../../samples/streams/sample_1080p_h264.mp4
#num-sources=2
#gpu-id=0
#nvbuf-memory-type=0
[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0
#[sink1]
#enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
#type=6
#msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM - Custom schema payload
#msg-conv-payload-type=0
#msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
#msg-broker-conn-str=<host>;<port>;<topic>
#topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt
#[sink2]
#enable=0
#type=3
#1=mp4 2=mkv
#container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
#codec=3
#sync=1
#bitrate=2000000
#output-file=out.mp4
#source-id=0
[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0
[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
[primary-gie]
enable=1
gpu-id=0
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b4_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/
[tracker]
enable=1
tracker-width=600
tracker-height=300
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=0
[tests]
file-loop=0
edit: attached a pipeline graph (PDF) created just before program exit.