Smart Record issue - recorded files have strange grey blocks in the middle

I have a Jetson Nano and have just flashed it with the latest JetPack and the developer preview of DeepStream 5.0.

I am testing the sample app deepstream-test5 with the smart record feature. I notice that a lot of the recorded files have a strange artefact: a grey block in the middle of the frame for portions of the recorded video.

This forum does not allow uploading of mp4 files, so I've taken a screenshot of the issue. Over the 12-second clip the grey block appears twice and lasts for about 3 seconds each time.

I'm testing with Hikvision RTSP cameras, which work well with the DeepStream components. I have also attached my config file. Note that I am running on a headless Jetson Nano, so I re-stream the output over RTSP instead of using the EGL sink.

I have also tested using a fakesink and I still get these strange artefacts in the recorded videos.

Memory usage is only 0.8 GB before starting the deepstream-test5 app.
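
(For anyone reproducing this, a quick way to check memory on the Nano before and during a run - just the standard Jetson tools, nothing DeepStream-specific:)

# Snapshot of RAM usage before launching the app
free -h
# Live RAM/GPU/EMC utilisation while the pipeline runs
sudo tegrastats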

Please note I had to change the config file extension to .log as your system does not allow uploading .txt files either, which is a bit odd.

js_infer_4_rtsp_sr_to_rtsp.log (6.7 KB)

When possible, please provide the following info:

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

Further testing of the smart record feature shows that it's more stable when my sink is just a fakesink. Most of my previous testing was with an RTSP sink so that I could watch the tiled output in VLC on my laptop.

For the Jetson Nano it seems this is too resource intensive. So using a fakesink with smart record is more stable, but also more limited in its use cases.

Maybe a Xavier is required to make full use of smart record over multiple RTSP streams…?

I also notice that the fps of the recorded files (as reported by VLC) can be 25 fps, even though my RTSP cameras are configured for 10 fps. When I view the cameras directly in VLC they do show 10 fps.
Why does smart record report a higher fps?
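
(To double-check that it isn't just VLC mis-reporting, ffprobe can dump the frame rate that is actually written into the recorded file - the file name below is only a placeholder:)

# Show the stream frame rate of a smart record clip
ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate,avg_frame_rate -of default=noprint_wrappers=1 smart_rec_clip.mp4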

Because the files are at a higher frame rate, their duration is less than what I gave as the smart record parameter. For example, my cameras are at 10 fps, but the smart record files are sometimes at 25 fps, so my files end up with a duration of around 7 seconds, yet I told smart record to use 15 seconds… (10 fps over 15 seconds is roughly 150 frames; played back at 25 fps those frames last only about 6 seconds, which roughly matches what I'm seeing.)

I have a separate deepstream-test3-based app where I implemented my own version of smart record by dynamically adding a recording pipeline after the demuxer… It experiences these same issues.

We are checking internally and will give you a reply ASAP.

Hey customer, did you use deepstream-test5 to enable SR and observe the higher fps? If so, could you share your config with us?

Yes, it's with deepstream-test5 on a Jetson Nano. The config file is already attached to the first post. I had to rename the extension from .txt to .log as this new forum platform does not allow attaching .txt files.

PS: please allow more file types to be attached in forum posts. ;-)

Hey, I checked the config you uploaded in the first comment, but I didn't find where you enable SR. I think you should refer to test5_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt:

# Copyright (c) 2018-2020 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=2
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://foo.com/stream1.mp4
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
#smart-record=1
# 0 = mp4, 1 = mkv
#smart-rec-container=0
#smart-rec-file-prefix
#smart-rec-dir-path
# video cache size in seconds
#smart-rec-video-cache
# default duration of recording in seconds.
#smart-rec-default-duration
# duration of recording in seconds.
# this will override default value.
#smart-rec-duration
# seconds before the current time to start recording.
#smart-rec-start-time
# value in seconds to dump video stream.
#smart-rec-interval

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://foo.com/stream2.mp4
num-sources=1
gpu-id=0
nvbuf-memory-type=0
# smart record specific fields, valid only for source type=4
#smart-record=1
# 0 = mp4, 1 = mkv
#smart-rec-container=0
#smart-rec-file-prefix
#smart-rec-dir-path
# video cache size in seconds
#smart-rec-video-cache

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=2
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=2
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
model-engine-file=../../../../../samples/models/Primary_Detector/resnet10.caffemodel_b2_gpu0_int8.engine
labelfile-path=../../../../../samples/models/Primary_Detector/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/


[tracker]
enable=1
tracker-width=600
tracker-height=288
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=0

[secondary-gie0]
enable=1
gpu-id=0
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt
labelfile-path=../../../../../samples/models/Secondary_VehicleTypes/labels.txt
model-engine-file=../../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_int8.engine

[secondary-gie1]
enable=1
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_carcolor.txt
labelfile-path=../../../../../samples/models/Secondary_CarColor/labels.txt
model-engine-file=../../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_int8.engine

[secondary-gie2]
enable=1
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
batch-size=16
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_carmake.txt
labelfile-path=../../../../../samples/models/Secondary_CarMake/labels.txt
model-engine-file=../../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_int8.engine

[tests]
file-loop=0

Must’ve been the wrong file. Here it is:

# Copyright (c) 2018-2020 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0


[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://<my camera url>
num-sources=1
gpu-id=0
nvbuf-memory-type=0
smart-record=1
smart-rec-video-cache=30
smart-rec-duration=10
smart-rec-start-time=3
#smart-rec-default-duration=
#smart-rec-container=<0/1>
# smart-rec-interval is to tell deepstream-test5 how often to start and stop smart recording
smart-rec-interval=60
#smart-rec-file-prefix=<file name prefix>
#smart-rec-dir-path=<path of directory to save the file>

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://<my camera url>
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[source2]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://<my camera url>
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[source3]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
uri=rtsp://<my camera url>
num-sources=1
gpu-id=0
nvbuf-memory-type=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
sync=0
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

#source-id=0
#gpu-id=0
#nvbuf-memory-type=0

[sink1]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=6
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
msg-broker-proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
#Provide your msg-broker-conn-str here
msg-broker-conn-str=<host>;<port>;<topic>
topic=<topic>
#Optional:
#msg-broker-config=../../deepstream-test4/cfg_kafka.txt

[sink2]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0

# sink type = 6 by default creates msg converter + broker.
# To use multiple brokers use this group for converter and use
# sink type = 6 with disable-msgconv = 1
[message-converter]
enable=0
msg-conv-config=dstest5_msgconv_sample_config.txt
#(0): PAYLOAD_DEEPSTREAM - Deepstream schema payload
#(1): PAYLOAD_DEEPSTREAM_MINIMAL - Deepstream schema payload minimal
#(256): PAYLOAD_RESERVED - Reserved type
#(257): PAYLOAD_CUSTOM   - Custom schema payload
msg-conv-payload-type=0
# Name of library having custom implementation.
#msg-conv-msg2p-lib=<val>
# Id of component in case only selected message to parse.
#msg-conv-comp-id=<val>

# Configure this group to enable cloud message consumer.
[message-consumer0]
enable=0
proto-lib=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_kafka_proto.so
conn-str=<host>;<port>
config-file=<broker config file e.g. cfg_kafka.txt>
subscribe-topic-list=<topic1>;<topic2>;<topicN>

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=1
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
#batched-push-timeout=40000
batched-push-timeout=120000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0
## If set to TRUE, system timestamp will be attached as ntp timestamp
## If set to FALSE, ntp timestamp from rtspsrc, if available, will be attached
# attach-sys-ts-as-ntp=1

[primary-gie]
enable=1
gpu-id=0
batch-size=4
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;1;1;1
bbox-border-color3=0;1;0;1
nvbuf-memory-type=0
interval=0
gie-unique-id=1
model-engine-file=../../../../../samples/models/Primary_Detector_Nano/resnet10.caffemodel_b4_gpu0_fp16.engine
labelfile-path=../../../../../samples/models/Primary_Detector_Nano/labels.txt
config-file=../../../../../samples/configs/deepstream-app/config_infer_primary_nano.txt
#infer-raw-output-dir=../../../../../samples/primary_detector_raw_output/

[tracker]
enable=1
tracker-width=600
tracker-height=288
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=0

[secondary-gie0]
enable=1
model-engine-file=../../../../../samples/models/Secondary_VehicleTypes/resnet18.caffemodel_b16_gpu0_fp16.engine
gpu-id=0
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_vehicletypes.txt

[secondary-gie1]
enable=1
model-engine-file=../../../../../samples/models/Secondary_CarColor/resnet18.caffemodel_b16_gpu0_fp16.engine
batch-size=16
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_carcolor.txt

[secondary-gie2]
enable=1
model-engine-file=../../../../../samples/models/Secondary_CarMake/resnet18.caffemodel_b16_gpu0_fp16.engine
batch-size=16
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=../../../../../samples/configs/deepstream-app/config_infer_secondary_carmake.txt

[tests]
file-loop=0

This one is using secondary GIEs as well, but I've still seen the issue without them…

Ok, we will try to repro locally.

Sorry for the late reply. Can you try using the --no-force-tcp option with test5 and see if the issue persists?
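
i.e. something along these lines, assuming the usual -c option for passing the test5 config (the config file name is just a placeholder):

# Run test5 without forcing the RTSP sources over TCP
./deepstream-test5-app -c <your_test5_config.txt> --no-force-tcp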

Hi jasonpgf2a,

Have you tried our suggestions? Did they work?
Please update the status so we can help you resolve the issue.

Thanks

I had to roll my Nano back to JetPack 4.3 for something else, so I can't test just now. I don't see the issue on the Xavier NX, so I think this thread can be closed for now…

Actually the issue is still happening with the deepstream-test5 app, just using the provided config files inside the app directory.
It happens occasionally that you get a big gray block in the saved video. I'm now testing on a Xavier NX.

I have also added the code to start smart record to my own DeepStream C app - before the decoder, as per the deepstream-test5 app - and I occasionally get the issue as well.

However, in this same app I have also added smart record after the stream demux component, where I re-encode before teeing off to the SR bin.
This actually records well, without any of the weird gray blocks appearing, and is very stable.
(Note I have a parameter so that the program uses SR either before the decoder or after streamdemux/re-encode, so they can never both be active.)

I'm testing with Hikvision RTSP cameras set to:
1080p
15 fps
4 Mbps bitrate

I don't know what the gray blocks in the recorded video are from, but I find that I cannot post-process the recorded file with ffmpeg. I use ffmpeg to take a thumbnail a couple of seconds into the video file.
When a video is recorded that shows these gray blocks, ffmpeg fails with rc=69.
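
For reference, the thumbnail step is nothing exotic - roughly the following (file names are placeholders), and it's this call that returns 69 on the affected clips:

# Grab one frame ~2 seconds into the recording as a JPEG thumbnail
ffmpeg -ss 2 -i smart_rec_clip.mp4 -frames:v 1 thumbnail.jpg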

Hey, so the original issue is fixed, right?

No it is not @bcao.

The issue with deepstream-test5 is still there.
I have coded my own app as well using smart record (before the decoder, just like deepstream-test5) and get these weird issues too.
It seems that the files get corrupted by SR.

See my last comment though - if I use SR after a streamdemux and re-encode, it seems to be stable and works well.

Hey @jasonpgf2a, could you create a new topic for this issue, since this topic has been open for a long time and you asked to close it on 6/9? BTW, we have fixed some issues related to SR for the next release.

Yeah, I understand there are some streammux and SR fixes coming, which is why I haven't been pushing this. Maybe we can just close it until GA. Hopefully it's not too far away… ;-)