DeepStream 4.0 with RTSP sink causes picture distortion at high bitrate

Hi,

Ubuntu18.04, GTX 1060

The DeepStream version is:
deepstream_sdk_v4.0_x86_64

The source0 of DeepStream 4.0 is RTSP, and sink2 is also RTSP.
DeepStream receives RTSP from PC A and sends RTSP to PC B.
When the bitrate of sink2 is set below 8000 kbps, it works well.
Sink0 shows a smooth picture.
But when the bitrate of sink2 is set above 8000 kbps, sink0 shows a distorted picture.
The picture is not clear.

The config is:
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
camera-v4l2-dev-node=0
camera-width=1920
camera-height=1080
uri=rtsp://root:root@192.168.13.162:8554/session0.mpg
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=8388608
rtsp-port=8553
udp-port=5401

How to fix it?
Thanks.

Hi,
Is it reproduced if you enable sink0(EglSink) + sink1(Encode + File Save)?

Hi,
It works well with sink0(EglSink) + sink1(Encode + File Save).

So I think the RTSP sink causes the problem.

And how can it be fixed?
Thank you.

The config is:

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=4
camera-v4l2-dev-node=0
camera-width=1920
camera-height=1080
uri=rtsp://root:root@192.168.13.162:8554/session0.mpg
num-sources=1
gpu-id=0
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=8388608
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=8388608

#set below properties in case of RTSPStreaming
rtsp-port=8553
udp-port=5401

Hi,
We have DS4.0.1 release. Please upgrade and check if you still observe the issue. If it is still present, please attach the config file so that we can try to reproduce it.

Hi,
I have tried DS4.0.1. It does not work well either.
DeepStream receives RTSP from PC A and sends RTSP to PC B.
The bitrate of the RTSP stream sent by PC A is 16000 kbps.

Now the problem with DS4.0.1 is:
When the bitrate of sink2 is more than 8000 kbps, sink0 shows a smooth picture, but the video received by PC B over RTSP stutters.
This is also a problem in DS4.0.

I use the example /deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-app with the config /deepstream_sdk_v4.0.1_x86_64/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt.

I do not modify the code of /deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-app.
I only modify the config /deepstream_sdk_v4.0.1_x86_64/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt.

The config is:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=2
columns=2
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=4
uri=rtsp://root:root@192.168.13.162:8554/session0.mpg
num-sources=4
#drop-frame-interval=2
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=8000000

#set below properties in case of RTSPStreaming
rtsp-port=8553
udp-port=5401

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=4
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000

#Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

#config-file property is mandatory for any gie section.
#Other properties are optional and if set will override the properties set in
#the infer config file.

[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b4_int8.engine
batch-size=4
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tracker]
enable=1
tracker-width=640
tracker-height=368
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
#ll-config-file=tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=1

[secondary-gie0]
enable=1
model-engine-file=../../models/Secondary_VehicleTypes/resnet18.caffemodel_b16_int8.engine
gpu-id=0
batch-size=16
gie-unique-id=4
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_vehicletypes.txt

[secondary-gie1]
enable=1
model-engine-file=../../models/Secondary_CarColor/resnet18.caffemodel_b16_int8.engine
batch-size=16
gpu-id=0
gie-unique-id=5
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carcolor.txt

[secondary-gie2]
enable=1
model-engine-file=../../models/Secondary_CarMake/resnet18.caffemodel_b16_int8.engine
batch-size=16
gpu-id=0
gie-unique-id=6
operate-on-gie-id=1
operate-on-class-ids=0;
config-file=config_infer_secondary_carmake.txt

[tests]
file-loop=0

Is the H.264 encoding for RTSP done in hardware?
And is the hardware encoding done by the NVIDIA GPU or the Intel GPU?

Hi,

What is the command you run to receive the stream on PC B? If sink0 is good, it looks more like a network-bandwidth issue. Maybe the bandwidth is not stable?
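As a quick sanity check of the bandwidth question, the stream bitrate from earlier in the thread (16000 kbps) can be compared with the link capacity. This is only a rough sketch; it ignores RTP/UDP/IP header overhead:

```python
# Rough bandwidth sanity check (ignores RTP/UDP/IP header overhead).
link_capacity_kbps = 100_000   # 100 Mbps local network, as stated in the thread
stream_bitrate_kbps = 16_000   # bitrate of the RTSP stream from PC A

utilization = stream_bitrate_kbps / link_capacity_kbps
print(f"link utilization: {utilization:.0%}")  # 16%
```

At 16% nominal utilization, raw capacity is unlikely to be the bottleneck, though bursty traffic or packet loss could still be.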

Hi,
The software on PC B is VLC. We use VLC to receive the RTSP stream sent by DeepStream sink2.
In VLC we use "Open Network Stream" and give it the RTSP address.

The network is a local network with 100 Mbps bandwidth.
We tested that the network is stable and has enough bandwidth for RTSP.

And because we can only modify the bitrate of sink2, sink2 must have the problem.
I think sink0 works well because it has no bitrate parameter, is that right?

Is the H.264 encoding for RTSP done in hardware?
And is the hardware encoding done by the NVIDIA GPU or the Intel GPU?

Maybe the H.264 encoding is too slow?

Hi,
Please check
[url]https://devtalk.nvidia.com/default/topic/1065973/deepstream-sdk/-is-deepstream4-0-with-rtsp-hardcode-for-h264-/post/5398271/#5398271[/url]
There is hardware engine for video encoding. Since ‘EglSink + File Save’ works fine, it should not be an issue in encoding performance.

Just found
https://stackoverflow.com/questions/17149225/limiting-send-rate-of-gstreamers-udpsink
It is worth trying to tune the MTU size. Please configure it to different values and try again.

Hi,
I use the command "gst-inspect-1.0 --plugin" to find the rtph264pay plugin.
But I cannot find it.
I do not know how to add the MTU size to the config file or the code.
Can you tell me?
Thank you.

bin->codecparse = gst_element_factory_make ("h264parse", "h264-parser");
bin->encoder = gst_element_factory_make (NVDS_ELEM_ENC_H264, encode_name);
bin->rtppay = gst_element_factory_make ("rtph264pay", rtppay_name);

Hi,
You may add one g_object_set() call like:

g_object_set (G_OBJECT (bin->rtppay), "mtu", _VALUE_OF_MTU_SIZE_, NULL);

Hi,
gint mtunum;
g_object_get (G_OBJECT (bin->rtppay), "mtu", &mtunum, NULL);
g_object_set (G_OBJECT (bin->rtppay), "mtu", 65500, NULL);

I find the default value of mtu is 1400, and I set the value to 65500.

If I set the value to more than 65530, it returns this error:

WARNING from sink_sub_bin_udpsink2: Attempting to send a UDP packets larger than maximum size (65530 > 65507)
Debug info: gstmultiudpsink.c(722): gst_multiudpsink_send_messages (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin2/GstUDPSink:sink_sub_bin_udpsink2:
Reason: Error sending message: Message too long

The range of mtu is 0-4294967295, but the maximum size of a UDP packet is 65507.

I set the value of mtu to 65500.
The picture is smoother, but it still stutters sometimes.

I think that if the mtu value could be more than 65507, the picture would be smoother.
But then UDP returns an error.

How can this be fixed?
Thank you.
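For reference, the 65507 limit reported by the gstmultiudpsink warning follows from the IPv4 datagram format. A quick sketch of the arithmetic, plus the approximate packet rate implied by the two mtu settings and the 8 Mbps bitrate discussed in this thread:

```python
# Maximum UDP payload over IPv4: the 65535-byte IP datagram limit
# minus the 20-byte IPv4 header and the 8-byte UDP header.
MAX_IP_DATAGRAM = 65535
IP_HEADER = 20
UDP_HEADER = 8
max_udp_payload = MAX_IP_DATAGRAM - IP_HEADER - UDP_HEADER
print(max_udp_payload)  # 65507, matching the warning

# Rough packets-per-second at 8 Mbps for the two mtu settings tried above.
bitrate_bps = 8_000_000
for mtu in (1400, 65500):
    pps = bitrate_bps / 8 / mtu   # bytes per second / payload per packet
    print(f"mtu={mtu}: ~{pps:.0f} packets/s")
```

A larger mtu reduces the packet rate but produces IP fragmentation on a 1500-byte Ethernet link, so losing one fragment drops the whole large packet; that may be why 65500 is smoother but still stutters.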

Hi,
Along with setting 'mtu', you may set some other properties of rtph264pay such as 'min-ptime' and 'max-ptime':
https://gstreamer.freedesktop.org/documentation/rtp/rtph264pay.html?gi-language=c

Also you may try different 'iframeinterval' and 'profile' settings in nvv4l2h264enc:

iframeinterval      : Encoding Intra Frame occurance frequency
                    flags: readable, writable, changeable only in NULL or READY state
                    Unsigned Integer. Range: 0 - 4294967295 Default: 30
profile             : Set profile for v4l2 encode
                    flags: readable, writable, changeable only in NULL or READY state
                    Enum "GstV4l2VideoEncProfileType" Default: 0, "Baseline"
                       (0): Baseline         - GST_V4L2_H264_VIDENC_BASELINE_PROFILE
                       (2): Main             - GST_V4L2_H264_VIDENC_MAIN_PROFILE
                       (4): High             - GST_V4L2_H264_VIDENC_HIGH_PROFILE

Hi,
I add this code to /deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-app:

gint value;
gint value1;
g_object_get (G_OBJECT (bin->rtppay), "max-ptime", &value, NULL);
g_object_get (G_OBJECT (bin->rtppay), "min-ptime", &value1, NULL);

g_object_set (G_OBJECT (bin->rtppay), "max-ptime", 100000000, NULL);
g_object_set (G_OBJECT (bin->rtppay), "min-ptime", 0, NULL);

g_object_set (G_OBJECT (bin->encoder), "iframeinterval", 10, NULL);
g_object_set (G_OBJECT (bin->encoder), "profile", 0, NULL);

I find that the default value of max-ptime is -1 and the default value of min-ptime is 0.
I set max-ptime to 100000000 ns (100 ms) and left min-ptime at 0.
The picture still stutters sometimes.

Then I set max-ptime back to -1 and min-ptime to 100000000 ns (100 ms).
The picture still stutters sometimes.

I set iframeinterval to 10 and to 60, and profile to 0, 2, and 4.
The picture still stutters sometimes.

It does not seem to work.

I thought that with a bigger iframeinterval and the Baseline profile the picture should be smoother.
But it does not seem to help.

Thank you.

Hi,
A user has shared a patch to run RTSP over the TCP protocol.
https://devtalk.nvidia.com/default/topic/1062748/deepstream-sdk/nvvideoconvert-crashes-on-rtsp-input-src-crop-x-y-w-h-pipeline/post/5387790/#5387790

Please apply it to deepstream-app and try again. If the network is stable, it should be more reliable in TCP mode.

/* Set only TCP transport for stream ========================================== */
gst_rtsp_media_factory_set_protocols(factory, GST_RTSP_LOWER_TRANS_TCP);
/* ============================================================================ */

Hi,
I add the code
gst_rtsp_media_factory_set_protocols(factory, GST_RTSP_LOWER_TRANS_TCP);
The picture still stutters sometimes.

And I find this code in /deepstream_sdk_v4.0.1_x86_64/sources/apps/sample_apps/deepstream-app:
1.
bin->sink = gst_element_factory_make ("udpsink", elem_name);
NVGSTDS_LINK_ELEMENT (bin->queue, bin->transform);
NVGSTDS_LINK_ELEMENT (bin->transform, bin->cap_filter);
NVGSTDS_LINK_ELEMENT (bin->cap_filter, bin->encoder);
NVGSTDS_LINK_ELEMENT (bin->encoder, bin->rtppay);
NVGSTDS_LINK_ELEMENT (bin->rtppay, bin->sink);

sprintf (udpsrc_pipeline,
"( udpsrc name=pay0 port=%d caps=\"application/x-rtp, media=video, "
"clock-rate=90000, encoding-name=%s, payload=96 \" )",
updsink_port_num, encoder_name);

server = gst_rtsp_server_new ();
g_object_set (server, "service", port_num_Str, NULL);

mounts = gst_rtsp_server_get_mount_points (server);

factory = gst_rtsp_media_factory_new ();

gst_rtsp_media_factory_set_protocols (factory, GST_RTSP_LOWER_TRANS_TCP);

gst_rtsp_media_factory_set_launch (factory, udpsrc_pipeline);

It seems that the RTSP server first receives a UDP stream from udpsink, and then sends the stream to PC B.
Maybe this causes some performance issues?

And I find that the RTSP server of DeepStream supports both UDP and TCP by default, without the call to gst_rtsp_media_factory_set_protocols(factory, GST_RTSP_LOWER_TRANS_TCP).

So, without gst_rtsp_media_factory_set_protocols(factory, GST_RTSP_LOWER_TRANS_TCP):
If VLC on PC B requests RTSP over UDP, the DeepStream RTSP server sends a UDP stream.
If VLC on PC B requests RTSP over TCP, the DeepStream RTSP server sends a TCP stream.

With gst_rtsp_media_factory_set_protocols(factory, GST_RTSP_LOWER_TRANS_TCP):
If VLC on PC B requests RTSP over UDP, the DeepStream RTSP server sends a TCP stream.
If VLC on PC B requests RTSP over TCP, the DeepStream RTSP server sends a TCP stream.

So in my tests, when VLC on PC B requests RTSP over UDP, the DeepStream RTSP server sends a UDP stream.

Or, can you give me a complete GStreamer command to receive the UDP stream from DeepStream?
Something like gst-launch-1.0 udpsrc name=pay0 port=%d caps="application/x-rtp, ..." ...
I want to test the UDP stream first.
Thank you.

Hi,
Below are posts about using udpsrc and rtspsrc:
https://devtalk.nvidia.com/default/topic/1014789/jetson-tx1/-the-cpu-usage-cannot-down-use-cuda-decode-/post/5188538/#5188538
https://devtalk.nvidia.com/default/topic/1027423/jetson-tx2/gstreamer-issue-on-tx2/post/5225972/#5225972
FYR.

Hi,
I tried the command "gst-launch-1.0 udpsrc port=5000 ! 'application/x-rtp,encoding-name=H264,payload=96' ! tee name=t t. ! queue ! filesink location=test.mpg t. ! queue ! rtph264depay ! h264parse ! omxh264dec ! nveglglessink" from https://devtalk.nvidia.com/default/topic/1027423/jetson-tx2/gstreamer-issue-on-tx2/post/5225972/#5225972.

But it returns: WARNING: erroneous pipeline: no element "omxh264dec".
I find there is an avenc_h264_omx plugin in GStreamer, but the same command with avenc_h264_omx instead of omxh264dec returns another warning:
WARNING: erroneous pipeline: could not link h264parse0 to avenc_h264_omx0

Maybe the command is outdated?
Thank you.

Hi,
The omx plugins are only available on Jetson platforms. Please use nvv4l2decoder on dGPU platforms.
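As a minimal sketch under this thread's assumptions (UDP port 5000 and the caps from the earlier command), the receive pipeline can be adapted for a dGPU host by replacing omxh264dec, a Jetson-only element, with nvv4l2decoder. Note also that avenc_h264_omx is an encoder, not a decoder, which is why h264parse could not link to it:

```shell
# Hypothetical dGPU receive pipeline; the port and caps are assumptions
# carried over from the earlier Jetson command in this thread.
gst-launch-1.0 udpsrc port=5000 ! \
  'application/x-rtp,encoding-name=H264,payload=96' ! \
  rtph264depay ! h264parse ! nvv4l2decoder ! nveglglessink
```

This is a pipeline sketch rather than a tested command; it needs a live RTP stream and NVIDIA drivers to run.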