RTSP Issue with DeepStream SDK Module on x86_64-based Azure Virtual Machine

I’m doing a bit of investigation using the NVIDIA DeepStream SDK IoT Container on Azure N-Series Virtual Machines.

It seems that I have encountered an issue in the current Azure Container Registry image with ID 8092f50a1b18.

I have attached the DeepStream configuration that is being employed along with a description of the issues and my findings. I appreciate any assistance you may be able to provide.

Environment:
Azure Data Science Virtual Machine running Ubuntu 18.04.3 LTS.

Specs are included below:

[Screenshot of VM specs attached]

Issue:
If I enable an output sink of type RTSP in the DeepStream Config, I am able to remotely view the video stream for at most ~40 seconds before the DeepStreamSDK module seems to die without an error.

I had to dial the RTSP bitrate down from 4000000 to 1000000 before I was able to view the stream for any duration. The lower I go, the longer it seems to stay up. The behavior is intermittent: restarting the container with the same configuration keeps the stream running for a random amount of time, sometimes a second or less, with a maximum of around 40 seconds observed.
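For context, the RTSP sink section I am tuning follows the standard deepstream-app layout; the values below are illustrative rather than copied from my attached config (per the sample configs, type=4 selects the RTSP output sink):

[sink1]
enable=1
# 4 = RTSPStreaming output sink in deepstream-app
type=4
# 1 = H.264
codec=1
# lowered from 4000000
bitrate=1000000
rtsp-port=8554
udp-port=5400
sync=0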

If I disable RTSP output, the DeepStream workload runs as expected without issue.

I like your avatar.

To make DeepStream, or any GStreamer app, spit out debug info, there are some tools you can use. Try this, maybe:

GST_DEBUG=4 some-deepstream-app --argument-here > log.txt 2>&1

The contents of log.txt may give you some idea of what’s going on, and you can attach it here. I am not too familiar with Azure. It probably has some built-in mechanism for capturing stdout and stderr.
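If the module runs under IoT Edge, which wraps Docker, something along these lines should capture the output. The container name NVIDIADeepStreamSDK is a guess on my part, so check docker ps for the real one:

docker ps
docker logs NVIDIADeepStreamSDK > log.txt 2>&1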

@mdegans,

Likewise, I am a fan of your avatar! Thank you for the suggestion!

I was able to set the GST_DEBUG environment variable to log level 4 to capture verbose output of the RTSP output sink streaming and then abruptly failing. I have attached the log output of this session.

out.log (1.1 MB)

At time of failure, the logs report:

{"log":"0:00:04.345098023 \u001b[332m    1\u001b[00m 0x560b80c7da80 \u001b[33;01mWARN   \u001b[00m \u001b[00m      v4l2bufferpool gstv4l2bufferpool.c:1463:gst_v4l2_buffer_pool_dqbuf:\u003csink_sub_bin_encoder3:pool:sink\u003e\u001b[00m V4L2 provided buffer has bytesused 0 which is too small to include data_offset 0\n","stream":"stderr","time":"2020-03-30T16:35:09.377522194Z"}
{"log":"0:00:04.345115723 \u001b[332m    1\u001b[00m 0x560b80c7da80 \u001b[33;01mWARN   \u001b[00m \u001b[00m      v4l2bufferpool gstv4l2bufferpool.c:1463:gst_v4l2_buffer_pool_dqbuf:\u003csink_sub_bin_encoder3:pool:sink\u003e\u001b[00m V4L2 provided buffer has bytesused 0 which is too small to include data_offset 0\n","stream":"stderr","time":"2020-03-30T16:35:09.377535194Z"}
{"log":"0:00:04.345121223 \u001b[332m    1\u001b[00m 0x560b80c7da80 \u001b[33;01mWARN   \u001b[00m \u001b[00m      v4l2bufferpool gstv4l2bufferpool.c:1463:gst_v4l2_buffer_pool_dqbuf:\u003csink_sub_bin_encoder3:pool:sink\u003e\u001b[00m V4L2 provided buffer has bytesused 0 which is too small to include data_offset 0\n","stream":"stderr","time":"2020-03-30T16:35:09.377539494Z"}
{"log":"0:00:04.345141724 \u001b[332m    1\u001b[00m 0x560b80c7da80 \u001b[36mINFO   \u001b[00m \u001b[00m        videoencoder gstvideoencoder.c:1144:gst_video_encoder_sink_event_default:\u003csink_sub_bin_encoder3\u003e\u001b[00m upstream tags: taglist;\n","stream":"stderr","time":"2020-03-30T16:35:09.377543095Z"}
{"log":"0:00:04.345158924 \u001b[332m    1\u001b[00m 0x560b80c7da80 \u001b[36mINFO   \u001b[00m \u001b[00m        videoencoder gstvideoencoder.c:1144:gst_video_encoder_sink_event_default:\u003csink_sub_bin_encoder3\u003e\u001b[00m upstream tags: taglist;\n","stream":"stderr","time":"2020-03-30T16:35:09.377546695Z"}
{"log":"0:00:04.346817138 \u001b[332m    1\u001b[00m 0x560b80c7da80 \u001b[36mINFO   \u001b[00m \u001b[00m        videoencoder gstvideoencoder.c:1144:gst_video_encoder_sink_event_default:\u003csink_sub_bin_encoder3\u003e\u001b[00m upstream tags: taglist;\n","stream":"stderr","time":"2020-03-30T16:35:09.379049607Z"}
{"log":"0:00:04.346838538 \u001b[332m    1\u001b[00m 0x560b80c7da80 \u001b[36mINFO   \u001b[00m \u001b[00m        videoencoder gstvideoencoder.c:1144:gst_video_encoder_sink_event_default:\u003csink_sub_bin_encoder3\u003e\u001b[00m upstream tags: taglist;\n","stream":"stderr","time":"2020-03-30T16:35:09.379061607Z"}

These are just WARN and INFO entries, however. This ERROR entry that appears slightly higher up seems to hint at drivers being a potential issue, though I’m not sure how to address that if it is the case:

{"log":"0:00:04.060243344 \u001b[332m    1\u001b[00m 0x7fd6a8003850 \u001b[31;01mERROR  \u001b[00m \u001b[00m                v4l2 gstv4l2object.c:2072:gst_v4l2_object_get_interlace_mode:\u001b[00m Driver bug detected - check driver with v4l2-compliance from http://git.linuxtv.org/v4l-utils.git\n","stream":"stderr","time":"2020-03-30T16:35:09.092501814Z"}

Cheers,

Paul

I see a lot of this:

{"log":"0:00:04.272352916 \u001b[332m    1\u001b[00m 0x560b80c7da80 \u001b[33;01mWARN   \u001b[00m \u001b[00m      v4l2bufferpool gstv4l2bufferpool.c:1463:gst_v4l2_buffer_pool_dqbuf:\u003csink_sub_bin_encoder3:pool:sink\u003e\u001b[00m V4L2 provided buffer has bytesused 0 which is too small to include data_offset 0\n","stream":"stderr","time":"2020-03-30T16:35:09.304572885Z"}

Which sounds ominous, but according to this post isn’t a problem.

Not sure about the driver bug warning. I can’t find much on Google, but here is the source to where it’s coming from. It looks like it happens when the driver reports “V4L2_FIELD_ANY” for the field type, in which case the interlace mode is set to progressive scan. There are other warnings about this: “Driver should never set v4l2_buffer.field to ANY”. I am not sure if or why the warnings are important; somebody more familiar with the gstreamer source would have to answer. What’s your source element(s)?
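In deepstream-app terms that would be the [sourceN] section of your config. As a purely illustrative sketch (the type values follow the sample configs: 2=URI, 3=MultiURI, 4=RTSP):

[source0]
enable=1
# 2 = URI source; use 4 for an RTSP source
type=2
uri=file:///path/to/input.mp4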

Nothing stands out to me from the log other than the above and what you’ve already noted. Have you charted memory usage for your setup? I have seen configurations listed on the forum that can cause deepstream-app to run out of memory. Also, what’s the return code for the process, if you can find where Azure provides that?
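If it’s plain Docker underneath, these might help; again, the container name is a guess:

# chart memory/CPU usage over time
docker stats NVIDIADeepStreamSDK
# exit code after the container dies
docker inspect --format '{{.State.ExitCode}}' NVIDIADeepStreamSDK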

Hi,
On the remote machine, please try the following pipeline:

gst-launch-1.0 uridecodebin uri=rtsp://<rtsp link> ! nveglglessink sync=0

If you would like to try VLC, you might need to tune the caching size.
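For example (the caching value is in milliseconds and is just a starting point to experiment with):

vlc --network-caching=4000 rtsp://<rtsp link>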

The issue appears to be related to a DeepStream incompatibility with the K80 / M60 GPUs present on Azure NC-Series and NV-Series Virtual Machines. The workload runs fine on machines with a Tesla V100 GPU present.
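To confirm which GPU a given VM size exposes before deploying, nvidia-smi can be queried directly:

nvidia-smi --query-gpu=name --format=csv,noheader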
