• Hardware Platform (Jetson / GPU): GPU (NVIDIA A100-SXM4-40GB)
• DeepStream Version: 6.1
• TensorRT Version: 8.2.5.1
• NVIDIA GPU Driver Version (valid for GPU only): 510.47.03
• Issue Type (questions, new requirements, bugs): bugs
• How to reproduce the issue? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Hello,
I am running the nvcr.io/nvidia/deepstream:6.1-devel docker container on a virtualized Linux server. The server has NVIDIA GPU driver 510.47.03 installed and access to an NVIDIA A100-SXM4-40GB GPU. I also have nvidia-container-toolkit installed and have confirmed that the deepstream:6.1-devel container has access to the GPU and CUDA features.
I am able to run the analytics successfully with video files and some RTSP streams, but other RTSP streams seem to cause problems with nvv4l2decoder. With some RTSP streams the decoder produces only green frames in the output video.
Through some trial and error, I noticed that this has something to do with the decoder buffers, as increasing the decoder's ‘num-extra-surfaces’ property to its maximum value of 24 seems to alleviate the problem. With 24 extra surfaces only 4 consecutive frames out of 25 each second are green. With fewer extra surfaces the number of consecutive green frames increases, and with 0 extra surfaces all frames are green.
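For reference, this is roughly how I set the property, shown here as a minimal sketch with the GStreamer Python bindings (my actual pipeline construction code is longer):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvv4l2decoder with extra decode surfaces; 24 is the value that reduced
# (but did not fully remove) the green frames in my tests.
decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")
decoder.set_property("num-extra-surfaces", 24)
```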
I have also confirmed that the green frames are not caused by the encoding: if I replace the nvmultistreamtiler, nvvideoconvert, nvdsosd, nvvideoconvert, capsfilter, avenc_mpeg4, mpeg4videoparse, qtmux and filesink chain with a fakesink, I still see consecutive empty nvds_batch_meta buffers every second.
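That check was done with a buffer probe on the fakesink sink pad, roughly like the sketch below (illustrative only; the callback name and the emptiness check are a simplification of my actual code):

```python
import pyds
from gi.repository import Gst

def sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Inspect the batch metadata attached by nvstreammux.
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if batch_meta is None or batch_meta.num_frames_in_batch == 0:
        print("empty batch meta buffer")
    return Gst.PadProbeReturn.OK

fakesink = Gst.ElementFactory.make("fakesink", "sink")
fakesink.get_static_pad("sink").add_probe(
    Gst.PadProbeType.BUFFER, sink_pad_buffer_probe, 0)
```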
The IP cameras that I use are from various manufacturers, and the ones causing this problem seem to be fairly randomly distributed among them. I have tested the camera streams with VLC and with GStreamer without NVIDIA components, and both produce normal video output. I am sorry, but I cannot provide more specific information about the cameras on a public forum; I can share details about the cameras privately if needed.
I compared the debug output of both pipelines and noticed that the h264parse element in the RTSP pipeline prints a lot of messages about dropped data and incomplete NALs. The video file pipeline did not produce the same kind of logs.
DEBUG h264parse gsth264parse.c:186:gst_h264_parse_reset_frame:<h264parse0> reset frame
DEBUG h264parse gsth264parse.c:1227:gst_h264_parse_handle_frame:<h264parse0> last parse position 0
DEBUG h264parse gsth264parse.c:1247:gst_h264_parse_handle_frame:<h264parse0> Dropping filler data 1
DEBUG h264parse gsth264parse.c:1405:gst_h264_parse_handle_frame:<h264parse0> Dropped data
LOG h264parse gsth264parse.c:1212:gst_h264_parse_handle_frame:<h264parse0> parsing new frame
DEBUG h264parse gsth264parse.c:186:gst_h264_parse_reset_frame:<h264parse0> reset frame
DEBUG h264parse gsth264parse.c:1227:gst_h264_parse_handle_frame:<h264parse0> last parse position 0
DEBUG h264parse gsth264parse.c:1280:gst_h264_parse_handle_frame:<h264parse0> not a complete nal found at offset 3
DEBUG h264parse gsth264parse.c:1286:gst_h264_parse_handle_frame:<h264parse0> draining, accepting with size 11937
DEBUG h264parse gsth264parse.c:1333:gst_h264_parse_handle_frame:<h264parse0> complete nal found. Off: 3, Size: 11937
DEBUG h264parse gsth264parse.c:833:gst_h264_parse_process_nal:<h264parse0> processing nal of type 1 Slice, size 11937
DEBUG h264parse gsth264parse.c:942:gst_h264_parse_process_nal:<h264parse0> first_mb_in_slice = 0
DEBUG h264parse gsth264parse.c:945:gst_h264_parse_process_nal:<h264parse0> frame start: 1
DEBUG h264parse gsth264parse.c:953:gst_h264_parse_process_nal:<h264parse0> parse result 0, first MB: 0, slice type: 0
LOG h264parse gsth264parse.c:1010:gst_h264_parse_process_nal:<h264parse0> collecting NAL in AVC frame
DEBUG h264parse gsth264parse.c:431:gst_h264_parse_wrap_nal:<h264parse0> nal length 11937
DEBUG h264parse gsth264parse.c:186:gst_h264_parse_reset_frame:<h264parse0> reset frame
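For anyone who wants to reproduce this output, the h264parse debug above can be captured by setting GST_DEBUG=h264parse:6 in the environment, or programmatically after Gst.init(), for example:

```python
from gi.repository import Gst

# Raise only the h264parse category to LOG level so the output
# is not flooded by other elements.
Gst.debug_set_threshold_for_name("h264parse", Gst.DebugLevel.LOG)
```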
I also searched for other h264parse problems and, after some trial and error, found that by adding a second h264parse to the pipeline I can get normal video.
The first pipeline’s output rtsp_1_h264parse.mp4 contains only green frames, but the second pipeline’s output rtsp_2_h264parse.mp4 is a normal video from the IP camera. The second pipeline solves my green frame problem, but it feels more like a workaround than a fix, since the pipeline should not need two h264parse elements.
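For clarity, the decode branch of the second (working) pipeline looks roughly like this sketch, with assumed element variable names and assuming rtph264depay upstream; the rest of the pipeline is unchanged:

```python
from gi.repository import Gst

pipeline = Gst.Pipeline.new("rtsp-pipeline")

depay   = Gst.ElementFactory.make("rtph264depay", "depay")
parse1  = Gst.ElementFactory.make("h264parse", "parse1")
parse2  = Gst.ElementFactory.make("h264parse", "parse2")  # extra parser, the workaround
decoder = Gst.ElementFactory.make("nvv4l2decoder", "decoder")

for element in (depay, parse1, parse2, decoder):
    pipeline.add(element)

depay.link(parse1)
parse1.link(parse2)
parse2.link(decoder)
```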
Hello kesong and motyaedu, and sorry for my inactivity.
Lately I have noticed that the problem has disappeared from most of the cameras I use, even though I did not change the pipeline. The fact that more cameras now work correctly without pipeline changes suggests that the connection to the cameras might have been bad earlier. So forcing TCP, as motyaedu suggests, should also solve this issue, and I will try that out once I have the time. But since my pipeline is now mostly working and I also have a workaround, this issue is now solved for me. I will post an update if I find out something new when I try TCP.
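For completeness, when I get to testing it, forcing TCP would be done on the rtspsrc element roughly like this (assuming the pipeline uses rtspsrc directly; the camera address is a placeholder):

```python
import gi
gi.require_version("Gst", "1.0")
gi.require_version("GstRtsp", "1.0")
from gi.repository import Gst, GstRtsp

Gst.init(None)

source = Gst.ElementFactory.make("rtspsrc", "source")
source.set_property("location", "rtsp://<camera-address>")
# Restrict the transport to TCP instead of letting rtspsrc negotiate UDP.
source.set_property("protocols", GstRtsp.RTSPLowerTrans.TCP)
```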