Hi @mfoglio. I ran your CLI pipeline in my environment, and it does not always work well. I changed the decoder to avdec_h264; it still cannot get data and decode the stream:
gst-launch-1.0 --gst-debug=libav:3 rtspsrc location=XXX protocols=tcp latency=1000 drop-on-latency=1 timeout=5000000 ! rtph264depay ! h264parse ! avdec_h264 ! fakesink
When you use uridecodebin, you can also change the decoder to avdec_h264; it has the same issue.
I think this is a compatibility issue between the GStreamer RTSP source plugin and your camera.
So we suggest you open a topic on the GStreamer forum via the link below.
Hi @yuweiw, thank you for your reply.
In the other thread https://forums.developer.nvidia.com/t/nvv4l2decoder-does-not-work-with-certain-rtsp-video-stream/ , you suggested using uridecodebin to decode video streams. I agree with you that uridecodebin might be a better (more generic) solution. So is there a way to decode this stream with uridecodebin?
I’d like to stress that I am just looking for any possible way to support all cameras with DeepStream. I am not interested in one method over the other; I just need a way to decode both streams.
To sum up, as of now:
- My custom decoder bin does not support the first stream I sent you. It kind of works with the second stream. (Note: I was using video/x-raw(memory:NVMM) in the last capsfilter, in case you want to try it out.)
- uridecodebin cannot decode the second video stream at all.
As a side note: I just tried again to launch:
gst-launch-1.0 uridecodebin uri=$RTSP_STREAM ! nvvideoconvert ! filesink location='test-nvv4l2decoder.mp4'
This pipeline seemed to work with the first stream previously, but I am not sure of that anymore. It saved a file, but I can’t find a way to open it:
ffprobe version 4.2.7-0ubuntu0.1 Copyright (c) 2007-2022 the FFmpeg developers
built with gcc 9 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
configuration: --prefix=/usr --extra-version=0ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
libavutil 56. 31.100 / 56. 31.100
libavcodec 58. 54.100 / 58. 54.100
libavformat 58. 29.100 / 58. 29.100
libavdevice 58. 8.100 / 58. 8.100
libavfilter 7. 57.100 / 7. 57.100
libavresample 4. 0. 0 / 4. 0. 0
libswscale 5. 5.100 / 5. 5.100
libswresample 3. 5.100 / 3. 5.100
libpostproc 55. 5.100 / 55. 5.100
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55ce58b1ff00] Format mov,mp4,m4a,3gp,3g2,mj2 detected only with low score of 1, misdetection possible!
[mov,mp4,m4a,3gp,3g2,mj2 @ 0x55ce58b1ff00] moov atom not found
test-nvv4l2decoder.mp4: Invalid data found when processing input
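For what it’s worth, one way to sanity-check the saved file is to look at whether it begins with an ISO BMFF box header: a real MP4 starts with a 4-byte big-endian size followed by a 4-byte box type (normally ftyp), while raw frame data will not. A minimal sketch, assuming Python is available (the function name is mine):

```python
import struct

def looks_like_mp4(path: str) -> bool:
    """Return True if the file starts with an ISO BMFF 'ftyp' box.

    MP4/MOV files begin with size-prefixed boxes: a 4-byte big-endian
    length followed by a 4-byte type code. The first box is normally
    'ftyp'; raw video frames dumped by filesink will not have one.
    """
    with open(path, "rb") as f:
        header = f.read(8)
    if len(header) < 8:
        return False
    size = struct.unpack(">I", header[:4])[0]
    box_type = header[4:8]
    # size 0 (box extends to EOF) and 1 (64-bit size follows) are legal
    return box_type == b"ftyp" and (size >= 8 or size in (0, 1))
```

For the dump above, looks_like_mp4("test-nvv4l2decoder.mp4") would presumably return False, which would be consistent with the ffprobe misdetection warning.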
Note: I really doubt this has anything to do with it, but the only change I made to my system was updating CUDA on my host machine from 11.1 to 11.6 Update 1.
Again, my goal is to find a way to decode the cameras. I don’t care how, as long as it’s a DeepStream pipeline ;)
Hi @mfoglio, the pipeline you attached just saves the raw decoded data, so you cannot open it with a player. If you want to save it to a file, you can try the pipeline below; it saves the data in H.264 format. You can then play it with ffmpeg or any other player that supports H.264 video.
gst-launch-1.0 uridecodebin --gst-debug=v4l2videodec:5 uri=XXX ! queue2 ! nvvideoconvert ! nvv4l2h264enc ! h264parse ! 'video/x-h264,stream-format=byte-stream' ! filesink location=test.h264
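As a quick sanity check on such a dump, an H.264 byte-stream file should begin with an Annex-B start code. A small sketch, assuming Python is available and using the test.h264 path from the pipeline above (the function name is hypothetical):

```python
def is_annexb_h264(path: str) -> bool:
    """Check that a dumped file begins with an H.264 Annex-B start
    code (00 00 01 or 00 00 00 01), which is what the byte-stream
    format produced by the encoder pipeline should look like."""
    with open(path, "rb") as f:
        head = f.read(4)
    return head[:3] == b"\x00\x00\x01" or head == b"\x00\x00\x00\x01"

# e.g. is_annexb_h264("test.h264")
```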
We suggest that you use uridecodebin in your code, because it is simpler and has better compatibility.
The second RTSP source cannot send correct data to the decoder (either avdec_h264 or the NVIDIA decoder), so you can test it with avdec_h264 and report an issue on the GStreamer forum. Thanks.
Hi @yuweiw, thank you for your reply.
I think the second stream can send data correctly to avdec_h264, because we are able to process it when using avdec_h264. Still, it is not working with the NVIDIA decoder.
Hi @mfoglio, did you try the second stream with uridecodebin using avdec_h264? I tried it, and it also cannot receive data. But avdec_h264 keeps trying to get data again and again, so it does not get stuck. You can test in your environment with uridecodebin using avdec_h264 by referring to the link below:
Hi @yuweiw, the second stream works with both pipelines:
gst-launch-1.0 --gst-debug=v4l2videodec:5 rtspsrc location=$RTSP_STREAM protocols=tcp latency=1000 drop-on-latency=1 timeout=5000000 ! rtph264depay ! h264parse ! nvv4l2decoder cudadec-memtype=2 num-extra-surfaces=1 ! queue leaky=2 max-size-buffers=1 ! nvvideoconvert nvbuf-memory-type=3 output-buffers=1 ! capsfilter caps=video/x-raw,format=RGBA ! filesink location='test.mp4'
gst-launch-1.0 --gst-debug=v4l2videodec:5 rtspsrc location=$RTSP_STREAM protocols=tcp latency=1000 drop-on-latency=1 timeout=5000000 ! rtph264depay ! h264parse ! avdec_h264 ! queue leaky=2 max-size-buffers=1 ! nvvideoconvert nvbuf-memory-type=3 output-buffers=1 ! capsfilter caps=video/x-raw,format=RGBA ! filesink location='test.mp4'
I also verified that it works correctly with:
gst-launch-1.0 uridecodebin uri=$RTSP_STREAM ! filesink location='test.mp4'
Note that to get this running you need to disable the NVIDIA decoder. In the link you sent me (Dynamically deleting a stream causes a deadlock - #15 by 549981178) there is an error: instead of cp libnvv4l2.so libnvv4l2.so.bk, you should use mv libnvv4l2.so libnvv4l2.so.bk.
Let me know if you can reproduce this.
Yeah, I can reproduce this. Your second source stream may be a little special. We are analyzing it.
Hi @mfoglio, I checked the stream data with Wireshark. The second RTSP stream does not send SPS and PPS data to the decoder at the beginning, so our decoder cannot initialize.
The first RTSP stream always sends the SPS and PPS data at the beginning.
So could you help check why the second RTSP source does not send SPS and PPS data first? Thanks.
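To illustrate the diagnosis: in an H.264 Annex-B byte stream, SPS and PPS are NAL unit types 7 and 8, and a decoder generally needs both before the first IDR slice (type 5) in order to initialize. A rough sketch (function names are mine, assuming a dumped Annex-B stream is available) that checks whether SPS/PPS precede the first IDR:

```python
import re

def nal_units(data: bytes):
    """Split an Annex-B H.264 byte stream into NAL unit payloads.

    Start codes are 00 00 01, optionally preceded by an extra 00; a
    stray trailing 00 before the next start code is harmless here,
    since we only inspect the first byte of each payload.
    """
    starts = [m.end() for m in re.finditer(b"\x00\x00\x01", data)]
    for i, s in enumerate(starts):
        end = starts[i + 1] - 3 if i + 1 < len(starts) else len(data)
        yield data[s:end]

def sps_pps_before_idr(data: bytes) -> bool:
    """True if both SPS (NAL type 7) and PPS (type 8) appear before
    the first IDR slice (type 5) in the stream."""
    seen = set()
    for nal in nal_units(data):
        if not nal:
            continue
        nal_type = nal[0] & 0x1F  # low 5 bits of the NAL header byte
        if nal_type in (7, 8):
            seen.add(nal_type)
        elif nal_type == 5:
            return {7, 8} <= seen
    return False
```

If the Wireshark analysis above is right, running this on a dump of the first stream should return True and on the second stream False.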
Hi @yuweiw, I am not sure why that’s happening, but I can tell you that the stream comes from an “Axis Q6135-LE PTZ Network Camera”. This is a $2500 camera from a well-known manufacturer, so I doubt it has software issues.
I found another post in the Video Codec SDK forum, “Decoding problem”, reporting issues with decoding a video from an Axis camera. Quoting the post:
For the Axis camera, the callback NvDecoder::HandlePictureDisplay(CUVIDPARSERDISPINFO *pDispInfo) has never been called. As a result I have always 0 decoded frames.
Video decoding is not my area of expertise, so I can’t follow all the technical content of the post. However, in the last post a user found a solution for FFmpeg. Maybe it could be useful for fixing the problem in the source code of nvv4l2decoder. Let me know what you think.
Thank you for your help
Hi @mfoglio, what you referenced is FFmpeg code; it’s different from GStreamer, so it’s not very helpful as a reference here. Also, when I dumped the H.264 stream from your RTSP source, it played well with v4l2videodec.
So the cause is not one-sided; it may be an incompatibility between your particular RTSP source and v4l2videodec. We are debugging it.
Thank you, @yuweiw. Let me know when you have any update. Please don’t close the post in the meantime.
Good morning @yuweiw , do you have any update about a possible fix?
Hi @mfoglio, we have found the cause of the problem and are trying to find a better solution. It may be fixed in the next version.
Also, could you try to configure the RTSP stream to send SPS, PPS, and IDR together at the start of the data?
Hi @yuweiw, unfortunately sending SPS, PPS, and IDR together at the start would not solve my problem, as I have to deal with cameras that do not belong to my organization but to our customers. As a consequence, I can’t enable that setting on all their cameras.
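A possible client-side direction (a sketch only, not a tested fix): RFC 6184 lets an RTSP server advertise the parameter sets out-of-band in the SDP as sprop-parameter-sets, and rtspsrc/rtph264depay normally propagate these through the caps. The byte-level transformation involved is just base64-decoding each set and prefixing an Annex-B start code (the function name is mine):

```python
import base64

def sprop_to_annexb(sprop: str) -> bytes:
    """Convert an SDP sprop-parameter-sets value (comma-separated
    base64-encoded NAL units, typically SPS then PPS, per RFC 6184)
    into Annex-B bytes that can be prepended to the raw H.264 stream
    so a decoder can initialize without in-band SPS/PPS."""
    out = b""
    for b64 in sprop.split(","):
        out += b"\x00\x00\x00\x01" + base64.b64decode(b64)
    return out
```

If the camera’s SDP carries sprop-parameter-sets, the missing in-band SPS/PPS could in principle be reconstructed this way on the receiving side, without reconfiguring the customers’ cameras.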
OK. We are trying to resolve this issue; please wait for our future release. Thanks.
Fixed in DeepStream 6.1.1, topic closed.