I encountered an issue with nvv4l2decoder, which I noticed is automatically selected by decodebin when reading an RTSP input stream in a GStreamer pipeline.
I noticed that if the input stream from the RTSP server (a ZED camera in my case) is in I420 format, the RTSP I/O pipeline from deepstream_python_apps runs smoothly. However, if the stream is in YUV444 (4:4:4), decoding fails with:
Error String : Feature not supported on this GPU
Error Code : 801
Error: gst-resource-error-quark: Failed to process frame. (1): gstv4l2videodec.c(2273): gst_v4l2_video_dec_handle_frame (): /GstPipeline:pipeline0/GstBin:source-bin-00/GstURIDecodeBin:uri-decode-bin/GstDecodeBin:decodebin0/nvv4l2decoder:nvv4l2decoder0:
Maybe be due to not enough memory or failing driver
As an additional source of information, I get this message on the server terminal:
x264 [error]: baseline profile doesn't support 4:4:4
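That server-side error comes from x264's baseline profile only accepting 4:2:0 input. One way around it is to convert to I420 before the encoder on the server side. A minimal sketch (the source element and sink here are placeholders, not the actual ZED server pipeline):

```shell
# Hypothetical server-side sketch: force I420 before x264enc so the baseline
# profile constraint (no 4:4:4 support) is satisfied. videotestsrc and the
# udpsink destination are placeholders for the real camera source and transport.
gst-launch-1.0 videotestsrc ! \
  videoconvert ! "video/x-raw,format=I420" ! \
  x264enc tune=zerolatency ! h264parse ! \
  rtph264pay ! udpsink host=127.0.0.1 port=5000
```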
Current investigation
I tried several different solutions, but the key point is that so far I have not found a good way to convert an RTSP input stream from one format to another. nvvideoconvert does work with the USB camera demo, but my speculation is that something in the RTSP handling leads to the issue in this case.
I explored all these related issues, without success:
Or you could try changing the encoder input to yuv420 like you did in your first command line
Would it be possible to perform some software decoding from let’s say YUV444 to YUV420 with GStreamer?
Like having an RTSP input source in my original YUV444 format, converting it to YUV420, and then forwarding it so that the rest of the pipeline succeeds?
You can do this. Since nvv4l2decoder has the highest rank, it will be used by default when uridecodebin builds the decode chain. Build the pipeline manually as follows and you can use software decoding.
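The pipeline itself is not shown above; a sketch under the assumption of an H.264 RTSP stream might look like the following, replacing nvv4l2decoder with the software decoder avdec_h264 and then converting to NVMM memory for the downstream DeepStream elements (the RTSP URL is a placeholder):

```shell
# Hypothetical sketch: bypass uridecodebin's automatic choice of nvv4l2decoder
# by building the decode chain manually with the software decoder avdec_h264.
# The RTSP URL and the fakesink terminator are placeholders.
gst-launch-1.0 rtspsrc location=rtsp://<server-ip>:8554/stream ! \
  rtph264depay ! h264parse ! avdec_h264 ! \
  videoconvert ! "video/x-raw,format=I420" ! \
  nvvideoconvert ! "video/x-raw(memory:NVMM),format=NV12" ! \
  fakesink
```

Alternatively, if your GStreamer is 1.18 or newer, you can demote the hardware decoder without rebuilding the pipeline by setting the environment variable `GST_PLUGIN_FEATURE_RANK=nvv4l2decoder:NONE`, so that uridecodebin falls back to a software decoder on its own.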