Using nvvidconv in two different pipelines, getting "nvbuf_utils: nvbuffer Payload Type not supported"

Hi,

My project uses two cameras, one CSI and one USB. I have two issues, and I am posting them together because I suspect they are related.

(1) I cannot find a GStreamer pipeline that will read from the USB camera using MJPEG. Supported modes for the camera are:

$ v4l2-ctl -d /dev/video1 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'MJPG' (compressed)
	Name        : Motion-JPEG
		Size: Discrete 1920x1080
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x480
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 320x240
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 640x360
			Interval: Discrete 0.033s (30.000 fps)

Pipeline:

gst-launch-1.0 -e v4l2src device=/dev/video1 io-mode=2 \
 ! 'image/jpeg, width=1920, height=1080, framerate=30/1, format=MJPG' \
 ! jpegparse \
 ! nvjpegdec \
 ! 'video/x-raw(memory:NVMM), format=I420' \
 ! nvvidconv \
 ! 'video/x-raw(memory:NVMM), format=NV12' \
 ! nvv4l2h264enc bitrate=4000000 ! h264parse config-interval=-1 \
 ! rtph264pay ! udpsink host=$STREAM_IP port=$STREAM_PORT auto-multicast=true

Result:

Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
nvbuf_utils: nvbuffer Payload Type not supported
NvBufferGetParams failed for src_dmabuf_fd
nvbuffer_transform Failed
gst_nvvconv_transform: NvBufferTransform Failed 
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason error (-5)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...

If I replace everything after ‘format=NV12’ with fakesink, I get the same error. If I move
fakesink earlier in the pipeline, the problem goes away, so nvvidconv appears to be the key
element of the problem. I am using it to force a format that nvv4l2h264enc can accept, and I’m not sure there is any other way to do that.
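
One way to narrow down where the negotiation goes wrong is to look at the caps each element actually advertises on this JetPack release (standard GStreamer tooling; the exact output varies by release):

```shell
# If nvjpegdec does not advertise video/x-raw(memory:NVMM) on its src pad
# in this release, forcing that capsfilter right after it cannot work, and
# the failure only surfaces later, inside nvvidconv.
gst-inspect-1.0 nvjpegdec
gst-inspect-1.0 nvvidconv
```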

(2) The CSI camera works but the image is inverted. If I insert an nvvidconv with the usual flip-method property, the pipeline fails. Pipeline that works:

gst-launch-1.0 -e nvarguscamerasrc bufapi-version=1  \
 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' \
 ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 \
 ! nvinfer config-file-path=$DEEPSTREAM/samples/configs/deepstream-app/config_infer_primary_nano.txt batch-size=1 unique-id=1 \
 <more stuff that works fine...>

Pipeline that fails:

gst-launch-1.0 -e nvarguscamerasrc bufapi-version=1  \
 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' \
 ! nvvidconv flip-method=2 \
 <... from above; can just put fakesink here with same results>

Result:

Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 2 
   Output Stream W = 1920 H = 1080 
   seconds to Run    = 0 
   Frame Rate = 29.999999 
GST_ARGUS: PowerService: requested_clock_Hz=13608000
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
nvbuf_utils: nvbuffer Payload Type not supported
NvBufferGetParams failed for src_dmabuf_fd
nvbuffer_transform Failed
gst_nvvconv_transform: NvBufferTransform Failed 
ERROR: from element /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstNvArgusCameraSrc:nvarguscamerasrc0:
streaming stopped, reason error (-5)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Interrupt while waiting for EOS - stopping pipeline...
Execution ended after 0:00:03.180131555
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
GST_ARGUS: Cleaning up

In both cases, nvvidconv seems to be the culprit. Incidentally, if I try bufapi-version=0 instead, I get: "ERROR: from element /GstPipeline:pipeline0/GstNvStreamMux:m: Input buffer number of surfaces (0) must be equal to mux->num_surfaces_per_frame (1). Set nvstreammux property num-surfaces-per-frame appropriately"

Lastly… just wondering why we have nvvidconv and nvvideoconvert, with similar (but not identical) capabilities and descriptions. At the very least, these should have more distinctive names. Or they could be combined into one (working) element.

Hi,
For the USB camera outputting MJPEG, please try:

gst-launch-1.0 -e v4l2src device=/dev/video1 io-mode=2 \
 ! 'image/jpeg, width=1920, height=1080, framerate=30/1' \
 ! nvjpegdec \
 ! 'video/x-raw, format=I420' \
 ! nvvidconv \
 ! 'video/x-raw(memory:NVMM), format=NV12' \
 ! nvv4l2h264enc bitrate=4000000 ! h264parse config-interval=-1 \
 ! rtph264pay ! udpsink host=$STREAM_IP port=$STREAM_PORT auto-multicast=true

There is also an enhancement patch. Please check:
https://devtalk.nvidia.com/default/topic/1049311/jetson-agx-xavier/nvjpegdec-slower-then-jpegdec-in-gstreamer/post/5348034/#5348034
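
A hedged alternative, based on the linked topic (which found the CPU jpegdec can outperform nvjpegdec on some releases): decode on the CPU and let nvvidconv do the single copy into NVMM. This is a sketch to try, not a verified fix; whether jpegdec keeps up at 1080p30 depends on the platform.

```shell
# CPU JPEG decode sidesteps the NVMM caps question at the decoder entirely;
# nvvidconv then uploads the raw frames into NVMM for the encoder.
gst-launch-1.0 -e v4l2src device=/dev/video1 io-mode=2 \
 ! 'image/jpeg, width=1920, height=1080, framerate=30/1' \
 ! jpegdec \
 ! nvvidconv \
 ! 'video/x-raw(memory:NVMM), format=NV12' \
 ! nvv4l2h264enc bitrate=4000000 ! h264parse config-interval=-1 \
 ! rtph264pay ! udpsink host=$STREAM_IP port=$STREAM_PORT auto-multicast=true
```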

If you run the DeepStream SDK, please use the nvvideoconvert plugin. nvvideoconvert is a general plugin that works on both Jetson platforms and desktop GPUs. The nvvidconv plugin is only for multimedia use cases on Jetson platforms without the DeepStream SDK.
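
To see which capabilities each converter actually exposes on a given installation (standard GStreamer tooling; the property lists differ between JetPack/DeepStream releases):

```shell
# gst-inspect-1.0 prints an element's full property list; grep shows
# whether a flip-method property exists on each converter in your release.
gst-inspect-1.0 nvvidconv | grep -i flip
gst-inspect-1.0 nvvideoconvert | grep -i flip
```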

Thanks DaneLLL, that worked!

Thanks for the info on the video converter elements, that distinction isn’t clear in the docs. I am going to be adding DeepStream elements eventually - will that conflict with the use of nvvidconv here? I tried to build these pipelines with just nvvideoconvert but it does not seem to have the capabilities I need (like image flip).