Gstreamer with v4l2src and nvivafilter

Hi all,

I have successfully written a CUDA process that modifies video coming in from the included MIPI camera, using the following command:

gst-launch-1.0 nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)RGBA' ! nvoverlaysink display-id=0 -e

I wanted to manipulate the pixels in RGB, and I discovered that simply adding "format=(string)RGBA" to the caps near the end of the pipeline seems to work. I can also scale the image there. The RGBA conversion and the scaling appear to happen before the frames reach libnvsample_cudaprocess, even though those caps are listed after nvivafilter in the pipeline. Could someone explain why this works?

Now I would like to make this work with a v4l2src (USB webcam) source. I'm not sure whether it is a video format issue or whether this is simply not supported yet. I get the following error:

gst-launch-1.0 v4l2src device=/dev/video0 ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), width=(int)640, height=(int)480, format=(string)RGBA' ! nvoverlaysink display-id=0 -e
Setting pipeline to PAUSED ...
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingPipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data flow error.
Additional debug info:
gstbasesrc.c(2948): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming task paused, reason error (-5)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...

Any assistance would be appreciated. Thanks!

Hello, Undertow10:
You can check the capabilities of 'nvivafilter' with gst-inspect-1.0. The plugin works with 'nvcamerasrc' or omxh264dec.
As for v4l2src, it delivers buffers in user-space memory, so you can write your CUDA processing code against those buffers directly. Refer to the CUDA sample code for details.

br
ChenJian

Hi there,

We had the same problem and found a way to get v4l2src and nvivafilter to work in a GStreamer pipeline.

It appears that the capsfilter placed just before nvivafilter MUST NOT contain a framerate. Otherwise it causes an "Internal data flow error" and the pipeline aborts.

An example of a pipeline that worked for us (default GStreamer 1.8.0 / L4T r24.2.1, with v4l2src on our custom driver for the tc358840):

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw, width=3840, height=2160, format=UYVY, framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM), width=3840, height=2160, format=NV12' ! nvtee ! nvivafilter cuda-process=true pre-process=true post-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink display-id=0 -e