V4l2src & nvvideoconvert connection issue

Please provide complete information as applicable to your setup.

• Hardware Platform (DGPU)
• DeepStream Version - 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only) - 535
• Issue Type( questions, new requirements, bugs) - Questions
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

[Note: I am using NV Docker container]

Hi,

I am trying to run a DeepStream GStreamer pipeline using a V4L2 camera.

Some points to note: I am able to successfully do the following:

  1. Able to run “deepstream-app -c source1_usb_dec_infer_resnet_int8.txt”

  2. Able to check the device with v4l2-ctl -d /dev/video0 --list-formats-ext. Below is the output:
    ioctl: VIDIOC_ENUM_FMT
    Type: Video Capture

    [0]: 'MJPG' (Motion-JPEG, compressed)
     Size: Discrete 1280x720
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 160x120
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 176x144
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 320x240
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 352x288
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 640x480
     	Interval: Discrete 0.033s (30.000 fps)

    [1]: 'YUYV' (YUYV 4:2:2)
     Size: Discrete 1280x720
     	Interval: Discrete 0.100s (10.000 fps)
     Size: Discrete 160x120
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 176x144
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 320x240
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 352x288
     	Interval: Discrete 0.033s (30.000 fps)
     Size: Discrete 640x480
     	Interval: Discrete 0.033s (30.000 fps)

Now, when I try to run the below pipeline

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=YUY2,width=640,height=480,framerate=30/1' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! fakesink

I am getting the error below:
"WARNING: erroneous pipeline: could not link v4l2src0 to nvvideoconvert0, nvvideoconvert0 can't handle caps video/x-raw, format=(string)YUY2, width=(int)640, height=(int)480, framerate=(fraction)30/1"

Kindly clarify.

What’s the format of your camera video?

It is 'YUYV' (YUYV 4:2:2). v4l2-ctl -d /dev/video0 --list-formats-ext returns the output below.

ioctl: VIDIOC_ENUM_FMT
Type: Video Capture

[0]: 'MJPG' (Motion-JPEG, compressed)
	Size: Discrete 1280x720
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 160x120
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 176x144
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 320x240
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 352x288
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 640x480
		Interval: Discrete 0.033s (30.000 fps)

**[1]: 'YUYV' (YUYV 4:2:2)**
	Size: Discrete 1280x720
		Interval: Discrete 0.100s (10.000 fps)
	Size: Discrete 160x120
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 176x144
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 320x240
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 352x288
		Interval: Discrete 0.033s (30.000 fps)
	Size: Discrete 640x480
		Interval: Discrete 0.033s (30.000 fps)

But you set the format to YUY2 in your pipeline. Could you try to change that and run the pipeline?
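One way to see exactly which video/x-raw caps v4l2src offers for this device, and the format string GStreamer uses for them, is to let negotiation print them verbosely. A diagnostic sketch, assuming the same /dev/video0 device:

gst-launch-1.0 -v v4l2src device=/dev/video0 num-buffers=10 ! fakesink

The -v output lists the negotiated caps; the format field shown there is the string the capsfilter in the pipeline needs to match.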

But it is still giving the same error
WARNING: erroneous pipeline: could not link v4l2src0 to nvvideoconvert0, neither element can handle caps video/x-raw, format=(string)YUYV, width=(int)640, height=(int)480, framerate=(fraction)30/1

BTW, earlier I had set it to YUY2 (even though my format is YUYV) based on one of the comments from an NVIDIA member on this post.

However, even if I give YUYV, it still throws the same error.

Command used this time:
gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=YUYV,width=640,height=480,framerate=30/1' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! fakesink

OK. Could you refer to 28.3.2 of the FAQ and try to add a videoconvert first?
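For reference, that suggestion amounts to inserting videoconvert and a system-memory NV12 capsfilter in front of nvvideoconvert. A sketch, reusing the device and caps values from the earlier posts:

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=YUY2,width=640,height=480,framerate=30/1' ! videoconvert ! 'video/x-raw,format=NV12' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! fakesink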

It worked.
Thanks for the support.

BTW, using the same camera setup:

When I run “deepstream-app -c source1_usb_dec_infer_resnet_int8.txt”, the video runs smoothly without any lag.

However, when I try the gst pipeline below, the video lags a lot and is not smooth, and I also get a message like "WARNING: from element /GstPipeline:pipeline0/GstEglGlesSink:eglglessink0: A lot of buffers are being dropped."

gst-launch-1.0 v4l2src device=/dev/video0 ! 'video/x-raw,format=YUY2,width=640,height=480,framerate=30/1' ! videoconvert ! 'video/x-raw,format=NV12' ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt batch-size=1 unique-id=1 ! nvtracker ll-lib-file=/opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so ! nvmultistreamtiler rows=1 columns=1 width=1280 height=720 ! nvvideoconvert ! nvdsosd ! nveglglessink

Am I missing something in the pipeline?

Thanks

You can try to set the same parameters on each plugin as the deepstream-app config uses. On nvstreammux, for example, you can set batched-push-timeout=40000, live-source=1, etc.
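For example, setting those on the nvstreammux in the pipeline above would look roughly like this (a sketch; 40000 is the batching timeout in microseconds, i.e. 40 ms):

nvstreammux name=m batch-size=1 width=1280 height=720 live-source=1 batched-push-timeout=40000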

Thanks.
When I set sync=0, it worked.
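For completeness, that presumably means setting sync=0 on the sink element, i.e. ending the pipeline above with something like:

... ! nvdsosd ! nveglglessink sync=0

sync=0 disables clock synchronization on the sink, so frames are rendered as they arrive instead of being dropped when they are late.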

