How can I construct a GStreamer pipeline for my V4L2 camera?

• Hardware Platform (Jetson / GPU) : Nuvo IPC / RTX2080
• DeepStream Version : N/A
• JetPack Version (valid for Jetson only) : N/A
• TensorRT Version : N/A
• NVIDIA GPU Driver Version (valid for GPU only) : 460.91.03
• Issue Type (questions, new requirements, bugs) : questions

Hello,
I’d like to use GStreamer to capture video frames (RGB/BGR) from my V4L2 camera.
The output of v4l2-ctl is shown below:

$ v4l2-ctl -d /dev/video0 --all

Driver Info (not using libv4l2):
	Driver name   : basa
	Card type     : FPD1-0103
	Bus info      : PCI:0000:09:00.0
	Driver version: 5.4.143
	Capabilities  : 0x84200001
		Video Capture
		Streaming
		Extended Pix Format
		Device Capabilities
	Device Caps   : 0x04200001
		Video Capture
		Streaming
		Extended Pix Format
Priority: 2
Video input : 0 (FPD1-0103: ok)
Format Video Capture:
	Width/Height      : 1920/1080
	Pixel Format      : 'YUYV'
	Field             : None
	Bytes per Line    : 3840
	Size Image        : 4147232
	Colorspace        : JPEG
	Transfer Function : Default (maps to sRGB)
	YCbCr/HSV Encoding: Default (maps to ITU-R 601)
	Quantization      : Default (maps to Full Range)
	Flags             :

But I have no idea how to construct the pipeline string that should go to gst-launch-1.0.
Could you suggest a good GStreamer pipeline string for my camera?

I’d like to use as much of the H/W acceleration provided by the NVIDIA RTX 2080 as possible.
The source would be the camera above.
The sink should be appsink, but with gst-launch-1.0 the sink can be fakesink (e.g. gst-launch-1.0 nvv4l2camerasrc device=/dev/video0 ! 'video/x-raw(memory:NVMM),format=UYVY,width=3840,height=2160,framerate=30/1' ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! fakesink -v).
The application that receives data from the appsink must receive RGB or BGR data; OpenCV will then be applied to this data to run various vision algorithms. A minimal sketch of the consuming side is included below.
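
For reference, here is a CPU-only sketch of how I plan to consume the frames in OpenCV, assuming OpenCV is built with GStreamer support (the pipeline string and the appsink properties are my own guesses based on the v4l2-ctl output above; the hardware-accelerated conversion is the part I am asking about):

import cv2

# CPU-only pipeline, just to illustrate the appsink side; the camera
# reports YUYV (GStreamer format name: YUY2) at 1920x1080 above.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,format=YUY2,width=1920,height=1080 ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if not cap.isOpened():
    raise RuntimeError("failed to open GStreamer pipeline")

while True:
    ok, frame = cap.read()  # frame is a 1080x1920x3 BGR numpy array
    if not ok:
        break
    # ... run the OpenCV vision algorithms on `frame` here ...

cap.release()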

Thank you!

You can try to replace the nvvidconv and videoconvert plugins with the nvvideoconvert plugin; nvvidconv is Jetson-only, while nvvideoconvert provides hardware-accelerated conversion on dGPU setups such as your RTX 2080.
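
For example, something along these lines (untested; the YUY2/1920x1080 caps come from your v4l2-ctl output, and plain v4l2src replaces the Jetson-only nvv4l2camerasrc; if the application strictly needs packed BGR rather than BGRx, keep a trailing videoconvert for that final step):

gst-launch-1.0 v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=1920,height=1080 ! nvvideoconvert ! video/x-raw,format=BGRx ! fakesink -v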


Thank you very much!
