Streaming with gst v4l2src - Can't use a pipeline + Pixel packing fix

I’m using the qv4l2 tool to stream video from a custom camera (RAW10 grayscale). The configuration in the tool is the following:


Anyway, I’m not able to make any pipeline work, although it should be possible, since the tool works fine… Any suggestion for creating a pipeline that can make the camera work (based on the qv4l2 tool configuration)?

Moreover, I’d like to know how I can fix the pixel packing when capturing RAW video data. At the moment, as I’m using RAW10, the pixel packing is the following (based on the TRM):

bit:  15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00
data:  0  0 d9 d8 d7 d6 d5 d4 d3 d2 d1 d0 d9 d8 d7 d6

In a previous post, it was suggested that I build a GStreamer plugin or use an appsink, but I’d like to know if there is any other way to take this pixel packing into consideration.
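If post-processing in software turns out to be the only option, a plain C step could undo the packing on each captured word. This is only a sketch based on my reading of the TRM layout quoted above (bits 13..4 hold d9..d0, and bits 3..0 replicate d9..d6); the function names are mine, not from any NVIDIA API:

```c
#include <stdint.h>

/* Recover the 10-bit sample from one packed 16-bit little-endian word,
 * assuming the TRM layout above: bits 13..4 = d9..d0, bits 3..0 = d9..d6. */
static inline uint16_t unpack_raw10(uint16_t packed)
{
    return (uint16_t)((packed >> 4) & 0x3FF);
}

/* Expand the 10-bit value to full GRAY16_LE range; replicating the top
 * bits into the low bits maps 0x3FF to exactly 0xFFFF. */
static inline uint16_t raw10_to_gray16(uint16_t packed)
{
    uint16_t v = unpack_raw10(packed);
    return (uint16_t)((v << 6) | (v >> 4));
}
```

Running this over every 16-bit word of a captured frame would give a buffer that GRAY16_LE consumers display at the expected brightness.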

Hi,
Please run following command to get information about the camera:

$ v4l2-ctl -d /dev/video1 --list-formats-ext

I get the following:

nvidia@nvidia-desktop:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'Y10 '
	Name        : 10-bit Greyscale
		Size: Discrete 1920x1080
			Interval: Discrete 0.040s (25.000 fps)

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'Y16 '
	Name        : 16-bit Greyscale
		Size: Discrete 1920x1080
			Interval: Discrete 0.040s (25.000 fps)

Anyway, the framerate is defined in the driver in this way:

static const int cam2000_25fps[] = {
	25,
};
static const struct camera_common_frmfmt cam2000_frmfmt[] = {
	{{1920,1080}, cam2000_25fps, 1, 0, CAM2000_MODE}
};

I only set it this way because the camera_common_sensor_ops framework requires it; it is essentially a “dummy” value, since even if I increase the framerate on the camera side, capturing with qv4l2 or v4l2-ctl still works correctly.
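For readers unfamiliar with that initializer, the fields can be read off against a simplified stand-in for the kernel struct. This is only a sketch to make the snippet above legible; the real definition of camera_common_frmfmt lives in the L4T kernel headers and may differ in detail:

```c
#include <stddef.h>

/* Simplified stand-in for the kernel's camera_common_frmfmt so the
 * driver initializer above can be read field by field. Not the real
 * kernel definition. */
struct frmsize { int width; int height; };
struct frmfmt {
    struct frmsize size;    /* active resolution */
    const int *framerates;  /* list of supported fps values */
    int num_framerates;     /* number of entries in that list */
    int hdr_en;             /* HDR enable flag */
    int mode;               /* sensor mode index */
};

static const int cam2000_25fps[] = { 25 };
static const struct frmfmt cam2000_frmfmt[] = {
    /* {1920x1080}, one framerate (25 fps), HDR off, mode 0 */
    { {1920, 1080}, cam2000_25fps, 1, 0, 0 },
};
```

So the `1` in the original initializer is the framerate count, and `0` is the HDR flag, which matches only advertising a single fixed 25 fps interval.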

Looks like the same topic as https://devtalk.nvidia.com/default/topic/1070971
You may try whether the sensor can output GRAY16.

Hi ShaneCCC,
Although both threads are related, I didn’t consider it appropriate to continue in that one, as I’ve made some progress (supporting Y16, for example). Moreover, I also want to discuss the pixel packing in this thread, alongside the v4l2src pipelines.

I’ve tried the following pipeline, but I’m not able to make it work…

nvidia@nvidia-desktop:~$ gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=1920, height=1080, format=(string)GRAY16_LE" ! nvvidconv ! xvimagesink -e
WARNING: erroneous pipeline: could not link v4l2src0 to nvvconv0, nvvconv0 can't handle caps video/x-raw, width=(int)1920, height=(int)1080, format=(string)GRAY16_LE

We can continue in whichever thread you prefer, but I thought adding all this information to the previous post was too much for a single thread.

I don’t think nvvidconv supports GRAY16_LE.
It looks like nvvidconv and xvimagesink only support GRAY8. Check whether you can modify the v4l2src side to output GRAY8, or whether you need some other element for it.
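Since nvvidconv rejects GRAY16_LE, one thing worth trying before touching the driver is letting videoconvert handle the 16-bit grayscale on the CPU and skipping nvvidconv entirely. This is an untested sketch (hardware-dependent; device path and caps taken from the commands above):

```shell
# CPU-side conversion: videoconvert accepts GRAY16_LE, unlike nvvidconv.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  "video/x-raw,width=1920,height=1080,format=GRAY16_LE" ! \
  videoconvert ! xvimagesink -e
```

The trade-off is that videoconvert runs on the CPU rather than the hardware converter, so it may not keep up at 1920x1080.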

I’ve added support for GRAY8, so now I get:

nvidia@nvidia-desktop:~$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'GREY'
	Name        : 8-bit Greyscale
		Size: Discrete 1920x1080
			Interval: Discrete 0.040s (25.000 fps)

	Index       : 1
	Type        : Video Capture
	Pixel Format: 'Y16 '
	Name        : 16-bit Greyscale
		Size: Discrete 1920x1080
			Interval: Discrete 0.040s (25.000 fps)

I’ve also modified it in the DT, so I have:

mode0 {
    ...
    csi_pixel_bit_depth = "8";
    mode_type = "raw";
    pixel_phase = "grey";
    ...
}
mode1 {
    ...
    csi_pixel_bit_depth = "16";
    mode_type = "raw";
    pixel_phase = "y16";
    ...
}

I also set the output format of the camera to RAW8.
I’m able to stream it with qv4l2 and GRAY8 (although it seems a bit faulty).

Anyway, when launching the pipeline with GRAY8, it freezes:

nvidia@nvidia-desktop:~$ gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw, width=1920, height=1080, format=(string)GRAY8" ! nvvidconv ! xvimagesink -e
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
gst_nvvconv_transform: NvBufferTransform not supported 
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Internal data stream error.
Additional debug info:
gstbasesrc.c(3055): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason error (-5)
EOS on shutdown enabled -- waiting for EOS after Error
Waiting for EOS...
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Interrupt while waiting for EOS - stopping pipeline...
Execution ended after 0:00:07.701861269
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

I’ve also tried this pipeline for GRAY16_LE, but it isn’t working either:

nvidia@nvidia-desktop:~$ gst-launch-1.0 v4l2src device="/dev/video0" ! "video/x-raw,width=1920,height=1080,format=GRAY16_LE" ! videoconvert ! video/x-raw,format=P010_10LE ! nvvidconv ! 'video/x-raw(memory:NVMM),format=I420_10LE' ! omxh265enc ! matroskamux ! filesink location= a.mkv
Setting pipeline to PAUSED ...
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Framerate set to : 25 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 8 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 8 
NVMEDIA: H265 : Profile : 1 
^Chandling interrupt.
Interrupt: Stopping pipeline ...
Execution ended after 0:00:19.178748094
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Hi,
Please run the v4l2-ctl command to confirm the source supports GRAY16. Right now it looks like it supports RG10 only. It won’t work if you arbitrarily set a format that isn’t supported.

Actually, I thought it was supporting the GRAY16 format. Anyway, after adding RAW8 support it no longer captures any image through v4l2-ctl. So does that mean that, even though I added Y16 support, it is still using the Y10 format even when I specify GRAY16_LE?

I just checked the TRM; it doesn’t support the gray format directly. You may hack the driver to get the data, and you may need to do some post-processing on it.
What value are you programming into VI_CHn_PIXFMT_FORMAT?

To support both Y10 and Y16, I added entries in vi2_formats.h, vi4_formats.h, and vi5_formats.h, replicating an existing RAW10 format, so I get:
vi2_formats.h -> T_R16_I
vi4_formats.h -> T_R16_I
vi5_formats.h -> T_R16

For the pixel format, I use the following ones:
Y10 -> V4L2_PIX_FMT_Y10
Y16 -> V4L2_PIX_FMT_Y16

By the way, to confirm that it works with Y16, I’ve generated an image for the TX2 with only Y16 support, so I get:

nvidia@nvidia-desktop:~$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'Y16 '
	Name        : 16-bit Greyscale
		Size: Discrete 1920x1080
			Interval: Discrete 0.040s (25.000 fps)
		Size: Discrete 384x216
			Interval: Discrete 0.040s (25.000 fps)

I run the following capture command:

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=Y16 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=200 --stream-to=test.y16

And then play it back with:

gst-launch-1.0 filesrc location=test.y16 blocksize=4147200  ! video/x-raw,format=GRAY16_LE, width=1920, height=1080, framerate=25/1  ! videoconvert ! xvimagesink

And it is working. So I guess Y16 is well supported (although the image still looks darker, but I’m pretty sure that has to do with the pixel packing, which I’m not handling at the moment).
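The darkness is consistent with 10-bit samples sitting in the low bits of each 16-bit word: the brightest possible value is then 1023 out of 65535. A post-processing pass over the captured file could rescale the samples in place. This is a sketch under that assumption (if the replicated-bit packing from the TRM applies instead, the word would need the `>> 4` extraction shown earlier in the thread first):

```c
#include <stddef.h>
#include <stdint.h>

/* Scale 10-bit grayscale samples stored in the low bits of 16-bit
 * little-endian words up to full 16-bit range, in place. The buffer
 * would be e.g. the contents of test.y16 loaded into memory. */
static void scale_y10_to_y16(uint16_t *px, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        uint16_t v = px[i] & 0x3FF;          /* keep the 10 data bits  */
        px[i] = (uint16_t)((v << 6) | (v >> 4)); /* 0x3FF -> 0xFFFF    */
    }
}
```

After this pass, the GRAY16_LE playback pipeline above should show the frames at full brightness instead of compressed into the darkest 1/64 of the range.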