support for grayscale sensors is missing

I split and convert the raw output file to PGM, the simplest image file format, using the following script:

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG12 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=20 --stream-to=test.raw
for i in $(seq 10 19)
do
        for m in 4095 32767 65535
        do
                (
                echo P5
                echo 1920 1080
                echo $m
                dd if=test.raw conv=swab bs=4147200 skip=$i count=1
                ) > frame-$i-swab-$m.pgm
                (
                echo P5
                echo 1920 1080
                echo $m
                dd if=test.raw bs=4147200 skip=$i count=1
                ) > frame-$i-raw-$m.pgm
        done
done

I skip the first 10 frames to give the sensor some time to stabilize, and try every combination of big-endian or little-endian encoding of 12 bits in 2 bytes, with the values either not rescaled (highest pixel value 4095), rescaled to 15 bits (highest pixel value 32767), or rescaled to 16 bits (highest pixel value 65535), but no combination gives me a perfect raw monochrome image.

hello phdm,

could you please attach the *.pgm file? Let's see what it looks like.
thanks

Hi JerryChang,

I’d like to attach a .pgm file, but I can’t find how to do that.
frame-10-raw-65535.pgm.gz (2.28 MB)

You can attach a file to a previous post.
Moving your mouse cursor to the upper right corner of a previous post (while logged in), you will see a paper clip icon for this purpose.

OK, file is attached to comment #16 (as .pgm.gz, because plain .pgm is refused)

It’s the best combination I’ve found of swab or no-swab and maxpixel = 2^12-1, 2^15-1 or 2^16-1.
Surprisingly, 2^16-1 is the best maxval, but that does not match the documentation (2^15-1 or 2^12-1) as I read it.

Have you tried qv4l2? Can you get a correct capture with it?

Furthermore, it may have changed, but I think that the gstreamer v4l2src plugin only worked with 8-bit formats (rggb, maybe gray8).

hello phdm,

I have checked your pgm file.
Could you please update the kernel driver to change TEGRA_STRIDE_ALIGNMENT to 64?

for example,

--- a/drivers/media/platform/tegra/camera/vi/core.h
+++ b/drivers/media/platform/tegra/camera/vi/core.h
@@ -22,8 +22,10 @@
-#define TEGRA_STRIDE_ALIGNMENT 256
+#define TEGRA_STRIDE_ALIGNMENT 64

Hi JerryChang,

here is an image taken with TEGRA_STRIDE_ALIGNMENT 64. I cannot tell if it is better or worse, but it is not good :(

This time, the best looking picture was the raw (bytes not swapped) one.
frame-10-65535-raw.pgm.gz (2.57 MB)

It seems to me that what I get with the command in comment #4 is not the raw output from the sensor, but rather the sensor output processed by some other step (probably the ISP). Is that true?

hello phdm,

the commands in comment #4 dump the raw files, which are processed by the V4L2 media framework.
please also refer to the [Camera Architecture Stack] from [L4T Documentation]-> [NVIDIA Tegra Linux Driver Package]-> [Release 28.2 Development Guide]-> [Camera Development] chapter-> [Camera Software Development Solution] section.
thanks

Sorry for some noise here: the connection to my monochrome sensor was actually faulty: one MIPI lane was garbled. I now have another sensor that does not have that problem. The command in comment #4 (updated for my 12-bit sensor) actually gives a dump of the images sent by the sensor, with pixels stored as 16-bit big-endian values: each 12-bit pixel is shifted left by 4 positions to give a 16-bit value.

After adding the following entries to vi2_video_formats

TEGRA_VIDEO_FORMAT(RAW12, 12, Y12_1X12, 2, 1, T_R16_I,
                        RAW12, Y16_BE, "GRAY16_BE"),
TEGRA_VIDEO_FORMAT(RAW12, 12, Y12_1X12, 1, 1, T_L8,
                        RAW12, GREY, "GRAY8"),

and of course with my driver reporting that its media bus format is MEDIA_BUS_FMT_Y12_1X12, I now have the following list of formats:

nvidia@cam5-ubuntu:~$ v4l2-ctl --list-formats
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'Y16 -BE'
        Name        : 16-bit Greyscale BE

        Index       : 1
        Type        : Video Capture
        Pixel Format: 'GREY'
        Name        : 8-bit Greyscale
nvidia@cam5-ubuntu:~$

Unfortunately, using it in a gstreamer pipeline ‘gst-launch-1.0 v4l2src …’ only gives me 2 fps.

If I let my driver pretend that its media_bus_format is MEDIA_BUS_FMT_SRGGB12_1X12 and use a gstreamer pipeline ‘gst-launch-1.0 nvcamerasrc …’, I get at least 20 fps. Why such poor performance with v4l2src?

Hello JerryChang,

I checked the above command on a freshly installed TX1 devkit with JetPack 3.2.1 (L4T 28.2), equipped with the OV5693 sensor provided by NVIDIA. It confirms a bug in the handling of rawN sensors with N > 8: LSB bits are discarded!

Here is the test I made :

nvidia@tegra-ubuntu:/run/user/1001$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'BG10'
        Name        : 10-bit Bayer BGBG/GRGR
                Size: Discrete 2592x1944
                        Interval: Discrete 0.033s (30.000 fps)
                ...
nvidia@tegra-ubuntu:/run/user/1001$ v4l2-ctl --set-fmt-video=width=2592,height=1944,pixelformat=BG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=20 --stream-to=bg10.raw
<<<<<<<<<<<<<<<<<<<<
nvidia@tegra-ubuntu:/run/user/1001$ hexdump -C bg10.raw
00000000  32 00 65 00 3a 00 5d 00  33 00 62 00 2e 00 5f 00  |2.e.:.].3.b..._.|
00000010  35 00 6d 00 39 00 57 00  3f 00 6e 00 4b 00 5d 00  |5.m.9.W.?.n.K.].|
...

As one can see, all the odd bytes are 0[0-3]. This is unexpected from a 10-bit sensor.
This is confirmed by the following test:

nvidia@tegra-ubuntu:/run/user/1001$ hexdump -Cv bg10.raw | grep -v '0[0-3] .. 0[0-3] .. 0[0-3] .. 0[0-3]  .. 0[0-3] .. 0[0-3] .. 0[0-3] .. 0[0-3]'
0c756000
nvidia@tegra-ubuntu:/run/user/1001$

Do I misunderstand ?
Does the CSI-VI pair truncate the incoming pixels to 8 bits while also replicating the most significant bits ?
Is the image stored in memory in big endian or little endian mode ?
Is the RAW10 value shifted into 16 bits to produce values between 0 and 0xffc0, or are the 10 bits not shifted at all, producing values between 0 and 0x3ff? I have read chapters 29 and 32 of the TX1 TRM, but the experimental results do not seem to match the documentation.

You might read this post.

Hi Honey_Patouceul

Thank you.

I have read https://devtalk.nvidia.com/default/topic/1028611/jetson-tx2/is-there-a-parameter-for-enable-disable-the-tx2-on-board-camera-/post/5274555/#5274555, and this seems to suggest that the raw10 images of the ov5693 sensor, when acquired with

v4l2-ctl -d /dev/video0 --set-fmt-video=width=1920,height=1080,pixelformat=RG10 --set-ctrl bypass_mode=0 --stream-mmap --stream-count=10 --stream-to=test.raw

are actually stored in 16-bit words, in little-endian form, without any shift occurring, and thus with values between 0 and 1023 (2^10-1).

This does not seem to match the TX1 TRM 31.3.2.3 “Raw Pixel Formatting to Memory”/“T_R16_I Formatting for RAW10, RAW12 and RAW14”, where RAW10 pixels are shown written in memory as

[ 0 | 0 | D9| D8| D7| D6| D5| D4| D3| D2| D1| D0| D9| D8| D7| 0 ]

, i.e. shifted 4 places to the left with the low bits filled by replicating some of the most significant bits.

In a CSI->VI->MEM path, how are RAW10 or RAW12 pixels actually transformed and written into memory ?

Sorry, I cannot tell much more, but the 16-bit format may only be used by the stream-to option, which writes to disk.
v4l2-ctl would have read into memory in RG10 format as explained by the TRM, and then produced the 16-bit format for disk storage with stream-to. So the raw file may not be an exact image of the memory stream. It is much easier to read just 2 bytes and get ‘short’ values for further computation than to manage the shifts and masks.

Hello,

I am currently trying to add grayscale format support on a Jetson Xavier with L4T 31.1, following the steps described in the above comment, for the OV24A1B sensor. I started from the driver for the OV5693 sensor.

In ov5693.c, I modified the default data format to match the required format (10-bit grayscale):

#define OV5693_DEFAULT_DATAFMT	MEDIA_BUS_FMT_Y10_1X10

In sensor_common.c, I set the pixel format to V4L2_PIX_FMT_Y10

static int extract_pixel_format(
	const char *pixel_t, u32 *format)
{
	/* force 10-bit grayscale regardless of the pixel_t string */
	*format = V4L2_PIX_FMT_Y10;

	return 0;
}

In camera_common.c, I added a camera common color format for grayscale:

static const struct camera_common_colorfmt camera_common_color_fmts[] = {
	{
		MEDIA_BUS_FMT_SBGGR10_1X10,
		V4L2_COLORSPACE_SRGB,
		V4L2_PIX_FMT_SBGGR10,
	},
	/* Grayscale support */
	{
		MEDIA_BUS_FMT_Y10_1X10,
		V4L2_COLORSPACE_RAW,
		V4L2_PIX_FMT_Y10,
	},
};

And as mentioned in the above comment, in vi2_formats.h, I added these entries to the static const struct tegra_video_format vi2_video_formats:

	TEGRA_VIDEO_FORMAT(RAW10, 10, Y10_1X10, 2, 1, T_R16_I,
				RAW10, Y10, "GRAY10"),

	TEGRA_VIDEO_FORMAT(RAW10, 10, Y10_1X10, 2, 1, T_R16_I,
				RAW10, Y12, "GRAY12"),

	TEGRA_VIDEO_FORMAT(RAW10, 10, Y10_1X10, 2, 1, T_R16_I,
				RAW10, Y16, "GRAY16"),

However, I still get a kernel panic when running this kernel, with the stack trace pointing to nvidia/drivers/media/platform/tegra/camera/vi/channel.c:

[    9.025262] Process kworker/1:0 (pid: 18, stack limit = 0xffffffc3ed81c028)
[    9.032165] Call trace:
[    9.034285] [<ffffff8008b5e8cc>] tegra_channel_fmts_bitmap_init+0x17c/0x238
[    9.040920] [<ffffff8008b60754>] tegra_channel_init_subdevices+0x15c/0x738
[    9.047654] [<ffffff8008b61a1c>] tegra_vi_graph_notify_complete+0x53c/0x678
[    9.053885] [<ffffff8008b45f88>] v4l2_async_test_notify+0x108/0x120
[    9.059911] [<ffffff8008b460cc>] v4l2_async_notifier_register+0x12c/0x1a0
[    9.066287] [<ffffff8008b626a8>] tegra_vi_graph_init+0x210/0x290
[    9.071633] [<ffffff8008b5ca98>] tegra_vi_media_controller_init+0x1a8/0x1e0
[    9.078545] [<ffffff8008536340>] vi5_probe+0x340/0x390
[    9.083096] [<ffffff80087afc50>] platform_drv_probe+0x60/0xc8
[    9.088342] [<ffffff80087ad3dc>] driver_probe_device+0x26c/0x420
[    9.094380] [<ffffff80087ad76c>] __device_attach_driver+0xb4/0x160
[    9.100679] [<ffffff80087aae34>] bus_for_each_drv+0x6c/0xa8
[    9.106363] [<ffffff80087acfb4>] __device_attach+0xcc/0x160
[    9.111965] [<ffffff80087ad894>] device_initial_probe+0x24/0x30
[    9.117913] [<ffffff80087ac108>] bus_probe_device+0xa0/0xa8
[    9.123429] [<ffffff80087ac760>] deferred_probe_work_func+0x90/0xe0
[    9.129995] [<ffffff80080d3f60>] process_one_work+0x1e0/0x478
[    9.135769] [<ffffff80080d4444>] worker_thread+0x24c/0x4d0
[    9.141279] [<ffffff80080dabd8>] kthread+0xe8/0x100
[    9.145918] [<ffffff8008083500>] ret_from_fork+0x10/0x50
[    9.151607] ---[ end trace 6425dc7be0d116c3 ]---

Do you have any idea which step(s) I am missing to add grayscale support?

Thanks in advance,

Hello,

I do not have a Xavier, only a TX1 and a TX2, and while the TX1 uses the vi2_formats.h file, the TX2 uses vi4_formats.h. The additions for grayscale must go somewhere similar for Xavier, I surmise.

Also, the stack trace is not the most interesting part of the error message (apart from the name of the function where it happens, ‘tegra_channel_fmts_bitmap_init’ here); there is also a panic message (probably about a null pointer). ‘printk’ is your friend :)

Oops :)

Actually, seeing ‘vi5_probe’ in your stack trace, you probably have a ‘vi5_formats.h’.

Hello phdm,

Thank you for your fast answer.

I put the new formats in vi5_formats.h instead of vi2_formats.h, and that made v4l2-ctl happy, without any kernel panic.

Thanks again,