Jetson Nano driver for 14-bit packed CSI-2 pixel data

Hello,

I am trying to add driver support for a sensor that outputs 14-bit CSI-2 data per the RAW14 format defined in the MIPI Alliance CSI-2 Specification. This pixel format does not appear to be part of the L4T kernel and I am hoping someone can offer assistance on how to add it.

The CSI-2 data is packed and can be described briefly as follows:

  • Every 7 bytes of data contain 4 pixels (7 bytes * 8 bits/byte = 56 bits, which holds four 14-bit pixel values)
  • The bits of each pixel are distributed among two or three bytes of the incoming stream and are not contiguous
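
To make the packing concrete, here is a minimal C sketch of unpacking one 7-byte group into four 16-bit values. It assumes the four MSB bytes come first (bits 13:6 of pixels 0-3) followed by three bytes holding the four 6-bit LSB fields packed low-to-high, analogous to the RAW10/RAW12 layouts; the exact bit order should be double-checked against the RAW14 figure in the CSI-2 specification, and the helper names are just placeholders.

#include <stdint.h>
#include <stddef.h>

/*
 * Placeholder helper: unpack one RAW14 group (7 bytes -> 4 pixels).
 * Assumes b[0..3] carry bits 13:6 of pixels 0..3 and b[4..6] carry the
 * four 6-bit LSB fields packed low-to-high (analogous to RAW10/RAW12).
 * Verify the bit order against the MIPI CSI-2 specification.
 */
static void unpack_raw14_group(const uint8_t b[7], uint16_t px[4])
{
    uint32_t lsb = (uint32_t)b[4] | ((uint32_t)b[5] << 8) | ((uint32_t)b[6] << 16);

    for (int i = 0; i < 4; i++)
        px[i] = ((uint16_t)b[i] << 6) | ((lsb >> (6 * i)) & 0x3F);
}

/* Unpack one image line; width must be a multiple of 4 pixels. */
static void unpack_raw14_line(const uint8_t *src, uint16_t *dst, size_t width)
{
    for (size_t x = 0; x < width; x += 4, src += 7, dst += 4)
        unpack_raw14_group(src, dst);
}

For a 320-pixel line this consumes 320 / 4 * 7 = 560 bytes and produces 320 16-bit values.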

I started with the driver developed by @adrien.leroy2 in the topic Jetson Nano MIPI CSI-2 without I2C from FPGA. It works well for 8-bit raw video. Here’s what I’ve tried so far to add support for the 14-bit data:

  • Added another mode to the dtsi file for the 14-bit data. Some of the properties are shown here:
active_w = "320";
active_h = "240";
dynamic_pixel_bit_depth = "14";
csi_pixel_bit_depth = "14";
mode_type = "gray14";
pixel_t = "gray14";
line_length = "560";
  • Added support for this mode to the driver files

  • Added this to kernel-4.9/include/uapi/linux/media-bus-format.h (is this name OK or should it be different?):

#define MEDIA_BUS_FMT_Y14_1X14            0x202d
  • Added a definition of V4L2_PIX_FMT_Y14 in videodev2.h and added handling for it in sensor_common.c and v4l2-ioctl.c (a sketch of the fourcc define follows this list).

  • Added to camera_common_color_fmts in camera_common.c:

{
    MEDIA_BUS_FMT_Y14_1X14,
    V4L2_COLORSPACE_RAW,
    V4L2_PIX_FMT_Y14,
},
  • Added the following to vi2_video_formats in vi2_formats.h (I think using T_R16_I here is wrong; this is part of my fundamental question: how do I go about defining this pixel format?):
TEGRA_VIDEO_FORMAT(RAW14, 14, Y14_1X14, 2, 1, T_R16_I, RAW14, Y14, "GRAY14"),
  • Added a pixfmt-y14.rst documentation file (though I’m pretty sure it isn’t written properly yet)
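
For reference on the videodev2.h item above: the definition comes down to registering a new fourcc. A minimal sketch (and, as far as I can tell, the same name and fourcc that later mainline kernels use for 14-bit greyscale):

#define V4L2_PIX_FMT_Y14    v4l2_fourcc('Y', '1', '4', ' ') /* 14-bit greyscale */

This matches the 'Y14 ' fourcc that v4l2-ctl reports for the device below.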

I think I need to define a new pixel format (I’ve looked in the TRM at Pixel Formats and it’s unclear how to do this for this type of packed data).

Should I define a different MEDIA_BUS_FMT such as MEDIA_BUS_FMT_Y14_4X7 (currently I have MEDIA_BUS_FMT_Y14_1X14 defined in /kernel-4.9/include/uapi/linux/media-bus-format.h)?

I think I also need a new pixel format to use instead of T_R16_I, but I’m not sure how to name it.

Lastly, where would I put a parser to unpack/extract the pixel data from the incoming CSI-2 stream?

I apologize for the many questions, but I wanted to give a full picture of what I’ve tried so far. At this point I do receive frames when I put the camera into 14-bit mode using this command, but the data isn’t right:

sudo v4l2-ctl -d /dev/video0 --set-fmt-video=width=320,height=240,pixelformat='Y14 ' --set-ctrl bypass_mode=0 --stream-mmap --stream-count=200 --stream-to=test14_f200.raw
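
As a rough sanity check on those frames: if the VI stores each pixel in its own 16-bit word (which is what T_R16_I appears to mean for the existing RAW10/RAW12 entries), each frame in test14_f200.raw should occupy 320 * 240 * 2 = 153,600 bytes, whereas the packed CSI-2 payload is only 320 * 14 / 8 = 560 bytes per line, or 134,400 bytes per frame (ignoring any line-stride padding). Comparing the file size against these two numbers should show which layout is actually landing in memory.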

The formats shown for the video device are here:

eng@nano2:~$ v4l2-ctl -d /dev/video0 --list-formats
ioctl: VIDIOC_ENUM_FMT
        Index       : 0
        Type        : Video Capture
        Pixel Format: 'GREY'
        Name        : 8-bit Greyscale

        Index       : 1
        Type        : Video Capture
        Pixel Format: 'Y14 '
        Name        : 14-bit Greyscale

Thanks to anyone who can provide some assistance!

Looks like the modifications are correct for adding the new formats.
You can check the memory layout for RAW14 and handle the input data in software.

Thank you @ShaneCCC for the quick reply.

Our camera uses a Toshiba parallel-to-CSI-2 bridge. We tweaked the registers for the 14-bit mode and now the 14-bit video looks right. I thought I’d have to do the unpacking somewhere, but it seems to be getting done. When you wrote “check the memory layout for RAW14”, how/where would I do this?

The thing that’s puzzling to me is that the RAW14 format in the source code seems to be defined as the lower 14 bits of a 16-bit word, but the format of the CSI-2 data coming from our camera to the Nano is a packed format (7 bytes for 4 pixels).

I’m using v4l2-ctl as follows to save frames to a file:

sudo v4l2-ctl -d /dev/video0 --set-fmt-video=width=320,height=240,pixelformat="Y14 " --set-ctrl bypass_mode=0 --stream-mmap --stream-count=200 --stream-to=test.raw
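
To see which layout actually ends up in test.raw, one quick check is to read the file back as 16-bit words and see whether any values exceed the 14-bit range. Here is a rough sketch (the default file name is just the one from the command above):

/* raw14_inspect.c - rough sketch: dump the first few 16-bit words of a
 * capture and count how many exceed the 14-bit maximum (0x3FFF). */
#include <stdio.h>
#include <stdint.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "test.raw";
    FILE *f = fopen(path, "rb");
    uint16_t w;
    unsigned long total = 0, over14 = 0;

    if (!f) {
        perror(path);
        return 1;
    }
    while (fread(&w, sizeof(w), 1, f) == 1) {
        if (total++ < 16)
            printf("%04x ", w);   /* show the first 16 words */
        if (w > 0x3FFF)
            over14++;
    }
    fclose(f);
    printf("\n%lu of %lu 16-bit words exceed 14 bits\n", over14, total);
    return 0;
}

If almost everything stays at or below 0x3FFF, the capture is laid out one pixel per 16-bit word; if the values span the full 16-bit range like noise, the packed 7-bytes-per-4-pixels payload is probably being written to memory unchanged.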

Where in the stream would the unpacking be happening?

Thank you!


CSI/VI doesn’t have an unpacking function, so you may need to handle it in software.
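
If the packed payload is what ends up in memory, the per-line unpack helper sketched in the first post could be run over each captured line as a post-processing step on the .raw file, with each 560-byte packed line expanding to 320 16-bit pixels.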
