Image Sensor Compressed RAW 12bit Format to Decoded 16bit Format

I have an image sensor that supports a compressed 12-bit output in SE HDR mode over MIPI. The compressed 12-bit data needs to be decompressed/decoded according to the following table:

I noticed from the Parker TRM, section 27.10.3 “Companding Module”, that the hardware appears to be able to do the necessary companding/decompanding, and it lists support for the OV10640 and the AR0231. However, I could not find any examples for either sensor, and the TRM text describing the Companding Module is not detailed enough to support an implementation. It also looks like there is a kernel ioctl interface

#define VI_CAPTURE_SET_COMPAND _IOW('I', 8, struct vi_capture_compand)

to configure the companding base, scale, and offset parameters.

  1. Does the Tegra X2 VI module support decompanding/decompressing RAW 12bit MIPI data to 16bit data?
  2. After decompanding, does the output pass to the ISP for additional processing?
  3. Does the kernel ioctl interface properly support configuring and enabling decompanding?
  4. Are there any example programs (i.e., for the OV10640 or the AR0231) that configure the decompanding from userspace with the ioctl or some other mechanism?
  5. Could you please provide the VI_PIXFMT_COMPAND_KNEE BASE, SCALE, and OFFSET values for one or two rows of my table so I can understand how to translate my entire table into the VI_PIXFMT_COMPAND_KNEE_CFG… values?

hello JDSchroeder,

there’s a camera software feature, PWL-WDR.
PWL-WDR achieves wide dynamic range by sending compressed data into the camera pipeline; an internal ISP stage then decompresses the content.
you may also check the PWL compression function as an example. we have validated the PWL functionality with the Sony IMX185 sensor.

That does look promising and right in line with what I’m trying to do with my sensor.

A couple questions:

  1. I cannot find any references to the DT properties “num_control_point” or “dynamic_pixel_bit_depth” in the kernel sources (they are in the IMX185 dtsi files). Is the PWL WDR handled completely in libargus and the “Camera Core” in userspace?
  2. This seems very similar to the TRM section 27.10.3 Companding Module. Is the PWL WDR using the VI Companding Module hardware underneath?
  3. If not, is the PWL WDR being efficiently done with hardware acceleration/support or in software?
  4. Just want to confirm: the description of PWL HDR with “12 bit Output” and a linear “Compression” function refers to the image sensor, not the NVIDIA hardware, correct? The NVIDIA hardware takes the 12-bit output as “input” and “decompresses” it to 16-bit output, right?
  5. Does the PWL WDR support work when going through the v4l2src path, only through nvarguscamerasrc, or both?

hello JDSchroeder,

the PWL-WDR decompression solution is a software approach on TX2; it is handled by the CUDA accelerator.
there are hardware units to support PWL-WDR decompression on Jetson Xavier.

for that example of IMX185 (dynamic_pixel_bit_depth=16; csi_pixel_bit_depth=12),
CSI/VI receives 12-bit data buffers, and CUDA decompresses them into 16-bit and performs the PWL processing;
after that, the results are converted into the ISP buffer format for further processing.

only nvarguscamerasrc can be enabled to support the PWL-WDR functionality.

Okay, I have updated my DT mode with:

dynamic_pixel_bit_depth = "16";
csi_pixel_bit_depth = "12";
mode_type = "bayer_wdr_pwl";
num_control_point = "13";
control_point_x_0 = "0";
control_point_x_1 = "32";
control_point_x_2 = "64";
control_point_x_3 = "128";
control_point_x_4 = "256";
control_point_x_5 = "512";
control_point_x_6 = "1024";
control_point_x_7 = "2048";
control_point_x_8 = "4096";
control_point_x_9 = "8192";
control_point_x_10 = "16384";
control_point_x_11 = "32768";
control_point_x_12 = "65536";
control_point_y_0 = "0";
control_point_y_1 = "256";
control_point_y_2 = "512";
control_point_y_3 = "768";
control_point_y_4 = "1024";
control_point_y_5 = "1280";
control_point_y_6 = "1536";
control_point_y_7 = "1792";
control_point_y_8 = "2048";
control_point_y_9 = "2560";
control_point_y_10 = "3072";
control_point_y_11 = "3584";
control_point_y_12 = "4096";

I’m seeing a noticeable difference in my camera images when I change mode_type from “bayer” to “bayer_wdr_pwl”. However, I’m not sure the PWL control points I have defined are doing anything. I have swapped the X and Y values and see no change. If I reduce num_control_point to “4”, I see no change. If I remove num_control_point from the DT entirely, I don’t even get an error. So I am thinking this PWL WDR is not actually being applied.

I’m using the argus_camera sample application and L4T 32.4.2 for testing. Is there a way I can check that my control points and decompression curve are actually getting applied?

Does argus_camera work to do the PWL WDR decompression or do I need to use a different program?

hello JDSchroeder,

the DT properties only report the corresponding configs; you should check the sensor specification and enable the PWL-WDR sensor mode to get the configuration. if you look into the sensor drivers, for example kernel/nvidia/drivers/media/i2c/imx185_mode_tbls.h, there’s an HDR sensor mode that uses the PWL WDR technology;

it’s the internal camera software stack that handles the PWL-WDR decompression; you can use argus_camera to select the different sensor modes and check the results.

I have already verified that the correct image sensor mode is being selected/used by argus_camera and I know how to switch between the image sensor modes using argus_camera.

My sensor is sending out compressed 12-bit data. I do not need help with how to configure my sensor, as I have the datasheet and access to all of the information I need related to my image sensor. What I do need help with is the undocumented PWL WDR mode that you have referenced. The vendor for my image sensor looked at the jpg images output from argus_camera and said that it looks like no decompression is happening in the camera pipeline. How do I know that the 12-bit to 16-bit decompression is being done in the “camera software stack”?

My testing indicates that simply defining the correct DT properties and running the NVIDIA argus_camera sample application is not enough. Something is missing from the equation. Is there a log or something I can check in the system?

Is there something I can trace or log from libargus and nvargus-daemon to check whether it is doing anything? Why does changing num_control_point and the control point X/Y data have no apparent effect?

Do I need to add code to the argus_camera or turn on a compile time option/switch or something to make it do the PWL WDR properly?

Does the PWL WDR use the ioctl for the Companding Module in the kernel? If not, what image sensor and userspace code can you point me to that uses the Companding Module with that particular ioctl?

One other piece of information confirms that I have my DT configured properly: I ran argus_camera -i and got the following output:

   Sensor mode: 1
    Resolution: 1920x1080
    Exposure time range: 59000 - 199938000 ns
    Frame duration range: 33333334 - 200000032 ns
    Framerate range: 5 - 30 fps
    InputBitDepth:  16
    OutputBitDepth: 12
    Analog gain range: 64 - 2047
    Piecewise Linear (PWL) WDR Extension supported with: 13 control points.
    - Control Points: 
                    (0, 0)
                    (0.000488281, 0.0625)
                    (0.000976562, 0.125)
                    (0.00195312, 0.1875)
                    (0.00390625, 0.25)
                    (0.0078125, 0.3125)
                    (0.015625, 0.375)
                    (0.03125, 0.4375)
                    (0.0625, 0.5)
                    (0.125, 0.625)
                    (0.25, 0.75)
                    (0.5, 0.875)
                    (1, 1)

It looks like my integer value control points got converted to a float representation, but it still has the correct number of points and the float values look like the appropriate scaling values for 16-bit x values and 12-bit y values.

hello JDSchroeder,

there are configuration files that you should also specify to enable the PWL controls; please contact the sensor vendor for the settings.

After offline discussion, I now have this working properly in my system. Thanks!