I have an image sensor that supports a compressed 12-bit output in SE HDR mode over MIPI. The compressed 12-bit data needs to be decompressed/decoded according to the following table:
I noticed from the Parker TRM, section 27.10.3 (Companding Module), that the hardware appears to be able to do the necessary companding/decompanding and supports the OV10640 and the AR0231. However, I could not find any examples for either of these sensors, and the TRM text describing the Companding Module is not detailed enough to support an implementation. It also looks like there is a kernel ioctl interface
to configure the companding base, scale, and offset parameters.
Does the Tegra X2 VI module support decompanding/decompressing RAW 12-bit MIPI data to 16-bit data?
After decompanding, does the output pass to the ISP for additional processing?
Does the kernel ioctl interface properly support configuring and enabling decompanding?
Are there any example programs (i.e., for the OV10640 or the AR0231) that configure the decompanding from userspace with the ioctl or some other mechanism?
Could you please provide the VI PIXFMT COMPAND KNEE BASE, SCALE, and OFFSET values for one or two rows of my table so I can understand how to translate my entire table to the VI_PIXFMT_COMPAND_KNEE_CFG… values?
There is a camera software feature, PWL-WDR.
PWL-WDR achieves wide dynamic range by sending compressed data into the camera pipeline, where an internal ISP stage decompresses the content.
You may also check the PWL compression function as an example. We have validated PWL functionality with the Sony IMX185 sensor.
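For reference, a PWL-WDR sensor mode in the IMX185 reference device tree looks roughly like the fragment below. The property names come up later in this thread; the control-point values here are illustrative of the X = 16-bit linear / Y = 12-bit compressed convention, so check the shipped tegra186-camera IMX185 dtsi for the exact numbers:

```dts
mode1 { /* PWL-WDR mode, illustrative values */
    mode_type = "bayer_wdr_pwl";
    csi_pixel_bit_depth = "12";     /* compressed data on the wire */
    dynamic_pixel_bit_depth = "16"; /* linear depth after decompression */
    num_control_point = "4";
    /* X = 16-bit linear input, Y = 12-bit compressed output */
    control_point_x_0 = "0";
    control_point_x_1 = "2048";
    control_point_x_2 = "16384";
    control_point_x_3 = "65536";
    control_point_y_0 = "0";
    control_point_y_1 = "1024";
    control_point_y_2 = "3072";
    control_point_y_3 = "4096";
};
```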
thanks
That does look promising and right in line with what I’m trying to do with my sensor.
A couple questions:
I cannot find any references to the DT properties “num_control_point” or “dynamic_pixel_bit_depth” in the kernel sources (they are in the IMX185 dtsi files). Is the PWL WDR handled completely in libargus and the “Camera Core” in userspace?
This seems very similar to the TRM section 27.10.3 Companding Module. Is the PWL WDR using the VI Companding Module hardware underneath?
If not, is the PWL WDR being efficiently done with hardware acceleration/support or in software?
Just want to confirm: the description of the PWL HDR with “12 bit Output” and Linear “Compression” Function is in reference to the image sensor and not the NVIDIA hardware, correct? The NVIDIA hardware takes the 12 bit Output as “Input” and “Decompresses” it to 16 bit output, right?
Does the PWL WDR support work when going through the v4l2src path, only through nvarguscamerasrc, or both?
FYI,
The PWL-WDR decompression solution is a software approach on TX2; it is handled by the CUDA accelerator.
There are hardware units to support PWL-WDR decompression on Jetson Xavier.
In the IMX185 example (dynamic_pixel_bit_depth=16; csi_pixel_bit_depth=12),
CSI/VI receives 12-bit data buffers, and CUDA is used to decompress them into 16-bit data and perform the PWL processing;
after that, the results are converted to the ISP buffer format for further processing.
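The decompression step described here amounts to inverting a piecewise-linear compression curve. A minimal sketch in plain Python (not the actual CUDA kernel), assuming IMX185-style control points where X is the 16-bit linear value and Y is the 12-bit compressed code:

```python
# Illustrative PWL decompression: map 12-bit compressed codes back to
# 16-bit linear values by inverting a piecewise-linear curve.
# Control points (x = linear 16-bit, y = compressed 12-bit) follow the
# IMX185-style DT convention; real sensors will use different values.

CONTROL_POINTS = [(0, 0), (2048, 1024), (16384, 3072), (65536, 4096)]

def decompress(code12):
    """Invert the PWL curve: 12-bit compressed code -> 16-bit linear value."""
    for (x0, y0), (x1, y1) in zip(CONTROL_POINTS, CONTROL_POINTS[1:]):
        if code12 <= y1:
            # Linear interpolation within this knee segment.
            return x0 + (code12 - y0) * (x1 - x0) // (y1 - y0)
    return CONTROL_POINTS[-1][0]

# The knee points themselves map back exactly:
assert decompress(1024) == 2048
assert decompress(3072) == 16384
```

Each knee segment with a shallow slope in compression becomes a steep slope in decompression, which is why the high end of the 12-bit range expands into most of the 16-bit range.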
Moreover,
only nvarguscamerasrc can be enabled to support the PWL-WDR functionality.
thanks
I’m seeing a noticeable difference in my camera images when I change mode_type from “bayer” to “bayer_wdr_pwl”. However, I’m not sure that the PWL control points I have defined are doing anything. I have swapped the X and Y values and I don’t see any change. If I reduce num_control_point to “4”, I see no change. If I remove num_control_point from the DT, I don’t even get an error. So I am thinking this PWL WDR is not actually being applied.
I’m using the argus_camera sample application and L4T 32.4.2 for testing. Is there a way I can check that my control points and decompression curve are actually getting applied?
Does argus_camera work to do the PWL WDR decompression or do I need to use a different program?
The DT properties only report the corresponding configuration; you should check the sensor specification and enable a PWL-WDR sensor mode to get the configuration applied. If you look into the sensor drivers, for example kernel/nvidia/drivers/media/i2c/imx185_mode_tbls.h, there is an HDR sensor mode that uses PWL WDR technology.
The internal camera software stack handles the PWL-WDR decompression; you can use argus_camera to select the different sensor modes and check the results.
thanks
I have already verified that the correct image sensor mode is being selected/used by argus_camera and I know how to switch between the image sensor modes using argus_camera.
My sensor is sending out compressed 12-bit data. I do not need help with how to configure my sensor, as I have the datasheet and access to all of the information I need related to my image sensor. What I do need help with is the undocumented PWL WDR mode that you have referenced. The vendor for my image sensor looked at the jpg images output from argus_camera and said that it looks like no decompression is happening in the camera pipeline. How do I know that the 12-bit to 16-bit decompression is being done in the “camera software stack”?
My testing indicates that simply defining the correct DT properties and running the NVIDIA argus_camera sample application is not enough. Something is missing from the equation. Is there a log or something I can check in the system?
Is there something I can trace or log from libargus and nvargus-daemon to check if it is doing anything? Why does changing num_control_point and the control point X/Y data have no apparent effect?
Do I need to add code to the argus_camera or turn on a compile time option/switch or something to make it do the PWL WDR properly?
Does the PWL WDR use the ioctl for the Companding Module in the kernel? If not, what image sensor and userspace code can you point me to that uses the Companding Module with that particular ioctl?
It looks like my integer-valued control points got converted to a float representation, but the set still has the correct number of points, and the float values look like appropriately scaled values for 16-bit X and 12-bit Y.
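If the floats are simply the integer control points normalized to [0, 1] by their bit depth (an assumption on my part, not confirmed behaviour of the camera stack), a quick sanity check against the IMX185-style example values would be:

```python
# Hypothetical check: integer control points normalized to floats by
# bit depth (X by 2**16 for 16-bit linear, Y by 2**12 for 12-bit code).
xs = [0, 2048, 16384, 65536]   # 16-bit linear X values
ys = [0, 1024, 3072, 4096]     # 12-bit compressed Y values

xs_f = [x / 65536 for x in xs]
ys_f = [y / 4096 for y in ys]

print(xs_f)  # [0.0, 0.03125, 0.25, 1.0]
print(ys_f)  # [0.0, 0.25, 0.75, 1.0]
```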
I am also working on an HDR mode with companding on the AR0321AT. Can you please share the offline discussion? I have also implemented a similar .dtsi for one of my sensor modes.