PWL HDR for Jetson AGX Orin

Hello,

We are using an IMX623 sensor with PWL HDR.

Our PWL compression was provided by the sensor’s vendor and is as follows:

As seen above, the compression goes from Raw24 to Raw12.

Then, following Image Sensor Compressed RAW 12bit Format to Decoded 16bit Format, we added to our device tree the values seen in Example Piece-Wise Linear Compression Function for the IMX185, as we noticed our Raw12 was being stored in 16b.

The result is the following:


imx623_example.txt (3.2 KB)

Without HDR / PWL, the image is as follows:

The point of this topic is to clarify the following questions:

  1. When using PWL in the sensor, is it required to match the compression in the .dtsi? (In this case, the vendor used 17 control points, whereas Jetson supports a maximum of 9.)

  2. When specifying bayer_wdr_pwl in the device tree + adding control points, are these used only for decompression?

  3. Since 12b data is being stored in 16b, is it required to specify control points for a decompression of 12b to 16b?

  4. Should Optical Black (OB) lines be added to the active_h property?

Thanks in advance.

hello joao.malheiro.silva,

let me reply to each question as follows.

>>Q1
please refer to the developer guide, Property-Value Pairs;
it supports up to 9 control points, as you can see in the description of the num_control_point property:

The camera core library interface supports up to nine control points.

>>Q2, Q3
there are two properties, csi_pixel_bit_depth and dynamic_pixel_bit_depth.
taking IMX185 as an example, it has csi_pixel_bit_depth=12 and dynamic_pixel_bit_depth=16.
it means IMX185 uses 16-bit PWL HDR, and a companding curve compresses the data from 16-bit to 12-bit as ISP input.
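
for instance, such a companding curve behaves like this (a sketch with hypothetical control points, not the actual IMX185 values):

import numpy as np

# Hypothetical PWL control points: x = 16-bit linear input, y = 12-bit compressed output
xs = [0, 2048, 16384, 65536]
ys = [0, 2048, 3072, 4095]

def compress_16_to_12(value16):
    """Apply the companding (PWL compression) curve: 16-bit in, 12-bit out."""
    return np.interp(np.asarray(value16, dtype=np.float64), xs, ys).astype(np.uint16)

print(compress_16_to_12([0, 1024, 32768, 65535]))  # dark values keep precision, highlights are squeezed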

>>Q4
no, OB is not included in the active height.

Hello @JerryChang,

Thank you for your feedback.

About this statement, we would like to clarify the following points:

  • “IMX185 uses 16-bit PWL HDR, and a companding curve compresses the data from 16-bit to 12-bit as ISP input”:

    1. Does the IMX185 specify PWL control points within its mode tables?

      • In our case, the vendor specified the compression within the mode tables.
    2. If the PWL control points are specified in the mode tables, should they also be specified in the .dtsi?

    3. Does Jetson AGX Orin take a maximum of 12-bit as ISP input?

    4. When specifying control points in a .dtsi, are they used to perform only compression from dynamic_pixel_bit_depth to csi_pixel_bit_depth? (meaning no decompression is done)

Again, thank you for your valuable inputs.

hello joao.malheiro.silva,

>> Q1,Q2
PWL control points are defined in the sensor device tree.
you may visit Jetson Linux Archive | NVIDIA Developer to download the [Driver Package (BSP) Sources] package for reference.
for example, $public_sources/kernel_src/hardware/nvidia/platform/t23x/common/kernel-dts/t234-common-modules/tegra234-camera-imx185-a00.dtsi

>> Q3
I assume you’re asking about the ISP (input) capability of the Orin series,
FYI,
the maximum SDR supported is Raw 16-bit,
the maximum PWL-WDR supported is Raw 20-bit. (ISP will take at most 16-bit input; PWL frames are compressed/de-compressed internally as 20-bit before processing)

>>Q4
as mentioned above, PWL frames are compressed/de-compressed internally.

Hello @JerryChang,

We have made some progress in improving the image quality. However, the issue with decompression still remains.

As mentioned in Image Sensor Compressed RAW 12bit Format to Decoded 16bit Format - Jetson & Embedded Systems / Jetson TX2 - NVIDIA Developer Forums, we have confirmed that our 12b data is also being stored in 16b, with the 4 MSBs repeated.

This very simple Python script fixes the image color, after discarding the 4 repeated MSBs, for an image saved with NvRaw (.raw format):

import cv2
import numpy as np

file_path = 'image.raw'    # NvRaw capture (path is an example)
width, height = 1936, 1552

with open(file_path, 'rb') as f:
    raw_data = f.read()

# Image is 16-bit and stored in little-endian format
image_data = np.frombuffer(raw_data, dtype=np.uint16)

# Discard the lowest 4 bits (the repeated MSBs), keeping the 12 data bits
image_data = (image_data >> 4) & 0xFFF

# Reshape to the sensor resolution
image_data = image_data.reshape((height, width))

# Debayer the image using OpenCV
bgr = cv2.cvtColor(image_data, cv2.COLOR_BAYER_BG2BGR)

# Scale the 12-bit values down to 8 bits for display (2^(12-8) = 16)
image_data = (bgr / 16).astype(np.uint8)

Input (left) & Output (right)

Our current pipeline is described below.

PNG image retrieved with GStreamer (commands in image above):

Is it correct to assume that, using the params below, we get a correct decompression from 12b to 16b?

dynamic_pixel_bit_depth = "16";
csi_pixel_bit_depth = "12";
mode_type = "bayer_wdr_pwl";
num_control_point = "4";
control_point_x_0 = "0";
control_point_x_1 = "2048";
control_point_x_2 = "16384";
control_point_x_3 = "65536";
control_point_y_0 = "0";
control_point_y_1 = "2901";
control_point_y_2 = "3617";
control_point_y_3 = "4095";

By “correct decompression from 12b to 16b”, I mean (sketched after this list):

  1. The 4 repeated MSBs are discarded;
  2. The relevant 12b are expanded to 16b (to be passed to the ISP).
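
That is, something along these lines (a minimal sketch, assuming linear interpolation between the control points above; not necessarily what the ISP actually does):

import numpy as np

# Control points from the .dtsi above: x = 16-bit (dynamic), y = 12-bit (compressed)
xs = [0, 2048, 16384, 65536]
ys = [0, 2901, 3617, 4095]

def decompress_12_to_16(raw12):
    """Expand compressed 12-bit values (after the repeated MSBs are dropped)
    back to the 16-bit dynamic range by inverting the PWL curve."""
    return np.interp(np.asarray(raw12, dtype=np.float64), ys, xs)

print(decompress_12_to_16([0, 2901, 3617, 4095]))  # -> [0, 2048, 16384, 65536]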

Thank you

Do you have this line (or something similar)

wdr.PWL.v4.EnablePWL = TRUE;

in your camera_overrides.isp file?

Hello @JDSchroeder,

Thank you for joining the discussion.

Our camera_overrides.isp file did not have that or a similar statement.

We just added it at the top of the file.

The image remains visually unchanged:

Do you know if the solution involves:

a)  changing the *camera_overrides.isp* file,

or

b)  did you find an alternative to avoid the 4 repeated MSBs (12b stored in 16b, which we tried to avoid with PWL in the *.dtsi*)?

Thank you for your time.

Sorry, I can’t remember many details, as I ended up not using that mode for my image sensor. It has been a long time; I vaguely remember that the 16-bit output put the video pipeline in HDR mode. So maybe you need to set some more options related to HDR in either your dts file or camera_overrides.isp file (unsure).

You might also specify the
pixel_phase = “wxyz”; /* replace wxyz with Bayer pattern */
right below your mode_type. I’m not sure it is required.

This functionality was working to some degree on the older R32.5.0, so things may have changed significantly since then. Your best bet is to find a vendor that sells a PWL image sensor camera advertised to work with Jetson, then look at the camera_overrides.isp and/or dts files they provide and see if you can find the magic setting, OR work with one of the camera partners, if you can, to support your image sensor.

hello joao.malheiro.silva,

I have several questions…

  1. why does your gst pipeline use v4l2src? this will bypass the ISP and capture the raw contents.
  2. how did you come up with those PWL control points?

besides…
please contact the sensor vendor, as we don’t support image tuning via forum discussion threads.

Hello @JerryChang,

  1. I pasted the wrong command. We are in fact using this one:

    gst-launch-1.0 nvarguscamerasrc sensor-id=1 num-buffers=1 ! "video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1936, height=(int)1552" ! nvvidconv ! pngenc ! filesink location=image.png
    
  2. The sensor manufacturer provided us with a mode table with 24 control points, as seen in the first post of this thread.

    • The sensor’s HDR output is Raw24, which is compressed to Raw12;

    • Since NVIDIA supports only 9 control points (0 to 8), we created an equivalent of the 24b:12b curve, this time for 16b:12b, and placed it in the .dtsi (roughly as sketched below).
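
Roughly, the derivation looked like this (a sketch with hypothetical vendor values, not the actual table):

import numpy as np

# Hypothetical vendor curve: 24-bit x -> 12-bit y (the real table has more points)
vendor_x = np.array([0, 512, 4096, 65536, 1048576, 16777215], dtype=np.int64)
vendor_y = np.array([0, 512, 1024, 2048, 3072, 4095], dtype=np.int64)

# Rescale the input axis from the 24-bit to the 16-bit domain,
# then keep at most 9 evenly spaced control points
x16 = vendor_x >> 8  # 2^24 range -> 2^16 range
idx = np.round(np.linspace(0, len(x16) - 1, min(9, len(x16)))).astype(int)
print(list(zip(x16[idx], vendor_y[idx])))  # candidate (x, y) control points for the .dtsi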

Q: As previously asked, when using the params seen before, can we assume we get a correct decompression from 12b to 16b?

By “correct decompression from 12b to 16b”, I mean:

  1. The 4 repeated MSBs are discarded;
  2. The relevant 12b are expanded to 16b (to be passed to the ISP).

Thank you

hello joao.malheiro.silva,

the Orin series uses T_R16 to handle Raw12; it’s the VI input memory format.
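
for reference, based on the bit replication observed earlier in this thread, a 12-bit sample would land in a T_R16 word roughly like this (a sketch, not an official format definition):

import numpy as np

def pack_raw12_to_t_r16(v12):
    """Place the 12 data bits in the upper bits of a 16-bit word and
    replicate the 4 MSBs into the lower 4 bits (as observed in the dumps)."""
    v12 = np.asarray(v12, dtype=np.uint16)
    return (v12 << 4) | (v12 >> 8)

print([hex(v) for v in pack_raw12_to_t_r16([0x000, 0x123, 0xFFF])])
# -> ['0x0', '0x1231', '0xffff']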

how many sensor modes are available? if you have several modes, please add sensor-mode=<N> to the gst pipeline to specify the mode index.
besides, your gst pipeline dumps the 1st frame as a PNG file; sensor controls may not have settled yet, and that might be the cause of your dim capture results.
hence…
please give it another try by capturing more frames, and take the 100th image for comparison.
for instance,
$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 num-buffers=101 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvvidconv ! jpegenc ! multifilesink location=~/Desktop/tmp/capture%d.jpeg

Hello @JerryChang,

Our sensor properties:

nvargus_nvraw --lps
 
## -- Results -- ##
Number of supported sensor entries 4
Entry  Source Mode      Uniquename             Resolution   FR  BitDepth  Mode
Index  Index  Index                                             CSI Dyn   Type
  0      0      0          imx623_bottomright   1936x1552   29  12  16    Bayer_WDR_PWL
  1      1      0          imx623_bottomright   1936x1552   29  12  16    Bayer_WDR_PWL
  2      2      0          imx623_bottomright   1936x1552   29  12  16    Bayer_WDR_PWL
  3      3      0          imx623_bottomright   1936x1552   29  12  16    Bayer_WDR_PWL

New command to retrieve the 100th image:

gst-launch-1.0 nvarguscamerasrc sensor-id=1 sensor-mode=0 num-buffers=101 ! "video/x-raw(memory:NVMM), format=(string)NV12, width=(int)1936, height=(int)1552" ! nvvidconv ! pngenc ! multifilesink location=~/test/capture%d.png

First, override ISP with only:

  • wdr.PWL.v4.EnablePWL = TRUE;

Then, override ISP with only:

  • wdr.PWL.v4.EnablePWL = TRUE;
  • ae.wdr.DreMin = 16;
  • ae.wdr.DreMax = 16;

Question 1:

It seems that:

  • min/max_hdr_ratio have no impact on the image brightness;
  • wdr.PWL.v4.EnablePWL = TRUE alone does not change the brightness either.

However, ae.wdr.DreMin/Max changes the brightness.

Could you clarify why this is the case?

Question 2:

A “Yes/No” answer is sufficient.

When T_R16 is used, does specifying a PWL decompression from 12b to 16b fix the 4 repeated MSBs?

Thank you.

hello joao.malheiro.silva,

>>Q1

yes… these two are necessary for WDR sensors, as their values are 1 by default.
please also set ae.wdr.DreMin = ae.wdr.DreMax, and these two should be the same as your HDR ratio.
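
for instance, the combined camera_overrides.isp entries would look like this (assuming an HDR ratio of 16, as in your trials above; use your actual sensor HDR ratio):

  • wdr.PWL.v4.EnablePWL = TRUE;
  • ae.wdr.DreMin = 16;
  • ae.wdr.DreMax = 16;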

>>Q2
I don’t understand what you meant by “fixing” the 4 repeated MSBs.
the flow looks like… [Sensor] → [CSI] → [VI] → [ISP], whereas the ISP output is YUV420.

Hello @JerryChang,

Thank you for the clarification on Q1.

About Q2:

You said:

So, our Raw12 (sensor / CSI output) data is being stored as such:

How are those 4 MSBs removed?

a) Is it with a PWL decompression from 12b to 16b?

dynamic_pixel_bit_depth = "16";
csi_pixel_bit_depth = "12";
mode_type = "bayer_wdr_pwl";

b) OR is it already handled by Jetson, so that we can leave mode_type = "bayer";?

Thank you.

hello joao.malheiro.silva,

there’s a format conversion (converting the CSI Raw value to VI normalized floating point);
it is already handled by the Jetson internal ISP.

no… you should specify the mode type as bayer_wdr_pwl for WDR mode.

Hello @JerryChang,

Thank you for the clarification.

I think the “convert CSI Raw value to VI normalized floating” part has an issue.

Take these two captures as an example:

It’s quite clear that noise is added and the image quality is compromised.

This behavior must be coming from the ISP, which does not seem to be handling our Raw12 CSI output correctly:

  • Raw capture + a simple bit shift in Python gives a normal image.

Q: Is there a way for us to validate whether these 4 repeated MSBs are actually being discarded?
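
For instance, a check like this on the NvRaw dump is what we mean by validating (a sketch; the file name is hypothetical, and samples are assumed to be 16-bit little-endian):

import numpy as np

# Test whether the low 4 bits of each 16-bit sample replicate the high 4 bits
# (i.e., whether the repeated-MSB pattern is present in the dump)
samples = np.fromfile('capture.raw', dtype='<u2')
replicated = np.mean((samples & 0xF) == (samples >> 12))
print(f'{replicated:.1%} of samples show the repeated-MSB pattern')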

Thanks.

hello joao.malheiro.silva,

let me check this internally.
in the meanwhile, how about capturing the nvraw and jpg and attaching them for reference?
you may use nvargus_nvraw to capture both of them simultaneously.
for instance, $ sudo nvargus_nvraw --c 0 --mode 0 --format "nvraw, jpg" --file /home/nvidia/output

Hello @JerryChang,

Thank you for checking.

Here are the files you requested.

nvraw_captures.zip (9.1 MB)

Thanks.
