The point of this topic is to clarify the following questions:
When using PWL in the sensor, is it required to match the compression in the .dtsi? (In this case, the vendor used 17 control points, whereas Jetson supports a maximum of 9.)
When specifying bayer_wdr_pwl in the device tree + adding control points, are these used only for decompression?
Since 12-bit data is being stored in 16-bit, is it required to specify control points for decompressing 12-bit to 16-bit?
Should Optical Black (OB) lines be added to the active_h property?
>>Q1
please refer to the developer guide, Property-Value Pairs;
it supports up to 9 control points, as you can see in the description of the num_control_point property:
"The camera core library interface supports up to nine control points."
>>Q2, Q3
there are two properties, csi_pixel_bit_depth and dynamic_pixel_bit_depth.
taking IMX185 as an example, it has csi_pixel_bit_depth=12 and dynamic_pixel_bit_depth=16.
this means the IMX185 uses 16-bit PWL HDR and applies a companding curve to compress the data from 16-bit to 12-bit as ISP input.
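for intuition, a PWL companding curve is just a piecewise-linear map from the dynamic range to the transport range. a minimal Python sketch (the control points below are made up for illustration, not the IMX185's actual curve):

import numpy as np

# Hypothetical control points mapping a 16-bit input to a 12-bit output;
# real values come from the sensor datasheet / device tree.
control_x = [0, 512, 2048, 16384, 65535]  # 16-bit domain
control_y = [0, 512, 1024, 2048, 4095]    # 12-bit range

def compand_16_to_12(pixels):
    # np.interp linearly interpolates between the control points
    return np.interp(pixels, control_x, control_y).astype(np.uint16)

# Bright values are compressed much more heavily than dark ones
print(compand_16_to_12(np.array([100, 1000, 60000])))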
>>Q4
no, OB lines are not included in the active height.
When specifying control points in a .dtsi, are they used only to perform compression from dynamic_pixel_bit_depth to csi_pixel_bit_depth (meaning no decompression is done)?
>> Q1,Q2
PWL control points are defined in the sensor device tree.
you may visit Jetson Linux Archive | NVIDIA Developer to download the [Driver Package (BSP) Sources] package for reference.
for example, $public_sources/kernel_src/hardware/nvidia/platform/t23x/common/kernel-dts/t234-common-modules/tegra234-camera-imx185-a00.dtsi
>> Q3
I assume you’re asking about the ISP (input) capability for the Orin series,
FYI,
the maximum SDR we support is Raw 16-bit,
the maximum PWL-WDR supported is Raw 20-bit. (the ISP takes at most a 16-bit input; PWL data is compressed/decompressed internally as 20-bit before processing)
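as a rough sketch of that internal step (the breakpoints below are invented for illustration; the real curve is defined by the sensor's PWL control points), decompression is just the inverse piecewise-linear map, expanding the 16-bit companded input back to a 20-bit linear signal:

import numpy as np

# Hypothetical breakpoints: 16-bit companded input vs. 20-bit linear output
companded = [0, 16384, 32768, 65535]     # 16-bit ISP input
linear    = [0, 16384, 131072, 1048575]  # 20-bit linear signal

def decompress_16_to_20(pixels):
    return np.interp(pixels, companded, linear).astype(np.uint32)

print(decompress_16_to_20(np.array([1000, 40000, 65535])))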
>>Q4
as mentioned above, PWL frames are compressed/decompressed internally.
This very simple Python script fixes the image color after discarding the 4 repeated MSBs, for an image saved with NvRaw (.raw format):
import numpy as np
import cv2

file_path = 'capture.raw'   # path to the NvRaw dump (placeholder)
width, height = 1920, 1080  # must match the captured frame size

with open(file_path, 'rb') as f:
    raw_data = f.read()
# Image is 16-bit and stored in little-endian format
image_data = np.frombuffer(raw_data, dtype='<u2')
# Discard the low 4 bits of every 16-bit word (the repeated MSBs)
image_data = image_data >> 4
# Keep the 12 significant bits (redundant after the shift, but explicit)
image_data = image_data & 0xFFF
# Reshape the flat buffer into the image
image_data = image_data.reshape((height, width))
# Debayer the image using OpenCV
bgr = cv2.cvtColor(image_data, cv2.COLOR_BAYER_BG2BGR)
# Scale down to 8 bits for display with OpenCV (12 - 8 = 4 -> 2^4 = 16)
image_8bit = (bgr / 16).astype(np.uint8)
cv2.imshow('capture', image_8bit)
cv2.waitKey(0)
Sorry, I can’t remember many details, as I ended up not using that mode for my image sensor. It has been a long time, but I vaguely remember that the 16-bit output put the video pipeline into HDR mode. So you may need to set some more HDR-related options in either your dts file or camera_overrides.isp file (unsure).
You might also specify the
pixel_phase = "wxyz"; /* replace wxyz with Bayer pattern */
right below your mode_type. I’m not sure it is required.
This functionality was working to some degree on the older R32.5.0, so things may have changed significantly since then. Your best bet is to find a vendor that sells a PWL image sensor camera advertised to work with Jetson, then look at the camera_overrides.isp and/or dts files they provide and see if you can find the magic setting, OR work with one of the camera partners, if you can, to support your image sensor.
the Orin series uses T_R16 to handle Raw12; it is the VI input memory format.
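as a small sketch of what that implies for the memory layout (assuming, as the earlier Python script suggests, that the 12-bit sample is left-justified in the 16-bit word with its top 4 bits repeated in the low 4 bits; the exact padding scheme is an assumption here):

# Hypothetical T_R16 packing of a 12-bit Raw sample
def pack_t_r16(sample_12bit):
    s = sample_12bit & 0xFFF
    return (s << 4) | (s >> 8)   # replicate the 4 MSBs into the LSBs

def unpack_t_r16(word_16bit):
    return word_16bit >> 4       # same recovery step as the script above

w = pack_t_r16(0xABC)
print(hex(w), hex(unpack_t_r16(w)))  # 0xabca 0xabc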
how many sensor modes are available? if you have several modes, please add sensor-mode=<N> to the gst pipeline to specify the mode index.
besides, your gst pipeline dumps the 1st frame as a PNG file; the sensor controls may not have settled yet, which might be the cause of your dim capture results.
hence…
please give it another try: capture more frames, and take the 100th image for comparison.
for instance, $ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 num-buffers=101 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=30/1, format=NV12' ! nvvidconv ! jpegenc ! multifilesink location=~/Desktop/tmp/capture%d.jpeg
yes… these two are necessary for WDR sensors, as their values are 1 by default.
please also set ae.wdr.DreMin = ae.wdr.DreMax, and both should be the same as your HDR ratio (for example, if your HDR exposure ratio is 16, set both to 16).
>>Q2
I don’t understand what you meant by “fixes” the 4 repeated MSBs.
the flow looks like… [Sensor] → [CSI] → [VI] → [ISP], where the ISP output is YUV420.
let me check this internally.
in the meantime, how about capturing the nvraw and jpg and attaching them for reference?
you may use nvargus_nvraw to capture both of them simultaneously.
for instance, $ sudo nvargus_nvraw --c 0 --mode 0 --format "nvraw, jpg" --file /home/nvidia/output