NvBufferColorFormat support and transformation

Hello,

Is there any further documentation on the ISP pipeline and ColorFormat conversion?
I am particularly interested in:

  • In which order are the ISP steps applied (e.g. DeBayer → CCM → Gamma (tone map curve) → YUV conversion)?
  • What are the supported output formats of the ISP (like NV12, NV24, …)?
  • How are the color formats defined (e.g. NV12 in comparison to NV12_ER)?
  • Which color space / format conversions are supported (like NV24 to NV12, NV24 to ARGB_10_10_10_2, NV12 to NV24_ER, …)?
  • How are these conversions performed (using NvBufferTransform)? Which RGB primaries and YUV transformation parameters are used? Is the nonlinearity (e.g. gamma) also applied during these transformations?
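For reference, the conversions that NV12 versus NV12_ER conventionally imply can be sketched with the textbook BT.601 matrices below. This is a sketch of the standard math only; whether the Jetson ISP/VIC use exactly these coefficients is an assumption, not something confirmed by NVIDIA documentation:

```python
# Textbook BT.601 R'G'B' -> Y'CbCr conversions for 8-bit data.
# NV12 conventionally uses the "limited" (video) range: Y in [16, 235],
# Cb/Cr in [16, 240]. The _ER ("extended range") variants use the full
# [0, 255] range. Assumption: the hardware follows these standard
# coefficients -- this is not verified against the ISP implementation.

def rgb_to_ycbcr_bt601_limited(r, g, b):
    """8-bit R'G'B' in [0, 255] -> limited-range Y'CbCr (NV12-style)."""
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0
    y  = 16.0  +  65.481 * rp + 128.553 * gp +  24.966 * bp
    cb = 128.0 -  37.797 * rp -  74.203 * gp + 112.000 * bp
    cr = 128.0 + 112.000 * rp -  93.786 * gp -  18.214 * bp
    return round(y), round(cb), round(cr)

def rgb_to_ycbcr_bt601_full(r, g, b):
    """8-bit R'G'B' in [0, 255] -> full-range Y'CbCr (NV12_ER-style)."""
    y  =         0.299    * r + 0.587    * g + 0.114    * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128.0 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)
```

E.g. white (255, 255, 255) encodes to Y = 235 in limited range but Y = 255 in extended range, which is exactly the NV12 / NV12_ER distinction being asked about.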

Until now I have found the following resources:
https://docs.nvidia.com/vpi/1.2/algo_imageconv.html
https://docs.nvidia.com/jetson/l4t-multimedia/group__ee__nvbuffering__group.html#gaae53b45fe3f04b8f9135cb80baeac6e4
NVIDIA Xavier Series System-on-Chip Technical Reference Manual

but they don’t really give a coherent picture of what is going on inside the ISP.

Best regards.

Environment

Platform: Jetson Xavier NX
Jetpack version: 4.6.2
Multimedia API: 0.98
TensorRT Version: 8.0.1
CUDA Version: 10.2
CUDNN Version: 8.2.1
Operating System + Version: L4T 32.7

hello rbayr,

You may download the Xavier TRM and check the [7.2.2 Video Input (VI)] section; refer to [7.2.2.2.4 PIXFMT] for the supported pixel formats.
The ISP processes these and outputs YUV images as NV12. You may also involve the video converter to change the format to others.

The Xavier TRM describes the path to the ISP, but I can’t find a description of the ISP itself.

For example:
Is the ToneMapCurve applied at the beginning of the pipeline to linearize the sensor data, or at the end to apply e.g. a gamma compression?
Does the ToneMapCurve replace the transfer (“compacting”) function of the standard implied by the buffer pixel format (e.g. does it replace the BT.601 transfer function for a normal NV12 frame)?
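For context, the “compacting function” in question is the opto-electronic transfer function (OETF) that BT.601/BT.709 define for encoding linear light into gamma-compressed code values. A sketch of that textbook curve is below; whether the ISP’s ToneMapCurve replaces it or is applied in addition is exactly the open question here:

```python
# The piecewise OETF shared by BT.601 and BT.709: a linear toe near
# black, then a ~0.45 power law. This is the standard curve only --
# how (or whether) the Jetson ISP's ToneMapCurve relates to it is
# the unanswered question in this thread.

def bt709_oetf(l):
    """Linear scene light l in [0, 1] -> gamma-encoded value in [0, 1]."""
    if l < 0.018:
        return 4.5 * l           # linear segment near black
    return 1.099 * l ** 0.45 - 0.099
```

The curve maps 0 → 0 and 1 → 1, with most of its code-value budget spent on the dark end, which is what a tone-map replacement would have to reproduce (or deliberately change).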

When are functions like DeNoise or EdgeEnhancement applied in the pipeline?

I also get an error when I try to transform an NV24_ER buffer into another NV24_ER buffer; however, the transformations NV24_ER to NV12 and NV24_ER to NV24 work fine.

Hi,
There is a constraint in the hardware converter for converting an ER format to an ER format, so converting NV24_ER from one NvBuffer to another may not work. Please convert ER to limited range, or limited range to ER.
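Mathematically, the ER ↔ limited-range remap suggested here is just a per-sample linear rescale of the code values. A sketch under textbook BT.601 quantization-range assumptions (the hardware’s exact rounding behavior is not documented and is assumed here):

```python
# Per-sample range remapping between full range ([0, 255], the _ER
# formats) and BT.601 limited range (Y in [16, 235], chroma in
# [16, 240]). Assumption: standard range definitions and simple
# rounding -- the VIC's actual arithmetic is not publicly specified.

def full_to_limited_luma(y):
    """Full-range luma [0, 255] -> limited-range [16, 235]."""
    return 16 + round(y * 219 / 255)

def full_to_limited_chroma(c):
    """Full-range chroma [0, 255] -> limited-range [16, 240]."""
    return 16 + round(c * 224 / 255)

def limited_to_full_luma(y):
    """Limited-range luma [16, 235] -> full-range [0, 255], clipped."""
    return min(255, max(0, round((y - 16) * 255 / 219)))
```

So an NV24_ER → NV24 conversion rescales every sample, whereas NV24_ER → NV24_ER would be a pure copy, which is why the reported constraint is surprising.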

This is good to know; however, it is counterintuitive that a conversion which only has to copy the data doesn’t work. Is there an overview/documentation of the supported conversions?

And is there also a description of the ISP pipeline available?

Any information on this?

Hi,
The conversion is done on the hardware converter. For the supported conversions, please check the sample:

/usr/src/jetson_multimedia_api/samples/07_video_convert

Jetson Linux API Reference: 07_video_convert (NvBufSurface conversion) | NVIDIA Docs

All supported conversions are demonstrated in the sample. Please take a look.
