NvBufferTransform vs copyToNvBuffer vs createNvBuffer for format conversion

Sorry if this has already been asked, but I couldn’t find an answer. We need both RGB and YUV data for processing in our application. Right now we are getting YUV from Argus using createNvBuffer after calling acquireFrame and getImage:

iNativeBuffer->createNvBuffer(xxx, NvBufferColorFormat_YUV420, NvBufferLayout_Pitch);
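
For context, the surrounding acquisition path looks roughly like this (a sketch only: error handling is omitted, iFrameConsumer is our local consumer handle, and the stream size is illustrative):

// Acquire a frame and get its Image (assumes the usual Argus/EGLStream setup
// from the MMAPI samples; iFrameConsumer is a placeholder for our consumer).
Argus::UniqueObj<EGLStream::Frame> frame(iFrameConsumer->acquireFrame());
EGLStream::IFrame *iFrame = Argus::interface_cast<EGLStream::IFrame>(frame);
EGLStream::Image *image = iFrame->getImage();

EGLStream::NV::IImageNativeBuffer *iNativeBuffer =
    Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(image);

// Allocates a new dmabuf fd containing the frame as pitch-linear YUV420.
int yuvFd = iNativeBuffer->createNvBuffer(Argus::Size2D<uint32_t>(1280, 1080),
                                          NvBufferColorFormat_YUV420,
                                          NvBufferLayout_Pitch);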

For the YUV → RGB conversion, is there any performance difference between these three paths?

1.) iNativeBuffer->createNvBuffer(xxx, NvBufferColorFormat_ABGR32, NvBufferLayout_Pitch);
2.) NvBufferCreateEx + iNativeBuffer->copyToNvBuffer
3.) NvBufferCreateEx + NvBufferTransform
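
For concreteness, here is roughly what I mean by 2.) and 3.) (a sketch only: error checking is omitted, the size and tag are illustrative, and yuvFd is the YUV420 fd from the createNvBuffer call above):

// Pre-allocate an RGBA destination buffer once (nvbuf_utils.h).
NvBufferCreateParams createParams = {0};
createParams.payloadType = NvBufferPayload_SurfArray;
createParams.width = 1280;
createParams.height = 1080;
createParams.layout = NvBufferLayout_Pitch;
createParams.colorFormat = NvBufferColorFormat_ABGR32;
createParams.nvbuf_tag = NvBufferTag_NONE;

int rgbaFd = -1;
NvBufferCreateEx(&rgbaFd, &createParams);

// 2.) Have Argus fill the pre-allocated buffer directly.
iNativeBuffer->copyToNvBuffer(rgbaFd);

// 3.) Or convert from the YUV420 fd we already have.
NvBufferTransformParams transformParams = {0};
transformParams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
transformParams.transform_filter = NvBufferTransform_Filter_Smart;
NvBufferTransform(yuvFd, rgbaFd, &transformParams);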

Do NvBufferTransform, copyToNvBuffer, and createNvBuffer all use the same code/path underneath for image format conversion and scaling?

Where does this processing happen: CPU, GPU, ISP, or some separate hardware? We could instead perform the YUV → RGB conversion ourselves in a CUDA kernel, but we would rather save compute on the GPU if possible.

Hi,
The ISP output is YUV420, so if you do

iNativeBuffer->createNvBuffer(xxx, NvBufferColorFormat_YUV420, NvBufferLayout_Pitch);

you get the ISP output buffer directly.

If you do

iNativeBuffer->createNvBuffer(xxx, NvBufferColorFormat_ABGR32, NvBufferLayout_Pitch);

then the pipeline is

ISP output -> NvBufferTransform(YUV420->RGBA) -> RGBA output

I see, so createNvBuffer and copyToNvBuffer are both calling NvBufferTransform under the hood, but where does the NvBufferTransform computation (format conversion, scaling, etc…) actually happen? That’s mainly what I’m interested in.

Also, I’m slightly confused by the first comment. Based on other posts, I thought the ISP output was NvBufferLayout_BlockLinear? Doesn’t that require a conversion to NvBufferLayout_Pitch even if both are YUV420?

Hi,

It is done on the VIC engine. Please check Figure 1 in the Technical Reference Manual:
https://developer.nvidia.com/embedded/downloads#?search=trm

Yes, it requires NvBufferTransform(YUV420 block linear → YUV420 pitch linear).
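
As a rough illustration of that step (you do not call it yourself when using createNvBuffer, which does it internally; the fd names below are just placeholders):

// Both buffers are YUV420 and only the layout differs, so this
// NvBufferTransform is purely a block-linear -> pitch-linear copy on VIC.
NvBufferTransformParams transformParams = {0};
transformParams.transform_flag = NVBUFFER_TRANSFORM_FILTER;
transformParams.transform_filter = NvBufferTransform_Filter_Smart;
NvBufferTransform(blockLinearYuvFd, pitchLinearYuvFd, &transformParams);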

Thank you, DaneLLL, that doc is quite helpful.

Following up with another question regarding ISP vs. VIC processing. When creating an output stream, it seems possible to specify resolutions that are not offered by any sensor mode. For example, my cameras report only 1280x1080 resolution modes, yet I can call iEglStreamSettings->setResolution() with a value of 640x540 and get 640x540 images from acquireFrame() + getImage(). In this case, is the resize performed by the ISP or the VIC? If I only want half-resolution images, would it be better to do that by configuring the output stream or by scaling in createNvBuffer after acquireFrame?
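
Concretely, the only thing I change is the stream resolution (the rest of the stream setup follows the samples; streamSettings is my OutputStreamSettings object):

Argus::IEGLOutputStreamSettings *iEglStreamSettings =
    Argus::interface_cast<Argus::IEGLOutputStreamSettings>(streamSettings);
// Half of the 1280x1080 sensor mode, even though no sensor mode reports it.
iEglStreamSettings->setResolution(Argus::Size2D<uint32_t>(640, 540));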

Hi,
The ISP output is at the sensor resolution (1280x1080), and VIC is used to downscale it to 640x540.
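
If you instead scale per frame with createNvBuffer, that path also goes through NvBufferTransform on VIC as discussed above; roughly (a sketch, reusing the names from earlier):

// The ISP still outputs 1280x1080; VIC downscales during this call.
int halfResFd = iNativeBuffer->createNvBuffer(Argus::Size2D<uint32_t>(640, 540),
                                              NvBufferColorFormat_YUV420,
                                              NvBufferLayout_Pitch);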