Sorry if this has already been asked, but I couldn’t find an answer. We need both RGB and YUV data for processing in our application. Right now we are getting YUV from Argus by calling createNvBuffer after acquireFrame and getImage:
iNativeBuffer->createNvBuffer(xxx, NvBufferColorFormat_YUV420, NvBufferLayout_Pitch);
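
For context, this is roughly how we acquire the frame (a sketch with error checks omitted; names like iFrameConsumer and iEglOutputStream are placeholders for objects from our session setup):

#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>
#include "nvbuf_utils.h"

// Returns a YUV420 pitch-linear dmabuf fd for the next acquired frame.
int acquireYuvFd(EGLStream::IFrameConsumer *iFrameConsumer,
                 Argus::IEGLOutputStream *iEglOutputStream)
{
    Argus::UniqueObj<EGLStream::Frame> frame(iFrameConsumer->acquireFrame());
    EGLStream::IFrame *iFrame = Argus::interface_cast<EGLStream::IFrame>(frame);
    EGLStream::Image *image = iFrame->getImage();
    EGLStream::IImageNativeBuffer *iNativeBuffer =
        Argus::interface_cast<EGLStream::IImageNativeBuffer>(image);
    return iNativeBuffer->createNvBuffer(iEglOutputStream->getResolution(),
                                         NvBufferColorFormat_YUV420,
                                         NvBufferLayout_Pitch);
}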
For the YUV -> RGB conversion, is there any performance difference between these three paths (paths 2 and 3 are sketched below the list)?
1.) iNativeBuffer->createNvBuffer(xxx, NvBufferColorFormat_ABGR32, NvBufferLayout_Pitch);
2.) NvBufferCreateEx + iNativeBuffer->copyToNvBuffer
3.) NvBufferCreateEx + NvBufferTransform
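
For reference, here is roughly what we mean by paths 2 and 3 (a sketch with error checks omitted; the destination buffer would be allocated once and reused across frames):

#include <EGLStream/NV/ImageNativeBuffer.h>
#include "nvbuf_utils.h"

// Allocate an RGBA (ABGR32) pitch-linear destination dmabuf once.
int createRgbaFd(int width, int height)
{
    NvBufferCreateParams params = {};
    params.width = width;
    params.height = height;
    params.payloadType = NvBufferPayload_SurfArray;
    params.layout = NvBufferLayout_Pitch;
    params.colorFormat = NvBufferColorFormat_ABGR32;
    params.nvbuf_tag = NvBufferTag_NONE;
    int fd = -1;
    NvBufferCreateEx(&fd, &params);
    return fd;
}

// Path 2: convert straight from the acquired image into our buffer.
void convertViaCopyToNvBuffer(EGLStream::IImageNativeBuffer *iNativeBuffer,
                              int rgbaFd)
{
    iNativeBuffer->copyToNvBuffer(rgbaFd);
}

// Path 3: convert from the YUV420 dmabuf we already made with createNvBuffer.
void convertViaTransform(int yuvFd, int rgbaFd)
{
    NvBufferTransformParams xform = {};
    xform.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    xform.transform_filter = NvBufferTransform_Filter_Smart;
    NvBufferTransform(yuvFd, rgbaFd, &xform);
}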
Do NvBufferTransform, copyToNvBuffer, and createNvBuffer all use the same code path underneath for image format conversion and scaling?
Where does this processing happen: the CPU, the GPU, the ISP, or some separate hardware block? We could perform the YUV -> RGB conversion ourselves in a CUDA kernel, but we would rather save GPU compute if possible.
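
If we did roll our own, it would be something along these lines (a minimal sketch assuming pitch-linear I420 planes already mapped into CUDA address space, e.g. via EGL interop, and full-range BT.601 coefficients; camera output may actually be limited-range):

#include <cuda_runtime.h>
#include <stdint.h>

// Minimal I420 -> RGBA kernel. One thread per output pixel; chroma is
// subsampled 2x2, so U/V are fetched at half resolution.
__global__ void i420ToRgba(const uint8_t *y, const uint8_t *u, const uint8_t *v,
                           size_t yPitch, size_t uvPitch,
                           uint8_t *rgba, size_t rgbaPitch,
                           int width, int height)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    float Y = y[py * yPitch + px];
    float U = u[(py / 2) * uvPitch + (px / 2)] - 128.0f;
    float V = v[(py / 2) * uvPitch + (px / 2)] - 128.0f;

    // Full-range BT.601 conversion, clamped to [0, 255].
    float r = Y + 1.402f * V;
    float g = Y - 0.344f * U - 0.714f * V;
    float b = Y + 1.772f * U;

    uint8_t *out = rgba + py * rgbaPitch + px * 4;
    out[0] = (uint8_t)fminf(fmaxf(r, 0.0f), 255.0f);
    out[1] = (uint8_t)fminf(fmaxf(g, 0.0f), 255.0f);
    out[2] = (uint8_t)fminf(fmaxf(b, 0.0f), 255.0f);
    out[3] = 255;
}

// Launch example:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   i420ToRgba<<<grid, block>>>(yPlane, uPlane, vPlane, yPitch, uvPitch,
//                               rgbaOut, rgbaPitch, width, height);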