Sorry if this has already been asked, but I couldn’t find an answer. We need both RGB and YUV data for processing in our application. Right now we are getting YUV from Argus using createNvBuffer after calling acquireFrame and getImage:
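(Roughly like this; a simplified sketch based on the jetson_multimedia_api Argus samples, with our capture-session setup and error handling omitted. The helper name is just for illustration.)

```cpp
#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>
#include <nvbuf_utils.h>

using namespace Argus;
using namespace EGLStream;

// Grab one frame from an already-connected consumer and export it as a
// pitch-linear YUV420 NvBuffer (returns the dmabuf fd, or -1 on failure).
int acquireYuvNvBuffer(IFrameConsumer *iFrameConsumer, uint32_t width, uint32_t height)
{
    UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());
    IFrame *iFrame = interface_cast<IFrame>(frame);
    if (!iFrame)
        return -1;

    NV::IImageNativeBuffer *iNativeBuffer =
        interface_cast<NV::IImageNativeBuffer>(iFrame->getImage());
    if (!iNativeBuffer)
        return -1;

    // createNvBuffer allocates a new dmabuf and fills it with the frame,
    // converting to the requested format/layout along the way.
    return iNativeBuffer->createNvBuffer(Size2D<uint32_t>(width, height),
                                         NvBufferColorFormat_YUV420,
                                         NvBufferLayout_Pitch);
}
```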
Do NvBufferTransform, copyToNvBuffer, and createNvBuffer all use the same code path underneath for image format conversion and scaling?
Where does this processing happen? CPU, GPU, ISP, or some separate hardware? We could instead perform the YUV → RGB conversion ourselves in a CUDA kernel, but we would rather save compute on the GPU if possible.
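(For reference, the kind of fallback we’d be writing is roughly the kernel below; a sketch that assumes pitch-linear NV12 input and full-range BT.601, which may not match the actual buffer layout, and the real buffers would still need to be mapped into CUDA first.)

```cpp
// BT.601 full-range NV12 -> RGBA on the GPU; one thread per output pixel.
__global__ void nv12ToRgba(const unsigned char *yPlane, const unsigned char *uvPlane,
                           unsigned char *rgba, int width, int height,
                           int yPitch, int uvPitch, int rgbaPitch)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height)
        return;

    float Y = yPlane[y * yPitch + x];
    const unsigned char *uvRow = uvPlane + (y / 2) * uvPitch;
    float U = uvRow[(x / 2) * 2 + 0] - 128.0f;
    float V = uvRow[(x / 2) * 2 + 1] - 128.0f;

    float r = Y + 1.402f * V;
    float g = Y - 0.344f * U - 0.714f * V;
    float b = Y + 1.772f * U;

    unsigned char *px = rgba + y * rgbaPitch + x * 4;
    px[0] = (unsigned char)fminf(fmaxf(r, 0.0f), 255.0f);
    px[1] = (unsigned char)fminf(fmaxf(g, 0.0f), 255.0f);
    px[2] = (unsigned char)fminf(fmaxf(b, 0.0f), 255.0f);
    px[3] = 255;
}

// Launch example: nv12ToRgba<<<dim3((w + 15) / 16, (h + 15) / 16), dim3(16, 16)>>>(...);
```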
I see, so createNvBuffer and copyToNvBuffer both call NvBufferTransform under the hood, but where does the NvBufferTransform computation (format conversion, scaling, etc.) actually happen? That’s mainly what I’m interested in.
Also, I’m slightly confused by the first comment. Based on other posts, I thought the ISP output was NvBufferLayout_BlockLinear. Doesn’t that require a conversion to NvBufferLayout_Pitch even if both are YUV420?
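(To make the question concrete, this is roughly what I understand the explicit version of that conversion to look like with nvbuf_utils; a sketch only, and the parameters may not match exactly what createNvBuffer does internally.)

```cpp
#include <cstring>
#include <nvbuf_utils.h>

// Convert a block-linear YUV420 buffer (src_fd) into a new pitch-linear
// YUV420 buffer, returning its dmabuf fd, or -1 on failure.
int toPitchLinear(int src_fd, int width, int height)
{
    NvBufferCreateParams create_params;
    memset(&create_params, 0, sizeof(create_params));
    create_params.width = width;
    create_params.height = height;
    create_params.payloadType = NvBufferPayload_SurfArray;
    create_params.layout = NvBufferLayout_Pitch;
    create_params.colorFormat = NvBufferColorFormat_YUV420;
    create_params.nvbuf_tag = NvBufferTag_CAMERA;

    int dst_fd = -1;
    if (NvBufferCreateEx(&dst_fd, &create_params) != 0)
        return -1;

    NvBufferTransformParams transform_params;
    memset(&transform_params, 0, sizeof(transform_params));
    transform_params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transform_params.transform_filter = NvBufferTransform_Filter_Smart;

    // Layout/format/scaling conversion happens in this call.
    if (NvBufferTransform(src_fd, dst_fd, &transform_params) != 0) {
        NvBufferDestroy(dst_fd);
        return -1;
    }
    return dst_fd;
}
```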
Following up with another question regarding ISP vs. VIC processing. When creating an output stream, it seems possible to specify resolutions that are not listed in the sensor modes. For example, my cameras report only 1280x1080 resolution modes, yet I can call iEglStreamSettings->setResolution() with a value of 640x540 and I get 640x540 images from acquireFrame() + getImage(). In this case, is the resize performed by the ISP or the VIC? If I only want half-resolution images, would it be better to do that by configuring the output stream or by scaling in createNvBuffer after acquireFrame?
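(For reference, this is roughly how we’re requesting the half-resolution stream; a sketch based on the Argus samples, with error handling omitted, and the exact createOutputStreamSettings signature may differ by JetPack version.)

```cpp
#include <Argus/Argus.h>

using namespace Argus;

// Request a 640x540 EGL output stream from a sensor whose only mode is 1280x1080.
OutputStream *createHalfResStream(ICaptureSession *iCaptureSession)
{
    UniqueObj<OutputStreamSettings> settings(
        iCaptureSession->createOutputStreamSettings(STREAM_TYPE_EGL));
    IEGLOutputStreamSettings *iEglStreamSettings =
        interface_cast<IEGLOutputStreamSettings>(settings);
    if (!iEglStreamSettings)
        return NULL;

    iEglStreamSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iEglStreamSettings->setResolution(Size2D<uint32_t>(640, 540)); // half of the 1280x1080 mode

    return iCaptureSession->createOutputStream(settings.get());
}
```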