I am using the libargus C++ API to capture images from an IMX477 and save them as JPG. I need to use cuEGLStreamConsumerAcquireFrame to get the image from the ISP and pass the NvBuffer of this frame to the NvJPEGEncoder.
But I hit the same problem discussed in this post: I get wrong black-level output from the ISP. I store the original YUV data from the ISP EGL frame in a YUV NvBuffer with NvBufferColorFormat_NV12_ER and send that buffer to NvJPEGEncoder. At the same time I have another RGBA NvBuffer with NvBufferColorFormat_ARGB32, transformed from the YUV NvBuffer, which I save to PNG with OpenCV. The strange thing is that the black level of the saved PNG is correct while the JPG is wrong.
I wonder what may cause this problem and how I can fix it. I still hope to use the NvJPEG encoder, since saving a 4032x3040 PNG with OpenCV on the CPU is really slow (~1000 ms).
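For reference, here is a minimal sketch of the capture-to-JPEG path described above, based on the JetPack 4.x Multimedia API. The connection handle, dmabuf fd, and quality value are assumptions; error checking and the plane copy are elided.

```cpp
#include <cuda.h>
#include <cudaEGL.h>
#include "nvbuf_utils.h"
#include "NvJpegEncoder.h"

// Hypothetical helper: acquire one ISP frame from the EGLStream and encode
// the NvBuffer behind yuvFd to JPEG. g_conn and yuvFd are assumed to be
// set up elsewhere (stream connection, NvBufferCreate).
static void encodeOneFrame(CUeglStreamConnection g_conn, int yuvFd)
{
    CUgraphicsResource resource = nullptr;
    CUstream stream = nullptr;
    CUeglFrame eglFrame;

    // Pull the ISP output frame off the EGLStream (timeout effectively infinite).
    cuEGLStreamConsumerAcquireFrame(&g_conn, &resource, &stream, 0xFFFFFFFF);
    cuGraphicsResourceGetMappedEglFrame(&eglFrame, resource, 0, 0);

    // ... copy the eglFrame planes into the NvBuffer behind yuvFd here ...

    // Encode the dmabuf directly; JCS_YCbCr is the usual color-space argument.
    NvJPEGEncoder *enc = NvJPEGEncoder::createJPEGEncoder("jpegenc");
    unsigned char *jpeg = nullptr;
    unsigned long jpegSize = 0;
    enc->encodeFromFd(yuvFd, JCS_YCbCr, &jpeg, jpegSize, 95);

    // ... write (jpeg, jpegSize) to disk ...

    cuEGLStreamConsumerReleaseFrame(&g_conn, resource, &stream);
    delete enc;
}
```

encodeFromFd() works directly on the dmabuf fd, so no extra CPU copy is needed between the ISP output and the hardware JPEG encoder.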
Yeah, that’s what I mean. Is that a known issue, and is it possible to fix it in JetPack 4.6.2? Or is the libargus API in JetPack 4.6.4 compatible with JetPack 4.6.2? We are trying to get our product into mass production and want to avoid wide differences in the API.
Hi,
For information, do you call createNvBuffer() to get the NvBuffer and then call encodeFromFd()? If yes, do you set block linear or pitch linear in createNvBuffer()?
Sure. I create the YUV buffer with NvBufferLayout_BlockLinear so that I can use cudaMemcpyArrayToArray to copy it from the CUeglFrame. The RGBA buffer I create with NvBufferLayout_Pitch and fill via NvBufferTransform from the YUV buffer, because I need the RGBA buffer to back a cv::cuda::GpuMat. Is there any better suggestion?
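To make the setup concrete, here is a hedged sketch of the two buffers described above. WIDTH and HEIGHT are placeholder constants, and error handling is omitted.

```cpp
#include "nvbuf_utils.h"

// Assumed sensor output size for illustration only.
static const int WIDTH  = 4032;
static const int HEIGHT = 3040;

int yuvFd = -1, rgbaFd = -1;

// Block-linear NV12 buffer: its planes map as CUDA arrays, so
// cudaMemcpyArrayToArray can copy from the acquired CUeglFrame.
NvBufferCreate(&yuvFd, WIDTH, HEIGHT, NvBufferLayout_BlockLinear,
               NvBufferColorFormat_NV12_ER);

// Pitch-linear ARGB buffer: its mapped pointer can back a cv::cuda::GpuMat.
NvBufferCreate(&rgbaFd, WIDTH, HEIGHT, NvBufferLayout_Pitch,
               NvBufferColorFormat_ARGB32);

// Convert YUV -> RGBA in hardware (color conversion plus layout change).
NvBufferTransformParams params = {0};
params.transform_flag   = NVBUFFER_TRANSFORM_FILTER;
params.transform_filter = NvBufferTransform_Filter_Smart;
NvBufferTransform(yuvFd, rgbaFd, &params);
```

NvBufferTransform runs on the VIC, so the layout and format conversion does not cost CPU time.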
Thank you for your update. I replaced /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so.1.0.0 with the prebuilt libnvbuf_utils.so from the link, but I see no difference in the JPEGs encoded by encodeFromFd(). What else should I do? Should I change the buffer layout or color format?
We can locally reproduce this issue on our side, and we will be checking it.
By wrong black levels, do you mean that images produced by NvJPEGEncoder look more whitish and foggy than those produced by OpenCV? That is what I observe here.
Yes, that’s what I mean. We are using a fisheye lens with the IMX477. We captured images in a dark room and saved them with NvJPEGEncoder. We found that the image is not totally black in the dark region (outside the lens circle); the pixel values are always larger than 16 in each channel. We are planning for the release. May I ask when I can get a newer patch to fix this?
Both m_frame and yuvArray.eglFrame are of type CUeglFrame. I created an NvBuffer with NvBufferCreate using NvBufferLayout_BlockLinear and mapped this buffer to yuvArray.eglFrame, so that I can get the YUV frame from the cuEGLStreamConsumer. But it doesn’t work after we change the layout to NvBufferLayout_Pitch.
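The mapping step above can be sketched as follows. The display handle, fd, and helper name are assumptions; error checking is omitted. This may also explain the pitch-layout failure: the frame type changes with the layout, so array-based copies stop working.

```cpp
#include <cudaEGL.h>
#include "nvbuf_utils.h"

// Hypothetical helper: wrap an NvBuffer dmabuf as a CUeglFrame so the
// EGLStream frame can be copied into it with CUDA.
CUeglFrame mapNvBufferToEglFrame(EGLDisplay eglDisplay, int yuvFd,
                                 CUgraphicsResource *outRes)
{
    // Wrap the dmabuf as an EGLImage, then register it with CUDA.
    EGLImageKHR image = NvEGLImageFromFd(eglDisplay, yuvFd);
    cuGraphicsEGLRegisterImage(outRes, image,
                               CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);

    CUeglFrame frame;
    cuGraphicsResourceGetMappedEglFrame(&frame, *outRes, 0, 0);

    // A block-linear buffer maps with frameType == CU_EGL_FRAME_TYPE_ARRAY,
    // so plane copies use CUarray-based calls such as cudaMemcpyArrayToArray.
    // A pitch-linear buffer maps as CU_EGL_FRAME_TYPE_PITCH and needs
    // pointer-based copies (e.g. cuMemcpy2D) instead.
    return frame;
}
```

Checking frame.frameType after mapping should confirm which copy path applies for a given layout.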