Argus captures encoded to JPEG have wrong black levels

Dear sir,

I am using the libargus C++ API to capture images from an IMX477 sensor and save them as JPEG. I use cuEGLStreamConsumerAcquireFrame to get the image from the ISP and pass the frame's NvBuffer to NvJPEGEncoder.

But I ran into the same issue you discussed in this post: I get wrong black levels in the ISP output. I store the original YUV data from the ISP EGL frame in a YUV NvBuffer with NvBufferColorFormat_NV12_ER and send that buffer to NvJPEGEncoder. At the same time I have another RGBA NvBuffer with NvBufferColorFormat_ARGB32, transformed from the YUV NvBuffer, which I then save to PNG with OpenCV. The strange thing is that the black level of the saved PNG is correct while the JPEG is wrong.

What may cause this problem, and how can I fix it? I would still like to use NvJPEGEncoder, since saving a 4032x3040 PNG with OpenCV on the CPU is really slow (~1000 ms).
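For reference, the encode path in question looks roughly like this (a minimal sketch; m_dmabuf and out_file are placeholders I am assuming, based on the MMAPI NvJPEGEncoder interface):

#include "NvJpegEncoder.h"

// Encode the NV12 NvBuffer (dmabuf fd) to JPEG on the hardware encoder.
NvJPEGEncoder *jpegenc = NvJPEGEncoder::createJPEGEncoder("jpegenc");
unsigned long out_size = 10 * 1024 * 1024;            // generous upper bound
unsigned char *out_buf = new unsigned char[out_size];
if (jpegenc->encodeFromFd(m_dmabuf, JCS_YCbCr, &out_buf, out_size, 95) == 0)
    out_file.write((char *)out_buf, out_size);        // out_size now holds the JPEG size
delete[] out_buf;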

Hi,
To confirm: the compressed JPEG is in the limited range [16, 235] and you would like it to be the full range [0, 255]. Is this understanding correct?
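(For reference, the luma mapping between those two ranges is the standard BT.601 expansion; a minimal illustrative sketch with clamping, noting that the chroma planes use [16, 240] and a 224 divisor:)

// Expand one limited-range luma sample to full range (illustrative only).
static inline unsigned char limited_to_full_luma(unsigned char y)
{
    int v = ((int)y - 16) * 255 / 219;   // [16, 235] -> [0, 255]
    if (v < 0)   v = 0;
    if (v > 255) v = 255;
    return (unsigned char)v;
}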

Also, please upgrade to the latest JetPack 4.6.4 or 5.1.3 if you are using a previous version.

Yes, that's what I mean. Is this a known issue, and is it possible to fix it in JetPack 4.6.2? Or is the libargus API in JetPack 4.6.4 compatible with JetPack 4.6.2? We are trying to get our product into mass production and want to avoid large API differences.

Thank you!

Hi,
For more information: do you call createNvBuffer() to get the NvBuffer and then call encodeFromFd()? If yes, do you set block linear or pitch linear in createNvBuffer()?

Sure. I created the YUV buffer with NvBufferLayout_BlockLinear so that I can use cudaMemcpyArrayToArray to copy it from the CUeglFrame. The RGBA buffer I created with NvBufferLayout_Pitch and fill from the YUV buffer with NvBufferTransform, because I need the RGBA buffer to get a cv::cuda::GpuMat. Is there any better suggestion?
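The transform step looks roughly like this (a sketch; yuv_fd and rgba_fd stand for the two dmabuf fds):

#include "nvbuf_utils.h"
#include <cstdio>
#include <cstring>

// Convert the block-linear NV12 buffer into the pitch-linear ARGB32 buffer.
NvBufferTransformParams params;
memset(&params, 0, sizeof(params));
params.transform_flag = NVBUFFER_TRANSFORM_FILTER;
params.transform_filter = NvBufferTransform_Filter_Smart;
if (NvBufferTransform(yuv_fd, rgba_fd, &params) != 0)
    printf("NvBufferTransform failed\n");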

Hi,
On JetPack 4.6.2, there is a known issue in NvBufferTransform(). Please apply this prebuilt library:

Jetson/L4T/r32.7.x patches - eLinux.org
[MMAPI] 07_video_convert sample result not match as expected

Thank you for your update. I tried replacing /usr/lib/aarch64-linux-gnu/tegra/libnvbuf_utils.so.1.0.0 with the prebuilt libnvbuf_utils.so from the link, but I see no difference in the JPEGs saved via encodeFromFd(). What else should I do? Should I change the buffer layout or color format?

Hi,
Please check if you can share a patch to the 09 sample, so that we can set up a Xavier NX developer kit + JetPack 4.6.4 to replicate the issue.

09_jpeg_sample.zip (9.3 KB)
Sure. I would recommend compiling and running the modified source code directly. I made several changes:

  1. Commented out the preview-consumer part.
  2. Uninstalled the original OpenCV and linked against a recompiled OpenCV 4.5 with CUDA, installed in /usr/local/.
  3. Added NvBufferTransform to get the cv::cuda::GpuMat and save the image in PNG format (see the sketch below).
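
My rough reconstruction of that last step, wrapping the pitch-linear ARGB32 NvBuffer as a cv::cuda::GpuMat via EGL (names such as egl_display and rgba_fd are assumptions, not the exact patch):

// Register the RGBA NvBuffer as an EGLImage, map it into CUDA, view it as a GpuMat.
EGLImageKHR egl_image = NvEGLImageFromFd(egl_display, rgba_fd);
CUgraphicsResource res;
cuGraphicsEGLRegisterImage(&res, egl_image, CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
CUeglFrame frame;
cuGraphicsResourceGetMappedEglFrame(&frame, res, 0, 0);
cv::cuda::GpuMat gpu_mat(frame.height, frame.width, CV_8UC4,
                         frame.frame.pPitch[0], frame.pitch);

cv::Mat cpu_mat;
gpu_mat.download(cpu_mat);                 // PNG encoding still runs on the CPU
cv::imwrite("frame.png", cpu_mat);

cuGraphicsUnregisterResource(res);
NvDestroyEGLImage(egl_display, egl_image);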

Since I am using the IMX477, I run a command like:

./camera_jpeg_capture --img-res 4032x3040 --fps 5 -v

Thanks for your patience.

Hi,

We can locally reproduce this issue on our side, and we will be checking it.
By saying wrong black levels, do you mean the images produced by NvJPEGEncoder are more whitish and foggy than those produced by OpenCV? That is what I can tell from my observation.

Yes, that's what I mean. We are using a fisheye lens with the IMX477. We captured images in a dark room and saved them with NvJPEGEncoder, and found that the image is not totally black in the 'dark region' (outside the lens circle): the pixel values are always larger than 16 in each channel. We are planning for the release. May I ask when I can get a newer patch to fix this?
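The check we ran is roughly this (a sketch; dark_roi is an assumed cv::Rect covering the region outside the fisheye circle):

#include <opencv2/opencv.hpp>
#include <cstdio>

// Confirm that the region outside the image circle never goes below 16.
cv::Mat img = cv::imread("capture.jpg");
double min_val, max_val;
cv::minMaxLoc(img(dark_roi).clone().reshape(1), &min_val, &max_val);
printf("dark region: min=%.0f max=%.0f\n", min_val, max_val);  // min stays >= 16 on the bad JPEGs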

Hi,

Please modify the code like this:

            m_dmabuf = iNativeBuffer->createNvBuffer(iEglOutputStream->getResolution(),
                                                     NvBufferColorFormat_YUV420_ER, // or NvBufferColorFormat_NV12_ER
                                                     NvBufferLayout_Pitch);

Thanks for your reply. Should I try this under JetPack 4.6.4?

Yes. We tested on 4.6.4 and it should work.

It works on 4.6.4. But in our application I need to copy the frame out of the CUeglFrame:

// Acquire the latest frame from the EGLStream and map it as a CUeglFrame.
cuEGLStreamConsumerAcquireFrame(&m_connection, &m_resource, &m_stream, 0xFFFFFFFF);
cuGraphicsResourceGetMappedEglFrame(&m_frame, m_resource, 0, 0);

// Copy the Y plane (plane 0) of the NV12 frame.
cudaMemcpyArrayToArray(
    (cudaArray_t)yuvArray.eglFrame.frame.pArray[0], ZERO_OFFSET, ZERO_OFFSET,
    (cudaArray_t)m_frame.frame.pArray[0], ZERO_OFFSET, ZERO_OFFSET, size,
    cudaMemcpyDeviceToDevice);

// Copy the interleaved UV plane (plane 1), which is half the size.
cudaMemcpyArrayToArray(
    (cudaArray_t)yuvArray.eglFrame.frame.pArray[1], ZERO_OFFSET, ZERO_OFFSET,
    (cudaArray_t)m_frame.frame.pArray[1], ZERO_OFFSET, ZERO_OFFSET, size / 2,
    cudaMemcpyDeviceToDevice);

Both m_frame and yuvArray.eglFrame are of type CUeglFrame. I created an NvBuffer with NvBufferCreate using NvBufferLayout_BlockLinear and mapped this buffer to yuvArray.eglFrame, so that I can get the YUV frame from the cuEGLStreamConsumer. But this stops working after we change the layout to NvBufferLayout_Pitch.
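From the CUDA documentation, a pitch-linear buffer maps as CU_EGL_FRAME_TYPE_PITCH, so the planes come back as device pointers (frame.pPitch[i]) rather than CUDA arrays, and cudaMemcpyArrayToArray no longer applies. I guess the equivalent would be a per-plane cudaMemcpy2D, roughly like this:

if (m_frame.frameType == CU_EGL_FRAME_TYPE_PITCH)
{
    // Y plane: one byte per pixel, full height.
    cudaMemcpy2D(yuvArray.eglFrame.frame.pPitch[0], yuvArray.eglFrame.pitch,
                 m_frame.frame.pPitch[0], m_frame.pitch,
                 m_frame.width, m_frame.height,
                 cudaMemcpyDeviceToDevice);
    // Interleaved UV plane: same row width in bytes, half the height.
    cudaMemcpy2D(yuvArray.eglFrame.frame.pPitch[1], yuvArray.eglFrame.pitch,
                 m_frame.frame.pPitch[1], m_frame.pitch,
                 m_frame.width, m_frame.height / 2,
                 cudaMemcpyDeviceToDevice);
}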

How should I modify our code?

Please file a new topic for that issue.
