Acquire Frame from cuEGLStream Consumer

Dear sir,

I am using the libargus C++ API to capture images from an IMX477 and save them as JPEG. I need to use cuEGLStreamConsumerAcquireFrame() to get the image from the ISP and pass the NvBuffer of this frame to the NvJPEGEncoder.

In our application:

cuEGLStreamConsumerAcquireFrame(&m_connection, &m_resource, &m_stream, 0xFFFFFFFF);
cuGraphicsResourceGetMappedEglFrame(&m_frame, m_resource, 0, 0);
cudaMemcpyArrayToArray(
    (cudaArray_t)yuvArray.eglFrame.frame.pArray[0], ZERO_OFFSET, ZERO_OFFSET,
    (cudaArray_t)m_frame.frame.pArray[0], ZERO_OFFSET, ZERO_OFFSET, size,
    cudaMemcpyDeviceToDevice);

cudaMemcpyArrayToArray(
    (cudaArray_t)yuvArray.eglFrame.frame.pArray[1], ZERO_OFFSET, ZERO_OFFSET,
    (cudaArray_t)m_frame.frame.pArray[1], ZERO_OFFSET, ZERO_OFFSET, size / 2,
    cudaMemcpyDeviceToDevice);

Both m_frame and yuvArray.eglFrame are of type CUeglFrame. I created an NvBuffer with NvBufferCreate in the NvBufferLayout_BlockLinear format and mapped this buffer to yuvArray.eglFrame, so that I can get the YUV frame from the cuEGLStream consumer.

But it no longer works after we change the format to NvBufferLayout_Pitch (to fix the issue in this post).

How should I modify our code?

Hi,
Please feed an NvBuffer in pitch linear layout to the JPEG encoder. If your data source is in block linear, please create an NvBuffer in pitch linear and call NvBufferTransform() to convert block linear to pitch linear.
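The suggested conversion can be sketched as below. This is an untested sketch against the Jetson Multimedia API (nvbuf_utils.h); the function name createPitchLinearCopy and the assumption that srcFd is the dmabuf fd of a block-linear YUV420 buffer already filled from the EGL frame are illustrative, not from the original post.

```cpp
// Untested sketch: create a pitch-linear NvBuffer and convert a block-linear
// source into it with NvBufferTransform() (runs on the VIC hardware engine).
#include "nvbuf_utils.h"

int createPitchLinearCopy(int srcFd, int width, int height, int *dstFd)
{
    NvBufferCreateParams params = {0};
    params.width = width;
    params.height = height;
    params.layout = NvBufferLayout_Pitch;          // pitch linear, as required
    params.colorFormat = NvBufferColorFormat_YUV420;
    params.payloadType = NvBufferPayload_SurfArray;
    params.nvbuf_tag = NvBufferTag_JPEG;

    if (NvBufferCreateEx(dstFd, &params) != 0)
        return -1;

    NvBufferTransformParams transform = {0};
    transform.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transform.transform_filter = NvBufferTransform_Filter_Smart;

    // Block linear -> pitch linear copy/conversion.
    return NvBufferTransform(srcFd, *dstFd, &transform);
}
```

The destination buffer only needs to be created once and can be reused for every frame.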

Just to confirm, does that mean I need to obtain an EGL frame in block linear from cuGraphicsResourceGetMappedEglFrame (or is there another way to get the source data from the ISP in pitch linear?), then use NvBufferTransform to convert the m_frame above to pitch linear, rather than copying it directly with cudaMemcpyArrayToArray? Could you tell me how to get the NvBuffer fd of m_frame? Can I feed pitch linear NvBufferColorFormat_ARGB32 into the JPEG encoder using color space JCS_RGBA_8888?
Thank you.

Hi,

You can use the method above to get an NvBuffer in block linear. You can then create an NvBuffer in pitch linear, copy the data from the block-linear NvBuffer to the pitch-linear NvBuffer through NvBufferTransform(), and feed the pitch-linear NvBuffer to the JPEG encoder. Please try it and see if it works.
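Putting the steps together, the encode stage might look like the untested sketch below. It assumes blFd and plFd are the dmabuf fds of the block-linear and pitch-linear NvBuffers (created elsewhere), and uses the NvJPEGEncoder class from the jetson_multimedia_api samples; the wrapper function name encodeFrame is illustrative.

```cpp
// Untested sketch: block-linear NvBuffer -> NvBufferTransform() ->
// pitch-linear NvBuffer -> NvJPEGEncoder::encodeFromFd().
#include "NvJpegEncoder.h"
#include "nvbuf_utils.h"

bool encodeFrame(int blFd, int plFd, unsigned char **jpegBuf,
                 unsigned long &jpegSize)
{
    NvBufferTransformParams transform = {0};
    transform.transform_flag = NVBUFFER_TRANSFORM_FILTER;
    transform.transform_filter = NvBufferTransform_Filter_Smart;

    // Convert block linear to pitch linear on the VIC engine.
    if (NvBufferTransform(blFd, plFd, &transform) != 0)
        return false;

    NvJPEGEncoder *enc = NvJPEGEncoder::createJPEGEncoder("jpegenc");
    if (!enc)
        return false;

    // encodeFromFd() takes the pitch-linear dmabuf fd directly; YUV420 input.
    int ret = enc->encodeFromFd(plFd, JCS_YCbCr, jpegBuf, jpegSize);
    delete enc;
    return ret == 0;
}
```

In a real application the encoder instance would be created once and reused, rather than per frame as shown here for brevity.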

Yes, it does work. But we wonder whether this kind of transformation is the most efficient solution; it seems we may need several transformations per frame. What we want is to obtain the CUDA frame from the ISP in order to: 1. perform custom AE/AWB estimation and image processing with OpenCV for a real-time application; 2. save the JPEG image with NvJPEGEncoder. Can you suggest a better pipeline?

Hi,
This method is optimal. If you feed an NvBuffer in block linear to the JPEG encoder, it is internally converted to pitch linear and then encoded. This is identical to calling NvBufferTransform() externally and feeding an NvBuffer in pitch linear to the JPEG encoder.

Sure, that's clear enough. As for this question:

Can I feed the pitch linear NvBufferColorFormat_ARGB32 into JPEGEncoder using color space JCS_RGBA_8888 ?

We hope to use NvJPEGEncoder to save the processed JPEG. Can we feed RGBA? Otherwise we may need to transform the frame to RGBA and transform it back to YUV after image processing before saving it. Thank you.

Hi,
The supported input format is YUV420. Please check section 2.7.3 in the module data sheet:


Thank you.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.