Using Jetson MMAPI sample 13, I can receive the frames on time. However, I need to copy each frame into a custom buffer so I can keep it for further processing. I have used cuGraphicsResourceGetMappedEglFrame for this, which works to some extent, but the main problem is TIME. I am using a high frame rate (50 fps), so a new image arrives every 20 ms. Any copy of the data into another buffer adds time to each iteration of the consumer loop, which delays the next frame and leads to frame loss.
while (m_framesRemaining--)
{
    for (uint32_t i = 0; i < m_streams.size(); i++)
    {
        /* Acquire a frame */
        UniqueObj<Frame> frame(iFrameConsumers[i]->acquireFrame());
        IFrame *iFrame = interface_cast<IFrame>(frame);
        if (!iFrame)
            break;

        /* Get the IImageNativeBuffer extension interface */
        NV::IImageNativeBuffer *iNativeBuffer =
            interface_cast<NV::IImageNativeBuffer>(iFrame->getImage());
        if (!iNativeBuffer)
            ORIGINATE_ERROR("IImageNativeBuffer not supported by Image.");

        /* If we don't already have a buffer, create one from this image.
           Otherwise, just blit to our buffer. */
        if (!m_dmabufs[i])
        {
            batch_surf[i] = NULL;
            m_dmabufs[i] = iNativeBuffer->createNvBuffer(iEglOutputStreams[i]->getResolution(),
                                                         NVBUF_COLOR_FORMAT_YUV420,
                                                         NVBUF_LAYOUT_BLOCK_LINEAR);
            if (!m_dmabufs[i])
                CONSUMER_PRINT("\tFailed to create NvBuffer\n");
            if (-1 == NvBufSurfaceFromFd(m_dmabufs[i], (void**)(&batch_surf[i])))
                ORIGINATE_ERROR("Cannot get NvBufSurface from fd");
        }
        else if (iNativeBuffer->copyToNvBuffer(m_dmabufs[i]) != STATUS_OK)
        {
            ORIGINATE_ERROR("Failed to copy frame to NvBuffer.");
        }

        ///-----------------------> Here I try to copy the frame
        ///------------------------
    }
}
Therefore my specific question is: within the frame-delivery loop above, how can I copy (or keep) the frames quickly enough that no frames are lost?
Which of the objects should be shared with that auxiliary thread (for the copy)?
Should I share the direct output of iFrameConsumers[i]->acquireFrame() with another thread?
Or IFrame *iFrame, or even convert to a CUeglFrame eglFrame inside the loop and let the other thread take care of it? (See the sketch below for the kind of handoff I have in mind.)
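For illustration only, here is the kind of handoff I am picturing; it is a minimal sketch, not working code. FrameQueue is a made-up name, and pushing dmabuf fds like this would only make sense if each fd came from a pool of NvBuffers rather than the single reused m_dmabufs[i] above:

#include <condition_variable>
#include <mutex>
#include <queue>

/* Hand-off queue: the consumer loop only pushes a dmabuf fd, and a worker
   thread pops it and does the slow copy into my own buffers. */
struct FrameQueue
{
    std::queue<int> fds;
    std::mutex m;
    std::condition_variable cv;

    void push(int fd)
    {
        {
            std::lock_guard<std::mutex> lk(m);
            fds.push(fd);
        }
        cv.notify_one();
    }

    int pop()   /* blocks until a frame is available */
    {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return !fds.empty(); });
        int fd = fds.front();
        fds.pop();
        return fd;
    }
};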
Well, in the end I could copy the image data from the eglFrame using the code below. I am not sure this is the best way, but it seems to work so far:
EGLStream::NV::IImageNativeBuffer *iNativeBuffer =
    Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(iFrame->getImage());

if (!m_dmabufs[i])
{
    m_dmabufs[i] = iNativeBuffer->createNvBuffer(
        iEglOutputStreams[i]->getResolution(),
        NVBUF_COLOR_FORMAT_YUV420,
        NVBUF_LAYOUT_PITCH);
    if (!m_dmabufs[i])
        std::cout << "Failed to create NvBuffer for stream " << i << "\n";
}
else if (iNativeBuffer->copyToNvBuffer(m_dmabufs[i]) != Argus::STATUS_OK)
{
    std::cout << "Failed to copy frame to NvBuffer. " << i << "\n";
}

//// ========================================= Convert to eglFrame:
NvBufSurface *nvbuf_surf = 0;
int ret = NvBufSurfaceFromFd(m_dmabufs[i], (void**)(&nvbuf_surf));
if (ret != 0)
    std::cout << "NvBufSurfaceFromFd failed for stream " << i << "\n";
ret = NvBufSurfaceMapEglImage(nvbuf_surf, -1);
if (ret != 0)
    std::cout << "NvBufSurfaceMapEglImage failed for stream " << i << "\n";
NvBufSurfaceParams *sfparams = &nvbuf_surf->surfaceList[0];
NvBufSurfacePlaneParams *nvspp = &nvbuf_surf->surfaceList[0].planeParams;

CUgraphicsResource pResource;
CUresult status = cuGraphicsEGLRegisterImage(&pResource, sfparams->mappedAddr.eglImage,
                                             CU_GRAPHICS_MAP_RESOURCE_FLAGS_NONE);
if (status != CUDA_SUCCESS)
    std::cout << "cuGraphicsEGLRegisterImage failed: " << status << "\n";
CUeglFrame eglFrame;
status = cuGraphicsResourceGetMappedEglFrame(&eglFrame, pResource, 0, 0);
if (status != CUDA_SUCCESS)
    std::cout << "cuGraphicsResourceGetMappedEglFrame failed: " << status << "\n";
I then copied the image data from eglFrame.frame.pPitch[0], eglFrame.frame.pPitch[1] and eglFrame.frame.pPitch[2] with cudaMemcpy to my Y, U and V arrays respectively.
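Roughly, the per-plane copy looks like the sketch below. This is a minimal illustration, not my exact code: copyPlanes() is a made-up helper name, the destinations dst[] are tightly packed buffers allocated elsewhere (e.g. with cudaMallocHost), and I assume the buffer is pitch-linear YUV420 so planes 0/1/2 are Y/U/V.

#include <cuda.h>
#include <cudaEGL.h>
#include <cuda_runtime.h>
#include "nvbufsurface.h"

/* Copy the Y, U and V planes of a mapped pitch-linear CUeglFrame into tightly
   packed destination buffers, using the pitch reported by NvBufSurface. */
static bool copyPlanes(const CUeglFrame &eglFrame,
                       const NvBufSurfacePlaneParams *pp,
                       void *dst[3])
{
    for (uint32_t p = 0; p < pp->num_planes && p < 3; p++)
    {
        size_t rowBytes = (size_t)pp->width[p] * pp->bytesPerPix[p];
        /* Source is a pitched device pointer, so copy row by row and pack the
           destination tightly. */
        cudaError_t err = cudaMemcpy2D(dst[p], rowBytes,
                                       eglFrame.frame.pPitch[p], pp->pitch[p],
                                       rowBytes, pp->height[p],
                                       cudaMemcpyDeviceToHost);
        if (err != cudaSuccess)
            return false;
    }
    return true;
}

So after the cuGraphicsResourceGetMappedEglFrame() call above, something like copyPlanes(eglFrame, nvspp, dst) does the actual copy.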
@DaneLLL @kayccc
OK, it seems something is wrong with the method I mentioned. After running the code in a program for a while (especially if I destroy the camera-related objects and create them again), I get a segmentation fault when copying the data from eglFrame.frame.pPitch. It seems the original frame data pointer becomes invalid or disappears somehow after a while!