How to read eglImage in NvMOTFrame

DeepStream: 7.0

I am having problems loading the EGL image in my customized tracker.

I have implemented my own version of the NvMOTStatus NvMOTContext::processFrame(const NvMOTProcessParams *params, NvMOTTrackedObjBatch *pTrackedObjectsBatch) function, and inside it I need to get an image of each detected object.

I found there is a variable called eglImage here https://docs.nvidia.com/metropolis/deepstream/4.0/dev-guide/DeepStream_Development_Guide/baggage/structNvBufSurfaceMappedAddr.html#ac6fd65316f8c42f66be7664455a91962, reached through this structure chain: NvMOTProcessParams → NvMOTFrame → bufferList → mappedAddr → eglImage. It is not the most programmer-friendly variable, as it is of type "void *" without any further explanation of what it contains.

So far, I have tried this approach, suggested to me by a chatbot, using the standard CUDA processing of an eglImage:

NvMOTFrame *frame     = &params->frameList[streamIdx];
EGLImageKHR egl_image = static_cast<EGLImageKHR>((*frame->bufferList)[0].mappedAddr.eglImage);

cudaGraphicsResource_t cuda_resource;
cudaError_t status = cudaGraphicsEGLRegisterImage(&cuda_resource, egl_image, cudaGraphicsRegisterFlagsReadOnly);
if (status != cudaSuccess) {
    std::cout << "ERROR with CUDA, code: " << status << std::endl;
}

but I get "ERROR with CUDA, code: 999", which again can mean basically anything, because "cudaErrorUnknown = 999 → This indicates that an unknown internal error has occurred."

Can you give me a hint on how to get a crop-out of a detected object, or the full frame image on the GPU, as I was trying to do?

thank you!

Please provide complete information as applicable to your setup. Thanks
Hardware Platform (Jetson / GPU)
DeepStream Version
JetPack Version (valid for Jetson only)
TensorRT Version
NVIDIA GPU Driver Version (valid for GPU only)
Issue Type (questions, new requirements, bugs)
How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or for which sample application, and the function description.)

Hardware Platform (Jetson / GPU) GPU
DeepStream Version 7.0
TensorRT Version 8.6.1.6
NVIDIA GPU Driver Version (valid for GPU only) NVIDIA GeForce RTX 3090 Ti
Issue Type (questions, new requirements, bugs) Explained above. There is almost no information on how to read an image stored on the GPU into a CUDA variable in a custom tracker implementation.
How to reproduce the issue? Implement a custom tracker and the NvMOTContext::processFrame function.

I will simplify the question.

How do I convert/extract an NvBufSurfaceParams* into a cudaArray_t?

I want to extract a frame image in my custom tracker, but there is no documentation on how to do so, or whether it is even possible. I have seen a code example that reads an NvBufSurface* into a CPU-side cv::Mat, which is close, but still not what I need.

You can refer to our FAQ "Dump NV12 NvBufSurface into a YUV file", which shows how to get NV12 data from the NvBufSurface.

Thank you, but I asked how to extract into a CUDA array, not a CPU array.

It should be possible, since there exists a closed-source NvDCF tracker written by NVIDIA, which has a ReID model operating on the GPU.

Inside the NvBufSurfaceParams structure there is "void * dataPtr", which I assume holds a pointer to the GPU data. So the question is, how is the data stored? Does dataPtr hold GPU data in NV12 format? Is it a contiguous array?

Yes, dataPtr holds the GPU data in NV12 format.

No, it is not a contiguous array. You need to take the row pitch into account: surface->surfaceList[0].pitch.

void *ydata  = surface->surfaceList[ibatch].dataPtr;                      // Y plane
void *uvdata = (uint8_t *)surface->surfaceList[ibatch].dataPtr
               + surface->surfaceList[ibatch].pitch * input_height;       // interleaved UV plane