How to use OpenCV MAT image frames with detectnet?

Hello,

I am using an IP camera with a Jetson Nano, and the OpenCV that ships with JetPack for image capture. Since the TensorRT-optimized network only accepts data in NCHW format, I converted the cv::Mat to the required format after referring to this link: https://github.com/dusty-nv/jetson-inference/issues/129, using the steps below:
uchar3* imgBufferRGB   = NULL;
float4* imgBufferRGBAf = NULL;

cudaMalloc((void**)&imgBufferRGB,   1920 * 1080 * sizeof(uchar3));
cudaMalloc((void**)&imgBufferRGBAf, 1920 * 1080 * sizeof(float4));

// copy the (possibly row-padded) cv::Mat into the packed device buffer
cudaMemcpy2D((void*)imgBufferRGB, 1920 * sizeof(uchar3), (void*)cvImage.data, cvImage.step,
             1920 * sizeof(uchar3), 1080, cudaMemcpyHostToDevice);

// expand uchar3 RGB to float4 RGBA on the GPU
cudaRGBToRGBAf(imgBufferRGB, imgBufferRGBAf, 1920, 1080); // defined in cudaRGB.h

int numBoundingBoxes = maxBoxes;

if (net->Detect((float*)imgBufferRGBAf, 1920, 1080, bbCPU, &numBoundingBoxes, confCPU)) {
    printf("%i bounding boxes detected\n", numBoundingBoxes);
}

After making these changes, I found that there is a memory leak in the detectnet example. Is there a full-length OpenCV image capture -> TensorRT inference example that I can refer to?

If your code is running in a loop, are you making the cudaMalloc() calls every frame? If you aren’t calling cudaFree() for each associated cudaMalloc() call after the memory is done being used, this will be where the memory leak is coming from.

If your image size is the same across all frames, it’s recommended to allocate the memory with cudaMalloc() once at the initialization of the program.
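For example, a minimal sketch of that pattern, reusing the buffer names and the fixed 1920x1080 size from your code (the capturing flag here is just a placeholder for your actual loop condition):

// allocate the device buffers once, before the capture loop starts
uchar3* imgBufferRGB   = NULL;
float4* imgBufferRGBAf = NULL;
cudaMalloc((void**)&imgBufferRGB,   1920 * 1080 * sizeof(uchar3));
cudaMalloc((void**)&imgBufferRGBAf, 1920 * 1080 * sizeof(float4));

while (capturing)
{
    // copy the new frame in, convert it, and run detection here,
    // reusing the same two buffers on every iteration
}

// free the buffers once, after the loop exits
cudaFree(imgBufferRGB);
cudaFree(imgBufferRGBAf);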

Hi,

I have paired each cudaMalloc() with a cudaFree(), and what I notice is that either I get the message:

detectnet:: shutting down

or the camera feed comes in corrupted.

Currently, the memory allocation and freeing are done inside the while loop. I have not yet tried allocating the memory at program initialization; I will try that and update.

I have now tried calling cudaMalloc() during program initialization and cudaFree() after the while loop, but I still notice frame corruption. Another issue is that the image displayed using the RenderOnce() function is not an RGB image. How do we do the correct conversion from an OpenCV Mat to the format that detectnet requires? It would be great if you could point to a simple example.

It would also be helpful if you could explain how to convert the NCHW format back to an OpenCV Mat.

Hi,

Is there any update on this issue? It would be useful to have a documented way of converting the NCHW format to the OpenCV Mat format.

Also, I am facing issues with streaming from the Hikvision camera: frames are coming in corrupted.

Hi nralka2007, the DNN itself uses NCHW format; however, the detectNet code performs pre-processing that converts the image from float4 RGBA into NCHW, so you needn't be concerned with NCHW yourself.

I believe you need to use the cudaRGB8ToRGBA32() function from cudaRGB.h to convert your data into float4 RGBA. You then pass this float4 RGBA image (residing in CUDA device memory) to detectNet::Detect().
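Putting the thread together, here is a minimal end-to-end sketch of the OpenCV capture -> TensorRT inference loop. It assumes a fixed 1920x1080 stream; the runDetection() wrapper and the RTSP URL are hypothetical, and net, bbCPU, confCPU, and maxBoxes are assumed to be set up as in the detectnet-camera sample. It swaps BGR to RGB before upload, since OpenCV captures frames in BGR order (a likely cause of the non-RGB rendering mentioned above). Note the conversion function is named cudaRGB8ToRGBA32() in newer jetson-inference releases and cudaRGBToRGBAf() in older ones.

#include <cstdio>
#include <cuda_runtime.h>
#include <opencv2/opencv.hpp>
#include "detectNet.h"
#include "cudaRGB.h"

void runDetection(detectNet* net, float* bbCPU, float* confCPU, int maxBoxes)
{
    const int width = 1920, height = 1080;

    // hypothetical RTSP URL -- substitute your IP camera's stream address
    cv::VideoCapture cap("rtsp://192.168.1.64/stream1");

    // allocate the device buffers once, before the capture loop
    uchar3* imgBufferRGB   = NULL;
    float4* imgBufferRGBAf = NULL;
    cudaMalloc((void**)&imgBufferRGB,   width * height * sizeof(uchar3));
    cudaMalloc((void**)&imgBufferRGBAf, width * height * sizeof(float4));

    cv::Mat frame, frameRGB;

    while (cap.read(frame))
    {
        // OpenCV delivers BGR -- swap to RGB before uploading,
        // otherwise the detected/rendered image has reversed channels
        cv::cvtColor(frame, frameRGB, cv::COLOR_BGR2RGB);

        // copy the host Mat rows into the tightly packed device buffer
        cudaMemcpy2D((void*)imgBufferRGB, width * sizeof(uchar3),
                     (void*)frameRGB.data, frameRGB.step,
                     width * sizeof(uchar3), height, cudaMemcpyHostToDevice);

        // expand uchar3 RGB to float4 RGBA on the GPU
        cudaRGB8ToRGBA32(imgBufferRGB, imgBufferRGBAf, width, height);

        int numBoundingBoxes = maxBoxes;

        if (net->Detect((float*)imgBufferRGBAf, width, height, bbCPU, &numBoundingBoxes, confCPU))
            printf("%i bounding boxes detected\n", numBoundingBoxes);
    }

    // free the buffers once, after the loop exits
    cudaFree(imgBufferRGB);
    cudaFree(imgBufferRGBAf);
}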