I’m trying to build an image processing application.
An outline of my application is below:
1. Capture from a camera device
2. Convert the color format
3. Image processing
4. Encode with the HW encoder
Steps 1 to 3 will be implemented using OpenCV.
What I want to achieve is passing the result of the image processing (e.g. a frame overlaid with bounding boxes) to the HW encoder.
I referred to the jetson_multimedia_api reference, but I am confused.
What is the best solution?
Thank you, DaneLLL and Honey_Patouceul.
I achieved importing images from the V4L2 device.
Next, I’ll try some image processing and encoding with H.264.
I am considering using NvVideoEncoder from the MMAPI.
How can I pass a cv::Mat to the encoder?
Hmm, I tried copying a cv::Mat into an NvBuffer.
Is the code below correct?
I think that if par.num_planes equals 1, it is correct.
But this approach seems inefficient.
cv::Mat src = cv::Mat::zeros(height, width, CV_8UC4);
NvBufferParams par;
ret = NvBufferGetParams(fd, &par); // fd refers to a V4L2_PIX_FMT_ABGR32 buffer
for (unsigned int plane = 0; plane < par.num_planes; plane++) {
    void *vaddr = NULL;
    ret = NvBufferMemMap(fd, plane, NvBufferMem_Write, &vaddr);
    if (ret == 0) {
        for (unsigned int i = 0; i < par.height[plane]; i++) {
            // Copy row by row: the plane pitch and the cv::Mat step may differ
            memcpy((uint8_t *)vaddr + i * par.pitch[plane],
                   src.data + i * src.step,         // was src.data[i * src.step], which reads a single byte
                   src.cols * sizeof(uint8_t) * 4); // cv::Mat has cols, not width
        }
        NvBufferMemSyncForDevice(fd, plane, &vaddr);
        NvBufferMemUnMap(fd, plane, &vaddr);
    }
}
Hi,
The main format in OpenCV is BGR, which is not well supported on Jetson platforms. After the processing, you would need to convert to RGBA, copy to the NvBuffer, convert to YUV420, and then do the encoding, which may not give good performance. There are CUDA filters that can be applied to RGBA buffers. If you can use the CUDA filters in your use case, performance will be better.
I suppose @DaneLLL was referring to nvivafilter. This GStreamer plugin can take a custom library for doing GPU processing on NVMM frames. You can use OpenCV CUDA with it.
You may find some links in this post.