I want to crop an image in OpenCV and then pass the result to DeepStream, so that the whole pipeline runs in real time.
The setup is a USB camera connected to a Jetson Nano, with this intended pipeline:
USB camera -> OpenCV -> partial cropping -> DeepStream -> object detection & cropping
How do I pass my OpenCV image processing to DeepStream?
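As background for the cropping step: an OpenCV image in Python is just an array, so a crop is a slice of rows and columns (with cv2 you would write `frame[y0:y1, x0:x1]`). A minimal sketch of that same row/column slicing, using nested lists so it runs without OpenCV installed (the frame contents are made up for illustration):

```python
# Hypothetical 4x6 single-channel "frame" as nested lists (rows of pixel values).
frame = [[10 * r + c for c in range(6)] for r in range(4)]

def crop(img, x0, y0, x1, y1):
    """Return the region of interest [y0:y1) x [x0:x1),
    mirroring OpenCV's frame[y0:y1, x0:x1] slicing."""
    return [row[x0:x1] for row in img[y0:y1]]

roi = crop(frame, 2, 1, 5, 3)  # columns 2..4 of rows 1..2
print(roi)  # [[12, 13, 14], [22, 23, 24]]
```

With a real OpenCV frame the slice is a view into the same buffer, so cropping itself costs nothing; the expensive part is moving the data between OpenCV and DeepStream.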
Could you try this topic first?
How do I get the mapped_ptr and buf_params.pitch values that are passed to this function from the DeepStream SDK Python bindings?
cv::cuda::GpuMat(height, width, CV_8UC4, mapped_ptr, buf_params.pitch);
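For context on the pitch argument in that constructor: a pitched GPU allocation pads each row up to an aligned byte count, so for an RGBA (CV_8UC4) image the byte offset of pixel (x, y) is y * pitch + x * 4, with pitch >= width * 4. A small sketch of that arithmetic (the 256-byte alignment is an assumption for illustration; the real value is driver-specific):

```python
def pixel_offset(x, y, pitch, bytes_per_pixel=4):
    """Byte offset of pixel (x, y) in a pitched image buffer."""
    return y * pitch + x * bytes_per_pixel

def aligned_pitch(width, bytes_per_pixel=4, align=256):
    """Smallest row pitch >= width * bytes_per_pixel that is a
    multiple of `align` (alignment chosen for illustration only)."""
    row_bytes = width * bytes_per_pixel
    return ((row_bytes + align - 1) // align) * align

pitch = aligned_pitch(1280)        # 1280 * 4 = 5120, already a multiple of 256
print(pitch)                       # 5120
print(pixel_offset(10, 2, pitch))  # 2 * 5120 + 10 * 4 = 10280
```

This is why the GpuMat constructor needs the pitch separately from the width: iterating rows with width * 4 instead of the pitch would read the padding bytes and skew every row after the first.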
It should look like this:
cv::cuda::GpuMat d_mat(dsexample->processing_height, dsexample->processing_width, CV_8UC4, eglFrame.frame.pPitch);
Please find the corresponding call in our Python bindings.
The data that corresponds to eglFrame.frame.pPitch is not included in DeepStream's metadata, is it?
You can check this comment for details:
To access the image buffer, you need to register the buffer with EGL first.
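In the Python bindings, that mapping (including the EGL registration step on Jetson) is wrapped by pyds.get_nvds_buf_surface(), which exposes the frame as a NumPy array that OpenCV can consume directly; on Jetson this requires the stream to be in RGBA format at the probe point (e.g. via an nvvideoconvert + capsfilter upstream). Below is a non-runnable sketch of a pad-probe callback; the structure follows the deepstream_python_apps samples, but treat the exact names and the RGBA requirement as assumptions to verify against your SDK version:

```python
import pyds
from gi.repository import Gst

def buffer_probe(pad, info, user_data):
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # Maps the NvBufSurface for this frame (handling the EGL
        # registration internally on Jetson) and returns an RGBA
        # NumPy array backed by the buffer.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        roi = n_frame[100:300, 200:400]  # crop is plain NumPy slicing
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

With this approach you never touch mapped_ptr or the pitch yourself; the binding hands you an already-mapped array, and modifications to it are written back to the surface.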