Some questions about Jetson Nano / Xavier NX and DeepStream

I’m working on multi-stream processing with Jetson. I use OpenCV + GStreamer in Python to decode frames with the hardware decoder, which I believe is the optimal decoding path for OpenCV + GStreamer. However, because a USB Coral TPU is connected to the Jetson, I have to copy the decoded frames from the NVMM buffer into a CPU buffer before passing them to the USB TPU.

I’ve tested the DeepStream samples and found that the decoder in this SDK doesn’t need that copy, because the NVMM buffer is directly accessible for GPU processing.

Q1) Is the NVMM buffer independent of the Jetson’s memory?

Q2) Is the CPU buffer part of the Jetson’s memory?

Q3) How can I access the NVMM buffer for GPU processing without copying to a CPU buffer?

Hi,
We would like to suggest not making duplicate posts with the same questions. Please check the explanation in

Main memory in OpenCV means CPU buffers in BGR format. BGR is not well supported on Jetson platforms due to a limitation of the hardware VIC engine, so you need to perform a memory copy.
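To make the copy path above concrete, here is a minimal Python sketch of the kind of OpenCV + GStreamer pipeline being discussed (the RTSP URL is a placeholder; on newer JetPack releases the decoder element is `nvv4l2decoder` rather than `omxh264dec`):

```python
# Sketch: a GStreamer pipeline for OpenCV that decodes H.264 on the Jetson
# hardware decoder and hands CPU (system-memory) frames to appsink.
# Every frame crosses NVMM -> CPU exactly once, at the nvvidconv step.

def cpu_copy_pipeline(rtsp_url: str, latency_ms: int = 300) -> str:
    """Return a pipeline string for cv2.VideoCapture(..., cv2.CAP_GSTREAMER)."""
    return (
        f"rtspsrc location={rtsp_url} latency={latency_ms} ! "
        "rtph264depay ! h264parse ! "
        "omxh264dec ! "                                     # HW decode into NVMM
        "nvvidconv ! video/x-raw, format=(string)BGRx ! "   # NVMM -> CPU copy
        "videoconvert ! video/x-raw, format=(string)BGR ! " # BGRx -> BGR for OpenCV
        "appsink"
    )

# Usage (requires OpenCV built with GStreamer support):
#   import cv2
#   cap = cv2.VideoCapture(cpu_copy_pipeline("rtsp://camera/stream"),
#                          cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()   # 'frame' is a CPU numpy array in BGR
```

The BGR frames delivered by `appsink` are then plain CPU buffers, which is why they can be fed to the USB Coral TPU, and also why the copy cannot be avoided on this path.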

Thanks,
With OpenCV CUDA support and the nvivafilter element, is it possible to do pre/post-processing? If so, how? How can I pass a custom function to it from OpenCV?

Hi,

It is possible in C but not in Python. In C code, we can register a probe function to get a cv::gpu::GpuMat buffer and use a CUDA filter.

'rtspsrc location=rtsp latency=300 ! '
'rtph264depay ! h264parse ! '
'omxh264dec ! '
'video/x-raw(memory:NVMM), format=(string)NV12 ! '
'nvvidconv ! video/x-raw, format=(string)BGRx ! '
'videoconvert ! '
'appsink')

If I want to use that GStreamer pipeline with the OpenCV C API, how can I use nvivafilter in that pipeline?
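For reference, nvivafilter is typically placed right after the decoder, where it processes frames while they are still in NVMM (GPU) memory; the CUDA processing itself is supplied as a shared library via its `customer-lib-name` property and must be written in C/C++ (the library path below is a placeholder, following the Jetson `nvsample_cudaprocess` example). A sketch of the modified pipeline string, with the same placeholder RTSP location as above:

```python
# Sketch: inserting nvivafilter into the pipeline. GPU-side processing runs
# inside the user-supplied .so before any NVMM -> CPU copy takes place.

def nvivafilter_pipeline(rtsp_url: str, cuda_lib: str) -> str:
    """Return a pipeline string with nvivafilter between decoder and sink."""
    return (
        f"rtspsrc location={rtsp_url} latency=300 ! "
        "rtph264depay ! h264parse ! "
        "omxh264dec ! "
        # Custom CUDA processing on NVMM buffers happens here:
        f"nvivafilter customer-lib-name={cuda_lib} ! "
        "video/x-raw(memory:NVMM), format=(string)RGBA ! "
        "nvvidconv ! video/x-raw, format=(string)BGRx ! "
        "videoconvert ! appsink"
    )
```

Note that this only shows where the element sits; the per-frame GpuMat access still happens inside the C/C++ library loaded by nvivafilter, not in Python.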

we can register a probe function to get a cv::gpu::GpuMat buffer and use a CUDA filter.

If possible share a sample code for this.

Hi,
There is a C sample for accessing GpuMat. Please check: