Hi @dusty_nv, @Honey_Patouceul
I used GStreamer + OpenCV to decode an RTSP stream in Python, like this:
gstream_elements = (
    'rtspsrc location=rtsp latency=300 ! '
    'rtph264depay ! h264parse ! omxh264dec ! '
    'nvvidconv ! video/x-raw, format=(string)BGRx ! '
    'videoconvert ! appsink'
)
As you know, this is not a very efficient way to decode, because the decoded frames are copied from the NVMM buffer to a CPU buffer, which makes the Jetson use more memory for decoding.
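For context, here is a minimal sketch of the capture path I mean. The decoder element (`omxh264dec`), the `appsink` options, and the URL are placeholders/assumptions; the point is that `nvvidconv` + `videoconvert` + `appsink` force the frame out of NVMM into CPU memory:

```python
def build_pipeline(url, latency=300):
    """Assemble the GStreamer string passed to cv2.VideoCapture.

    `url` and `latency` are placeholders; requires OpenCV built with
    GStreamer support.
    """
    return (
        f"rtspsrc location={url} latency={latency} ! "
        "rtph264depay ! h264parse ! omxh264dec ! "          # HW decode into NVMM
        "nvvidconv ! video/x-raw, format=(string)BGRx ! "   # NVMM -> CPU copy happens here
        "videoconvert ! video/x-raw, format=(string)BGR ! "
        "appsink drop=true"
    )

# Usage (needs a live stream and cv2 built with GStreamer):
#   cap = cv2.VideoCapture(build_pipeline("rtsp://<camera-ip>/stream"),
#                          cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()   # frame is a CPU-side BGR numpy array
```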
Q1 - I previously tested DeepStream for multi-stream decoding, and it did not use extra CPU memory for decoding, because the SDK uses the NVMM buffer directly for GPU processing. Is there a way to use OpenCV + GStreamer in Python without the CPU buffer copy? I compiled OpenCV 4.1.1 with CUDA support.
If there is no way with OpenCV + GStreamer, is there another solution for decoding the streams without copying to a CPU buffer, especially in Python?
Q2 - If I want to connect a USB Coral TPU for other processing, do I have to bring the decoded frames from the NVMM buffer into a CPU buffer?
Q3 - In this diagram, batching of frames is done on the CPU. I want to know whether, even in the DeepStream solution, the decoded frames are copied from the NVMM buffer to a CPU buffer in order to gather a batch. If so, what is the difference between that solution and the OpenCV + GStreamer one? In both, the decoded frames are brought from the NVMM buffer into a CPU buffer, right?
Q4 - For the scaling and cropping part of the diagram, I can also use the nvvidconv plugin in GStreamer + OpenCV to do these operations. Does nvvidconv use the VIC hardware when used with GStreamer + OpenCV?
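To make Q4 concrete, this is the kind of crop + scale stage I mean. The scale is requested through the output caps; the `left`/`right`/`top`/`bottom` crop properties and their values here are illustrative assumptions, so please verify them with `gst-inspect-1.0 nvvidconv` on the target JetPack:

```python
# Hypothetical nvvidconv crop + scale stage, kept in NVMM memory.
# Property names/values are assumptions -- check gst-inspect-1.0 nvvidconv.
crop_scale_stage = (
    "nvvidconv left=100 right=740 top=60 bottom=420 ! "       # crop region
    "video/x-raw(memory:NVMM), width=(int)320, height=(int)240"  # scale request
)
```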
Q5 - Is it possible to access the NVMM buffer from the CPU?
Finally, I am looking for the best Python solution for multi-stream decoding without the extra copy from the NVMM buffer to a CPU buffer; I want to use it with both the USB Coral TPU and the Jetson GPU.
Q6 - The nvivafilter plugin has pre/post processing. How does it do custom pre/post processing? Is there a function I implement myself? I read that "You would have to output RGBA frames from this plugin." Is this plugin similar to the nvvidconv plugin?
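For reference, my understanding of how nvivafilter is wired up, based on NVIDIA's nvsample_cudaprocess sample (the property names and library name come from that sample, so treat them as assumptions): you build a shared library implementing the processing callbacks, and the plugin calls them per frame while the data stays in NVMM:

```python
# Sketch of an nvivafilter stage. The custom .so (here the stock sample,
# libnvsample_cudaprocess.so) implements the pre/post/CUDA processing
# callbacks that nvivafilter invokes on each NVMM frame.
# Verify properties with `gst-inspect-1.0 nvivafilter`.
nvivafilter_stage = (
    "nvivafilter customer-lib-name=libnvsample_cudaprocess.so "
    "cuda-process=true ! "
    "video/x-raw(memory:NVMM), format=(string)RGBA"  # plugin outputs RGBA
)
```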