Hi @DaneLLL @kayccc @Honey_Patouceul
I have some questions; if possible, please guide me.
a) Using cv2.VideoCapture + GStreamer, the decoded frames are copied from the NVMM buffer to a CPU buffer, so there is an extra copy for each decoded frame, right?
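For reference, the capture path I mean in (a) might look like the sketch below. As I understand it, nvvidconv copies the decoded frame out of NVMM memory into CPU memory, and videoconvert repacks it into the BGR layout OpenCV expects. The URI and caps here are placeholders, not a tested setup:

```python
# Hypothetical sketch of the cv2.VideoCapture + GStreamer path from (a).
# nvvidconv copies the decoded frame from NVMM (hardware) memory into a
# CPU buffer; videoconvert then produces the BGR layout OpenCV expects.
def make_capture_pipeline(uri="rtsp://example/stream", width=1280, height=720):
    return (
        f"rtspsrc location={uri} ! rtph264depay ! h264parse ! nvv4l2decoder ! "
        f"nvvidconv ! video/x-raw,format=BGRx,width={width},height={height} ! "
        "videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
    )

# Usage (on a Jetson with OpenCV built with GStreamer support):
# cap = cv2.VideoCapture(make_capture_pipeline(), cv2.CAP_GSTREAMER)
```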
b) The Jetson Nano uses shared physical memory, so CPU and GPU memory are the same, right? Then why do we need separate GPU memory? Isn't everything in CPU memory also in GPU memory?
c) If I use cv2.VideoCapture + GStreamer with the H.264 HW decoder, the decoded frames are copied from the NVMM buffer to a CPU buffer. In this case, does one decoded frame occupy 2 times its size out of the whole memory?
d) In the same cv2.VideoCapture + GStreamer + H.264 HW decoder case, if I then want to use the GPU for pre/post-processing, do we need another copy from CPU memory to GPU memory? Does one decoded frame then occupy 3 times its size out of the whole memory?
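To make the counting in (c) and (d) concrete, here is a back-of-the-envelope sketch of the per-frame footprint, assuming 8-bit uncompressed frames; the copy counts (2 for NVMM + CPU, 3 for NVMM + CPU + GPU) are my assumption from the questions above:

```python
def frame_bytes(width, height, channels=3):
    """Size of one uncompressed 8-bit frame in bytes."""
    return width * height * channels

def total_footprint(width, height, copies):
    """Memory held for one decoded frame across `copies` buffers,
    e.g. copies=2 for NVMM + CPU, copies=3 for NVMM + CPU + GPU."""
    return copies * frame_bytes(width, height)

# A 1920x1080 BGR frame is ~6.2 MB, so three simultaneous copies
# would hold ~18.7 MB for that single frame.
```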
e) We know the disadvantage of GStreamer + OpenCV is the copy from GPU memory to CPU memory, and I agree with this. But this link uses a pure GStreamer pipeline with Python code. In that case the decoded frames go to GPU memory without being copied into CPU memory; however, at the line I highlighted (line 123), the decoded frames are brought into numpy format, which has to live in CPU memory. So in that case we also copy GPU memory to CPU memory, right? In terms of performance, are the two approaches the same? Is the copy in OpenCV + GStreamer different from the copy in that link? Which one is optimal?
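My understanding of the appsink path in the linked example is roughly the sketch below: the buffer is already in CPU memory after nvvidconv, the extraction makes one more CPU-side copy, and wrapping those bytes as numpy is itself zero-copy. The helper and dimensions are illustrative assumptions; the GStreamer calls are shown as comments since they need a Jetson to run:

```python
import numpy as np

def bytes_to_frame(data, width, height, channels=4):
    """Wrap raw BGRx bytes in a numpy array without an extra copy.

    In the linked pipeline the bytes would come from the appsink sample,
    roughly:
        sample = appsink.emit("pull-sample")
        buf = sample.get_buffer()
        data = buf.extract_dup(0, buf.get_size())  # one CPU-side copy
    np.frombuffer itself is zero-copy, so the cost is the buffer
    extraction plus the earlier NVMM->CPU transfer done by nvvidconv,
    which is the same transfer the OpenCV appsink path pays.
    """
    return np.frombuffer(data, dtype=np.uint8).reshape(height, width, channels)

# frame = bytes_to_frame(data, 1280, 720)  # -> (720, 1280, 4) uint8 array
```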
f) If I want to access the decoded frames without converting them to numpy format, I mean doing the preprocessing directly in GPU memory, how can I do this? Or do I have to bring the frames into numpy format first and then do the preprocessing on the GPU?
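Regarding (f), one approach I have seen mentioned for keeping frames in GPU memory on Jetson is nvivafilter, which runs a user-supplied CUDA library directly on the NVMM buffer before any CPU copy. I am not sure it fits my case; the sketch below is an assumption, and the .so path is a placeholder for a compiled CUDA processing library:

```python
def make_gpu_pipeline(uri="rtsp://example/stream",
                      cuda_lib="/path/to/libnvsample_cudaprocess.so"):
    # nvivafilter invokes the CUDA library named by customer-lib-name
    # directly on the NVMM buffer, so pre-processing happens in GPU
    # memory before any NVMM->CPU copy. cuda_lib is a placeholder.
    return (
        f"rtspsrc location={uri} ! rtph264depay ! h264parse ! nvv4l2decoder ! "
        f"nvivafilter cuda-process=true customer-lib-name={cuda_lib} ! "
        "video/x-raw(memory:NVMM),format=RGBA ! nvoverlaysink"
    )
```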