Continuing the discussion from Fundamental Question about gstreamer(deepstream):
You said the following earlier.
I’d like to understand the difference you pointed out there.
You mentioned that there is an additional buffer copy in GStreamer that does not exist in the DeepStream SDK.
Could you explain this difference in more detail?
In the pipeline, you will get frame data in BGR format in appsink:
... ! video/x-raw, format=BGR ! appsink drop=1
The hardware converter does not support 24-bit BGR. The workaround to get BGR in appsink is to convert to RGBA or BGRx with the hardware converter, copy the data from the DMA buffer to a CPU buffer, and then use the software converter to produce BGR. These steps introduce an additional memory copy.
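As a sketch of that copy path on Jetson (element names `nvvidconv` and `videoconvert` are the usual choices for this workaround; the exact source caps depend on your camera sensor, so treat this as an assumption, not a drop-in pipeline):

```
nvv4l2camerasrc device=/dev/video0 !
  video/x-raw(memory:NVMM) !
  nvvidconv !
  video/x-raw, format=BGRx !
  videoconvert !
  video/x-raw, format=BGR !
  appsink drop=1
```

Here `nvvidconv` performs the hardware conversion to BGRx and hands the buffer from NVMM (DMA) memory to system memory, and `videoconvert` repacks to BGR in software; the extra memory copy happens at that hand-off.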
In the DeepStream SDK, the data stays in the DMA buffer and is converted to RGBA there. The GPU can access the DMA buffer directly, so there is no need to copy the data from the DMA buffer to a CPU buffer.
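For comparison, a DeepStream-style pipeline keeps the buffer in `memory:NVMM` caps end to end. This is only a sketch following the pattern in the DeepStream documentation; the caps after the source depend on your camera, and the downstream elements are elided:

```
nvv4l2camerasrc device=/dev/video0 !
  video/x-raw(memory:NVMM) !
  nvvideoconvert !
  video/x-raw(memory:NVMM), format=RGBA !
  ...
```

Because every element negotiates `memory:NVMM` caps, the frame never leaves the DMA buffer, and downstream DeepStream elements (nvstreammux, nvinfer, and so on) can consume it directly on the GPU.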
If you can port the model to run on TensorRT, we would suggest using the DeepStream SDK.
It’s a surprise to me!
I’d like to use DeepStream/GPU to avoid the memory copy from the DMA buffer to the CPU buffer.
To do this, I guess the pipeline string has to be changed.
I believe I need to replace videoconvert with a DeepStream element.
This is the current pipeline I use:
nvv4l2camerasrc device=/dev/video0 !
video/x-raw, format=BGR !
What exactly should I change, and how?
I am reading the DeepStream SDK documentation.
I’d like to get a hint…
Thank you very much!