How to efficiently put video frames into the memory of a Jetson Nano?

There is a Jetson Nano with JetPack 4.6.1, and I would like to get frames from an RTSP camera, put the frames into its memory, and then process them.

To make the kernels run faster, I am considering using unified memory or pinned memory. Which is better in the case of a Jetson Nano? There, a single memory module is shared by the GPU and the CPU, and it also has to be mentioned that Maxwell GPUs handle unified memory in a way where all memory pages are migrated at kernel launch.
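
For reference, this is a minimal sketch of the two options I am comparing; the kernel, frame size, and launch configuration are made up for illustration:

```cpp
// Build with: nvcc -o mem_sketch mem_sketch.cu
#include <cstdio>
#include <cuda_runtime.h>

// Toy kernel standing in for the real frame processing: inverts every byte.
__global__ void invert(unsigned char *frame, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) frame[i] = 255 - frame[i];
}

int main() {
    // Required before the context is created when mapped (zero-copy) memory is used.
    cudaSetDeviceFlags(cudaDeviceMapHost);

    const size_t frameBytes = 1920 * 1080;  // hypothetical grayscale frame
    const int threads = 256;
    const int blocks = (int)((frameBytes + threads - 1) / threads);

    // Option 1: unified memory. One pointer for CPU and GPU; on Maxwell
    // the pages are migrated to the GPU at kernel launch.
    unsigned char *unified = NULL;
    cudaMallocManaged((void **)&unified, frameBytes);
    // ... fill `unified` with a decoded frame on the CPU ...
    invert<<<blocks, threads>>>(unified, frameBytes);
    cudaDeviceSynchronize();  // must sync before the CPU touches the buffer again

    // Option 2: pinned, mapped ("zero-copy") memory. The buffer stays in
    // system RAM and the GPU accesses it through a mapped device pointer.
    unsigned char *host = NULL, *dev = NULL;
    cudaHostAlloc((void **)&host, frameBytes, cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&dev, host, 0);
    invert<<<blocks, threads>>>(dev, frameBytes);
    cudaDeviceSynchronize();

    cudaFree(unified);
    cudaFreeHost(host);
    printf("done\n");
    return 0;
}
```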

I also searched for the differences between unified and pinned memory. One of the NVIDIA pages says:
One key difference between the two is that with zero-copy allocations the physical location of memory is pinned in CPU system memory such that a program may have fast or slow access to it depending on where it is being accessed from. Unified Memory, on the other hand, decouples memory and execution spaces so that all data accesses are fast.
Why would the location from which the memory is accessed have an effect on how fast a program can access pinned memory?

Any help is appreciated.

Hi,
For optimal performance we would suggest using jetson_multimedia_api, but it is low-level and you would need to implement RTSP depaying to extract the H.264 stream and then feed it into the hardware decoder. The other solution is to use GStreamer; you can run a pipeline like:

$ gst-launch-1.0 rtspsrc location=rtsp://<camera URI> ! rtph264depay ! h264parse ! nvv4l2decoder ! nvoverlaysink

If you can run the above command and see the video preview, you can then try to construct this pipeline:

$ gst-launch-1.0 rtspsrc location=rtsp://<camera URI> ! rtph264depay ! h264parse ! nvv4l2decoder ! appsink

And get the NvBuffer in appsink for further processing. Here is a sample app:
How to run RTP Camera in deepstream on Nano - #29 by DaneLLL
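
If it helps, below is a minimal sketch of such an appsink pipeline. The RTSP URI comes from the command line, and for brevity it uses nvvidconv to copy the decoded frame out of NVMM memory so appsink can map it on the CPU; the zero-copy NvBuffer path is what the linked sample demonstrates instead. Properties like max-buffers=4 are illustrative choices, not requirements:

```cpp
// Build with: g++ appsink_rtsp.cpp $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-app-1.0)
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

// Called for every decoded frame (appsink must have emit-signals=true).
static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer /*user_data*/) {
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (!sample)
        return GST_FLOW_ERROR;

    GstBuffer *buffer = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buffer, &map, GST_MAP_READ)) {
        // map.data / map.size hold the NV12 frame; process or copy it here.
        g_print("frame: %zu bytes\n", map.size);
        gst_buffer_unmap(buffer, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    if (argc < 2) {
        g_printerr("usage: %s rtsp://<camera URI>\n", argv[0]);
        return 1;
    }

    // nvvidconv converts out of NVMM memory so the buffer is CPU-mappable.
    GError *err = NULL;
    gchar *desc = g_strdup_printf(
        "rtspsrc location=%s ! rtph264depay ! h264parse ! nvv4l2decoder ! "
        "nvvidconv ! video/x-raw,format=NV12 ! "
        "appsink name=sink emit-signals=true max-buffers=4 drop=true",
        argv[1]);
    GstElement *pipeline = gst_parse_launch(desc, &err);
    g_free(desc);
    if (!pipeline) {
        g_printerr("pipeline error: %s\n", err->message);
        return 1;
    }

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample), NULL);
    gst_object_unref(sink);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));  // runs until interrupted

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```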

