How to share buffers across processes using JetPack 5

Hi there!

My mission is to share DMA-Video-Buffers across processes with zero-copy.

About my system:

cat /etc/nv_tegra_release
# R34 (release), REVISION: 1.1, GCID: 30414990, BOARD: t186ref, EABI: aarch64, DATE: Tue May 17 04:20:55 UTC

Up to r32 I used NvBuffer: I took the fd from it using ExtractFdFromNvBuffer(...), transferred it over a Unix domain socket, and in the receiving process copied the buffer into another NvBuffer using NvBufferTransformEx.

With release 34, it looks like NvBuffer and these functions were deprecated in favor of NvBufSurface.

I could not find any functions that resemble the r32 method.
I tried transferring the fd taken from dmabuf_fd = surface->surfaceList[0].bufferDesc and accessing the buffer in the other process using NvBufSurfaceFromFd, with no success.
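To make the attempt concrete, here is roughly what I tried (pseudocode, error handling omitted; wait_for_fd_from_unix_socket is my own helper, not an NVIDIA API, and I may well be misusing NvBufSurfaceFromFd, which is exactly my question):

```
// Process A (owns the surface):
int dmabuf_fd = surface->surfaceList[0].bufferDesc;
// send dmabuf_fd to process B over a Unix domain socket (SCM_RIGHTS)

// Process B:
int dmabuf_fd = wait_for_fd_from_unix_socket(); // my helper
NvBufSurface *surface = NULL;
NvBufSurfaceFromFd(dmabuf_fd, (void **)&surface); // fails for me across processes
```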

So how can I do DMA buffer sharing across multiple processes with zero-copy?

Any help is greatly appreciated,
Thanks for your time!


JetPack 5 supports NvSCI which provides utilities for streaming data packets between different applications and for inter-process communication (IPC).
Please check it to see if it can meet your requirement.


Thanks for your quick answer!

I checked the documentation (the pdf NVIDIA NVSCI for L4T) and I searched the C++ headers in /usr/src.
It looks like the NvSci API lets me work with NvSciBufObj objects and share them across processes. The memory types (CUDA unified, surface array) look similar to the ones in the NvBufSurface headers.

However, I could not find an interface for wrapping the content of an NvSciBufObj in an NvBufSurface.

Could you please give me a hint on where to look? Or am I on the wrong path?

My goal is to share three 2160p@30 video streams across processes, so zero-copy DMA buffer sharing is a must.

Thanks for your time!


So this thread got flagged with nvbugs. Does this mean that a future release will provide an interface to cast an NvSciBufObj to an NvBufSurface?

Thanks for your time

NvBufferTransformEx() is deprecated on JetPack 5. We are checking with our teams whether there is a solution for replacing it.

Did you solve the problem? I ran into the same issue; could you share the solution? Thank you!

Yes, I was able to resolve the problem using EGLStreams and CUDA on the producer side and the GStreamer element nveglstreamsrc on the consumer side.

It works roughly like this.

On the consumer side (this happens inside the GStreamer element nveglstreamsrc):

EGLNativeFileDescriptorKHR fd = eglGetStreamFileDescriptorKHR(display, stream);
cuEGLStreamConsumerConnect(&conn, stream);
// send fd to the producer process over a Unix domain socket

On the producer side:

int fd = wait_for_fd_from_unix_socket(); // my helper, not an NVIDIA API
EGLStreamKHR stream = eglCreateStreamFromFileDescriptorKHR(display, fd);
// some "simple" CUDA setup calls needed for cuEGLStreamProducerConnect are left out
// wait for an NvBuffer, then wrap it in an EGLImage
EGLImageKHR egl_image = egl_image_from_nvbuffer(); // my helper
// register the image with CUDA and present each frame into the stream

I'm on holiday for two more weeks, so I have no access to the exact code, but I hope it helps anyway.
You can find some information here: DRIVE OS Linux

This solution has zero copies of the NvBuffers, but the whole NvBuffer-to-CUDA-buffer-to-EGLStream round trip feels quite dirty. I would love to see NvSci support for NvBuffer soon; then I'd refactor.