• Hardware Platform (Jetson / GPU): Jetson Orin NX 16GB
• JetPack Version (valid for Jetson only): 5.1.2
Hello Nvidia,
I am trying to share an NvBufSurface between two processes.
I could not find any example of how to do this, apart from a small piece of information in the linked thread.
I created a simple POC that launches two processes, following the example above.
I launch the ‘producer’ first; it allocates a couple of buffers. I then take the FDs of those buffers and try to import them in the consumer process.
But I got:
ImportNvMMBufferSurfaceArray: Invalid dmabuf_fd 64
NvBufSurfaceImportImpl: ImportNvMMBufferSurfaceArray failed
ImportNvMMBufferSurfaceArray: Invalid dmabuf_fd 65
NvBufSurfaceImportImpl: ImportNvMMBufferSurfaceArray failed
Am I missing something?
What is also strange to me: if I run two producers, both of them allocate buffers with the same FDs - is that correct behavior? I expected the FDs to be unique.
Thanks.
P.S. At some point I did get it working - at least NvBufSurfaceImport returned 0 - but then something happened and it stopped working. I rolled back the changes made between the working and non-working states, but I can't get it working again.
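For reference, a simplified sketch of what the producer side does (the create parameters below are illustrative placeholders, not my exact ones):

#include <nvbufsurface.h>
#include <cstdint>
#include <cstdio>

int main() {
    // Illustrative parameters only; resolution, format and layout are placeholders.
    NvBufSurfaceCreateParams params = {};
    params.gpuId = 0;
    params.width = 1920;
    params.height = 1080;
    params.colorFormat = NVBUF_COLOR_FORMAT_NV12;
    params.layout = NVBUF_LAYOUT_PITCH;
    params.memType = NVBUF_MEM_SURFACE_ARRAY;  // dmabuf-backed memory on Jetson

    NvBufSurface *surf = nullptr;
    if (NvBufSurfaceCreate(&surf, 2, &params) != 0) {  // a couple of buffers
        std::fprintf(stderr, "NvBufSurfaceCreate failed\n");
        return 1;
    }

    for (uint32_t i = 0; i < surf->batchSize; ++i) {
        // On Jetson, bufferDesc carries the dmabuf FD of each buffer in the batch.
        int fd = static_cast<int>(surf->surfaceList[i].bufferDesc);
        std::printf("buffer %u dmabuf fd %d\n", i, fd);
        // The FD then has to reach the consumer process somehow (not shown here).
    }

    // ... keep the producer alive while the consumer uses the buffers ...
    return 0;
}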
I can’t help you with sharing an NvBufSurface between processes, but I can offer you a different approach which we use a lot in our applications.
We use GstD with GstInterpipes to enable pipeline and buffer sharing between applications. The GStreamer daemon has all the pipelines running in the same process, allowing them to share buffers between pipelines using interpipesink and interpipesrc. Different processes can send requests to GstD to create pipelines using one of the GstD clients, and there are special thread-safe calls available for cases where multiple processes need to manipulate the same pipeline.
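Just to illustrate the mechanism (a minimal sketch with placeholder elements, not our production setup): the hand-off is an interpipesink paired with an interpipesrc whose listen-to property names it, with both pipelines living in the same process - in our case inside GstD:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    GError *err = NULL;
    // Producer pipeline publishes buffers under the name "cam0".
    GstElement *producer = gst_parse_launch(
        "videotestsrc is-live=true ! interpipesink name=cam0 sync=false", &err);
    // Consumer pipeline pulls those buffers by listening to "cam0".
    GstElement *consumer = gst_parse_launch(
        "interpipesrc listen-to=cam0 is-live=true ! fakesink sync=false", &err);
    if (!producer || !consumer) {
        g_printerr("failed to build pipelines: %s\n", err ? err->message : "unknown");
        return 1;
    }

    gst_element_set_state(producer, GST_STATE_PLAYING);
    gst_element_set_state(consumer, GST_STATE_PLAYING);
    g_main_loop_run(g_main_loop_new(NULL, FALSE));
    return 0;
}

With GstD you would create the same two pipeline descriptions through one of its clients instead of in your own code, and any process can then start, stop, or reconfigure them by name.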
Could you share the whole media pipeline? Please refer to the DeepStream sample deepstream-ipc-test in DS 7.0 for Jetson; this sample shares an NvBufSurface between two processes on Jetson.
I am sorry, the tags for the topic were inherited from the linked thread.
I don’t use DeepStream; I use the MMAPI directly, including the Argus components. The reason I am considering sharing buffers between processes is that we have issues with nvargus-daemon: it sometimes crashes due to streaming problems in a multi-camera environment. I have tried using it both as a built-in library and as a standalone service, read a lot of posts related to the issue, and applied all the patches shared here (nvscf/nvfusacap/nvargus). It is more stable now, but it still crashes occasionally and pulls the entire system down, so it seems there is no solid solution yet. The idea is to launch a “main” process plus N “camera” worker processes that use the Argus library built in. The “main” process allocates buffers and shares them with the workers, and each worker fills them with data from its camera. If a worker fails or crashes, I can simply restart that single camera while keeping the whole system stable, losing only about one second of frames from one camera.
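One cross-process piece this design needs, outside of the NvBufSurface API itself, is getting the dmabuf FD from the “main” process into each worker. A minimal sketch of that, using standard POSIX FD passing over a Unix-domain socket with SCM_RIGHTS (socket setup and message framing are simplified; this is not an NVIDIA API):

#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

// Send one FD over a connected AF_UNIX socket. The kernel duplicates the FD
// into the receiving process; the raw integer alone would be meaningless there.
static int send_fd(int sock, int fd) {
    char dummy = 'x';
    struct iovec iov = { &dummy, 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
    struct msghdr msg = {};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));

    return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
}

// Receive the duplicated FD in the worker; this is the FD the worker would
// then import on the NvBufSurface side.
static int recv_fd(int sock) {
    char dummy;
    struct iovec iov = { &dummy, 1 };
    union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } ctrl;
    struct msghdr msg = {};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl.buf;
    msg.msg_controllen = sizeof(ctrl.buf);

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (cmsg == NULL || cmsg->cmsg_type != SCM_RIGHTS)
        return -1;

    int fd;
    memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
    return fd;
}

int main() {
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0)
        return 1;
    // In the real design sv[0] stays in "main" and sv[1] ends up in the worker
    // after fork/exec; here we just round-trip an FD as a smoke test.
    send_fd(sv[0], 0 /* stdin, standing in for a dmabuf FD */);
    int dup_fd = recv_fd(sv[1]);
    return dup_fd >= 0 ? 0 : 1;
}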