Sharing CUDA Resources Through Interoperability with NvSciBuf and NvSciSync

Originally published at: https://developer.nvidia.com/blog/sharing-cuda-resources-through-interoperability-with-nvscibuf-and-nvscisync/

Figure 1. Various hardware engines on the NVIDIA embedded platform.

There is a growing need among embedded and HPC applications to share resources and control execution for pipelined workflows spanning multiple hardware engines and software applications. The following diagram gives an insight into the number of engines that can be supported on NVIDIA embedded platforms.…

CUDA interoperability with NvSciSync / NvSciBuf was introduced in the CUDA 10.2 release with a focus on usability in safety-critical applications, and we have seen good performance gains from it, as elaborated in the blog. If you have any questions or comments, let us know.
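
To give a feel for the programming model the blog describes, here is a minimal sketch of importing an already-allocated NvSciBufObj into CUDA as external memory and an NvSciSyncObj as an external semaphore. The variables `bufObj`, `syncObj`, and `bufSize` are assumed to come from the NvSci attribute-reconciliation and allocation steps covered in the post, and error checking is omitted for brevity:

```cpp
// Sketch: importing NvSciBuf memory and an NvSciSync object into CUDA.
// Assumes bufObj (NvSciBufObj), syncObj (NvSciSyncObj) and bufSize were
// produced earlier by the NvSci attribute/reconciliation flow.
#include <cuda_runtime.h>
#include <nvscibuf.h>
#include <nvscisync.h>

void importNvSciIntoCuda(NvSciBufObj bufObj, NvSciSyncObj syncObj, size_t bufSize)
{
    // Import the NvSciBuf object as CUDA external memory.
    cudaExternalMemoryHandleDesc memDesc = {};
    memDesc.type = cudaExternalMemoryHandleTypeNvSciBuf;
    memDesc.handle.nvSciBufObject = bufObj;
    memDesc.size = bufSize;

    cudaExternalMemory_t extMem = nullptr;
    cudaImportExternalMemory(&extMem, &memDesc);

    // Map a device pointer onto the imported memory.
    cudaExternalMemoryBufferDesc bufDesc = {};
    bufDesc.offset = 0;
    bufDesc.size = bufSize;

    void *devPtr = nullptr;
    cudaExternalMemoryGetMappedBuffer(&devPtr, extMem, &bufDesc);

    // Import the NvSciSync object as a CUDA external semaphore; it can then be
    // waited on / signaled around kernel launches with
    // cudaWaitExternalSemaphoresAsync / cudaSignalExternalSemaphoresAsync.
    cudaExternalSemaphoreHandleDesc semDesc = {};
    semDesc.type = cudaExternalSemaphoreHandleTypeNvSciSync;
    semDesc.handle.nvSciSyncObj = syncObj;

    cudaExternalSemaphore_t extSem = nullptr;
    cudaImportExternalSemaphore(&extSem, &semDesc);

    // ... launch kernels on devPtr, synchronizing via extSem ...

    cudaDestroyExternalSemaphore(extSem);
    cudaFree(devPtr);
    cudaDestroyExternalMemory(extMem);
}
```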

As stated in this thread, NvSci does not work properly with the dGPUs of the DRIVE AGX Pegasus with DRIVE 10. Since this issue should be fixed in the next release, I was wondering if there is any information on when that will happen.

Hello @rekhamukund,
I want to know: if I allocate an NvSciBuf buffer on Orin, can both the CPU and the GPU access it?
Thank you.

Can we use NvSciSync / NvSciBuf to implement CUDA and OpenGL interop? Can we use it in a headless environment like an AWS EC2 instance?

Please advise.

OpenGL is not a supported UMD (User Mode Driver) for NvSciBuf/NvSciSync, so CUDA-OpenGL interop cannot be achieved this way. Alternatives are the CUDA-OpenGL graphics interop (CUDA Runtime API :: CUDA Toolkit Documentation) or the EGL interop (CUDA Driver API :: CUDA Toolkit Documentation).
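
For reference, a minimal sketch of the standard CUDA-OpenGL graphics interop path mentioned above, using `cudaGraphicsGLRegisterBuffer` to register a GL buffer object and map it for CUDA access. The buffer ID `vbo` is assumed to have been created on the OpenGL side, with a valid GL context current:

```cpp
// Sketch of CUDA-OpenGL interop via the CUDA graphics interop API
// (not NvSciBuf/NvSciSync). Assumes an OpenGL context is current and `vbo`
// is an existing GL buffer object; error checking is omitted for brevity.
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

void mapGlBufferIntoCuda(unsigned int vbo)
{
    // Register the GL buffer with CUDA once, after it has been created.
    cudaGraphicsResource_t resource = nullptr;
    cudaGraphicsGLRegisterBuffer(&resource, vbo, cudaGraphicsRegisterFlagsNone);

    // Map the resource whenever CUDA needs to access it.
    cudaGraphicsMapResources(1, &resource, 0);

    void *devPtr = nullptr;
    size_t numBytes = 0;
    cudaGraphicsResourceGetMappedPointer(&devPtr, &numBytes, resource);

    // ... launch CUDA kernels that read/write devPtr ...

    // Unmap before OpenGL uses the buffer again.
    cudaGraphicsUnmapResources(1, &resource, 0);
    cudaGraphicsUnregisterResource(resource);
}
```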

CPU access to an NvSciBuf-allocated buffer can be achieved with the NvSciBufObjGetCpuPtr() API, as described in https://docs.nvidia.com/drive/drive_os_5.1.6.1L/nvvib_docs/index.html#page/DRIVE_OS_Linux_SDK_Development_Guide/Graphics/nvsci_nvscibuf.html#wwpID0E0OK0HA
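
As a rough illustration of that path, the sketch below allocates a buffer from a reconciled attribute list and obtains a CPU pointer. It assumes the attribute list requested NvSciBufGeneralAttrKey_NeedCpuAccess as true during specification (see the DRIVE OS documentation linked above for the full attribute setup); names here are placeholders:

```cpp
// Sketch: CPU access to an NvSciBuf-allocated buffer.
// Assumes `reconciledList` is a reconciled attribute list in which
// NvSciBufGeneralAttrKey_NeedCpuAccess was requested as true.
#include <nvscibuf.h>

NvSciError readBufferFromCpu(NvSciBufAttrList reconciledList)
{
    NvSciBufObj bufObj = NULL;
    NvSciError err = NvSciBufObjAlloc(reconciledList, &bufObj);
    if (err != NvSciError_Success) {
        return err;
    }

    // Obtain a CPU-visible pointer to the buffer; the same buffer can also be
    // imported into CUDA as external memory for GPU access.
    void *cpuPtr = NULL;
    err = NvSciBufObjGetCpuPtr(bufObj, &cpuPtr);
    if (err == NvSciError_Success) {
        // ... read/write the buffer through cpuPtr ...
    }

    NvSciBufObjFree(bufObj);
    return err;
}
```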