Nvidia-l4t-nvsci question

Hardware Platform: AGX 32GB
Software Version:
nvidia@tegra-ubuntu:~$ head -n 1 /etc/nv_tegra_release

R35 (release), REVISION: 1.0, GCID: 31250864, BOARD: t186ref, EABI: aarch64, DATE: Thu Aug 11 03:40:29 UTC 2022

Where is the nvidia-l4t-nvsci sample code? Can I use the NvSci API in the jetson_multimedia_api samples?
I found the libraries libnvscibuf.so, libnvscistream.so, and libnvsciipc.so, but no sample code.
nvidia@tegra-ubuntu:/usr/src/jetson_multimedia_api/samples$ sudo dpkg --get-selections |grep nvidia
[sudo] password for nvidia:
nvidia-l4t-3d-core install
nvidia-l4t-apt-source install
nvidia-l4t-bootloader install
nvidia-l4t-camera install
nvidia-l4t-configs install
nvidia-l4t-core install
nvidia-l4t-cuda install
nvidia-l4t-display-kernel install
nvidia-l4t-firmware install
nvidia-l4t-gbm install
nvidia-l4t-gputools install
nvidia-l4t-graphics-demos install
nvidia-l4t-gstreamer install
nvidia-l4t-init install
nvidia-l4t-initrd install
nvidia-l4t-jetson-io install
nvidia-l4t-jetson-multimedia-api install
nvidia-l4t-jetsonpower-gui-tools install
nvidia-l4t-kernel install
nvidia-l4t-kernel-dtbs install
nvidia-l4t-kernel-headers install
nvidia-l4t-libvulkan install
nvidia-l4t-multimedia install
nvidia-l4t-multimedia-utils install
nvidia-l4t-nvfancontrol install
nvidia-l4t-nvpmodel install
nvidia-l4t-nvpmodel-gui-tools install
nvidia-l4t-nvsci install
nvidia-l4t-oem-config install
nvidia-l4t-openwfd install
nvidia-l4t-optee install
nvidia-l4t-pva install
nvidia-l4t-tools install
nvidia-l4t-vulkan-sc install
nvidia-l4t-vulkan-sc-dev install
nvidia-l4t-vulkan-sc-samples install
nvidia-l4t-vulkan-sc-sdk install
nvidia-l4t-wayland install
nvidia-l4t-weston install
nvidia-l4t-x11 install
nvidia-l4t-xusb-firmware install

nvidia@tegra-ubuntu:~$ sudo find / -name nvsci
[sudo] password for nvidia:
find: ‘/run/user/1000/gvfs’: Permission denied
find: ‘/run/user/124/gvfs’: Permission denied

Please download the package:
Jetson Linux 35.1 | NVIDIA Developer

L4T Driver Package (BSP) Sources
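If it helps, the NvSci samples ship as a nested tarball inside the BSP sources. A sketch, assuming the usual Jetson Linux 35.1 package names and directory layout (verify against the actual download):

```shell
# Download "L4T Driver Package (BSP) Sources" for Jetson Linux 35.1,
# then unpack the outer archive:
tar -xjf public_sources.tbz2
cd Linux_for_Tegra/source/public
# The NvSci samples are a nested tarball inside the sources:
tar -xjf nvsci_samples_src.tbz2
ls nvsci_samples_src
```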

Thank you! I got the source code in nvsci_samples_src.tbz2,
but I do not know how to send and receive NV12 camera data. Is there a camera-related sample?

Argus uses NvBufSurface on JetPack 5, and passing NvBufSurface between processes is not supported on 5.0.2. We are checking whether it can be supported in a future release.

I mean, I want to use NvSciIpc's inter-process mode to transfer camera UYVY data.
How does the NvSciBuf in rawstream_producer.c relate to the dmabuf_fd (hardware buffer) created by NvBufferCreateEx?

In nvsci_samples_src/rawstream/rawstream_producer.c I modified the producer to send at 30 fps:

while (currFrame < totalFrames) {
    fprintf(stderr, "Producer starting frame %d in buffer %d\n", currFrame, currBuffer);
    ...
}

I tested two NvSciBuf sizes: 128 KiB and 1920x1080x1.5 bytes.

//uint64_t rawsize = (128 * 1024);
uint64_t rawsize = 1920 * 1080 * 3 / 2;  /* 1920x1080x1.5 */
bufKeyValue[4].key = NvSciBufRawBufferAttrKey_Size;
bufKeyValue[4].value = &rawsize;
bufKeyValue[4].len = sizeof(rawsize);

Test 1: 128x1024 bytes, ~2% CPU.
Test 2: 1920x1080x1.5 bytes, ~22% CPU.
Why does it consume so much CPU?

We will check whether passing camera UYVY data can be supported in a future release. Currently NvSciStream does not support this use case.

Will check the issue about CPU usage.


Test 2's buffer is more than 22x the size of test 1's, so the CPU usage is correspondingly higher.


Does nvsci_samples_src/rawstream/rawstream_producer.c need to send the buffer for every frame?
In a previous topic, you told me "No need to keep sending the fd for every frame update".

NvSciStream does not support NvBuffer or NvBufSurface, so the conditions are different. Please note this.

nvsci_samples_src/rawstream/rawstream_producer.c uses the setupCudaBuffer() function to import an NvSciBuf into CUDA.
In 12_camera_v4l2_cuda/camera_v4l2_cuda.cpp, how can I import an NvSciBuf into an NvBufSurf dmabuf_fd?


This is not supported. NvBufSurf cannot be passed across processes on JetPack 5.0.2.

OK, thank you!
The nvsci_samples_src samples "event_sample_app" and "rawstream", and the cuda-samples sample "4_CUDA_Libraries/cudaNvSci", are all CUDA-related. Can plain malloc memory be shared between processes with NvSciBuf and NvSciSync? How can I import malloc'd memory into an NvSciBufObj, and how do I create and set the NvSciBuf attributes?

Regarding the setupCudaSync function in the nvsci_samples_src/rawstream sample code:
if I use plain malloc memory for inter-process sharing with NvSciBuf and NvSciSync, without CUDA, how do I implement the equivalent of setupCudaSync?

The nvscibuf.h header describes NvSciBufType_RawBuffer as intended for integration with CUDA kernels that directly access the buffer. So if I do not use CUDA, and use plain malloc memory for inter-process sharing with NvSciBuf and NvSciSync, can I only use NvSciBufType_General? Could you please provide a relevant example?

I notice there is only NvSciBufObjGetCpuPtr() to get the CPU virtual address (VA) of the read/write buffer referenced by an NvSciBufObj, but there is no API to map malloc'd CPU memory into an NvSciBufObj.
Is an NvSciBufObj, like an NvBuffer, a hardware buffer?
I use JetPack (L4T 35.1.0), not DriveOS. How can I share plain CPU memory between processes as NvSciBufType_General with NvSciBuf? I could not find an NvSciBufType_General sample; could you provide a reference?


It works the other way around.

You can allocate a buffer with NvSciBufObjAlloc(),
then get the GPU pointer with cudaImportExternalMemory() and the CPU pointer with NvSciBufObjGetCpuPtr().
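A minimal sketch of that flow, assuming `obj` is an already reconciled-and-allocated NvSciBufObj of `size` bytes (function name is illustrative, error checks omitted; this mirrors what setupCudaBuffer does in the rawstream sample):

```c
#include <string.h>
#include <stdint.h>
#include <cuda_runtime.h>
#include <nvscibuf.h>

/* Sketch: map one NvSciBufObj for both GPU and CPU access. */
static void map_buffer(NvSciBufObj obj, uint64_t size)
{
    /* GPU side: import the NvSciBufObj into CUDA as external memory. */
    cudaExternalMemoryHandleDesc handleDesc;
    memset(&handleDesc, 0, sizeof(handleDesc));
    handleDesc.type = cudaExternalMemoryHandleTypeNvSciBuf;
    handleDesc.handle.nvSciBufObject = obj;
    handleDesc.size = size;

    cudaExternalMemory_t extMem;
    cudaImportExternalMemory(&extMem, &handleDesc);

    cudaExternalMemoryBufferDesc bufDesc;
    memset(&bufDesc, 0, sizeof(bufDesc));
    bufDesc.offset = 0;
    bufDesc.size = size;

    void *devPtr = NULL;
    cudaExternalMemoryGetMappedBuffer(&devPtr, extMem, &bufDesc);

    /* CPU side: requires NvSciBufGeneralAttrKey_NeedCpuAccess to have
     * been set in the reconciled attribute list. */
    void *cpuPtr = NULL;
    NvSciBufObjGetCpuPtr(obj, &cpuPtr);
}
```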


What about this question? I do not use CUDA and do not use DriveOS.


Sorry for missing that.

Please note that there is an L4T document available at the link below:

For the NvSciBufType, yes, you can use the general type with the NvSciBufGeneralAttrKey_NeedCpuAccess flag.
You can find details in the “Memory Buffer Basics” section.

For the next question, please create the buffer with NvSci first,
then get the buffer pointer for the corresponding usage.
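A sketch of that "create with NvSci first, then map for CPU use" flow, based on the public nvscibuf.h API (size/alignment values are illustrative, error handling is omitted, and a real inter-process setup would also export/import the object over NvSciIpc):

```c
#include <stdbool.h>
#include <stdint.h>
#include <nvscibuf.h>

/* Sketch: allocate a CPU-accessible raw buffer with NvSciBuf,
 * then map it for CPU reads/writes. */
static void *alloc_cpu_buffer(uint64_t size)
{
    NvSciBufModule module;
    NvSciBufAttrList unreconciled, reconciled, conflicts;
    NvSciBufObj obj;

    NvSciBufModuleOpen(&module);
    NvSciBufAttrListCreate(module, &unreconciled);

    /* NvSciBufRawBufferAttrKey_Size requires the raw-buffer type;
     * NvSciBufGeneralAttrKey_NeedCpuAccess applies to every type. */
    NvSciBufType bufType = NvSciBufType_RawBuffer;
    bool needCpu = true;
    uint64_t align = 4096;
    NvSciBufAttrKeyValuePair attrs[] = {
        { NvSciBufGeneralAttrKey_Types,         &bufType, sizeof(bufType) },
        { NvSciBufGeneralAttrKey_NeedCpuAccess, &needCpu, sizeof(needCpu) },
        { NvSciBufRawBufferAttrKey_Size,        &size,    sizeof(size)    },
        { NvSciBufRawBufferAttrKey_Align,       &align,   sizeof(align)   },
    };
    NvSciBufAttrListSetAttrs(unreconciled, attrs,
                             sizeof(attrs) / sizeof(attrs[0]));

    NvSciBufAttrListReconcile(&unreconciled, 1, &reconciled, &conflicts);
    NvSciBufObjAlloc(reconciled, &obj);

    void *cpuPtr = NULL;
    NvSciBufObjGetCpuPtr(obj, &cpuPtr);  /* CPU VA of the buffer */
    return cpuPtr;
}
```

The key point from the reply above: the buffer is allocated by NvSciBuf, and the CPU pointer is obtained from it; malloc'd memory cannot be wrapped into an NvSciBufObj.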

