How to use Argus to obtain the nvbuf_fd of camera data

I use the Argus SDK to obtain camera YUV data. My current approach is to get an EGLStream::Image* image = i_frames_[sensor_id]->getImage(); and then use VPI to convert that image from YUV to RGB. To do this, I have to copy the EGLStream::Image into device memory to produce an nvbuf_fd, and then use the VPI interface to convert it to an RGB image. I found that copying the EGLStream::Image into device memory to generate an nvbuf_fd takes about 20 ms and consumes a lot of CPU.
How can I use the Argus SDK to obtain an nvbuf_fd for the camera YUV data directly? Is there an example?
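
For context, my acquire path looks roughly like this (a minimal sketch; processOneFrame and the consumer argument are illustrative, not my actual code):

#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>

// Acquire one frame from an already-connected consumer and get its image.
bool processOneFrame(EGLStream::IFrameConsumer* i_frame_consumer)
{
    // The frame is released automatically when the UniqueObj goes out of scope.
    Argus::UniqueObj<EGLStream::Frame> frame(i_frame_consumer->acquireFrame());
    EGLStream::IFrame* i_frame = Argus::interface_cast<EGLStream::IFrame>(frame);
    if (!i_frame)
        return false;

    // This EGLStream::Image is the starting point of my question: I want an
    // nvbuf_fd for this data without an extra copy through the CPU.
    EGLStream::Image* image = i_frame->getImage();
    return image != nullptr;
}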

We suppose this is a YUV sensor, which cannot use the Argus API.

You may check 12_v4l2_camera_cuda or 18_v4l2_camera_cuda_rgb for YUV sensors.
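
Those samples are built on the standard V4L2 capture flow. A minimal sketch of that flow, assuming a /dev/video0 node, 1920x1080 UYVY and MMAP buffers (the samples additionally import the captured buffers for CUDA processing; error checks are omitted for brevity):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

bool captureOneYuvFrame()
{
    int fd = open("/dev/video0", O_RDWR);            // assumed device node
    if (fd < 0)
        return false;

    v4l2_format fmt{};
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width = 1920;                        // assumed resolution
    fmt.fmt.pix.height = 1080;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;     // a common YUV sensor format
    ioctl(fd, VIDIOC_S_FMT, &fmt);

    v4l2_requestbuffers req{};
    req.count = 4;
    req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;                   // the samples use DMABUF/NvBuffer here
    ioctl(fd, VIDIOC_REQBUFS, &req);

    // Queue every buffer, then start streaming.
    for (unsigned i = 0; i < req.count; ++i) {
        v4l2_buffer qbuf{};
        qbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        qbuf.memory = V4L2_MEMORY_MMAP;
        qbuf.index = i;
        ioctl(fd, VIDIOC_QBUF, &qbuf);
    }
    v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(fd, VIDIOC_STREAMON, &type);

    // Dequeue one filled YUV buffer; this is where the samples hand the data
    // to CUDA for the YUV -> RGB conversion, then re-queue the buffer.
    v4l2_buffer buf{};
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    int ret = ioctl(fd, VIDIOC_DQBUF, &buf);

    ioctl(fd, VIDIOC_QBUF, &buf);
    close(fd);
    return ret == 0;
}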

nvidia@nvidia-desktop:/usr/src/jetson_multimedia_api/samples$ ll
total 104
drwxr-xr-x 24 root root 4096 Apr 22 2025 ./
drwxr-xr-x 7 root root 4096 Apr 22 2025 ../
drwxr-xr-x 2 root root 4096 Apr 22 2025 00_video_decode/
drwxr-xr-x 2 root root 4096 Apr 22 2025 01_video_encode/
drwxr-xr-x 2 root root 4096 Apr 22 2025 02_video_dec_cuda/
drwxr-xr-x 2 root root 4096 Apr 22 2025 03_video_cuda_enc/
drwxr-xr-x 2 root root 4096 Apr 22 2025 04_video_dec_trt/
drwxr-xr-x 2 root root 4096 Apr 22 2025 05_jpeg_encode/
drwxr-xr-x 2 root root 4096 Apr 22 2025 06_jpeg_decode/
drwxr-xr-x 2 root root 4096 Apr 22 2025 07_video_convert/
drwxr-xr-x 2 root root 4096 Apr 22 2025 08_video_dec_drm/
drwxr-xr-x 2 root root 4096 Apr 22 2025 09_argus_camera_jpeg/
drwxr-xr-x 2 root root 4096 Apr 22 2025 10_argus_camera_recording/
drwxr-xr-x 2 root root 4096 Apr 22 2025 11_video_osd/
drwxr-xr-x 2 root root 4096 Apr 22 2025 12_v4l2_camera_cuda/
drwxr-xr-x 2 root root 4096 Apr 22 2025 13_argus_multi_camera/
drwxr-xr-x 2 root root 4096 Apr 22 2025 14_multivideo_decode/
drwxr-xr-x 2 root root 4096 Apr 22 2025 15_multivideo_encode/
drwxr-xr-x 2 root root 4096 Apr 22 2025 16_multivideo_transcode/
drwxr-xr-x 2 root root 4096 Apr 22 2025 17_frontend/
drwxr-xr-x 2 root root 4096 Apr 22 2025 18_v4l2_camera_cuda_rgb/
drwxr-xr-x 2 root root 4096 Apr 22 2025 backend/
drwxr-xr-x 4 root root 4096 Apr 22 2025 common/
-rw-r--r-- 1 root root 4218 Mar 4 2025 Rules.mk
drwxr-xr-x 6 root root 4096 Apr 22 2025 unittest_samples/

Now I can use the Argus API to get an NV12 image, but its type is EGLStream::Image.
I want to use the Argus API to get the NV12 image as an nvbuf_fd, not as an EGLStream::Image.

Hi,
You can call createNvBuffer() and copyToNvBuffer() to get an NvBufSurface. Please refer to the 09 sample:

/usr/src/jetson_multimedia_api/samples/09_argus_camera_jpeg
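
The samples create the destination buffer once and only call copyToNvBuffer() on later frames, so the createNvBuffer() cost is not paid per frame. A minimal sketch of that pattern (the function name, the dmabuf reference and the size parameter are illustrative, and a recent JetPack with NvBufSurface formats is assumed, as in the code below):

#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>
#include "nvbufsurface.h"

// Create the destination dmabuf once, then reuse it for every frame.
// 'dmabuf' must be initialized to -1 by the caller; 'size' is the stream resolution.
bool copyFrameToDmabuf(EGLStream::Image* image, int& dmabuf,
                       const Argus::Size2D<uint32_t>& size)
{
    EGLStream::NV::IImageNativeBuffer* i_native_buffer =
        Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(image);
    if (!i_native_buffer)
        return false;

    if (dmabuf == -1) {
        // One-time allocation of a pitch-linear NV12 NvBuffer.
        dmabuf = i_native_buffer->createNvBuffer(size, NVBUF_COLOR_FORMAT_NV12,
                                                 NVBUF_LAYOUT_PITCH);
        if (dmabuf == -1)
            return false;
    }

    // Per-frame: the VIC engine copies/converts the captured image into the fd.
    return i_native_buffer->copyToNvBuffer(dmabuf) == Argus::STATUS_OK;
}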

And refer to the patch for converting to BGR:

Now I use createNvBuffer() and copyToNvBuffer(), but doing this consumes a lot of CPU, so I want the Argus API to output an nvbuf_fd directly in order to save CPU resources.
code:
// total: ~26 ms
bool CodecOrin::vpiImage2RGB(EGLStream::Image* image, uint8_t* rgbBuf) {
    VPIImage nv12Image = nullptr;
    VPIImageData nvData{};

    NV::IImageNativeBuffer* image_native_buf =
        interface_cast<NV::IImageNativeBuffer>(image);
    if (!image_native_buf) return false;

    // ~9.5 ms
    nv_fd_ = image_native_buf->createNvBuffer(size_, NVBUF_COLOR_FORMAT_NV12_ER,
                                              NVBUF_LAYOUT_PITCH, NV::ROTATION_0);
    if (nv_fd_ < 0) {
        return false;
    }

    // ~9 ms
    if (image_native_buf->copyToNvBuffer(nv_fd_) != STATUS_OK) {
        NvDestroy(nv_fd_);
        return false;
    }

    // ~1.5 ms
    nvData.bufferType = VPI_IMAGE_BUFFER_NVBUFFER;
    nvData.buffer.fd = nv_fd_;
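
The rest of the function wraps nv_fd_ as a VPIImage and does the YUV-to-RGB conversion, roughly like this (a self-contained sketch assuming VPI 2.x, the CUDA backend, and an already-created VPIStream; nvbufToRGB and its parameters are illustrative):

#include <vpi/Image.h>
#include <vpi/Status.h>
#include <vpi/Stream.h>
#include <vpi/algo/ConvertImageFormat.h>

// Wrap an NV12 dmabuf as a VPIImage (no copy) and convert it to RGB8.
// 'stream' is an existing VPIStream created with vpiStreamCreate(0, &stream).
bool nvbufToRGB(int nv_fd, int width, int height, VPIStream stream, VPIImage* rgb_out)
{
    VPIImageData nvData{};
    nvData.bufferType = VPI_IMAGE_BUFFER_NVBUFFER;
    nvData.buffer.fd = nv_fd;

    VPIImage nv12Image = nullptr;
    if (vpiImageCreateWrapper(&nvData, nullptr, 0, &nv12Image) != VPI_SUCCESS)
        return false;

    VPIImage rgbImage = nullptr;
    if (vpiImageCreate(width, height, VPI_IMAGE_FORMAT_RGB8, 0, &rgbImage) != VPI_SUCCESS) {
        vpiImageDestroy(nv12Image);
        return false;
    }
    if (vpiSubmitConvertImageFormat(stream, VPI_BACKEND_CUDA,
                                    nv12Image, rgbImage, nullptr) != VPI_SUCCESS) {
        vpiImageDestroy(rgbImage);
        vpiImageDestroy(nv12Image);
        return false;
    }
    vpiStreamSync(stream);

    vpiImageDestroy(nv12Image);  // the wrapper can go; the dmabuf itself stays valid
    *rgb_out = rgbImage;         // caller locks rgbImage to read pixels, then destroys it
    return true;
}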

Hi,
The functions createNvBuffer() and copyToNvBuffer() do not use the CPU; they use the hardware converter (VIC engine). Please run $ sudo tegrastats to check the status of the hardware engines.
