How to convert an NvBuffer to a VPIImage on JetPack 6.2

I use Argus to capture images from the HAWK cameras, and I want the best image-processing performance on the Jetson device. Following the VPIImageBufferType documentation, I learned that VPIImageBufferType supports NvBuffer. I tried it, but I get an error.

    int m_dmabuf = -1;
    if (m_dmabuf == -1)
    {
        m_dmabuf = iNativeBuffer->createNvBuffer(iSensorMode->getResolution(),
                                                 NVBUF_COLOR_FORMAT_BGRA,
                                                 NVBUF_LAYOUT_PITCH);
        if (m_dmabuf == -1)
        {
            printf("\tFailed to create NvBuffer\n");
        }
        else
        {
            printf("\tCreated NvBuffer\n");
            NvBufSurface *nvbuf_surf = nullptr;
            int ret = NvBufSurfaceFromFd(m_dmabuf, (void **)&nvbuf_surf);
            if (ret == 0 && nvbuf_surf->surfaceList[0].colorFormat == NVBUF_COLOR_FORMAT_BGRA)
            {
                // Wrap the dmabuf fd as a VPI image.
                VPIImage image = NULL;
                VPIImageData imgData = {};
                imgData.bufferType = VPI_IMAGE_BUFFER_NVBUFFER;
                imgData.buffer.fd  = m_dmabuf;
                CHECK_STATUS(vpiImageCreateWrapper(&imgData, nullptr, VPI_BACKEND_CUDA, &image));

                // This lock call is what throws the error below.
                VPIImageData data;
                CHECK_STATUS(vpiImageLockData(image, VPI_LOCK_READ, VPI_IMAGE_BUFFER_NVBUFFER, &data));

                // Export the locked data (not the wrapper descriptor) to an OpenCV Mat.
                cv::Mat cvImage;
                CHECK_STATUS(vpiImageDataExportOpenCVMat(data, &cvImage));

                CHECK_STATUS(vpiImageUnlock(image));
            }
        }
    }

Error code:

terminate called after throwing an instance of 'std::runtime_error'
  what():  VPI_ERROR_INVALID_ARGUMENT: Can't lock the image and return its contents as VPI_IMAGE_BUFFER_NVBUFFER type
Aborted (core dumped)

I don’t understand why this fails.
I want to build a data pipeline like: isp->gpu->vic|pva->gpu|dla.

*** Please note that this reply is generated by LLM automatically ***

To achieve the best image processing performance on the Jetson device, you’re on the right track by exploring the VPI (Vision Programming Interface) library, which provides hardware-accelerated vision algorithms across the Jetson compute engines (CPU, GPU, PVA, VIC). Since you’re working with image processing, here’s an overview of the relevant components and pipelines.

VPI and NvBuffer

VPI is a computer vision library that provides a unified API for various compute engines, including NVIDIA GPUs. It supports different image buffer types, including NvBuffer, which is a buffer management system for NVIDIA hardware.
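As a concrete illustration, here is a minimal sketch of this interop, assuming a pitch-linear NvBuffer dmabuf created elsewhere (`fd` is a placeholder) and the `CHECK_STATUS` error macro from the VPI samples. Note that for CPU access and OpenCV export, the lock type is a pitch-linear host buffer, not `VPI_IMAGE_BUFFER_NVBUFFER` itself:

```cpp
#include <vpi/Image.h>
#include <vpi/OpenCVInterop.hpp>
#include <opencv2/core.hpp>

// Sketch: wrap an existing pitch-linear NvBuffer dmabuf (fd is a placeholder)
// and read its contents back on the CPU as a cv::Mat.
void readBackNvBuffer(int fd)
{
    // Describe the existing buffer; VPI derives format and size from the fd.
    VPIImageData imgData = {};
    imgData.bufferType = VPI_IMAGE_BUFFER_NVBUFFER;
    imgData.buffer.fd  = fd;

    VPIImage image = NULL;
    CHECK_STATUS(vpiImageCreateWrapper(&imgData, nullptr, VPI_BACKEND_CUDA, &image));

    // For CPU access / OpenCV export, lock as host pitch-linear rather than
    // asking the lock to hand the data back as the NvBuffer type.
    VPIImageData locked;
    CHECK_STATUS(vpiImageLockData(image, VPI_LOCK_READ,
                                  VPI_IMAGE_BUFFER_HOST_PITCH_LINEAR, &locked));

    cv::Mat cvImage;
    CHECK_STATUS(vpiImageDataExportOpenCVMat(locked, &cvImage));

    CHECK_STATUS(vpiImageUnlock(image));
    vpiImageDestroy(image);
}
```

This only sketches the read-back path; in a zero-copy pipeline the wrapped image would normally be passed straight to VPI algorithms without locking to the host at all.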

Image Processing Pipeline

The pipeline you described, isp->gpu->vic|pva->gpu|dla, is a common image processing pipeline on Jetson devices. Here’s a breakdown of each component:

  1. ISP (Image Signal Processor): The ISP is responsible for processing raw image data from the camera sensor. It performs tasks like demosaicing, white balancing, and noise reduction.
  2. GPU: The GPU (Graphics Processing Unit) is a general-purpose computing unit that can be used for various tasks, including image processing.
  3. VIC (Video Image Composer): VIC is a fixed-function hardware engine on Tegra that can be used for image pre-processing, scaling, format conversion, and compositing, offloading that work from the GPU.
  4. PVA (Programmable Vision Accelerator): PVA is an accelerator in the Tegra SoC (System on Chip) that’s specialized for image processing and computer vision algorithms with extremely low power consumption.
  5. DLA (Deep Learning Accelerator): DLA is a hardware accelerator designed for deep learning workloads.

Error Troubleshooting

The error you posted, VPI_ERROR_INVALID_ARGUMENT from vpiImageLockData, means the image’s contents cannot be locked and returned as the VPI_IMAGE_BUFFER_NVBUFFER type. Some general troubleshooting steps:

  1. Check VPI documentation: Ensure you’re using the correct VPI API calls and parameters for your use case.
  2. Verify NvBuffer configuration: Double-check your NvBuffer configuration to ensure it’s compatible with VPI.
  3. Inspect pipeline configuration: Review your pipeline configuration to ensure it’s correctly set up for the desired image processing flow.
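As a concrete instance of step 2, the NvBufSurface behind the dmabuf fd can be inspected before wrapping; layout and color format are the usual mismatches. A sketch, assuming the `nvbufsurface.h` API from the Jetson Multimedia API:

```cpp
#include <nvbufsurface.h>
#include <cstdio>

// Print the buffer properties VPI cares about and check the layout (sketch).
bool checkNvBuffer(int fd)
{
    NvBufSurface *surf = nullptr;
    if (NvBufSurfaceFromFd(fd, (void **)&surf) != 0 || surf == nullptr)
    {
        printf("NvBufSurfaceFromFd failed\n");
        return false;
    }

    const NvBufSurfaceParams &p = surf->surfaceList[0];
    printf("size: %ux%u, colorFormat: %d, layout: %d, pitch: %u\n",
           p.width, p.height, (int)p.colorFormat, (int)p.layout,
           p.planeParams.pitch[0]);

    // VPI wrapping generally expects a pitch-linear layout here.
    return p.layout == NVBUF_LAYOUT_PITCH;
}
```

If the layout reported here is block-linear rather than pitch-linear, converting the buffer (or creating it with NVBUF_LAYOUT_PITCH, as your code already does) is the first thing to fix.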

Additional Recommendations

To optimize your image processing pipeline on the Jetson device:

  1. Use VPI’s built-in support for NvBuffer: Leverage VPI’s built-in support for NvBuffer to simplify buffer management and optimize performance.
  2. Explore DeepStream SDK: Consider using the DeepStream SDK, which provides a set of APIs and tools for building computer vision and deep learning applications on NVIDIA hardware.
  3. Optimize GPU usage: Ensure you’re optimizing GPU usage by using the correct GPU architecture and configuring the GPU correctly for your workload.

By following these recommendations and troubleshooting steps, you should be able to resolve the error and achieve the desired image processing performance on your Jetson device.

Hi,

Please find a related sample from MMAPI:

Thanks.


Hi AastaLLL,

Oh!!! Thank you very much. It’s very helpful for me.

Hi AastaLLL,
I have another question. Does this camera data pipeline give the best performance on the Jetson device: isp->gpu->vic|pva->gpu|dla?

Hi,

It depends on the use case.

Is this a deep learning pipeline?
If so, you can check our Deepstream SDK, which supports AI pipelines and has been optimized for the Jetson device.

Thanks.

Hi,
DeepStream is a great project, but it isn’t flexible enough for us; we need to make adjustments specific to our project. I use 4 Leopard HAWK cameras for recognition and tracking, as well as panoramic stitching and video encoding. I want my code to perform as well as possible, which is a developer’s pursuit. For the link isp->gpu->vic|pva->gpu|dla, I try my best to keep the data flowing only in GPU memory and the VIC/PVA hardware, and I use multiple streams for multi-channel processing. I don’t know if there is a better way, but I don’t think I have introduced any unnecessary performance loss. Is the link ISP -> GPU -> VIC|PVA -> GPU|DLA officially recommended?
Thanks.

Hi,

ISP ->GPU ->VIC | PVA ->GPU | DLA

Yes, the pipeline looks good to us.

You can also double-check with the Nsight Systems tool to see whether any unexpected behavior (e.g., a hidden dependency) exists between the components.

Thanks.


Hi,
Ok. I will do it.
Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.