LibArgus OpenCV live video feed

Hello,

I am trying to learn the libargus API via the sample applications and the documentation. I have already found great help on this site from other posts, but I have reached a point where I don't really understand how the FrameConsumer works. I used the 09_camera_jpeg_capture sample as a base and was able to convert a frame into a cv::Mat object and display it with the following code:

        Argus::UniqueObj<EGLStream::Frame> frame(iFrameConsumer->acquireFrame(500000000, &status));
        if (!frame) {
            std::cerr << "Failed to acquire frame!" << std::endl;
            // continue;  
        }

        EGLStream::IFrame *iFrame = Argus::interface_cast<EGLStream::IFrame>(frame);
        if (!iFrame) {
            std::cerr << "Failed to get IFrame interface!" << std::endl;
            // continue;
        }

        auto *iNativeBuffer = Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(iFrame->getImage());
        if (!iNativeBuffer) {
            std::cerr << "IImageNativeBuffer not supported by Image." << std::endl;
            // continue;
        }

        auto m_dmabuf = iNativeBuffer->createNvBuffer(
            iSensorMode->getResolution(),
            NvBufferColorFormat_ABGR32,
            NvBufferLayout_Pitch
        );

        if (m_dmabuf == -1) {
            std::cerr << "Failed to create NvBuffer" << std::endl;
            // continue;
        }

        if (iNativeBuffer->copyToNvBuffer(m_dmabuf) != Argus::STATUS_OK) {
            std::cerr << "Failed to copy frame to NvBuffer." << std::endl;
            // continue;
        }

        // Map the NvBuffer into CPU memory and sync so the CPU sees
        // the latest frame data.
        void *pdata = nullptr;
        NvBufferParams params;
        NvBufferGetParams(m_dmabuf, &params);
        NvBufferMemMap(m_dmabuf, 0, NvBufferMem_Read, &pdata);
        NvBufferMemSyncForCpu(m_dmabuf, 0, &pdata);

        // Wrap the mapped memory in a cv::Mat without copying;
        // params.pitch[0] is the row stride of the first plane.
        cv::Mat imgbuf = cv::Mat(
            iSensorMode->getResolution().height(),
            iSensorMode->getResolution().width(),
            CV_8UC4, pdata, params.pitch[0]
        );
        cv::Mat bgr;
        cv::cvtColor(imgbuf, bgr, cv::COLOR_RGBA2BGR);
        cv::imshow("Live Camera Feed", bgr);
        cv::waitKey(0);

        NvBufferMemUnMap(m_dmabuf, 0, &pdata);

But when I tried to put this sequence into a loop to show the video feed with OpenCV, even if it was only 5 frames, the acquireFrame function returned with a failure message. I have to confess to myself that I do not understand how the FrameConsumer works, and the documentation does not provide much information about these objects. I just want to make a minimal application that streams the video feed via OpenCV so I can build features on top of this stream.

Hi,
It looks like createNvBuffer() is called on every frame, which keeps allocating new NvBuffers. Please call it once for the first frame and then call copyToNvBuffer() to reuse the buffer afterward.
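For reference, a minimal sketch of that loop structure (not tested here; it assumes the Argus setup from the 09 sample, with iFrameConsumer and iSensorMode initialized as in the snippet above):

```cpp
// Sketch only: the Argus session, stream, and iFrameConsumer /
// iSensorMode are assumed to be set up as in 09_camera_jpeg_capture.
int m_dmabuf = -1;  // create the NvBuffer only once, outside the loop

for (int i = 0; i < 5; i++)
{
    Argus::Status status;
    // acquireFrame() stays inside the loop; the UniqueObj releases
    // each frame back to the stream when it goes out of scope.
    Argus::UniqueObj<EGLStream::Frame> frame(
        iFrameConsumer->acquireFrame(500000000, &status));
    if (!frame)
        break;

    EGLStream::IFrame *iFrame =
        Argus::interface_cast<EGLStream::IFrame>(frame);
    auto *iNativeBuffer =
        Argus::interface_cast<EGLStream::NV::IImageNativeBuffer>(
            iFrame->getImage());
    if (!iFrame || !iNativeBuffer)
        break;

    if (m_dmabuf == -1)
    {
        // First frame: createNvBuffer() allocates the buffer and
        // copies the image into it.
        m_dmabuf = iNativeBuffer->createNvBuffer(
            iSensorMode->getResolution(),
            NvBufferColorFormat_ABGR32,
            NvBufferLayout_Pitch);
    }
    else if (iNativeBuffer->copyToNvBuffer(m_dmabuf) != Argus::STATUS_OK)
    {
        break;  // later frames only copy into the existing buffer
    }

    // ... map the buffer, wrap it in a cv::Mat, and imshow() it
    // exactly as in the snippet above ...
    cv::waitKey(1);  // waitKey(0) would block the loop on every frame
}
NvBufferDestroy(m_dmabuf);  // free the buffer when the loop ends
```

The key change from the original snippet is that only acquireFrame() and copyToNvBuffer() run per frame; the allocation happens once.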


Thank you for your fast response; I tried your suggestion. But if I remove the NvBuffer creation, I also have to remove the acquireFrame call from the loop, and that results in a loop that shows the same frame over and over.

Hi,
You may try this patch:
NVBuffer (FD) to opencv Mat - #6 by DaneLLL

and run the 13 sample. Then use the 13 sample as a reference to apply the same change to the 09 sample.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.