OpenCV Mat Object And Saving Image Data (09_camera_jpeg_capture)

I have a Jetson Nano (not the Orin model), and I’m going through the 09_camera_jpeg_capture example to get a better understanding of how the Argus library works.

I understand that the bool ConsumerThread::threadExecute() function is where the frames are captured.

But when it comes to saving the frames that the iFrame pointer refers to, I’m completely lost and could use some help.

Hello,

Thanks for using the NVIDIA forums. Your topic will be best served in the Jetson category.

I will move this post over for visibility.

Cheers,
Tom


Hi,
For further processing on the frame data, please call createNvBuffer()/copyToNvBuffer() to get the fd of the buffer. You can then access it through the NvBuffer APIs.

I’m still very confused.

In the 09_camera_jpeg_capture program, I know that bool ConsumerThread::threadExecute() captures the incoming video frames, and I was under the impression that I could just use/copy m_dmabuf instead of creating another buffer.

Am I correct with that assumption or no?

My second question: once m_dmabuf is either copied again or a second buffer is created, what’s the best way to get the height/width of the image that’s saved in the mentioned buffer(s)?

For what it’s worth, here’s where I currently stand with the bool ConsumerThread::threadExecute() function in the 09_camera_jpeg_capture example:

bool ConsumerThread::threadExecute()
    {
        IEGLOutputStream *iEglOutputStream = interface_cast<IEGLOutputStream>(m_stream);
        IFrameConsumer *iFrameConsumer = interface_cast<IFrameConsumer>(m_consumer);

        /* Wait until the producer has connected to the stream. */
        CONSUMER_PRINT("Waiting until producer is connected...\n");
        if (iEglOutputStream->waitUntilConnected() != STATUS_OK)
            ORIGINATE_ERROR("Stream failed to connect.");
        CONSUMER_PRINT("Producer has connected; continuing.\n");
        int second_dmabuf = -1;
        while (true)
        {
            /* Acquire a frame. */
            UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());
            IFrame *iFrame = interface_cast<IFrame>(frame);
            if (!iFrame)
                break;

            /* Get the IImageNativeBuffer extension interface. */
            // Get access to the image data through iFrame->getImage().
            // The data can then be retrieved and processed through the IImageNativeBuffer interface.
            NV::IImageNativeBuffer *iNativeBuffer = interface_cast<NV::IImageNativeBuffer>(iFrame->getImage());
            if (!iNativeBuffer)
                ORIGINATE_ERROR("IImageNativeBuffer not supported by Image.");

            /* If we don't already have a buffer, create one from this image.
               Otherwise, just blit to our buffer. */
            if (m_dmabuf == -1)
            {
                /*
                    virtual int EGLStream::NV::IImageNativeBuffer::createNvBuffer(Argus::Size2D<...> size, NvBufferColorFormat format, NvBufferLayout layout, EGLStream::NV::Rotation rotation = EGLStream::NV::ROTATION_0, Argus::Status *status = (Argus::Status *)__null) const

                    Returns -1 on failure
                    Returns valid dmabuf-fd on success

                */
                m_dmabuf = iNativeBuffer->createNvBuffer(iEglOutputStream->getResolution(),
                                                         NvBufferColorFormat_YUV420,
                                                         NvBufferLayout_BlockLinear);

                second_dmabuf = iNativeBuffer->createNvBuffer(iEglOutputStream->getResolution(),
                                                              NvBufferColorFormat_YUV420,
                                                              NvBufferLayout_BlockLinear);

                if (m_dmabuf == -1 || second_dmabuf == -1)
                    CONSUMER_PRINT("\tFailed to create NvBuffer\n");
                else
                    CONSUMER_PRINT("\nBUFFER CREATED\n");
            }
            else if (iNativeBuffer->copyToNvBuffer(m_dmabuf) != STATUS_OK || iNativeBuffer->copyToNvBuffer(second_dmabuf) != STATUS_OK)
            {
                ORIGINATE_ERROR("Failed to copy frame to NvBuffer.");
            }

            /* Process frame to be written. */
            //     bool CaptureConsumerThread::processV4L2Fd(int32_t fd, uint64_t frameNumber)
            
            processV4L2Fd(m_dmabuf, iFrame->getNumber());
        }

        CONSUMER_PRINT("Done.\n");

        requestShutdown();

        return true;
    }

Hi,
Please refer to the patch for mapping NvBuffer to cv::Mat:
NVBuffer (FD) to opencv Mat - #6 by DaneLLL

That patch reference didn’t help out much.

I’m still assuming that I can grab the frames being captured in bool ConsumerThread::threadExecute(); or does it matter whether I do it there or in the processV4L2Fd() function?

In bool ConsumerThread::threadExecute() I currently have the code below, and every time I try to use cv::imwrite(), I get the following error.

The error

[1] 9433 bus error (core dumped) ./captureJPEG

The section of code I’m working on

 bool ConsumerThread::threadExecute()
    {
        IEGLOutputStream *iEglOutputStream = interface_cast<IEGLOutputStream>(m_stream);
        IFrameConsumer *iFrameConsumer = interface_cast<IFrameConsumer>(m_consumer);
        void *pdata = NULL;
        /* Wait until the producer has connected to the stream. */
        CONSUMER_PRINT("Waiting until producer is connected...\n");
        if (iEglOutputStream->waitUntilConnected() != STATUS_OK)
            ORIGINATE_ERROR("Stream failed to connect.");
        CONSUMER_PRINT("Producer has connected; continuing.\n");
        while (true)
        {
            /* Acquire a frame. */
            UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());
            IFrame *iFrame = interface_cast<IFrame>(frame);
            if (!iFrame)
                break;

            /* Get the IImageNativeBuffer extension interface. */
            // Get access to the image data through iFrame->getImage().
            // The data can then be retrieved and processed through the IImageNativeBuffer interface.
            NV::IImageNativeBuffer *iNativeBuffer = interface_cast<NV::IImageNativeBuffer>(iFrame->getImage());
            if (!iNativeBuffer)
                ORIGINATE_ERROR("IImageNativeBuffer not supported by Image.");

            /* If we don't already have a buffer, create one from this image.
               Otherwise, just blit to our buffer. */
            if (m_dmabuf == -1)
            {
                /*
                    virtual int EGLStream::NV::IImageNativeBuffer::createNvBuffer(Argus::Size2D<...> size, NvBufferColorFormat format, NvBufferLayout layout, EGLStream::NV::Rotation rotation = EGLStream::NV::ROTATION_0, Argus::Status *status = (Argus::Status *)__null) const

                    Returns -1 on failure
                    Returns valid dmabuf-fd on success

                */
                m_dmabuf = iNativeBuffer->createNvBuffer(iEglOutputStream->getResolution(),
                                                         NvBufferColorFormat_YUV420,
                                                         NvBufferLayout_BlockLinear);

                if (m_dmabuf == -1)
                    CONSUMER_PRINT("\tFailed to create NvBuffer\n");
            }
            else if (iNativeBuffer->copyToNvBuffer(m_dmabuf) != STATUS_OK)
            {
                ORIGINATE_ERROR("Failed to copy frame to NvBuffer.");
            }
            // Do openCV stuff here

            if (NvBufferMemMap(m_dmabuf, 0, NvBufferMem_Read, &pdata) < 0)
            {
                CONSUMER_PRINT("FAILED to map virtual address...");
            }
            if (NvBufferMemSyncForCpu(m_dmabuf, 0, &pdata) < 0)
            {
                CONSUMER_PRINT("FAILED to sync hardware memory cache for CPU...");
            }
            cv::Mat imgbuf = cv::Mat(1920, 1080, CV_8UC3, pdata);

            if (imgbuf.empty())
            {
                CONSUMER_PRINT("imgbuf is empty...");
            }
            // Assuming NvBufferMemUnMap() isn't needed if I'm using a single camera, per the reference below.
            // Reference : https://forums.developer.nvidia.com/t/nvbuffer-fd-to-opencv-mat/83012/6
            // NvBufferMemUnMap(m_dmabuf, 0, &pdata);

            std::string openCVStringWrite = "imgbuf_" + std::to_string(iFrame->getNumber()) + ".jpg";

            // Currently getting this error : [1]    9433 bus error (core dumped)  ./captureJPEG
            cv::imwrite(openCVStringWrite, imgbuf);

            // cv::imshow("imgbuf", imgbuf);

            // cv::waitKey(1);
            /* Process frame to be written. */
            //     bool CaptureConsumerThread::processV4L2Fd(int32_t fd, uint64_t frameNumber)

            processV4L2Fd(m_dmabuf, iFrame->getNumber());
        }

        CONSUMER_PRINT("Done.\n");

        requestShutdown();

        return true;
    }

Hi,
Hardware engines do not support 24-bit formats such as BGR/RGB. Please change to NvBufferColorFormat_ABGR32 pitch linear in createNvBuffer(), and map to CV_8UC4 in cv::Mat.

I made the changes with NvBufferColorFormat_ABGR32; there was no pitch_linear argument I could pass to createNvBuffer(), so I used NvBufferLayout_Pitch.

But I still got a segmentation fault.

Recent changes I made to bool ConsumerThread::threadExecute():

 bool ConsumerThread::threadExecute()
    {
        IEGLOutputStream *iEglOutputStream = interface_cast<IEGLOutputStream>(m_stream);
        IFrameConsumer *iFrameConsumer = interface_cast<IFrameConsumer>(m_consumer);
        void *pdata = NULL;
        /* Wait until the producer has connected to the stream. */
        CONSUMER_PRINT("Waiting until producer is connected...\n");
        if (iEglOutputStream->waitUntilConnected() != STATUS_OK)
            ORIGINATE_ERROR("Stream failed to connect.");
        CONSUMER_PRINT("Producer has connected; continuing.\n");
        while (true)
        {
            /* Acquire a frame. */
            UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());
            IFrame *iFrame = interface_cast<IFrame>(frame);
            if (!iFrame)
                break;

            /* Get the IImageNativeBuffer extension interface. */
            // Get access to the image data through iFrame->getImage().
            // The data can then be retrieved and processed through the IImageNativeBuffer interface.
            NV::IImageNativeBuffer *iNativeBuffer = interface_cast<NV::IImageNativeBuffer>(iFrame->getImage());
            if (!iNativeBuffer)
                ORIGINATE_ERROR("IImageNativeBuffer not supported by Image.");

            /* If we don't already have a buffer, create one from this image.
               Otherwise, just blit to our buffer. */
            if (m_dmabuf == -1)
            {
                /*
                    virtual int EGLStream::NV::IImageNativeBuffer::createNvBuffer(Argus::Size2D<...> size, NvBufferColorFormat format, NvBufferLayout layout, EGLStream::NV::Rotation rotation = EGLStream::NV::ROTATION_0, Argus::Status *status = (Argus::Status *)__null) const

                    Returns -1 on failure
                    Returns valid dmabuf-fd on success

                */
                // m_dmabuf = iNativeBuffer->createNvBuffer(iEglOutputStream->getResolution(),
                //                                          NvBufferColorFormat_YUV420,
                //                                          NvBufferLayout_BlockLinear);
                m_dmabuf = iNativeBuffer->createNvBuffer(iEglOutputStream->getResolution(),
                                                         NvBufferColorFormat_ARGB32,
                                                         NvBufferLayout_Pitch);

                if (m_dmabuf == -1)
                    CONSUMER_PRINT("\tFailed to create NvBuffer\n");
            }
            else if (iNativeBuffer->copyToNvBuffer(m_dmabuf) != STATUS_OK)
            {
                ORIGINATE_ERROR("Failed to copy frame to NvBuffer.");
            }
            // Do openCV stuff here

            if (NvBufferMemMap(m_dmabuf, 0, NvBufferMem_Read, &pdata) < 0)
            {
                CONSUMER_PRINT("FAILED to map virtual address...");
            }
            if (NvBufferMemSyncForCpu(m_dmabuf, 0, &pdata) < 0)
            {
                CONSUMER_PRINT("FAILED to sync hardware memory cache for CPU...");
            }
            cv::Mat imgbuf = cv::Mat(1920, 1080, CV_8UC4, pdata);

            if (imgbuf.empty())
            {
                CONSUMER_PRINT("imgbuf is empty...");
            }
            // Assuming NvBufferMemUnMap() isn't needed if I'm using a single camera
            // NvBufferMemUnMap(m_dmabuf, 0, &pdata);

            std::string openCVStringWrite = "imgbuf_" + std::to_string(iFrame->getNumber()) + ".jpg";

            cv::imwrite(openCVStringWrite, imgbuf);

            // cv::imshow("imgbuf", imgbuf);

            // cv::waitKey(1);
            /* Process frame to be written. */
            //     bool CaptureConsumerThread::processV4L2Fd(int32_t fd, uint64_t frameNumber)

            processV4L2Fd(m_dmabuf, iFrame->getNumber());
        }

        CONSUMER_PRINT("Done.\n");

        requestShutdown();

        return true;
    }

Terminal output showing the segmentation fault:

> export DISPLAY=:0 && make && ./captureJPEG
make: 'captureJPEG' is up to date.
[INFO] (NvEglRenderer.cpp:110) <renderer0> Setting Screen width 640 height 480
Number of cameras : 1
PRODUCER: Creating output stream
PRODUCER: Launching consumer thread
CONSUMER: Waiting until producer is connected...
CONSUMER: Waiting until producer is connected...
PRODUCER: Available Sensor modes :
PRODUCER: [0] W=3264 H=2464
PRODUCER: [1] W=3264 H=1848
PRODUCER: [2] W=1920 H=1080
PRODUCER: [3] W=1640 H=1232
PRODUCER: [4] W=1280 H=720
PRODUCER: [5] W=1280 H=720
PRODUCER: Requested FPS out of range. Fall back to 30
PRODUCER: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
CONSUMER: Producer has connected; continuing.
[1]    8979 segmentation fault (core dumped)  ./captureJPEG

Hi,
Please make sure the buffer is 1920x1080, and try

            cv::Mat imgbuf = cv::Mat(1080, 1920, CV_8UC4, pdata);

If it still does not work, you may apply the patch to the 13 sample to make sure it works with that sample.

I still got the segmentation fault despite changing to cv::Mat imgbuf = cv::Mat(1080, 1920, CV_8UC4, pdata);

I’ll try applying the patch you mentioned here: Argus and OpenCV - #3 by DaneLLL

I tried applying the patch you mentioned, but with all the changes that have been made to the sample since then (which don’t include the patch), I’m horribly confused about what exactly needs to be added and removed.

Especially when it comes to the static bool execute() function.

Hi,
The patch should be clear. It creates the NvBuffer in RGBA pitch linear and maps it to cv::Mat. There is one patch for the 09 sample and another for the 13 sample. Please apply them and try.