Is it possible to obtain image metadata from the IFrame or IImage interfaces?

I’m using a consumer thread to acquire video images using the IFrameConsumer interface. Here is a brief code snippet:

    bool videoCamera::threadExecute() {
        const unsigned int plane               = 0;
        const std::string  capturedVideoFrames = "Captured video frames";
        const auto *iEglOutputStream =
            Argus::interface_cast<Argus::IEGLOutputStream>(m_stream);
        if (!iEglOutputStream)
            ORIGINATE_ERROR("Failed to get IEGLOutputStream interface.");
        const Argus::Size2D<uint32_t> streamResolution = iEglOutputStream->getResolution();

        // Wait until the producer has connected to the stream.
        if (iEglOutputStream->waitUntilConnected() != Argus::STATUS_OK)
            ORIGINATE_ERROR("Stream failed to connect.");

        auto *iFrameConsumer = Argus::interface_cast<EGLStream::IFrameConsumer>(m_consumer);
        if (!iFrameConsumer)
            ORIGINATE_ERROR("Failed to get IFrameConsumer interface.");

        // Continue until this thread is terminated.
        while (true) {
            // The absence of a frame indicates this thread has been terminated.
            Argus::UniqueObj<EGLStream::Frame> frame(iFrameConsumer->acquireFrame());
            if (!frame)
                break;

            // Use the IFrame interface to gain access to the Image in the Frame.
            auto *iFrame = Argus::interface_cast<EGLStream::IFrame>(frame);
            if (!iFrame)
                ORIGINATE_ERROR("Failed to get IFrame interface.");

            // ... process the frame ...
        }
        return true;
    }

Is it possible to obtain the camera metadata associated with each frame and/or image? I need information such as the exposure time and gain that were used when the image was captured by the camera. It appears that only the IEventCaptureComplete interface provides this information (via the getMetadata() method), but that API appears to be incompatible with the interface I’m using.


Yes, currently getMetadata() is the only way to get the sensor embedded data.



To clarify: if I’m using the EGLStream API, is there no way to get the sensor metadata? In my current consumer thread, I’m converting the IFrame images into OpenCV cv::Mat objects and passing those objects to several downstream consumers for various purposes.

My goal is to have one of those consumers implement a fine-grained camera settings control loop, but that becomes much more difficult without having the sensor metadata for each video frame. Do I need a completely separate consumer thread for getting the image capture metadata? If so, is it possible to utilize two independent threads for the same physical sensor?


Please check the yuvJpeg MMAPI sample for getMetadata() usage.
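For reference, the yuvJpeg sample retrieves the capture metadata directly from the acquired EGLStream frame inside the consumer thread, which fits the loop in the snippet above. A minimal sketch of that pattern (my reading of the sample, not a verified drop-in; it assumes the standard Argus/EGLStream headers, that metadata is enabled on the output stream via IEGLOutputStreamSettings::setMetadataEnable(true), and the `iFrameConsumer` name from the earlier snippet):

```cpp
// Inside the consumer loop, after acquiring a frame:
Argus::UniqueObj<EGLStream::Frame> frame(iFrameConsumer->acquireFrame());
if (!frame)
    ORIGINATE_ERROR("Failed to acquire frame.");

// When metadata is enabled on the stream, the Frame exposes the
// IArgusCaptureMetadata interface, whose getMetadata() returns the
// CaptureMetadata for this specific capture.
auto *iArgusCaptureMetadata =
    Argus::interface_cast<EGLStream::IArgusCaptureMetadata>(frame);
if (!iArgusCaptureMetadata)
    ORIGINATE_ERROR("Failed to get IArgusCaptureMetadata interface.");

auto *iMetadata = Argus::interface_cast<Argus::ICaptureMetadata>(
    iArgusCaptureMetadata->getMetadata());
if (!iMetadata)
    ORIGINATE_ERROR("Failed to get ICaptureMetadata interface.");

// The exposure time (in nanoseconds) and analog gain that were
// actually used for this frame -- the values needed for a
// fine-grained camera settings control loop.
const uint64_t exposureTimeNs = iMetadata->getSensorExposureTime();
const float    analogGain     = iMetadata->getSensorAnalogGain();
```

If this works as in the sample, it would avoid the need for a separate IEventCaptureComplete consumer thread, since the metadata arrives with each frame in the same thread that produces the cv::Mat objects.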

