Two-camera preview with Libargus: how to set the frame rate

Hi Folks,

I modified the argus/samples/denoise example to preview the output of two cameras on screen. I have been searching for an API to set the capture frame rate, but I have been unable to find one. Please help. Below is my code…

I am not able to find a method in the IOutputStreamSettings class to set the frame rate.

Thanks,

/*
 * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *  * Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  * Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *  * Neither the name of NVIDIA CORPORATION nor the names of its
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
 * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "Error.h"
#include "EGLGlobal.h"
#include "GLContext.h"
#include "Window.h"
#include "Thread.h"
#include "PreviewConsumer.h"

#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>

#include <unistd.h>
#include <stdlib.h>

using namespace Argus;

/*
 * This sample (modified from the argus denoise sample) submits capture
 * requests for two cameras and renders the two preview streams to a
 * split-screen window.
 */

namespace ArgusSamples
{

// Constants.
static const uint32_t       CAPTURE_TIME    = 10; // In seconds.
static const Size           STREAM_SIZE      (1920, 1080);
static const NormalizedRect SOURCE_CLIP_RECT (0.4f, 0.4f, 0.6f, 0.6f);

// Globals.
EGLDisplayHolder g_display;

// Debug print macros.
#define PRODUCER_PRINT(...) printf("PRODUCER: " __VA_ARGS__)

static bool execute()
{
    // Initialize the window and EGL display.
    Window &window = Window::getInstance();
    window.setWindowRect(0, 0, STREAM_SIZE.width, STREAM_SIZE.height);
    PROPAGATE_ERROR(g_display.initialize(window.getEGLNativeDisplay()));

    // Initialize the Argus camera provider.
    UniqueObj<CameraProvider> cameraProvider(CameraProvider::create());
    ICameraProvider *iCameraProvider = interface_cast<ICameraProvider>(cameraProvider);
    if (!iCameraProvider)
        ORIGINATE_ERROR("Failed to get ICameraProvider interface");

    // Create a capture session using the first available device.
    std::vector<CameraDevice*> cameraDevices;
    if (iCameraProvider->getCameraDevices(&cameraDevices) != STATUS_OK)
        ORIGINATE_ERROR("Failed to get CameraDevices");
    if (cameraDevices.size() < 2)
        ORIGINATE_ERROR("This sample requires at least two CameraDevices");


    UniqueObj<CaptureSession> captureSession(
        iCameraProvider->createCaptureSession(cameraDevices[0]));
    ICaptureSession *iCaptureSession = interface_cast<ICaptureSession>(captureSession);
    if (!iCaptureSession)
        ORIGINATE_ERROR("Failed to create CaptureSession");

    UniqueObj<CaptureSession> captureSession1(
        iCameraProvider->createCaptureSession(cameraDevices[1]));
    ICaptureSession *iCaptureSession1 = interface_cast<ICaptureSession>(captureSession1);
    if (!iCaptureSession1)
        ORIGINATE_ERROR("Failed to create CaptureSession");


    // Create two output streams, one per camera / capture session.
    PRODUCER_PRINT("Creating output streams\n");
    UniqueObj<OutputStreamSettings> streamSettings(iCaptureSession->createOutputStreamSettings());
    IOutputStreamSettings *iStreamSettings = interface_cast<IOutputStreamSettings>(streamSettings);
    if (!iStreamSettings)
        ORIGINATE_ERROR("Failed to create OutputStreamSettings");
    iStreamSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iStreamSettings->setResolution(STREAM_SIZE);
    iStreamSettings->setEGLDisplay(g_display.get());
    UniqueObj<OutputStream> previewStream(iCaptureSession->createOutputStream(streamSettings.get()));
    IStream *iPreviewStream = interface_cast<IStream>(previewStream);
    if (!iPreviewStream)
        ORIGINATE_ERROR("Failed to create preview stream");



    UniqueObj<OutputStreamSettings> streamSettings1(iCaptureSession1->createOutputStreamSettings());
    IOutputStreamSettings *iStreamSettings1 = interface_cast<IOutputStreamSettings>(streamSettings1);
    if (!iStreamSettings1)
        ORIGINATE_ERROR("Failed to create OutputStreamSettings");
    iStreamSettings1->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iStreamSettings1->setResolution(STREAM_SIZE);
    iStreamSettings1->setEGLDisplay(g_display.get());
    UniqueObj<OutputStream> previewStream1(iCaptureSession1->createOutputStream(streamSettings1.get()));
    IStream *iPreviewStream1 = interface_cast<IStream>(previewStream1);
    if (!iPreviewStream1)
        ORIGINATE_ERROR("Failed to create preview stream");



    //UniqueObj<OutputStream> denoiseStream(iCaptureSession->createOutputStream(streamSettings.get()));
    //IStream *iDenoiseStream = interface_cast<IStream>(denoiseStream);
    //if (!iDenoiseStream)
    //    ORIGINATE_ERROR("Failed to create denoise stream");

    // Connect a PreviewConsumer to the streams to render them side by side in a split-screen window.
    PRODUCER_PRINT("Launching consumer thread\n");
    std::vector<EGLStreamKHR> eglStreams;
    eglStreams.push_back(iPreviewStream->getEGLStream());
    eglStreams.push_back(iPreviewStream1->getEGLStream());
    PreviewConsumerThread consumerThread(g_display.get(), eglStreams,
                                         PreviewConsumerThread::LAYOUT_SPLIT_VERTICAL,
                                         true /* Sync stream frames */);
    PROPAGATE_ERROR(consumerThread.initialize());
    //consumerThread.setLineWidth(1);
    //consumerThread.setLineColor(1.0f, 0.0f, 0.0f);

    // Wait until the consumer is connected to the streams.
    PROPAGATE_ERROR(consumerThread.waitRunning());

    // Create capture request and enable output streams.
    UniqueObj<Request> request(iCaptureSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    if (!iRequest)
        ORIGINATE_ERROR("Failed to create Request");
    iRequest->enableOutputStream(previewStream.get());



    UniqueObj<Request> request1(iCaptureSession1->createRequest());
    IRequest *iRequest1 = interface_cast<IRequest>(request1);
    if (!iRequest1)
        ORIGINATE_ERROR("Failed to create Request");
    iRequest1->enableOutputStream(previewStream1.get());

    // Get the per-request stream settings interfaces (the source clip rects
    // from the original sample are left commented out below).
    IStreamSettings *previewStreamSettings =
        interface_cast<IStreamSettings>(iRequest->getStreamSettings(previewStream.get()));
    if (!previewStreamSettings)
        ORIGINATE_ERROR("Failed to get preview stream settings interface");
    IStreamSettings *previewStreamSettings1 =
        interface_cast<IStreamSettings>(iRequest1->getStreamSettings(previewStream1.get()));
    if (!previewStreamSettings1)
        ORIGINATE_ERROR("Failed to get denoise stream settings interface");
    //previewStreamSettings->setSourceClipRect(SOURCE_CLIP_RECT);
    //previewStreamSettings1->setSourceClipRect(SOURCE_CLIP_RECT);

    // Enable denoise for the request.
    //IDenoiseSettings *denoiseSettings = interface_cast<IDenoiseSettings>(request);
    //if (!denoiseSettings)
    //    ORIGINATE_ERROR("Failed to get DenoiseSettings interface");
    //denoiseSettings->setDenoiseMode(DENOISE_MODE_FAST);
    //denoiseSettings->setDenoiseStrength(1.0f);

    // Disable all post-processing (including denoise) for both preview streams (enabled by default).
    previewStreamSettings->setPostProcessingEnable(false);
    previewStreamSettings1->setPostProcessingEnable(false);

    // Submit capture requests.
    PRODUCER_PRINT("Starting repeat capture requests.\n");
    if (iCaptureSession->repeat(request.get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to start repeat capture request");
    if (iCaptureSession1->repeat(request1.get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to start repeat capture request");


    // Wait for CAPTURE_TIME seconds.
    PROPAGATE_ERROR(window.pollingSleep(CAPTURE_TIME));

    // Stop the repeating request and wait for idle.
    iCaptureSession->stopRepeat();
    iCaptureSession->waitForIdle();
    iCaptureSession1->stopRepeat();
    iCaptureSession1->waitForIdle();

    // Destroy the output streams and wait for the consumer thread to complete.
    previewStream.reset();
    previewStream1.reset();
    //denoiseStream.reset();
    PROPAGATE_ERROR(consumerThread.shutdown());

    // Shut down Argus.
    cameraProvider.reset();

    // Shut down the window (destroys window's EGLSurface).
    window.shutdown();

    // Cleanup the EGL display
    PROPAGATE_ERROR(g_display.cleanup());

    PRODUCER_PRINT("Done -- exiting.\n");
    return true;
}

} // namespace ArgusSamples

int main(int argc, const char *argv[])
{
    if (!ArgusSamples::execute())
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}

Hi,
setFrameDurationRange() should be the function you are looking for.
Check the sample code tegra_multimedia_api/samples/10_camera_recording.
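
In your code it would be applied to the request's source settings, e.g. to pin the sensor at 30 fps. This is only a sketch (the frame duration is in nanoseconds):

    // Sketch: fix the sensor frame rate at 30 fps by pinning the frame
    // duration range on the request's source settings, as the
    // 10_camera_recording sample does.
    ISourceSettings *iSourceSettings =
        interface_cast<ISourceSettings>(iRequest->getSourceSettings());
    if (!iSourceSettings)
        ORIGINATE_ERROR("Failed to get ISourceSettings interface");
    iSourceSettings->setFrameDurationRange(Range<uint64_t>(1e9 / 30));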

Thanks, ShaneCCC, for your help. I am able to set the fps to my desired target and get a sense of the performance using tegrastats.

My aim is to read the frames for further processing. I would like to ask: what is the best way to access the frames acquired using

    UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());

I would like to process the frames using OpenCV.

Thanks

  1. I explored the usage of /home/ubuntu/VisionWorks-1.6-Samples/nvxio/src/NVX/FrameSource/OpenCV/OpenCVVideoFrameSource.hpp, but I am not completely clear on how the conversion from

         UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());

     to an OpenCV cv::Mat would work for the different formats (YUV 420, RGB, etc.). (See my attempt at a sketch after this list.)

  2. What is the most efficient way to read camera frames into an OpenCV cv::Mat for further processing?

a) the Argus library?
b) VisionWorks?

  1. Could you please give me data path of pixels from Camera/ISP to cv::Mat ? For example, I would imagine that ISP writes to buffer in DDR → frame buffer is read / written back to another buffer in DDR for purpose of format conversion → at last it is read back from cv::Mat buffer (in DDR) into CPU (or GPU in case of CUDA) for processing. Is that right understanding ?
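
To make question 1 concrete, below is the kind of zero-copy CPU mapping I have in mind for the Y plane. This is only a sketch: I am assuming the EGLStream IImage::mapBuffer() / IImage2D interfaces behave this way, and a full YUV 420 → RGB conversion would still need the chroma planes plus a cv::cvtColor() pass. Please correct me if this is not the intended usage.

    // Sketch only: CPU-map the luma (Y) plane of an acquired frame and wrap
    // it in a cv::Mat without copying (assumes the EGLStream IImage/IImage2D
    // interfaces and `using namespace Argus`).
    UniqueObj<EGLStream::Frame> frame(iFrameConsumer->acquireFrame());
    EGLStream::IFrame *iFrame = interface_cast<EGLStream::IFrame>(frame);
    EGLStream::Image *image = iFrame->getImage();

    EGLStream::IImage   *iImage   = interface_cast<EGLStream::IImage>(image);
    EGLStream::IImage2D *iImage2D = interface_cast<EGLStream::IImage2D>(image);

    const void *lumaPtr = iImage->mapBuffer();    // first buffer = Y plane
    Size planeSize      = iImage2D->getSize();    // Y plane dimensions
    uint32_t pitch      = iImage2D->getStride();  // row pitch in bytes

    // cv::Mat can alias external memory given rows, cols, type, and step.
    cv::Mat luma(planeSize.height, planeSize.width, CV_8UC1,
                 const_cast<void*>(lumaPtr), pitch);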

Thanks,

Hi,

For VisionWorks and OpenCV interoperation, please find the information in the VisionWorks documents:

VisionWorks API > NVIDIA Extension API > OpenCV Interoperability API

For example:
int nvx_cv::convertVXMatrixTypeToCVMatType ( vx_enum matrix_type )

Converts from OpenVX Matrix' type to OpenCV Mat's type.

Parameters
  [in]  matrix_type OpenVX Matrix' type.

Returns
  OpenCV Mat's type. 

Definition at line 42 of file nvx_opencv_interop.hpp.

References NVX_TYPE_POINT2F, NVX_TYPE_POINT3F, VX_TYPE_FLOAT32, VX_TYPE_FLOAT64, VX_TYPE_INT16, VX_TYPE_INT32, VX_TYPE_INT8, VX_TYPE_UINT16, VX_TYPE_UINT32, and VX_TYPE_UINT8.

Referenced by nvx_cv::copyCVMatToVXMatrix(), and nvx_cv::copyVXMatrixToCVMat().
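
As a rough usage sketch (the mapper name and parameters are taken from nvx_opencv_interop.hpp as documented above; please verify against your VisionWorks version), mapping an existing vx_image into a cv::Mat looks like this:

    // Sketch: alias plane 0 of a vx_image as a cv::Mat via the interop
    // mapper. The image stays mapped for the mapper's lifetime and is
    // unmapped when `mapper` goes out of scope; no copy is made.
    #include <NVX/nvx_opencv_interop.hpp>

    void processWithOpenCV(vx_image image)
    {
        nvx_cv::VXImageToCVMatMapper mapper(image, 0 /* plane */, NULL,
                                            VX_READ_AND_WRITE, VX_MEMORY_TYPE_HOST);
        cv::Mat mat = mapper.getMat();
        // ... run OpenCV operations on `mat` here ...
    }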

Thanks, AastaLLL, for your help and the pointers.

  1. I understand the functionalities offered by nvx_opencv_interop.hpp. I am still trying to dig out the details of how to go from a Frame (as in UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());) to a cv::Mat.

The class VXImageToCVMatMapper(vx_image image, …) needs a vx_image as input. What I read from the camera is a Frame object. From there I can get the IFrame interface, but after that I am still trying to find my way to a vx_image object. If you know the right way, please suggest it or point me to an example. (See my rough sketch after this list.)

  2. Could you please respond to question 2 in post #4?

  3. Could you please respond to question 3 in post #4?
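
For question 1, the bridge I am currently imagining (untested; everything except vxCreateImageFromHandle(), which is standard OpenVX 1.1, is my own assumption) is to CPU-map the acquired frame's Y plane, as in my sketch in post #4, and then wrap the raw pointer:

    #include <VX/vx.h>

    // Sketch: wrap an already CPU-mapped luma plane as a vx_image so it can
    // be handed to the nvx_cv mapper. The vx_image aliases the buffer; the
    // original mapping must stay valid while the vx_image is in use.
    vx_image wrapLumaPlane(vx_context context, void *lumaPtr,
                           vx_uint32 width, vx_uint32 height, vx_int32 pitch)
    {
        vx_imagepatch_addressing_t addr;
        addr.dim_x    = width;
        addr.dim_y    = height;
        addr.stride_x = 1;                // 1 byte per pixel in the Y plane
        addr.stride_y = pitch;            // row pitch in bytes
        addr.scale_x  = VX_SCALE_UNITY;
        addr.scale_y  = VX_SCALE_UNITY;
        addr.step_x   = 1;
        addr.step_y   = 1;

        void *ptrs[] = { lumaPtr };
        return vxCreateImageFromHandle(context, VX_DF_IMAGE_U8,
                                       &addr, ptrs, VX_MEMORY_TYPE_HOST);
    }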

Thanks,

Hi,

VisionWorks doesn’t read the camera directly. Instead, it uses a low-level API (e.g. V4L2 or GStreamer) and then wraps the buffer into a vx_image.

As for Argus vs. VisionWorks, or more precisely Argus vs. GStreamer: they are two ways to get the camera source on Tegra, and the low-level implementation is the same (both go through nvcamera).

For the data path, please check this comment:
Finding the bottleneck in video stitching application - Jetson TX1 - NVIDIA Developer Forums

Thanks.