Two cameras with libargus: encode + OpenCVConsumer for each camera

Hi Folks,

We have two cameras on our TX1. I am looking to record, and to process the recorded frames with OpenCV, from each camera. I followed argus/samples/gstVideoEncode as an example. With one camera I am able to record and to read frames into an ‘OCVConsumer’ class for processing. I am having difficulty running the encode and OCVConsumer pipelines for two cameras concurrently.

Following is my code.

#include <Argus/Argus.h>
#include <gst/gst.h>
#include <stdlib.h>
#include <unistd.h>
#include <sstream> // std::ostringstream, used for the per-camera output filename
#include "Error.h"
#include "PreviewConsumer.h"
#include "OCVConsumer.h"

#include <opencv2/opencv.hpp>
using namespace Argus;

static const Argus::Size PREVIEW_STREAM_SIZE(640, 480);

namespace ArgusSamples
{

// Constants.

static const int32_t     FRAMERATE = 30;
static const int32_t     BITRATE   = 14000000;
static const char*       ENCODER   = "omxh264enc";
static const char*       MUXER     = "qtmux";
static const char*       OUTPUT    = "argus_gstvideoencode_out.mp4";
static const uint32_t    LENGTH    = 10; // in seconds.

// Globals.
EGLDisplayHolder g_display;

/**
 * Class to initialize and control GStreamer video encoding from an EGLStream.
 */
class GstVideoEncoder
{
public:
    GstVideoEncoder()
        : m_state(GST_STATE_NULL)
        , m_pipeline(NULL)
        , m_videoEncoder(NULL)
    {
    }

    ~GstVideoEncoder()
    {
        shutdown();
    }

    /**
     * Initialize the GStreamer video encoder pipeline.
     * @param[in] eglStream The EGLStream to consume frames from.
     * @param[in] resolution The resolution of the video.
     * @param[in] framerate The framerate of the video (in frames per second).
     * @param[in] encoder The encoder to use for encoding. Options include:
     *                    avenc_h263, omxh264enc, omxh265enc, omxvp8enc, avenc_mpeg4
     * @param[in] muxer The muxer/container to use. Options include:
     *                    qtmux (MP4), 3gppmux (3GP), avimux (AVI), identity (H265)
     * @param[in] output The filename/path for the encoded output.
     */
    bool initialize(EGLStreamKHR eglStream, Argus::Size resolution,
                    int32_t framerate, int32_t bitrate,
                    const char* encoder, const char* muxer, const char* output, int camindex)
    {
        // Initialize GStreamer.
        gst_init(NULL, NULL);

        // Create pipeline.
        m_pipeline = gst_pipeline_new("video_pipeline");
        if (!m_pipeline)
            ORIGINATE_ERROR("Failed to create video pipeline");

        // Create EGLStream video source.
        GstElement *videoSource = gst_element_factory_make("nveglstreamsrc", NULL);
        if (!videoSource)
            ORIGINATE_ERROR("Failed to create EGLStream video source");
        if (!gst_bin_add(GST_BIN(m_pipeline), videoSource))
        {
            gst_object_unref(videoSource);
            ORIGINATE_ERROR("Failed to add video source to pipeline");
        }
        g_object_set(G_OBJECT(videoSource), "display", g_display.get(), NULL);
        g_object_set(G_OBJECT(videoSource), "eglstream", eglStream, NULL);

        // Create queue.
        GstElement *queue = gst_element_factory_make("queue", NULL);
        if (!queue)
            ORIGINATE_ERROR("Failed to create queue");
        if (!gst_bin_add(GST_BIN(m_pipeline), queue))
        {
            gst_object_unref(queue);
            ORIGINATE_ERROR("Failed to add queue to pipeline");
        }

        // Create encoder.
        m_videoEncoder = gst_element_factory_make(encoder, NULL);
        if (!m_videoEncoder)
            ORIGINATE_ERROR("Failed to create video encoder");
        if (!gst_bin_add(GST_BIN(m_pipeline), m_videoEncoder))
        {
            gst_object_unref(m_videoEncoder);
            ORIGINATE_ERROR("Failed to add video encoder to pipeline");
        }
        g_object_set(G_OBJECT(m_videoEncoder), "bitrate", bitrate, NULL);

        // Create muxer.
        GstElement *videoMuxer = gst_element_factory_make(muxer, NULL);
        if (!videoMuxer)
            ORIGINATE_ERROR("Failed to create video muxer");
        if (!gst_bin_add(GST_BIN(m_pipeline), videoMuxer))
        {
            gst_object_unref(videoMuxer);
            ORIGINATE_ERROR("Failed to add video muxer to pipeline");
        }

        // Create file sink.
        GstElement *fileSink = gst_element_factory_make("filesink", NULL);
        if (!fileSink)
            ORIGINATE_ERROR("Failed to create file sink");
        if (!gst_bin_add(GST_BIN(m_pipeline), fileSink))
        {
            gst_object_unref(fileSink);
            ORIGINATE_ERROR("Failed to add file sink to pipeline");
        }
        std::ostringstream fileName;
        fileName << "encodedStream" << camindex << ".mp4";
        g_object_set(G_OBJECT(fileSink), "location", fileName.str().c_str(), NULL);

        // Create caps filter to describe EGLStream image format.
        GstCaps *caps = gst_caps_new_simple("video/x-raw",
                                            "format", G_TYPE_STRING, "I420",
                                            "width", G_TYPE_INT, resolution.width,
                                            "height", G_TYPE_INT, resolution.height,
                                            "framerate", GST_TYPE_FRACTION, framerate, 1,
                                            NULL);
        if (!caps)
            ORIGINATE_ERROR("Failed to create caps");
        GstCapsFeatures *features = gst_caps_features_new("memory:NVMM", NULL);
        if (!features)
        {
            gst_caps_unref(caps);
            ORIGINATE_ERROR("Failed to create caps feature");
        }
        gst_caps_set_features(caps, 0, features);

        // Link EGLStream source to queue via caps filter.
        if (!gst_element_link_filtered(videoSource, queue, caps))
        {
            gst_caps_unref(caps);
            ORIGINATE_ERROR("Failed to link EGLStream source to queue");
        }
        gst_caps_unref(caps);

        // Link queue to encoder
        if (!gst_element_link(queue, m_videoEncoder))
            ORIGINATE_ERROR("Failed to link queue to encoder");

        // Link encoder to muxer pad.
        if (!gst_element_link_pads(m_videoEncoder, "src", videoMuxer, "video_%u"))
            ORIGINATE_ERROR("Failed to link encoder to muxer pad");

        // Link muxer to sink.
        if (!gst_element_link(videoMuxer, fileSink))
            ORIGINATE_ERROR("Failed to link muxer to sink");

        return true;
    }

    /**
     * Shutdown the GStreamer pipeline.
     */
    void shutdown()
    {
        if (m_state == GST_STATE_PLAYING)
            stopRecording();

        if (m_pipeline)
            gst_object_unref(GST_OBJECT(m_pipeline));
        m_pipeline = NULL;
    }

    /**
     * Start recording video.
     */
    bool startRecording()
    {
        if (!m_pipeline || !m_videoEncoder)
            ORIGINATE_ERROR("Video encoder not initialized");

        if (m_state != GST_STATE_NULL)
            ORIGINATE_ERROR("Video encoder already recording");

        // Start the pipeline.
        if (gst_element_set_state(m_pipeline, GST_STATE_PLAYING) == GST_STATE_CHANGE_FAILURE)
            ORIGINATE_ERROR("Failed to start recording.");

        m_state = GST_STATE_PLAYING;
        return true;
    }

    /**
     * Stop recording video.
     */
    bool stopRecording()
    {
        if (!m_pipeline || !m_videoEncoder)
            ORIGINATE_ERROR("Video encoder not initialized");

        if (m_state != GST_STATE_PLAYING)
            ORIGINATE_ERROR("Video encoder not recording");

        // Send the end-of-stream event.
        GstPad *pad = gst_element_get_static_pad(m_videoEncoder, "sink");
        if (!pad)
            ORIGINATE_ERROR("Failed to get 'sink' pad");
        bool result = gst_pad_send_event(pad, gst_event_new_eos());
        gst_object_unref(pad);
        if (!result)
            ORIGINATE_ERROR("Failed to send end of stream event to encoder");

        // Wait for the event to complete.
        GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(m_pipeline));
        if (!bus)
            ORIGINATE_ERROR("Failed to get bus");
        result = gst_bus_poll(bus, GST_MESSAGE_EOS, GST_CLOCK_TIME_NONE);
        gst_object_unref(bus);
        if (!result)
            ORIGINATE_ERROR("Failed to wait for the EOF event");

        // Stop the pipeline.
        if (gst_element_set_state(m_pipeline, GST_STATE_NULL) == GST_STATE_CHANGE_FAILURE)
            ORIGINATE_ERROR("Failed to stop recording.");

        m_state = GST_STATE_NULL;
        return true;
    }

protected:
    GstState m_state;
    GstElement *m_pipeline;
    GstElement *m_videoEncoder;
};

class aaCameraInterface
{
public:
    aaCameraInterface(int idx)
       : m_camindex(idx)
    {
    }  
    ~aaCameraInterface()
    {

    }
    bool initialize (CameraDevice *cd, NvEglRenderer *renderer, Window & window)     
    {


    // Create CameraProvider.
    UniqueObj<CameraProvider> cameraProvider(CameraProvider::create());
    ICameraProvider *iCameraProvider = interface_cast<ICameraProvider>(cameraProvider);
    if (!iCameraProvider)
        ORIGINATE_ERROR("Failed to open CameraProvider");

    // Get the CameraDevice for this camera's index.
    std::vector<CameraDevice*> cameraDevices;
    if (iCameraProvider->getCameraDevices(&cameraDevices) != STATUS_OK)
        ORIGINATE_ERROR("Failed to get CameraDevices");
    if (cameraDevices.size() <= static_cast<size_t>(m_camindex))
        ORIGINATE_ERROR("CameraDevice %d not available", m_camindex);
    CameraDevice *cameraDevice = cameraDevices[m_camindex];
    ICameraProperties *iCameraProperties = interface_cast<ICameraProperties>(cameraDevice);
    if (!iCameraProperties)
        ORIGINATE_ERROR("Failed to get ICameraProperties interface");

    // Create CaptureSession.
    UniqueObj<CaptureSession> captureSession(iCameraProvider->createCaptureSession(cameraDevice));
    ICaptureSession *iSession = interface_cast<ICaptureSession>(captureSession);
    if (!iSession)
        ORIGINATE_ERROR("Failed to create CaptureSession");


    // Set common output stream settings.
    UniqueObj<OutputStreamSettings> streamSettings(iSession->createOutputStreamSettings());
    IOutputStreamSettings *iStreamSettings = interface_cast<IOutputStreamSettings>(streamSettings);
    if (!iStreamSettings)
        ORIGINATE_ERROR("Failed to create OutputStreamSettings");
    iStreamSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iStreamSettings->setEGLDisplay(g_display.get());
    //iStreamSettings->setEGLDisplay(renderer->getEGLDisplay());

    // Create video encoder stream.
    iStreamSettings->setResolution(PREVIEW_STREAM_SIZE);
    UniqueObj<OutputStream> videoStream(iSession->createOutputStream(streamSettings.get()));
    IStream *iVideoStream = interface_cast<IStream>(videoStream);
    if (!iVideoStream)
        ORIGINATE_ERROR("Failed to create video stream");


    UniqueObj<OutputStream> ocvStream(iSession->createOutputStream(streamSettings.get()));
    if (!ocvStream.get())
        ORIGINATE_ERROR("Failed to create OCV stream");



    // Initialize the GStreamer video encoder consumer.
    GstVideoEncoder gstVideoEncoder;
    if (!gstVideoEncoder.initialize(iVideoStream->getEGLStream(), PREVIEW_STREAM_SIZE,
                                    FRAMERATE, BITRATE, ENCODER, MUXER, OUTPUT, m_camindex))
        ORIGINATE_ERROR("Failed to initialize GstVideoEncoder EGLStream consumer");
    if (!gstVideoEncoder.startRecording())
        ORIGINATE_ERROR("Failed to start video recording");

    // Initialize the ocv consumer.
    OCVConsumerThread ocvConsumer(ocvStream.get(), PREVIEW_STREAM_SIZE, renderer);
    PROPAGATE_ERROR(ocvConsumer.initialize());
    PROPAGATE_ERROR(ocvConsumer.waitRunning());

    // Create capture Request and enable the streams in the Request.
    UniqueObj<Request> request(iSession->createRequest(CAPTURE_INTENT_VIDEO_RECORD));
    IRequest *iRequest = interface_cast<IRequest>(request);
    if (!iRequest)
        ORIGINATE_ERROR("Failed to create Request");
    if (iRequest->enableOutputStream(videoStream.get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to enable video stream in Request");
    //if (iRequest->enableOutputStream(previewStream.get()) != STATUS_OK)
    //    ORIGINATE_ERROR("Failed to enable preview stream in Request");
    if (iRequest->enableOutputStream(ocvStream.get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to enable OCV stream in Request");


    // Perform repeat capture requests for LENGTH seconds.
    if (iSession->repeat(request.get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to start repeat capture requests");

    PROPAGATE_ERROR(window.pollingSleep(LENGTH));

    iSession->stopRepeat();

    // Wait until all frames have completed before stopping recording.
    /// @todo: Not doing this may cause a deadlock.
    iSession->waitForIdle();

    // Stop video recording.
    if (!gstVideoEncoder.stopRecording())
        ORIGINATE_ERROR("Failed to stop video recording");
    gstVideoEncoder.shutdown();
    videoStream.reset();

    // Stop ocv consumer.
    ocvStream.reset();
    PROPAGATE_ERROR(ocvConsumer.shutdown());

    return true;

    }


protected:
    CameraDevice    *m_cameraDevice;
    NvEglRenderer   *m_renderer;
    ICaptureSession *m_iSession;
    GstVideoEncoder  m_gstVideoEncoder;
    OCVConsumerThread m_ocvConsumer;

    int              m_camindex;

};

static bool execute()
{

    aaCameraInterface aaCamInterface0(0);
    aaCameraInterface aaCamInterface1(1);

    // Initialize the preview window and EGL display.
    Window &window = Window::getInstance();
    PROPAGATE_ERROR(g_display.initialize(window.getEGLNativeDisplay()));

    PROPAGATE_ERROR(aaCamInterface0.initialize(NULL, NULL, window));
    PROPAGATE_ERROR(aaCamInterface1.initialize(NULL, NULL, window));
    return true;
}

}; // namespace ArgusSamples

int main(int argc, const char *argv[])
{
 //   NvEglRenderer *renderer = NvEglRenderer::createEglRenderer("renderer0", PREVIEW_STREAM_SIZE.width,
 //                                           PREVIEW_STREAM_SIZE.height, 0, 0);
 //   if (!renderer)
 //       ORIGINATE_ERROR("Failed to create EGLRenderer.");

    if (!ArgusSamples::execute())
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}

When I run this, I get encoded output from the first camera but nothing from the second. Below are the on-screen messages. It seems there is some problem related to launching/running threads. Could someone please spot the basic threading issue I seem to have overlooked?

Thanks,

ubuntu@tegra-ubuntu:~/Downloads/argus/build/samples/gstVideoEncode$ ./argus_gstvideoencode 
Inside NvxLiteH264DecoderLowLatencyInitNvxLiteH264DecoderLowLatencyInit set DPB and MjstreamingInside NvxLiteH265DecoderLowLatencyInitNvxLiteH265DecoderLowLatencyInit set DPB and MjstreamingOCV CONSUMER: Waiting until producer is connected...
Framerate set to : 30 at NvxVideoEncoderSetParameterNvMMLiteOpen : Block : BlockType = 4 
===== MSENC =====
OCV CONSUMER: Producer has connected; continuing.
NvMMLiteBlockCreate : Block : BlockType = 4 
OCV CONSUMER: No more frames. Cleaning up.
OCV CONSUMER: Done.
(Argus) Error InvalidState: Receive thread is not running cannot send. (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 94)
(Argus) Error InvalidState:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 101)
(Argus) Error InvalidState: Receive thread is not running cannot send. (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 94)
(Argus) Error InvalidState:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 101)
Error generated. /home/ubuntu/Downloads/argus/samples/gstVideoEncode/main.cpp, initialize:286 Failed to open CameraProvider
(Argus) Error InvalidState: Receive thread is not running cannot send. (in src/rpc/socket/client/ClientSocketManager.cpp, function send(), line 94)
(Argus) Error InvalidState:  (propagating from src/rpc/socket/client/SocketClientDispatch.cpp, function dispatch(), line 101)

Hi,

Please check the sample ‘tegra_multimedia_api/argus/samples/syncSensor’ for multiple-source usage.
Thanks.

Hi AastaLLL,
Looking at the syncSensor example, I see that it has two cameras feeding into the same ‘consumer’ (i.e. StereoDisparityConsumerThread disparityConsumer(iStreamLeft, iStreamRight)). It does not seem to resemble my case.

In my case, I have:

  1. Two cameras
  2. Each camera feeding two independent threads (an encoder and an OpenCV consumer)

Thus there are four independent consumer threads receiving data from two producers. Looking at the error I am getting, the issue seems related to multiple threads. I am not able to get to the root cause of “Receive thread is not running cannot send”.
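
Schematically, the topology I am trying to build is:

camera0 ── CaptureSession0 ──┬── EGLStream ──> GstVideoEncoder thread 0
                             └── EGLStream ──> OCVConsumer thread 0
camera1 ── CaptureSession1 ──┬── EGLStream ──> GstVideoEncoder thread 1
                             └── EGLStream ──> OCVConsumer thread 1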

Thanks,

Hi AastaLLL

I tried the tegra_multimedia_api/argus/samples/syncSensor example. I am looking to connect each camera in our system to an OCVConsumerThread.

I instantiate two OCVConsumerThread objects, which seems to cause problems:

PRODUCER_PRINT("Launching OcvConsumer 1\n");
    OCVConsumerThread ocvConsumer_left(streamLeft.get(), PREVIEW_STREAM_SIZE, NULL,0);
    PRODUCER_PRINT("Launching OcvConsumer 2\n");
    OCVConsumerThread ocvConsumer_right(streamRight.get(), PREVIEW_STREAM_SIZE, NULL,1);

I get a segfault while reading a frame in OCVConsumerThread. With just one camera it does not segfault.

Following is my code. I based OCVConsumerThread on JPEGConsumerThread.

bool OCVConsumerThread::threadExecute()
{
    IStream *iStream = interface_cast<IStream>(m_stream);
    IFrameConsumer *iFrameConsumer = interface_cast<IFrameConsumer>(m_consumer);
    Argus::Status status;

    // Wait until the producer has connected to the stream.
    OCV_CONSUMER_PRINT("Waiting until producer is connected...%d\n", m_id);
    if (iStream->waitUntilConnected() != STATUS_OK)
        ORIGINATE_ERROR("Stream failed to connect.");
    OCV_CONSUMER_PRINT("Producer has connected; continuing. %d\n", m_id);

    int frameCount = 0;
    while (true)
    {
        // Acquire a Frame.
        UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());
        IFrame *iFrame = interface_cast<IFrame>(frame);
        if (!iFrame)
            break;

        // Get the Frame's Image.
        Image *image = iFrame->getImage();
        EGLStream::NV::IImageNativeBuffer *iImageNativeBuffer
              = interface_cast<EGLStream::NV::IImageNativeBuffer>(image);
        TEST_ERROR_RETURN(!iImageNativeBuffer, "Failed to create an IImageNativeBuffer");

        int fd = iImageNativeBuffer->createNvBuffer(Size {m_framesize.width, m_framesize.height},
                NvBufferColorFormat_YUV420, NvBufferLayout_Pitch, &status);
        TEST_ERROR_RETURN(status != STATUS_OK, "Failed to create a native buffer");


        // Read the camera frame into a cv::Mat.
        // This is where the segfault occurs when two OCVConsumer instances run concurrently.
        NvBufferParams params;
        NvBufferGetParams(fd, &params);

        int fsize = params.pitch[0] * m_framesize.height;
        char *data_mem = (char*)mmap(NULL, fsize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, params.offset[0]);
        if (data_mem == MAP_FAILED)
           printf("mmap failed : %s\n", strerror(errno));

        cv::Mat imgbuf = cv::Mat(m_framesize.height, m_framesize.width, CV_8UC1, data_mem, params.pitch[0]);
        cv::imshow("img", imgbuf);
        cv::waitKey(1);
        munmap(data_mem, fsize); // unmap before destroying the buffer so mappings are not leaked
        NvBufferDestroy(fd);

        //OCV_CONSUMER_PRINT("Acquired frame no. %d %d %d %d %d\n", params.width[0], params.height[0], params.pitch[0],m_framesize.width, m_framesize.height);
    }

    OCV_CONSUMER_PRINT("No more frames. Cleaning up.\n");

    PROPAGATE_ERROR(requestShutdown());

    return true;
}
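
One thing I am suspicious of: both threads call cv::imshow with the same window name ("img"), and as far as I know OpenCV's HighGUI is not guaranteed to be thread-safe. A per-instance window name, derived from the m_id already passed to the constructor, would at least rule out a window-name collision. A hypothetical variation of the display lines above (needs <sstream>):

        // Hypothetical: give each consumer thread its own window, e.g. "img0", "img1".
        std::ostringstream winName;
        winName << "img" << m_id;
        cv::imshow(winName.str(), imgbuf);
        cv::waitKey(1);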

Hi,

Could you first check whether you can open two camera sessions?

Use the sample ‘~/tegra_multimedia_api/argus/samples/multiSensor’.
This sample opens two independent camera sessions, using the first for display and the second for JPEG snapshots.
It may be closer to your use case.
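
The key point: create only one CameraProvider in the process, and give each camera its own CaptureSession. Roughly (a sketch, not the full sample):

// One CameraProvider per process; a second CameraProvider::create() can fail,
// which matches the "Failed to open CameraProvider" error in your log.
UniqueObj<CameraProvider> cameraProvider(CameraProvider::create());
ICameraProvider *iCameraProvider = interface_cast<ICameraProvider>(cameraProvider);

std::vector<CameraDevice*> cameraDevices;
iCameraProvider->getCameraDevices(&cameraDevices);

// One independent CaptureSession per sensor.
UniqueObj<CaptureSession> session0(iCameraProvider->createCaptureSession(cameraDevices[0]));
UniqueObj<CaptureSession> session1(iCameraProvider->createCaptureSession(cameraDevices[1]));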

Hi AastaLLL

I am able to open two camera sessions; I verified this by running the multiSensor example. Furthermore, please check my code below.

static bool execute()
{
    // Initialize EGL.
    PROPAGATE_ERROR(g_display.initialize());

    // Initialize the Argus camera provider.
    UniqueObj<CameraProvider> cameraProvider(CameraProvider::create());

    // Get the ICameraProvider interface from the global CameraProvider.
    ICameraProvider *iCameraProvider = interface_cast<ICameraProvider>(cameraProvider);
    if (!iCameraProvider)
        ORIGINATE_ERROR("Failed to get ICameraProvider interface");

    // Get the camera devices.
    std::vector<CameraDevice*> cameraDevices;
    iCameraProvider->getCameraDevices(&cameraDevices);
    if (cameraDevices.size() < 2)
        ORIGINATE_ERROR("Must have at least 2 sensors available");

    std::vector <CameraDevice*> lrCameras;
    lrCameras.push_back(cameraDevices[0]); // Left Camera (the 1st camera will be used for AC)
    lrCameras.push_back(cameraDevices[1]); // Right Camera

    // Create the capture session, AutoControl will be based on what the 1st device sees.
    UniqueObj<CaptureSession> captureSession(iCameraProvider->createCaptureSession(lrCameras));
    ICaptureSession *iCaptureSession = interface_cast<ICaptureSession>(captureSession);
    if (!iCaptureSession)
        ORIGINATE_ERROR("Failed to get capture session interface");

    // Create stream settings object and set settings common to both streams.
    UniqueObj<OutputStreamSettings> streamSettings(iCaptureSession->createOutputStreamSettings());
    IOutputStreamSettings *iStreamSettings = interface_cast<IOutputStreamSettings>(streamSettings);
    if (!iStreamSettings)
        ORIGINATE_ERROR("Failed to create OutputStreamSettings");
    iStreamSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iStreamSettings->setResolution(STREAM_SIZE);
    //iStreamSettings->setEGLDisplay(g_display.get());

    // Create egl streams
    PRODUCER_PRINT("Creating left stream.\n");
    iStreamSettings->setCameraDevice(lrCameras[0]);
    UniqueObj<OutputStream> streamLeft(iCaptureSession->createOutputStream(streamSettings.get()));
    IStream *iStreamLeft = interface_cast<IStream>(streamLeft);
    if (!iStreamLeft)
        ORIGINATE_ERROR("Failed to create left stream");

    PRODUCER_PRINT("Creating right stream.\n");
    iStreamSettings->setCameraDevice(lrCameras[1]);
    UniqueObj<OutputStream> streamRight(iCaptureSession->createOutputStream(streamSettings.get()));
    IStream *iStreamRight = interface_cast<IStream>(streamRight);
    if (!iStreamRight)
        ORIGINATE_ERROR("Failed to create right stream");

    //PRODUCER_PRINT("Launching disparity checking consumer\n");
    //StereoDisparityConsumerThread disparityConsumer(iStreamLeft, iStreamRight);

    PRODUCER_PRINT("Launching OcvConsumer 1\n");
    OCVConsumerThread ocvConsumer_left(streamLeft.get(), PREVIEW_STREAM_SIZE, NULL,0);
    PRODUCER_PRINT("Launching OcvConsumer 2\n");
    OCVConsumerThread ocvConsumer_right(streamRight.get(), PREVIEW_STREAM_SIZE, NULL,1);

    PROPAGATE_ERROR(ocvConsumer_left.initialize());
    PROPAGATE_ERROR(ocvConsumer_left.waitRunning());
    PROPAGATE_ERROR(ocvConsumer_right.initialize());
    PROPAGATE_ERROR(ocvConsumer_right.waitRunning());

    // Create a request
    UniqueObj<Request> request(iCaptureSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    if (!iRequest)
        ORIGINATE_ERROR("Failed to create Request");

    // Enable both streams in the request.
    
    iRequest->enableOutputStream(streamLeft.get());
    iRequest->enableOutputStream(streamRight.get());

    // Submit capture for the specified time.
    PRODUCER_PRINT("Starting repeat capture requests.\n");
    if (iCaptureSession->repeat(request.get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to start repeat capture request for preview");


    sleep(CAPTURE_TIME);

    //PROPAGATE_ERROR(window.pollingSleep(LENGTH));

    // Stop the capture requests and wait until they are complete.
    iCaptureSession->stopRepeat();
    iCaptureSession->waitForIdle();

    // Disconnect Argus producer from the EGLStreams (and unblock consumer acquire).
    PRODUCER_PRINT("Captures complete, disconnecting producer.\n");
    iStreamLeft->disconnect();
    iStreamRight->disconnect();

    // Wait for the consumer thread to complete.
    // PROPAGATE_ERROR(disparityConsumer.shutdown());
    PROPAGATE_ERROR(ocvConsumer_left.shutdown());
    PROPAGATE_ERROR(ocvConsumer_right.shutdown());

    // Shut down Argus.
    cameraProvider.reset();

    // Cleanup the EGL display
    PROPAGATE_ERROR(g_display.cleanup());

    PRODUCER_PRINT("Done -- exiting.\n");
    return true;
}

I am able to get to this point, after launching the two consumers, one for each camera:

PRODUCER_PRINT("Launching OcvConsumer 1\n");
    OCVConsumerThread ocvConsumer_left(streamLeft.get(), PREVIEW_STREAM_SIZE, NULL,0);
    PRODUCER_PRINT("Launching OcvConsumer 2\n");
    OCVConsumerThread ocvConsumer_right(streamRight.get(), PREVIEW_STREAM_SIZE, NULL,1);

Thanks,

Hi,

I wrote a small sample based on multiSensor.

  1. Two camera sessions
  2. Two JPEG consumers

Please adapt your use case from this sample.
I think you should duplicate all the pipeline components.
For example, the iRequest.

(Sorry for the hard-coded style; it is easier to understand.)

/*
 * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *  * Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  * Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *  * Neither the name of NVIDIA CORPORATION nor the names of its
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
 * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "Error.h"
#include "EGLGlobal.h"
#include "GLContext.h"
#include "JPEGConsumer.h"
#include "PreviewConsumer.h"
#include "Window.h"
#include "Thread.h"

#include <Argus/Argus.h>

#include <unistd.h>
#include <stdlib.h>
#include <sstream>
#include <iomanip>

using namespace Argus;

/*
 * This sample opens two independent camera sessions using two sensors, then takes
 * JPEG snapshots from both of them. The JPEG saving happens on two consumer threads
 * using the JPEGConsumerThread class, located in the util folder.
 */

namespace ArgusSamples
{
// Constants.
static const uint32_t CAPTURE_TIME  = 5; // In seconds.
static const uint32_t NUMBER_SESSIONS = 2;

// Globals and derived constants.
UniqueObj<CameraProvider> g_cameraProvider;

// Debug print macros.
#define PRODUCER_PRINT(...) printf("PRODUCER: " __VA_ARGS__)

static bool execute()
{
    Window &window = Window::getInstance();

    // Initialize the Argus camera provider.
    UniqueObj<CameraProvider> cameraProvider(CameraProvider::create());

    // Get the ICameraProvider interface from the global CameraProvider.
    ICameraProvider *iCameraProvider = interface_cast<ICameraProvider>(cameraProvider);
    if (!iCameraProvider)
        ORIGINATE_ERROR("Failed to get ICameraProvider interface");

    // Get the camera devices.
    std::vector<CameraDevice*> cameraDevices;
    iCameraProvider->getCameraDevices(&cameraDevices);
    if (cameraDevices.size() == 0)
        ORIGINATE_ERROR("No cameras available");

    if (cameraDevices.size() < NUMBER_SESSIONS)
    {
        ORIGINATE_ERROR("Insufficient number of sensors present cannot run multisensor sample");
    }

    // Get the second camera's properties, since it is used for the storage session.
    ICameraProperties *iCameraDevice = interface_cast<ICameraProperties>(cameraDevices[1]);
    if (!iCameraDevice)
    {
        ORIGINATE_ERROR("Failed to get the camera device.");
    }

    std::vector<Argus::SensorMode*> sensorModes;
    iCameraDevice->getSensorModes(&sensorModes);
    if (!sensorModes.size())
    {
        ORIGINATE_ERROR("Failed to get valid sensor mode list.");
    }

    // Create the capture sessions, one per sensor.
    UniqueObj<CaptureSession> captureSessions[NUMBER_SESSIONS];
    for (uint32_t i = 0; i < NUMBER_SESSIONS; i++)
    {
        captureSessions[i] =
                UniqueObj<CaptureSession>(iCameraProvider->createCaptureSession(cameraDevices[i]));

        if (!captureSessions[i])
            ORIGINATE_ERROR("Failed to create CaptureSession with device %d.", i);
    }

    ICaptureSession *iStorageCaptureSession1 = interface_cast<ICaptureSession>(captureSessions[0]);
    ICaptureSession *iStorageCaptureSession2 = interface_cast<ICaptureSession>(captureSessions[1]);

    if (!iStorageCaptureSession1 || !iStorageCaptureSession2)
        ORIGINATE_ERROR("Failed to get capture session interfaces");

    // Use the 1st sensor mode as the size we want to store.
    ISensorMode *iMode = interface_cast<ISensorMode>(sensorModes[0]);
    if (!iMode)
        ORIGINATE_ERROR("Failed to get the sensor mode.");

    // Create streams.
    PRODUCER_PRINT("Creating the storage stream.\n");
    UniqueObj<OutputStreamSettings> storageSettings1(
        iStorageCaptureSession1->createOutputStreamSettings());
    UniqueObj<OutputStreamSettings> storageSettings2(
        iStorageCaptureSession2->createOutputStreamSettings());
    IOutputStreamSettings *iStorageSettings1 =
        interface_cast<IOutputStreamSettings>(storageSettings1);
    IOutputStreamSettings *iStorageSettings2 =
        interface_cast<IOutputStreamSettings>(storageSettings2);
    if (iStorageSettings1)
    {
        iStorageSettings1->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
        iStorageSettings1->setResolution(iMode->getResolution());
    }
    if (iStorageSettings2)
    {
        iStorageSettings2->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
        iStorageSettings2->setResolution(iMode->getResolution());
    }
    UniqueObj<OutputStream> storageStream1(
            iStorageCaptureSession1->createOutputStream(storageSettings1.get()));
    UniqueObj<OutputStream> storageStream2(
            iStorageCaptureSession2->createOutputStream(storageSettings2.get()));
    if (!storageStream1.get())
        ORIGINATE_ERROR("Failed to create StorageStream");

    if (!storageStream2.get())
        ORIGINATE_ERROR("Failed to create StorageStream");

    JPEGConsumerThread jpegConsumer1(storageStream1.get());
    JPEGConsumerThread jpegConsumer2(storageStream2.get());
    PROPAGATE_ERROR(jpegConsumer1.initialize());
    PROPAGATE_ERROR(jpegConsumer2.initialize());
    PROPAGATE_ERROR(jpegConsumer1.threadSetPrefix("Camera1_"));
    PROPAGATE_ERROR(jpegConsumer2.threadSetPrefix("Camera2_"));
    PROPAGATE_ERROR(jpegConsumer1.waitRunning());
    PROPAGATE_ERROR(jpegConsumer2.waitRunning());

    // Create the two requests
    UniqueObj<Request> storageRequest1(iStorageCaptureSession1->createRequest());
    UniqueObj<Request> storageRequest2(iStorageCaptureSession2->createRequest());
    if (!storageRequest1 || !storageRequest2)
        ORIGINATE_ERROR("Failed to create Request");

    IRequest *iStorageRequest1 = interface_cast<IRequest>(storageRequest1);
    IRequest *iStorageRequest2 = interface_cast<IRequest>(storageRequest2);
    if (!iStorageRequest1 || !iStorageRequest2)
        ORIGINATE_ERROR("Failed to create Request interface");

    iStorageRequest1->enableOutputStream(storageStream1.get());
    iStorageRequest2->enableOutputStream(storageStream2.get());

    // Keep the repeat capture requests running for CAPTURE_TIME seconds (re-issued once per second).
    for (uint32_t i = 0; i < CAPTURE_TIME; i++)
    {
        if (iStorageCaptureSession1->repeat(storageRequest1.get()) != STATUS_OK)
            ORIGINATE_ERROR("Failed to start repeat capture request for jpg");
        if (iStorageCaptureSession2->repeat(storageRequest2.get()) != STATUS_OK)
            ORIGINATE_ERROR("Failed to start repeat capture request for jpg");
        sleep(1);
    }

    // All done; shut down.
    iStorageCaptureSession1->stopRepeat();
    iStorageCaptureSession2->stopRepeat();
    iStorageCaptureSession1->waitForIdle();
    iStorageCaptureSession2->waitForIdle();

    storageStream1.reset();
    storageStream2.reset();

    // Wait for the consumer threads to complete.
    PROPAGATE_ERROR(jpegConsumer1.shutdown());
    PROPAGATE_ERROR(jpegConsumer2.shutdown());

    // Shut down Argus.
    cameraProvider.reset();

    PRODUCER_PRINT("Done -- exiting.\n");
    return true;
}

}; // namespace ArgusSamples

int main(int argc, const char *argv[])
{
    if (!ArgusSamples::execute())
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}

Thanks AastaLLL for your help. I was not duplicating the request component. Now I can capture from two cameras and feed frames to the OpenCV consumer, where the frames are displayed on screen.
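
For reference, the per-camera pattern that works is roughly this (names illustrative, following AastaLLL's sample above):

// One shared CameraProvider; then, for each camera i, duplicate the whole chain:
UniqueObj<CaptureSession> session(iCameraProvider->createCaptureSession(cameraDevices[i]));
ICaptureSession *iSession = interface_cast<ICaptureSession>(session);

UniqueObj<OutputStreamSettings> settings(iSession->createOutputStreamSettings());
UniqueObj<OutputStream> stream(iSession->createOutputStream(settings.get()));

UniqueObj<Request> request(iSession->createRequest());
interface_cast<IRequest>(request)->enableOutputStream(stream.get());

iSession->repeat(request.get()); // each session drives its own request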

Thanks,

Hi George,

I’m interested in what you’re doing in this post, as I also need to build a multi-view, multi-camera system on my TX2. Where did you download the “OCVConsumer.h” header file, and which Argus version are you using? I’m currently on Argus version 0.96.2 and have only JPEGConsumer.h and PreviewConsumer.h in my utils folder.

Thanks in advance

You may check this GitHub repository.

Thank you Honey_Patouceul!

I will study this.

Hi Honey_Patouceul,

I put the folder of the example I downloaded from Git under ~/tegra_multimedia_api/samples, not under ~/tegra_multimedia_api/argus/samples.

I managed to generate a Makefile by running cmake, but when I run “make” it stops at:

“[ 95%] Building CXX object CMakeFiles/hardwareOpt.dir/home/nvidia/tegra_multimedia_api/samples/common/classes/NvVideoEncoder.cpp.o
make[2]: *** No rule to make target ‘/home/nvidia/tegra_multimedia_api/argus/build/samples/utils/libargussampleutils.a’, needed by ‘hardwareOpt’. Stop.
CMakeFiles/Makefile2:68: recipe for target ‘CMakeFiles/hardwareOpt.dir/all’ failed”

Is this caused by where I put the folder? Or did I miss something when building Argus itself?

Thanks in advance

Have you built inside ~/tegra_multimedia_api/argus? I think you are missing libargussampleutils.a.
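
If you have not built the Argus samples yet, the usual sequence (assuming the default SDK layout) is: cd ~/tegra_multimedia_api/argus, then mkdir build && cd build && cmake .. && make. That should produce build/samples/utils/libargussampleutils.a, the file your link step is looking for.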

Thanks