How to capture audio on Tx2

Hi Nvidia,

I am using the Tegra Multimedia APIs for video capture (HDMI via a TC358840 bridge) plus scaling, and it is working well.

Questions:

  1. How do I capture audio? The next step will be muxing video and audio.

  2. I am using the Tegra Multimedia API code for capturing video, so I need a way to capture audio using API calls
    so that I can mux it with the video later.

  3. I tried to find GStreamer API code examples, but I could only find GStreamer command-line examples.
    Kindly help!

Below is the core code snippet for encoded data.

static bool
VCE_EncoderCapturePlaneDqCallback(struct v4l2_buffer *v4l2_buf, NvBuffer *buffer,
		NvBuffer *shared_buffer, void *arg)
{
	Enccontext_t *ctx = (Enccontext_t *) arg;
	NvVideoEncoder *enc = ctx->enc;

	if (v4l2_buf == NULL)
	{
		cout << "Error while dequeuing buffer from capture plane" << endl;
		VCE_Abort(ctx);
		return false;
	}

	// GOT EOS from encoder. Stop dqthread.
	if (buffer->planes[0].bytesused == 0)
	{
		return false;
	}

	VCE_WriteEncoderOutputFrame(ctx->out_file, buffer);

	if (enc->capture_plane.qBuffer(*v4l2_buf, NULL) < 0)
	{
		cerr << "Error while queueing buffer at capture plane" << endl;
		VCE_Abort(ctx);
		return false;
	}
	return true;
}

Hi meRaza, for audio you have to leverage GStreamer. Here are two posts for your reference:
https://devtalk.nvidia.com/default/topic/1025376/jetson-tx2/video-and-audio-inputs-test-cmd/post/5215353/#5215353
https://devtalk.nvidia.com/default/topic/1028387/jetson-tx1/closed-gst-encoding-pipeline-with-frame-processing-using-cuda-and-libargus/post/5232036/#5232036
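As a quick command-line sanity check before writing any code, a pipeline along these lines can capture ALSA audio and mux it with encoded camera video into an MP4. This is only a sketch: the ALSA device `hw:2,0` and the exact encoder elements (`omxh264enc`, `voaacenc`) depend on your board, capture hardware, and installed GStreamer plugins — check `arecord -l` for the real device number.

```shell
# Sketch: camera video -> H.264, ALSA audio -> AAC, both into qtmux.
# -e sends EOS on Ctrl-C so qtmux can finalize the MP4 properly.
gst-launch-1.0 -e \
    nvcamerasrc ! 'video/x-raw(memory:NVMM),width=640,height=480,framerate=30/1' \
        ! omxh264enc ! h264parse ! qtmux name=mux ! filesink location=av.mp4 \
    alsasrc device=hw:2,0 ! audioconvert ! voaacenc ! aacparse ! mux.
```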

Thanks Danell,

The second link is quite similar to what I want to achieve; I am actually using the 12_camera_v4l2_cuda example code with encoding code added in the same example.

I will try to add the audio and muxing parts from the second link and get back with results/queries.

Hi Danell,

I tried the second link and applied the patch to the 10_camera_recording example, but I am getting the error below.
I have added all the libraries and include paths I could think of.
Kindly help!

I have attached the
a) Code
b) Error
c) Settings

CODE:

/*
 * Copyright (c) 2016-2017, NVIDIA CORPORATION. All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 *  * Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 *  * Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 *  * Neither the name of NVIDIA CORPORATION nor the names of its
 *    contributors may be used to endorse or promote products derived
 *    from this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
 * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
 * PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
 * CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
 * EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
 * PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
 * PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
 * OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
 * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
 * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 */

#include "Error.h"
#include "Thread.h"

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>
#include <gst/base/gstpushsrc.h>
#include <Argus/Argus.h>
#include <EGLStream/EGLStream.h>
#include <EGLStream/NV/ImageNativeBuffer.h>

#include <NvVideoEncoder.h>
#include <NvApplicationProfiler.h>

#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <iostream>
#include <fstream>

#include "NvCudaProc.h"

using namespace Argus;
using namespace EGLStream;

// Constant configuration.
static const int    MAX_ENCODER_FRAMES = 5;
static const int    DEFAULT_FPS        = 30;

// Configurations which can be overridden by the command line
static int          CAPTURE_TIME = 1; // In seconds.
static Size2D<uint32_t> STREAM_SIZE (640, 480);
static std::string  OUTPUT_FILENAME ("output.h264");
static uint32_t     ENCODER_PIXFMT = V4L2_PIX_FMT_H264;
static bool         DO_STAT = false;
static bool         VERBOSE_ENABLE = false;

// Debug print macros.
#define PRODUCER_PRINT(...) printf("PRODUCER: " __VA_ARGS__)
#define CONSUMER_PRINT(...) printf("CONSUMER: " __VA_ARGS__)
#define CHECK_ERROR(expr) \
    do { \
        if ((expr) < 0) { \
            abort(); \
            ORIGINATE_ERROR(#expr " failed"); \
        } \
    } while (0);

namespace ArgusSamples
{

/*******************************************************************************
 * FrameConsumer thread:
 *   Creates an EGLStream::FrameConsumer object to read frames from the stream
 *   and create NvBuffers (dmabufs) from acquired frames before providing the
 *   buffers to V4L2 for video encoding. The encoder will save the encoded
 *   stream to disk.
 ******************************************************************************/
class ConsumerThread : public Thread
{
public:
    //explicit ConsumerThread(OutputStream* stream);
    explicit ConsumerThread(OutputStream* stream, GstElement *appsrc_);
    ~ConsumerThread();

    bool isInError()
    {
        return m_gotError;
    }

private:
    /** @name Thread methods */
    /**@{*/
    virtual bool threadInitialize();
    virtual bool threadExecute();
    virtual bool threadShutdown();
    /**@}*/

    bool createVideoEncoder();
    void abort();

    static bool encoderCapturePlaneDqCallback(
            struct v4l2_buffer *v4l2_buf,
            NvBuffer *buffer,
            NvBuffer *shared_buffer,
            void *arg);

    OutputStream* m_stream;
    UniqueObj<FrameConsumer> m_consumer;
    NvVideoEncoder *m_VideoEncoder;
    std::ofstream *m_outputFile;
    bool m_gotError;

    GstElement *m_appsrc_;
    GstClockTime timestamp;
    EGLDisplay egl_display;
};

//ConsumerThread::ConsumerThread(OutputStream* stream) :
ConsumerThread::ConsumerThread(OutputStream* stream, GstElement *appsrc_) :
        m_stream(stream),
        m_VideoEncoder(NULL),
        m_outputFile(NULL),
        m_gotError(false),
 	m_appsrc_(appsrc_),
        timestamp(0)
{
    egl_display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(egl_display, NULL, NULL);
}

ConsumerThread::~ConsumerThread()
{
    if (m_VideoEncoder)
    {
        if (DO_STAT)
             m_VideoEncoder->printProfilingStats(std::cout);
        delete m_VideoEncoder;
    }

    if (m_outputFile)
        delete m_outputFile;

    eglTerminate(egl_display);
}

bool ConsumerThread::threadInitialize()
{
    // Create the FrameConsumer.
    m_consumer = UniqueObj<FrameConsumer>(FrameConsumer::create(m_stream));
    if (!m_consumer)
        ORIGINATE_ERROR("Failed to create FrameConsumer");

    // Create Video Encoder
    if (!createVideoEncoder())
        ORIGINATE_ERROR("Failed to create video encoder");

    // Create output file
    m_outputFile = new std::ofstream(OUTPUT_FILENAME.c_str());
    if (!m_outputFile)
        ORIGINATE_ERROR("Failed to open output file.");

    // Stream on
    int e = m_VideoEncoder->output_plane.setStreamStatus(true);
    if (e < 0)
        ORIGINATE_ERROR("Failed to stream on output plane");
    e = m_VideoEncoder->capture_plane.setStreamStatus(true);
    if (e < 0)
        ORIGINATE_ERROR("Failed to stream on capture plane");

    // Set video encoder callback
    m_VideoEncoder->capture_plane.setDQThreadCallback(encoderCapturePlaneDqCallback);

    // startDQThread starts a thread internally which calls the
    // encoderCapturePlaneDqCallback whenever a buffer is dequeued
    // on the plane
    m_VideoEncoder->capture_plane.startDQThread(this);

    // Enqueue all the empty capture plane buffers
    for (uint32_t i = 0; i < m_VideoEncoder->capture_plane.getNumBuffers(); i++)
    {
        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];

        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

        v4l2_buf.index = i;
        v4l2_buf.m.planes = planes;

        CHECK_ERROR(m_VideoEncoder->capture_plane.qBuffer(v4l2_buf, NULL));
    }

    return true;
}

bool ConsumerThread::threadExecute()
{
    IStream *iStream = interface_cast<IStream>(m_stream);
    IFrameConsumer *iFrameConsumer = interface_cast<IFrameConsumer>(m_consumer);

    // Wait until the producer has connected to the stream.
    CONSUMER_PRINT("Waiting until producer is connected...\n");
    if (iStream->waitUntilConnected() != STATUS_OK)
        ORIGINATE_ERROR("Stream failed to connect.");
    CONSUMER_PRINT("Producer has connected; continuing.\n");

    int bufferIndex;

    bufferIndex = 0;

    // Keep acquire frames and queue into encoder
    while (!m_gotError)
    {
        NvBuffer *buffer;
        int fd = -1;

        struct v4l2_buffer v4l2_buf;
        struct v4l2_plane planes[MAX_PLANES];

        memset(&v4l2_buf, 0, sizeof(v4l2_buf));
        memset(planes, 0, MAX_PLANES * sizeof(struct v4l2_plane));

        v4l2_buf.m.planes = planes;

        // Check if we need dqBuffer first
        if (bufferIndex < MAX_ENCODER_FRAMES &&
            m_VideoEncoder->output_plane.getNumQueuedBuffers() <
            m_VideoEncoder->output_plane.getNumBuffers())
        {
            // The queue is not full, no need to dqBuffer
            // Prepare buffer index for the following qBuffer
            v4l2_buf.index = bufferIndex++;
        }
        else
        {
            // Output plane full or max outstanding number reached
            CHECK_ERROR(m_VideoEncoder->output_plane.dqBuffer(v4l2_buf, &buffer,
                                                              NULL, 10));
            // Release the frame.
            fd = v4l2_buf.m.planes[0].m.fd;
            NvBufferDestroy(fd);
            if (VERBOSE_ENABLE)
                CONSUMER_PRINT("Released frame. %d\n", fd);
        }

        // Acquire a frame.
        UniqueObj<Frame> frame(iFrameConsumer->acquireFrame());
        IFrame *iFrame = interface_cast<IFrame>(frame);
        if (!iFrame)
        {
            // Send EOS
            v4l2_buf.m.planes[0].m.fd = fd;
            v4l2_buf.m.planes[0].bytesused = 0;
            CHECK_ERROR(m_VideoEncoder->output_plane.qBuffer(v4l2_buf, NULL));
            break;
        }

        // Get the IImageNativeBuffer extension interface and create the fd.
        NV::IImageNativeBuffer *iNativeBuffer =
            interface_cast<NV::IImageNativeBuffer>(iFrame->getImage());
        if (!iNativeBuffer)
            ORIGINATE_ERROR("IImageNativeBuffer not supported by Image.");
        fd = iNativeBuffer->createNvBuffer(STREAM_SIZE,
                                           NvBufferColorFormat_YUV420,
                                           //NvBufferLayout_BlockLinear,
 					   NvBufferLayout_Pitch);
        if (VERBOSE_ENABLE)
            CONSUMER_PRINT("Acquired Frame. %d\n", fd);

	EGLImageKHR egl_image = NULL;
        egl_image = NvEGLImageFromFd(egl_display, fd);
        if (egl_image == NULL)
        {
            fprintf(stderr, "Error while mapping dmabuf fd (0x%X) to EGLImage\n",
                     fd);
        }
        HandleEGLImage(&egl_image);
        NvDestroyEGLImage(egl_display, egl_image);

// Push the frame into V4L2.
        v4l2_buf.m.planes[0].m.fd = fd;
        v4l2_buf.m.planes[0].bytesused = 1; // bytesused must be non-zero
        CHECK_ERROR(m_VideoEncoder->output_plane.qBuffer(v4l2_buf, NULL));
    }

    // Wait till capture plane DQ Thread finishes
    // i.e. all the capture plane buffers are dequeued
    m_VideoEncoder->capture_plane.waitForDQThread(2000);

    CONSUMER_PRINT("Done.\n");

    requestShutdown();

    return true;
}

bool ConsumerThread::threadShutdown()
{
    return true;
}

bool ConsumerThread::createVideoEncoder()
{
    int ret = 0;

    m_VideoEncoder = NvVideoEncoder::createVideoEncoder("enc0");
    if (!m_VideoEncoder)
        ORIGINATE_ERROR("Could not create video encoder");

    if (DO_STAT)
        m_VideoEncoder->enableProfiling();

    ret = m_VideoEncoder->setCapturePlaneFormat(ENCODER_PIXFMT, STREAM_SIZE.width(),
                                    STREAM_SIZE.height(), 2 * 1024 * 1024);
    if (ret < 0)
        ORIGINATE_ERROR("Could not set capture plane format");

    ret = m_VideoEncoder->setOutputPlaneFormat(V4L2_PIX_FMT_YUV420M, STREAM_SIZE.width(),
                                    STREAM_SIZE.height());
    if (ret < 0)
        ORIGINATE_ERROR("Could not set output plane format");

    ret = m_VideoEncoder->setBitrate(4 * 1024 * 1024);
    if (ret < 0)
        ORIGINATE_ERROR("Could not set bitrate");

    if (ENCODER_PIXFMT == V4L2_PIX_FMT_H264)
    {
        ret = m_VideoEncoder->setProfile(V4L2_MPEG_VIDEO_H264_PROFILE_HIGH);
    }
    else
    {
        ret = m_VideoEncoder->setProfile(V4L2_MPEG_VIDEO_H265_PROFILE_MAIN);
    }
    if (ret < 0)
        ORIGINATE_ERROR("Could not set encoder profile");

    if (ENCODER_PIXFMT == V4L2_PIX_FMT_H264)
    {
        ret = m_VideoEncoder->setLevel(V4L2_MPEG_VIDEO_H264_LEVEL_5_0);
        if (ret < 0)
            ORIGINATE_ERROR("Could not set encoder level");
    }

    ret = m_VideoEncoder->setRateControlMode(V4L2_MPEG_VIDEO_BITRATE_MODE_CBR);
    if (ret < 0)
        ORIGINATE_ERROR("Could not set rate control mode");

    ret = m_VideoEncoder->setIFrameInterval(30);
    if (ret < 0)
        ORIGINATE_ERROR("Could not set I-frame interval");

    ret = m_VideoEncoder->setFrameRate(30, 1);
    if (ret < 0)
        ORIGINATE_ERROR("Could not set encoder framerate");

    ret = m_VideoEncoder->setHWPresetType(V4L2_ENC_HW_PRESET_ULTRAFAST);
    if (ret < 0)
        ORIGINATE_ERROR("Could not set encoder HW preset");

    // Query, Export and Map the output plane buffers so that we can read
    // raw data into the buffers
    ret = m_VideoEncoder->output_plane.setupPlane(V4L2_MEMORY_DMABUF, 10, true, false);
    if (ret < 0)
        ORIGINATE_ERROR("Could not setup output plane");

    // Query, Export and Map the capture plane buffers so that we can read
    // the encoded data from the buffers
    ret = m_VideoEncoder->capture_plane.setupPlane(V4L2_MEMORY_MMAP, 10, true, false);
    if (ret < 0)
        ORIGINATE_ERROR("Could not setup capture plane");

    printf("create video encoder return true\n");
    return true;
}

void ConsumerThread::abort()
{
    m_VideoEncoder->abort();
    m_gotError = true;
}

bool ConsumerThread::encoderCapturePlaneDqCallback(struct v4l2_buffer *v4l2_buf,
                                                   NvBuffer * buffer,
                                                   NvBuffer * shared_buffer,
                                                   void *arg)
{
    ConsumerThread *thiz = (ConsumerThread*)arg;

    if (!v4l2_buf)
    {
        thiz->abort();
        ORIGINATE_ERROR("Failed to dequeue buffer from encoder capture plane");
    }

#if 1
    if (buffer->planes[0].bytesused > 0)
    {
        GstBuffer *gstbuf;
        GstMapInfo map = {0};
        GstFlowReturn ret;
        gstbuf = gst_buffer_new_allocate (NULL, buffer->planes[0].bytesused, NULL);
        gstbuf->pts = thiz->timestamp;
        thiz->timestamp += 33333333; // ns

        gst_buffer_map (gstbuf, &map, GST_MAP_WRITE);
        memcpy(map.data, buffer->planes[0].data , buffer->planes[0].bytesused);
        gst_buffer_unmap(gstbuf, &map);

        g_signal_emit_by_name (thiz->m_appsrc_, "push-buffer", gstbuf, &ret);
        gst_buffer_unref(gstbuf);
    }
    else
    {
        gst_app_src_end_of_stream((GstAppSrc *)thiz->m_appsrc_);
        sleep(1);
    }
#else
     thiz->m_outputFile->write((char *) buffer->planes[0].data,
                               buffer->planes[0].bytesused);

#endif

    if (thiz->m_VideoEncoder->capture_plane.qBuffer(*v4l2_buf, NULL) < 0)
    {
        thiz->abort();
        ORIGINATE_ERROR("Failed to enqueue buffer to encoder capture plane");
        return false;
    }

    // GOT EOS from encoder. Stop dqthread.
    if (buffer->planes[0].bytesused == 0)
    {
        CONSUMER_PRINT("Got EOS, exiting...\n");
        return false;
    }

    return true;
}

/*******************************************************************************
 * Argus Producer thread:
 *   Opens the Argus camera driver, creates an OutputStream to output to a
 *   FrameConsumer, then performs repeating capture requests for CAPTURE_TIME
 *   seconds before closing the producer and Argus driver.
 ******************************************************************************/
static bool execute()
{
    GMainLoop *main_loop;
    GstPipeline *gst_pipeline = NULL;
    GError *err = NULL;
    GstElement *appsrc_;

    gst_init (0, NULL);
    main_loop = g_main_loop_new (NULL, FALSE);
    char launch_string_[1024];

    sprintf(launch_string_,
            "appsrc name=mysource ! video/x-h264,width=%d,height=%d,stream-format=byte-stream !",
            STREAM_SIZE.width(), STREAM_SIZE.height());
    sprintf(launch_string_ + strlen(launch_string_),
                " h264parse ! qtmux ! filesink location=a.mp4 ");
    gst_pipeline = (GstPipeline*)gst_parse_launch(launch_string_, &err);
    appsrc_ = gst_bin_get_by_name(GST_BIN(gst_pipeline), "mysource");
    gst_app_src_set_stream_type(GST_APP_SRC(appsrc_), GST_APP_STREAM_TYPE_STREAM);
    gst_element_set_state((GstElement*)gst_pipeline, GST_STATE_PLAYING);

 // Create the CameraProvider object and get the core interface.
    UniqueObj<CameraProvider> cameraProvider = UniqueObj<CameraProvider>(CameraProvider::create());
    ICameraProvider *iCameraProvider = interface_cast<ICameraProvider>(cameraProvider);
    if (!iCameraProvider)
        ORIGINATE_ERROR("Failed to create CameraProvider");

    // Get the camera devices.
    std::vector<CameraDevice*> cameraDevices;
    iCameraProvider->getCameraDevices(&cameraDevices);
    if (cameraDevices.size() == 0)
        ORIGINATE_ERROR("No cameras available");

    // Create the capture session using the first device and get the core interface.
    UniqueObj<CaptureSession> captureSession(
            iCameraProvider->createCaptureSession(cameraDevices[0]));
    ICaptureSession *iCaptureSession = interface_cast<ICaptureSession>(captureSession);
    if (!iCaptureSession)
        ORIGINATE_ERROR("Failed to get ICaptureSession interface");

    // Create the OutputStream.
    PRODUCER_PRINT("Creating output stream\n");
    UniqueObj<OutputStreamSettings> streamSettings(iCaptureSession->createOutputStreamSettings());
    IOutputStreamSettings *iStreamSettings = interface_cast<IOutputStreamSettings>(streamSettings);
    if (!iStreamSettings)
        ORIGINATE_ERROR("Failed to get IOutputStreamSettings interface");

    iStreamSettings->setPixelFormat(PIXEL_FMT_YCbCr_420_888);
    iStreamSettings->setResolution(STREAM_SIZE);
    UniqueObj<OutputStream> outputStream(iCaptureSession->createOutputStream(streamSettings.get()));

    // Launch the FrameConsumer thread to consume frames from the OutputStream.
    PRODUCER_PRINT("Launching consumer thread\n");
    ConsumerThread frameConsumerThread(outputStream.get(), appsrc_);    
	//ConsumerThread frameConsumerThread(outputStream.get());
    PROPAGATE_ERROR(frameConsumerThread.initialize());

    // Wait until the consumer is connected to the stream.
    PROPAGATE_ERROR(frameConsumerThread.waitRunning());

    // Create capture request and enable output stream.
    UniqueObj<Request> request(iCaptureSession->createRequest());
    IRequest *iRequest = interface_cast<IRequest>(request);
    if (!iRequest)
        ORIGINATE_ERROR("Failed to create Request");
    iRequest->enableOutputStream(outputStream.get());

    ISourceSettings *iSourceSettings = interface_cast<ISourceSettings>(iRequest->getSourceSettings());
    if (!iSourceSettings)
        ORIGINATE_ERROR("Failed to get ISourceSettings interface");
    iSourceSettings->setFrameDurationRange(Range<uint64_t>(1e9/DEFAULT_FPS));

    // Submit capture requests.
    PRODUCER_PRINT("Starting repeat capture requests.\n");
    if (iCaptureSession->repeat(request.get()) != STATUS_OK)
        ORIGINATE_ERROR("Failed to start repeat capture request");

    // Wait for CAPTURE_TIME seconds.
    for (int i = 0; i < CAPTURE_TIME && !frameConsumerThread.isInError(); i++)
        sleep(1);

    // Stop the repeating request and wait for idle.
    iCaptureSession->stopRepeat();
    iCaptureSession->waitForIdle();

    // Destroy the output stream to end the consumer thread.
    outputStream.reset();

    // Wait for the consumer thread to complete.
    PROPAGATE_ERROR(frameConsumerThread.shutdown());

     gst_element_set_state((GstElement*)gst_pipeline, GST_STATE_NULL);
     gst_object_unref(GST_OBJECT(gst_pipeline));
     g_main_loop_unref(main_loop);
     gst_deinit();

    PRODUCER_PRINT("Done -- exiting.\n");

    return true;
}

}; // namespace ArgusSamples

static void printHelp()
{
    printf("Usage: camera_recording [OPTIONS]\n"
           "Options:\n"
           "  -r        Set output resolution WxH [Default 640x480]\n"
           "  -f        Set output filename [Default output.h264]\n"
           "  -t        Set encoder type H264 or H265 [Default H264]\n"
           "  -d        Set capture duration [Default 1 second]\n"
           "  -s        Enable profiling\n"
           "  -v        Enable verbose message\n"
           "  -h        Print this help\n");
}

static bool parseCmdline(int argc, char **argv)
{
    int c, w, h;
    bool haveFilename = false;
    while ((c = getopt(argc, argv, "r:f:t:d:s::v::h")) != -1)
    {
        switch (c)
        {
            case 'r':
                if (sscanf(optarg, "%dx%d", &w, &h) != 2)
                    return false;
                STREAM_SIZE.width() = w;
                STREAM_SIZE.height() = h;
                break;
            case 'f':
                OUTPUT_FILENAME = optarg;
                haveFilename = true;
                break;
            case 't':
                if (strcmp(optarg, "H264") == 0)
                    ENCODER_PIXFMT = V4L2_PIX_FMT_H264;
                else if (strcmp(optarg, "H265") == 0)
                {
                    ENCODER_PIXFMT = V4L2_PIX_FMT_H265;
                    if (!haveFilename)
                        OUTPUT_FILENAME = "output.h265";
                }
                else
                    return false;
                break;
            case 'd':
                CAPTURE_TIME = atoi(optarg);
                break;
            case 's':
                DO_STAT = true;
                break;
            case 'v':
                VERBOSE_ENABLE = true;
                break;
            default:
                return false;
        }
    }
    return true;
}

int main(int argc, char *argv[])
{
    if (!parseCmdline(argc, argv))
    {
        printHelp();
        return EXIT_FAILURE;
    }

    NvApplicationProfiler &profiler = NvApplicationProfiler::getProfilerInstance();

    if (!ArgusSamples::execute())
        return EXIT_FAILURE;

    profiler.stop();
    profiler.printProfilerData(std::cout);

    return EXIT_SUCCESS;
}

ERROR

Building target: gstreamerTest01
Invoking: NVCC Linker
/usr/local/cuda-9.0/bin/nvcc --cudart static -L/usr/lib/aarch64-linux-gnu -L/usr/local/cuda/targets/aarch64-linux/lib -L/usr/lib/aarch64-linux-gnu/tegra --relocatable-device-code=false -link -o  "gstreamerTest01"  ./main.o ./main1.o  ./common/NvAnalysis.o ./common/NvApplicationProfiler.o ./common/NvBuffer.o ./common/NvCudaProc.o ./common/NvDrmRenderer.o ./common/NvEglRenderer.o ./common/NvElement.o ./common/NvElementProfiler.o ./common/NvJpegDecoder.o ./common/NvJpegEncoder.o ./common/NvLogging.o ./common/NvUtils.o ./common/NvV4l2Element.o ./common/NvV4l2ElementPlane.o ./common/NvVideoConverter.o ./common/NvVideoDecoder.o ./common/NvVideoEncoder.o ./common/Thread.o   -lgstreamer-1.0  -lgstbadvideo-1.0 -lgstbadbase-1.0 -lgstbadaudio-1.0 -lgstreamer-0.10 -lnvjpeg -lnvosd -lnvbuf_utils -lX11 -lGLESv2 -lEGL -lv4l2 -lcudart -lcuda -ldrm -largus -lnveglstream_camconsumer -lpthread -lgobject-2.0  -lglib-2.0 
./main.o: In function `ArgusSamples::ConsumerThread::encoderCapturePlaneDqCallback(v4l2_buffer*, NvBuffer*, NvBuffer*, void*)':
makefile:59: recipe for target 'gstreamerTest01' failed
make: Leaving directory '/home/nvidia/Desktop/aasim/Gstreamer1/Debug'
/home/nvidia/Desktop/aasim/Gstreamer1/Debug/../main.cpp:431: undefined reference to `gst_app_src_end_of_stream'
./main.o: In function `ArgusSamples::execute()':
/home/nvidia/Desktop/aasim/Gstreamer1/Debug/../main.cpp:481: undefined reference to `gst_app_src_get_type'
/home/nvidia/Desktop/aasim/Gstreamer1/Debug/../main.cpp:481: undefined reference to `gst_app_src_set_stream_type'
collect2: error: ld returned 1 exit status
make: *** [gstreamerTest01] Error 1
> Shell Completed (exit code = 2)

SETTINGS:

INCLUDES----->

/usr/include/gstreamer-1.0
/usr/include/aarch64-linux-gnu
/home/nvidia/tegra_multimedia_api/samples/common/classes
/home/nvidia/tegra_multimedia_api/include/libjpeg-8b
/usr/include/libdrm/
/home/nvidia/tegra_multimedia_api/samples/common/algorithm/cuda
/home/nvidia/tegra_multimedia_api/argus/samples/utils
/home/nvidia/tegra_multimedia_api/include
/usr/include/gstreamer-1.0/gst/
/usr/include/gstreamer-1.0/gst/app
/usr/include/glib-2.0
/usr/lib/x86_64-linux-gnu/glib-2.0/include

LIBS---->

gstreamer-1.0
gstbadvideo-1.0
gstbadbase-1.0
gstbadaudio-1.0
gstreamer-0.10
nvjpeg
nvosd
nvbuf_utils
X11
GLESv2
EGL
v4l2
cudart
cuda
drm
argus
nveglstream_camconsumer
pthread
gobject-2.0
glib-2.0

/usr/lib/aarch64-linux-gnu
/usr/local/cuda/targets/aarch64-linux/lib
/usr/lib/aarch64-linux-gnu/tegra

Hi ,

I was able to solve the link problem by adding gstapp-1.0 …
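For anyone hitting the same undefined references: `gst_app_src_*` and `gst_app_sink_*` live in libgstapp-1.0, which is not pulled in by `-lgstreamer-1.0` alone. A pkg-config based Makefile fragment (a sketch; the variable names are illustrative) avoids hard-coding the library path at all:

```make
# Sketch: let pkg-config supply appsrc/appsink flags instead of hard-coding
# -lgstapp-1.0 (the pkg-config package name is gstreamer-app-1.0).
GST_CFLAGS := $(shell pkg-config --cflags gstreamer-1.0 gstreamer-app-1.0)
GST_LIBS   := $(shell pkg-config --libs   gstreamer-1.0 gstreamer-app-1.0)

CPPFLAGS += $(GST_CFLAGS)
LDLIBS   += $(GST_LIBS)
```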

Hi Danell,

This example works well, but it writes the .mp4 file directly.
In my application I need to get the muxed output back in a buffer.

Can you help me get the muxed data into a normal buffer, perhaps using a memcpy, passing a buffer as an argument, or a function returning a memory pointer?

Kindly help.

Something like this:

char  MuxedBuff[20000];
MuxedBuff = gst_xxxx();
static bool execute()
{
    GMainLoop *main_loop;
    GstPipeline *gst_pipeline = NULL;
    GError *err = NULL;
    GstElement *appsrc_;

    gst_init (0, NULL);
    main_loop = g_main_loop_new (NULL, FALSE);
    char launch_string_[1024];

    sprintf(launch_string_,
            "appsrc name=mysource ! video/x-h264,width=%d,height=%d,stream-format=byte-stream !",
            STREAM_SIZE.width(), STREAM_SIZE.height());
    sprintf(launch_string_ + strlen(launch_string_),
                " h264parse ! qtmux ! filesink location=a.mp4 ");
    gst_pipeline = (GstPipeline*)gst_parse_launch(launch_string_, &err);
    appsrc_ = gst_bin_get_by_name(GST_BIN(gst_pipeline), "mysource");
    gst_app_src_set_stream_type(GST_APP_SRC(appsrc_), GST_APP_STREAM_TYPE_STREAM);
    gst_element_set_state((GstElement*)gst_pipeline, GST_STATE_PLAYING);
bool ConsumerThread::encoderCapturePlaneDqCallback(struct v4l2_buffer *v4l2_buf,
                                                   NvBuffer * buffer,
                                                   NvBuffer * shared_buffer,
                                                   void *arg)
{
    ConsumerThread *thiz = (ConsumerThread*)arg;

    if (!v4l2_buf)
    {
        thiz->abort();
        ORIGINATE_ERROR("Failed to dequeue buffer from encoder capture plane");
    }

#if 1
    if (buffer->planes[0].bytesused > 0)
    {
        GstBuffer *gstbuf;
        GstMapInfo map = {0};
        GstFlowReturn ret;
        gstbuf = gst_buffer_new_allocate (NULL, buffer->planes[0].bytesused, NULL);
        gstbuf->pts = thiz->timestamp;
        thiz->timestamp += 33333333; // ns

        gst_buffer_map (gstbuf, &map, GST_MAP_WRITE);
        memcpy(map.data, buffer->planes[0].data , buffer->planes[0].bytesused);
        gst_buffer_unmap(gstbuf, &map);

        g_signal_emit_by_name (thiz->m_appsrc_, "push-buffer", gstbuf, &ret);
        gst_buffer_unref(gstbuf);
    }
    else
    {
        gst_app_src_end_of_stream((GstAppSrc *)thiz->m_appsrc_);
        sleep(1);
    }
#else
     thiz->m_outputFile->write((char *) buffer->planes[0].data,
                               buffer->planes[0].bytesused);

#endif

This looks to be a question about GStreamer programming.

Please get help from the GStreamer forum: http://gstreamer-devel.966125.n4.nabble.com/
Or @vsw may share experience from:
https://devtalk.nvidia.com/default/topic/1028387/jetson-tx1/closed-gst-encoding-pipeline-with-frame-processing-using-cuda-and-libargus/post/5232064/#5232064

Sure, thanks for pointing me down the right path. I will post there and get back with results.

Hi Danell,

It's done. I had missed this line; with it, the new-sample callback fires for the audio/muxed stream, and in that same function I copy the muxed data back into a local buffer:

g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample_from_sink), NULL);
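For completeness, the appsink side of that connection can look roughly like this. This is a sketch only: `on_new_sample_from_sink`, `MuxedBuff`, and `muxed_len` are illustrative names, and it assumes the launch string's `filesink` was replaced by an `appsink name=sink`.

```c
#include <gst/app/gstappsink.h>
#include <string.h>

/* Illustrative application-owned storage for the latest muxed chunk. */
static guint8 MuxedBuff[2 * 1024 * 1024];
static gsize  muxed_len = 0;

/* Pull each muxed chunk the pipeline produces and copy it out. */
static GstFlowReturn
on_new_sample_from_sink(GstAppSink *sink, gpointer user_data)
{
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (!sample)
        return GST_FLOW_EOS;

    GstBuffer *buf = gst_sample_get_buffer(sample);
    GstMapInfo map;
    if (gst_buffer_map(buf, &map, GST_MAP_READ))
    {
        muxed_len = MIN(map.size, sizeof(MuxedBuff));
        memcpy(MuxedBuff, map.data, muxed_len);
        gst_buffer_unmap(buf, &map);
    }
    gst_sample_unref(sample);
    return GST_FLOW_OK;
}
```

Note that the appsink must have signal emission enabled, e.g. `g_object_set(sink, "emit-signals", TRUE, NULL);`, or "new-sample" will never fire.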

closed