Encode and Preview code

Hi Folks,

I need to record video from my camera and, concurrently, preview it.

The following command line works for me:

gst-launch-1.0 -vv nvcamerasrc fpsRange="30.0 30.0" ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1' !  tee name=t ! queue leaky=1 ! nvoverlaysink -e t. ! queue ! omxh265enc iframeinterval=24 bitrate=10000000 ! h265parse ! queue name=queenc ! matroskamux name=mux ! filesink location=/home/ubuntu/cameracapture2.mkv -e

I would like to convert this to C code so that I can extract and process the frames being recorded.

The following is what I came up with. This code is not fully working: I see the image on the preview screen, and then everything freezes. The output encoded video file (/home/ubuntu/cameracapture2.mkv) is never generated. It seems that some queue/buffer is getting choked.

Please help me find any potential issues in my code.

#include <gst/gst.h>

int main(int argc, char *argv[]) {
  GstElement *pipeline, *source, *caps, *sink, *fsink;
  GstBus *bus;
  GstCaps *filtercaps;
  GstElement *tee, *encoder_q,*encoder_qmux, *vq1, *vq2;
  GstElement *encoder;
  GstElement *parser;
  GstElement *mux;
  GstMessage *msg;
  GstBin     *recorder;
  GstStateChangeReturn ret;
  GstPad      *srcpad,*sinkpad; 

  /* Initialize GStreamer */
  gst_init (&argc, &argv);

  /* Create the elements */
  source        = gst_element_factory_make ("nvcamerasrc", "source");
  sink          = gst_element_factory_make ("nvoverlaysink", "sink");
  tee           = gst_element_factory_make ("tee", "videotee");
  encoder_q     = gst_element_factory_make ("queue", "encoderq");
  encoder_qmux  = gst_element_factory_make ("queue", "muxq");
  vq1           = gst_element_factory_make ("queue", "q1");
  vq2           = gst_element_factory_make ("queue", "q2");
  encoder       = gst_element_factory_make ("omxh265enc" , "h265encoder");
  parser        = gst_element_factory_make ("h265parse", "parser-h265");
  mux           = gst_element_factory_make ("matroskamux", "muxer");
  fsink         = gst_element_factory_make ("filesink", "destination");  

  recorder = GST_BIN(gst_bin_new("recording-bin"));

  /* Create the empty pipeline */
  pipeline = gst_pipeline_new ("test-pipeline");

  if (!pipeline || !source || !sink || !tee || !encoder_q || !encoder_qmux || !vq1 || !vq2 || !encoder || !parser || !mux || !fsink) {
    g_printerr ("Not all elements could be created.\n");
    return -1;
  }


  caps = gst_element_factory_make ("capsfilter", "filter");
  g_assert (caps != NULL); /* should always exist */


  filtercaps = gst_caps_from_string("video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420, framerate=(fraction)30/1");
  g_object_set (G_OBJECT (caps), "caps", filtercaps, NULL);
  gst_caps_unref (filtercaps);


  /* Modify the source's properties */
  //g_object_set (source, "pattern", 0, NULL);
  g_object_set (encoder, "iframeinterval" , 24, "bitrate" , 10000000, NULL);
  g_object_set (mux,     "name" , "mux", NULL);
  g_object_set (fsink,   "location", "/home/ubuntu/cameracapture2.mkv", NULL);
  g_object_set (encoder_qmux,      "name", "queenc", NULL);
  g_object_set (vq1,      "leaky", 1, NULL);

  /* Build recorder pipeline */
  sinkpad               = gst_element_get_static_pad(encoder_q,"sink");
  GstPad  *ghost        = gst_ghost_pad_new("vsink", sinkpad);
  if (NULL == ghost){
     g_error("Unable to create ghostpad !\n");
  }  
  gst_element_add_pad(GST_ELEMENT(recorder),ghost);
  gst_object_unref (GST_OBJECT(sinkpad));
  gst_element_link_many(encoder_q,encoder,parser,encoder_qmux,mux, fsink,NULL);

  /* Build the pipeline */
  gst_bin_add_many (GST_BIN (pipeline), source, caps, tee, vq1, sink, vq2, recorder, NULL);
  if (gst_element_link_many (source,caps,tee, vq1, sink, NULL) != TRUE) {
    g_printerr ("Elements could not be linked.\n");
    gst_object_unref (pipeline);
    return -1;
  }

  /* link the tee and queues */
  srcpad                = gst_element_get_request_pad(tee,"src_%u");
  sinkpad               = gst_element_get_static_pad(vq1,"sink");
  gst_pad_link(srcpad,sinkpad);
  gst_element_link(vq1,sink);
  srcpad                = gst_element_get_request_pad(tee,"src_%u");
  sinkpad               = gst_element_get_static_pad (GST_ELEMENT(recorder),"vsink");
  gst_pad_link(srcpad,sinkpad);
  



  /* Start playing */
  ret = gst_element_set_state (pipeline, GST_STATE_PLAYING);
  if (ret == GST_STATE_CHANGE_FAILURE) {
    g_printerr ("Unable to set the pipeline to the playing state.\n");
    gst_object_unref (pipeline);
    return -1;
  }

  /* Wait until error or EOS */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

  /* Parse message */
  if (msg != NULL) {
    GError *err;
    gchar *debug_info;

    switch (GST_MESSAGE_TYPE (msg)) {
      case GST_MESSAGE_ERROR:
        gst_message_parse_error (msg, &err, &debug_info);
        g_printerr ("Error received from element %s: %s\n", GST_OBJECT_NAME (msg->src), err->message);
        g_printerr ("Debugging information: %s\n", debug_info ? debug_info : "none");
        g_clear_error (&err);
        g_free (debug_info);
        break;
      case GST_MESSAGE_EOS:
        g_print ("End-Of-Stream reached.\n");
        break;
      default:
        /* We should not reach here because we only asked for ERRORs and EOS */
        g_printerr ("Unexpected message received.\n");
        break;
    }
    gst_message_unref (msg);
  }

  /* Free resources */
  gst_object_unref (bus);
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}

Hi,
Please refer to the source code of nvgstcapture-1.0 in
https://developer.nvidia.com/embedded/dlc/l4t-sources-24-2-1

Also see this sample from @dk1900:
https://devtalk.nvidia.com/default/topic/1005986/jetson-tk1/tk1-omxh264enc-not-respecting-bitrate-during-scene-changes-/post/5137046/#5137046

Thanks DaneLLL.

After a few fixes to the aforesaid code, I am able to preview and record. My pipeline now looks like this:

nvcamerasrc --> caps --> tee --> queue --> nvoverlaysink
                          |
                          +--> encoder_q --> encoder --> parser --> mux --> fsink
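
For the record, the fixes were essentially these: the recording elements were never added to the recorder bin (so neither the ghost pad nor gst_element_link_many() had anything to work on), and the tee --> queue --> nvoverlaysink branch was linked twice, once inside gst_element_link_many() and again through the request pad. A sketch of the corrected wiring (pad unrefs added as well):

/* The recording elements must be added to the bin BEFORE the ghost pad
 * is created and before linking; unparented elements cannot be linked. */
gst_bin_add_many (GST_BIN (recorder), encoder_q, encoder, parser,
                  encoder_qmux, mux, fsink, NULL);
gst_element_link_many (encoder_q, encoder, parser, encoder_qmux, mux, fsink, NULL);

/* Link the common part only once; each tee branch gets its own request
 * pad below, so vq1 and sink must NOT also appear in this link_many. */
gst_element_link_many (source, caps, tee, NULL);

/* Preview branch: tee -> vq1 -> nvoverlaysink */
srcpad  = gst_element_get_request_pad (tee, "src_%u");
sinkpad = gst_element_get_static_pad (vq1, "sink");
gst_pad_link (srcpad, sinkpad);
gst_object_unref (srcpad);
gst_object_unref (sinkpad);
gst_element_link (vq1, sink);

/* Recording branch: tee -> ghost pad "vsink" of the recorder bin */
srcpad  = gst_element_get_request_pad (tee, "src_%u");
sinkpad = gst_element_get_static_pad (GST_ELEMENT (recorder), "vsink");
gst_pad_link (srcpad, sinkpad);
gst_object_unref (srcpad);
gst_object_unref (sinkpad);

Also, like the -e flag in gst-launch, the application should send EOS (gst_element_send_event (pipeline, gst_event_new_eos ())) before tearing the pipeline down, so that matroskamux can finalize a playable file.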

Next, I would like to extract frames from the upper half of the pipeline for further computer-vision processing. I looked up the GStreamer docs, and it seems there is a way to extract frames from an appsink using signals/callbacks.
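
From the docs, the basic pattern appears to be the following (my untested sketch: on_new_sample is my own name, and I assume the branch is converted out of NVMM memory, e.g. with nvvidconv and plain video/x-raw caps, before the appsink, since NVMM buffers are not plain system memory):

#include <gst/app/gstappsink.h>

/* Called for every frame when "emit-signals" is TRUE on the appsink. */
static GstFlowReturn
on_new_sample (GstAppSink *appsink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (appsink);
  if (sample == NULL)
    return GST_FLOW_EOS;

  GstBuffer *buffer = gst_sample_get_buffer (sample);
  GstMapInfo map;
  if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
    /* map.data / map.size expose the raw frame for CV processing */
    gst_buffer_unmap (buffer, &map);
  }

  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

/* Wiring, in place of nvoverlaysink on the processing branch:
 *   GstElement *appsink = gst_element_factory_make ("appsink", "cvsink");
 *   g_object_set (appsink, "emit-signals", TRUE, NULL);
 *   g_signal_connect (appsink, "new-sample", G_CALLBACK (on_new_sample), NULL);
 */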

What is the main difference between appsink and nvoverlaysink? Is there any way I can extract frames from nvoverlaysink?

Is there any other way I can extract frames for further processing from the pipeline that I have built?

Thanks,

Hi George,
Please refer to
https://devtalk.nvidia.com/default/topic/978438/jetson-tx1/optimizing-access-to-image-data-acquired-with-nvcamerasrc/post/5026998/#5026998

You can use nvivafilter.
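
A typical invocation looks something like this (cuda-process and customer-lib-name are nvivafilter properties; libnvsample_cudaprocess.so is the sample CUDA processing library shipped with L4T, so the exact name may vary by release):

gst-launch-1.0 nvcamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080, format=(string)I420' ! nvivafilter cuda-process=true customer-lib-name="libnvsample_cudaprocess.so" ! 'video/x-raw(memory:NVMM), format=(string)NV12' ! nvoverlaysink -e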

Hi DaneLLL

Our image-processing/CV pipeline needs to keep copies of a few frames, refer to both old and current frames, and produce a few 'intermediate' frames/results.

From my first read of nvivafilter, it does not seem suitable for our requirements (for example, because it computes pixels in place). Are there examples I can refer to where:

  1. Input camera frame buffers are read and managed in a queue of frames?
  2. Memory for intermediate frames is allocated?
  3. Memory for intermediate/final output frames is referenced?

I would assume that my CV pipeline requirements are not unique and are quite typical of current Jetson-based CV applications.
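
To make point 1 concrete, I imagine something along these lines on the CPU side, on top of an appsink callback (a rough sketch; frame_queue, push_frame, and MAX_FRAMES are my own hypothetical names):

#define MAX_FRAMES 5                /* hypothetical history depth */

static GQueue *frame_queue = NULL;  /* holds GstSample* refs, newest at the tail */

/* Keep a bounded history of the most recent frames. */
static void
push_frame (GstSample *sample)
{
  if (frame_queue == NULL)
    frame_queue = g_queue_new ();

  /* Take our own reference so the sample outlives the appsink callback. */
  g_queue_push_tail (frame_queue, gst_sample_ref (sample));

  /* Drop the oldest frame once the history is full. */
  if (g_queue_get_length (frame_queue) > MAX_FRAMES)
    gst_sample_unref ((GstSample *) g_queue_pop_head (frame_queue));
}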

Please help.

Thanks

Hi,
For nvcamerasrc + OpenCV, please refer to
https://devtalk.nvidia.com/default/topic/987537/jetson-tx1/videocapture-fails-to-open-onboard-camera-l4t-24-2-1-opencv-3-1/post/5064902/#5064902
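
For reference, the approach in that post is to hand OpenCV's cv::VideoCapture (built with GStreamer support) a pipeline string along these lines, so that frames arrive as BGR data after nvvidconv and videoconvert move them out of NVMM memory (exact caps may vary):

nvcamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)I420, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink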