Saving or streaming DeepStream renderer output on an AWS EC2 instance

Hey guys,
I am running the DeepStream sample apps in a Docker container on an AWS EC2 instance. I can successfully run them on a video stream, but the problem is how to visualize or save the output from there. Any help would be great.

Setup Information:

• Hardware Platform (dGPU): AWS EC2 g4dn.xlarge
• DeepStream Version: deepstream_sdk_v4.0.2
• TensorRT Version: 5.0
• NVIDIA GPU Driver Version (valid for GPU only): 440.82

In the corresponding config file, you just need to change the sink type from EglSink to File and specify the video container and output file, like below:

[sink1]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=UDPSink 5=nvoverlaysink 6=MsgConvBroker
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4 (only SW mpeg4 is supported right now)
codec=3
sync=1
bitrate=2000000
output-file=out.mp4
source-id=0
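
Since the topic also asks about streaming: in some of the bundled sample configs (e.g. the source*_1080p ones), sink type 4 is RTSP streaming, which lets you watch the output remotely instead of writing a file. A minimal sketch with the keys as they appear in those sample configs (the ports are just the defaults, adjust to your setup):

[sink2]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

deepstream-app prints the RTSP URL it serves (rtsp://<host>:8554/ds-test); on EC2, the security group must also allow that port.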


@mchi So, I am using this sample app: deepstream_reference_apps/back-to-back-detectors at master · NVIDIA-AI-IOT/deepstream_reference_apps · GitHub, and I added code to write the output to a file.

int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *source = NULL, *h264parser = NULL,
      *decoder = NULL, *streammux = NULL, *sink = NULL, *primary_detector = NULL,
      *secondary_detector = NULL, *nvvidconv = NULL, *nvosd = NULL,
      *queue_sink = NULL, *nvvidconv_sink = NULL, *filter_sink = NULL, 
      *videoconvert = NULL, *encoder = NULL, *muxer = NULL;
  GstCaps *caps_filter_sink = NULL;
#ifdef PLATFORM_TEGRA
  GstElement *transform = NULL;
#endif
  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *osd_sink_pad = NULL;

  /* Check input arguments */
  if (argc != 2 && argc != 3) {
    g_printerr ("Usage: %s <H264 filename> [output file]\n", argv[0]);
    return -1;
  }

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("pipeline");

  /* Source element for reading from the file */
  source = gst_element_factory_make ("filesrc", "file-source");

  /* Since the data format in the input file is elementary h264 stream,
   * we need a h264parser */
  h264parser = gst_element_factory_make ("h264parse", "h264-parser");

  /* Use nvdec_h264 for hardware accelerated decode on GPU */
  decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

  /* Create two nvinfer instances for the two back-to-back detectors */
  primary_detector = gst_element_factory_make ("nvinfer", "primary-nvinference-engine1");

  secondary_detector = gst_element_factory_make ("nvinfer", "primary-nvinference-engine2");

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");


  if (argv[2]) {
    /* Write the OSD output to an MP4 file:
     * nvosd (RGBA) -> queue -> nvvideoconvert -> capsfilter (I420)
     * -> videoconvert -> avenc_mpeg4 -> qtmux -> filesink */
    queue_sink = gst_element_factory_make ("queue", "queue_sink");
    nvvidconv_sink = gst_element_factory_make ("nvvideoconvert", "nvvidconv_sink");
    filter_sink = gst_element_factory_make ("capsfilter", "filter_sink");
    caps_filter_sink = gst_caps_from_string ("video/x-raw, format=I420");
    g_object_set (G_OBJECT (filter_sink), "caps", caps_filter_sink, NULL);
    gst_caps_unref (caps_filter_sink);
    videoconvert = gst_element_factory_make ("videoconvert", "videoconverter");
    encoder = gst_element_factory_make ("avenc_mpeg4", "mp4-encoder");
    g_object_set (G_OBJECT (encoder), "bitrate", 1000000, NULL);
    muxer = gst_element_factory_make ("qtmux", "muxer");
    sink = gst_element_factory_make ("filesink", "file-sink");
    g_object_set (G_OBJECT (sink), "location", argv[2], NULL);
  } else {
    /* Finally render the osd output */
#ifdef PLATFORM_TEGRA
    transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
    if (!transform) {
      g_printerr ("One tegra element could not be created. Exiting.\n");
      return -1;
    }
#endif
    sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
  }

  if (!source || !h264parser || !decoder || !primary_detector || !secondary_detector
      || !nvvidconv || !nvosd || !sink) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }
  if (argv[2] && (!queue_sink || !nvvidconv_sink || !filter_sink
      || !videoconvert || !encoder || !muxer)) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

  /* we set the input filename to the source element */
  g_object_set (G_OBJECT (source), "location", argv[1], NULL);

  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
      MUXER_OUTPUT_HEIGHT, "batch-size", 1,
      "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

  /* Set the config files for the two detectors. We demonstrate this by using
   * the same detector model twice but making them act as vehicle-only and
   * person-only detectors by adjusting the bbox confidence thresholds in the
   * two separate config files. */
  g_object_set (G_OBJECT (primary_detector), "config-file-path", "primary_detector_config.txt",
          "unique-id", PRIMARY_DETECTOR_UID, NULL);

  g_object_set (G_OBJECT (secondary_detector), "config-file-path", "secondary_detector_config.txt",
          "unique-id", SECONDARY_DETECTOR_UID, "process-mode", SECOND_DETECTOR_IS_SECONDARY ? 2 : 1, NULL);

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
  if (argv[2]) {
    gst_bin_add_many (GST_BIN (pipeline),
        source, h264parser, decoder, streammux, primary_detector, secondary_detector,
        nvvidconv, nvosd, queue_sink, nvvidconv_sink, filter_sink, videoconvert,
        encoder, muxer, sink, NULL);
  } else {
#ifdef PLATFORM_TEGRA
    gst_bin_add_many (GST_BIN (pipeline),
        source, h264parser, decoder, streammux, primary_detector, secondary_detector,
        nvvidconv, nvosd, transform, sink, NULL);
#else
    gst_bin_add_many (GST_BIN (pipeline),
        source, h264parser, decoder, streammux, primary_detector, secondary_detector,
        nvvidconv, nvosd, sink, NULL);
#endif
  }

  GstPad *sinkpad, *srcpad;
  gchar pad_name_sink[16] = "sink_0";
  gchar pad_name_src[16] = "src";

  sinkpad = gst_element_get_request_pad (streammux, pad_name_sink);
  if (!sinkpad) {
    g_printerr ("Streammux request sink pad failed. Exiting.\n");
    return -1;
  }

  srcpad = gst_element_get_static_pad (decoder, pad_name_src);
  if (!srcpad) {
    g_printerr ("Decoder request src pad failed. Exiting.\n");
    return -1;
  }

  if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link decoder to stream muxer. Exiting.\n");
      return -1;
  }

  gst_object_unref (sinkpad);
  gst_object_unref (srcpad);

  /* we link the elements together */
  /* file-source -> h264-parser -> nvh264-decoder ->
   * nvinfer -> nvvidconv -> nvosd -> video-renderer */

  if (!gst_element_link_many (source, h264parser, decoder, NULL)) {
    g_printerr ("Elements could not be linked: 1. Exiting.\n");
    return -1;
  }
  if (argv[2]) {
    if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
        nvvidconv, nvosd, queue_sink, nvvidconv_sink, filter_sink, videoconvert,
        encoder, muxer, sink, NULL)) {
      g_printerr ("Elements could not be linked: 2. Exiting.\n");
      return -1;
    }
  } else {
#ifdef PLATFORM_TEGRA
    if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
        nvvidconv, nvosd, transform, sink, NULL)) {
      g_printerr ("Elements could not be linked: 2. Exiting.\n");
      return -1;
    }
#else
    if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
        nvvidconv, nvosd, sink, NULL)) {
      g_printerr ("Elements could not be linked: 2. Exiting.\n");
      return -1;
    }
#endif
  }

  /* Add a probe to get informed of the generated metadata. We add the probe to
   * the sink pad of the osd element, since by that time the buffer will have
   * received all the metadata. */
  osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
  if (!osd_sink_pad)
    g_print ("Unable to get sink pad\n");
  else
    gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
        osd_sink_pad_buffer_probe, NULL, NULL);

  /* Set the pipeline to "playing" state */
  g_print ("Now playing: %s\n", argv[1]);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}
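
With these changes the app takes an optional second argument, e.g. ./back-to-back-detectors <h264_elementary_stream> out.mp4; when the second argument is given, the pipeline writes the OSD output to that file instead of rendering it on screen.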

But running this code gives me the following error:

Now playing: ../video.mp4
Creating LL OSD context new
0:00:00.983780642   593 0x55f0669096d0 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine2> NvDsInferContext[UID 2]:useEngineFile(): Failed to read from model engine file
0:00:00.983806454   593 0x55f0669096d0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine2> NvDsInferContext[UID 2]:initialize(): Trying to create engine from model files
0:00:00.984057506   593 0x55f0669096d0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine2> NvDsInferContext[UID 2]:generateTRTModel(): Cannot access caffemodel file '/root/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors/../../../../samples/models/Secondary_FaceDetect/fd_lpd.caffemodel'
0:00:00.984082150   593 0x55f0669096d0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine2> NvDsInferContext[UID 2]:initialize(): Failed to create engine from model files
0:00:00.984111980   593 0x55f0669096d0 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary-nvinference-engine2> error: Failed to create NvDsInferContext instance
0:00:00.984123949   593 0x55f0669096d0 WARN                 nvinfer gstnvinfer.cpp:692:gst_nvinfer_start:<primary-nvinference-engine2> error: Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Running...
ERROR from element primary-nvinference-engine2: Failed to create NvDsInferContext instance
Error details: gstnvinfer.cpp(692): gst_nvinfer_start (): /GstPipeline:pipeline/GstNvInfer:primary-nvinference-engine2:
Config file path: secondary_detector_config.txt, NvDsInfer Error: NVDSINFER_CONFIG_FAILED
Returned, stopping playback
Deleting pipeline

I did not update the config file as you suggested, but instead used the file-writing code from this repository: redaction_with_deepstream/deepstream_redaction_app.c at master · NVIDIA-AI-IOT/redaction_with_deepstream · GitHub. I don't really understand it; I know I am doing something wrong, as I have only been doing this for a couple of days. Any help is welcome. Thanks.

Please refer to Back-to-back detector with DeepStream 5.0 - #3 by mchi

@mchi I made the changes as in the link that you provided, but it gives me a linking error. Below is the main function with the changes.

int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *source = NULL, *h264parser = NULL,
      *decoder = NULL, *streammux = NULL, *sink = NULL, *primary_detector = NULL,
      *secondary_detector = NULL, *nvvidconv = NULL, *nvosd = NULL;
#ifdef PLATFORM_TEGRA
  GstElement *transform = NULL;
#endif
  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *osd_sink_pad = NULL;

  /* Check input arguments */
  if (argc != 2) {
    g_printerr ("Usage: %s <H264 filename>\n", argv[0]);
    return -1;
  }

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("pipeline");

  /* Source element for reading from the file */
  source = gst_element_factory_make ("filesrc", "file-source");

  /* Since the data format in the input file is elementary h264 stream,
   * we need a h264parser */
  h264parser = gst_element_factory_make ("h264parse", "h264-parser");

  /* Use nvdec_h264 for hardware accelerated decode on GPU */
  decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

  /* Create two nvinfer instances for the two back-to-back detectors */
  primary_detector = gst_element_factory_make ("nvinfer", "primary-nvinference-engine1");

  secondary_detector = gst_element_factory_make ("nvinfer", "primary-nvinference-engine2");

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

  /* Finally render the osd output */
#ifdef PLATFORM_TEGRA
  transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
#endif
  // sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
  GstElement *parser1 = gst_element_factory_make ("h264parse", "h264-parser1");
  GstElement *enc = gst_element_factory_make ("nvv4l2h264enc", "h264-enc");
  GstElement *nvvidconv1 = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter1");
  sink = gst_element_factory_make ("filesink", "file-sink");
  g_object_set (G_OBJECT (sink), "location", "./out.h264", NULL);
  if (!source || !h264parser || !decoder || !primary_detector || !secondary_detector
      || !nvvidconv || !nvosd || !enc || !sink) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

#ifdef PLATFORM_TEGRA
  if(!transform) {
    g_printerr ("One tegra element could not be created. Exiting.\n");
    return -1;
  }
#endif

  /* we set the input filename to the source element */
  g_object_set (G_OBJECT (source), "location", argv[1], NULL);

  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
      MUXER_OUTPUT_HEIGHT, "batch-size", 1,
      "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

  /* Set the config files for the two detectors. We demonstrate this by using
   * the same detector model twice but making them act as vehicle-only and
   * person-only detectors by adjusting the bbox confidence thresholds in the
   * two separate config files. */
  g_object_set (G_OBJECT (primary_detector), "config-file-path", "primary_detector_config.txt",
          "unique-id", PRIMARY_DETECTOR_UID, NULL);

  g_object_set (G_OBJECT (secondary_detector), "config-file-path", "secondary_detector_config.txt",
          "unique-id", SECONDARY_DETECTOR_UID, "process-mode", SECOND_DETECTOR_IS_SECONDARY ? 2 : 1, NULL);

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
#ifdef PLATFORM_TEGRA
  gst_bin_add_many (GST_BIN (pipeline),
      source, h264parser, decoder, streammux, primary_detector, secondary_detector,
      nvvidconv, nvosd, nvvidconv1, enc, parser1, sink, NULL);
#else
  gst_bin_add_many (GST_BIN (pipeline),
      source, h264parser, decoder, streammux, primary_detector, secondary_detector,
      nvvidconv, nvosd, enc, sink, NULL);
#endif

  GstPad *sinkpad, *srcpad;
  gchar pad_name_sink[16] = "sink_0";
  gchar pad_name_src[16] = "src";

  sinkpad = gst_element_get_request_pad (streammux, pad_name_sink);
  if (!sinkpad) {
    g_printerr ("Streammux request sink pad failed. Exiting.\n");
    return -1;
  }

  srcpad = gst_element_get_static_pad (decoder, pad_name_src);
  if (!srcpad) {
    g_printerr ("Decoder request src pad failed. Exiting.\n");
    return -1;
  }

  if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link decoder to stream muxer. Exiting.\n");
      return -1;
  }

  gst_object_unref (sinkpad);
  gst_object_unref (srcpad);

  /* we link the elements together */
  /* file-source -> h264-parser -> nvh264-decoder ->
   * nvinfer -> nvvidconv -> nvosd -> video-renderer */

  if (!gst_element_link_many (source, h264parser, decoder, NULL)) {
    g_printerr ("Elements could not be linked: 1. Exiting.\n");
    return -1;
  }

#ifdef PLATFORM_TEGRA
  if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
      nvvidconv, nvosd, nvvidconv1, enc, parser1, sink, NULL)) {
    g_printerr ("Elements could not be linked: 2. Exiting.\n");
    return -1;
  }
#else
  if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
      nvvidconv, nvosd, enc, sink, NULL)) {
    g_printerr ("Elements could not be linked: 2. yExiting.\n");
    return -1;
  }
#endif

  /* Add a probe to get informed of the generated metadata. We add the probe to
   * the sink pad of the osd element, since by that time the buffer will have
   * received all the metadata. */
  osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
  if (!osd_sink_pad)
    g_print ("Unable to get sink pad\n");
  else
    gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
        osd_sink_pad_buffer_probe, NULL, NULL);

  /* Set the pipeline to "playing" state */
  g_print ("Now playing: %s\n", argv[1]);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}

And this is the error on the terminal:

./back-to-back-detectors ../video1.mp4 
Elements could not be linked: 2. yExiting.

Thanks for your help.

@mchi Hey, I tried the solution provided in the link that you shared. Can you please help me out, or maybe give me a direction to look in? Thanks.

Hi @y14uc339
Sorry! The previous change was for Jetson (Tegra) only. Please try the change below; I verified it works on my side.

diff --git a/back_to_back_detectors.c b/back_to_back_detectors.c
index 302b55b..0262efe 100644
--- a/back_to_back_detectors.c
+++ b/back_to_back_detectors.c
@@ -237,10 +237,15 @@ main (int argc, char *argv[])
 #ifdef PLATFORM_TEGRA
   transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
 #endif
-  sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
+  //sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
+  GstElement *parser1 = gst_element_factory_make ("h264parse", "h264-parser1");
+  GstElement *enc = gst_element_factory_make ("nvv4l2h264enc", "h264-enc");
+  GstElement *nvvidconv1 = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter1");
+  sink = gst_element_factory_make ("filesink", "file-sink");
+  g_object_set (G_OBJECT (sink), "location", "./out.h264", NULL);

   if (!source || !h264parser || !decoder || !primary_detector || !secondary_detector
-      || !nvvidconv || !nvosd || !sink) {
+    || !nvvidconv || !nvosd || !enc || !sink) {
     g_printerr ("One element could not be created. Exiting.\n");
     return -1;
   }
@@ -279,11 +284,11 @@ main (int argc, char *argv[])
 #ifdef PLATFORM_TEGRA
   gst_bin_add_many (GST_BIN (pipeline),
       source, h264parser, decoder, streammux, primary_detector, secondary_detector,
-      nvvidconv, nvosd, transform, sink, NULL);
+      nvvidconv, nvosd, nvvidconv1, enc, parser1, sink, NULL);
 #else
   gst_bin_add_many (GST_BIN (pipeline),
       source, h264parser, decoder, streammux, primary_detector, secondary_detector,
-      nvvidconv, nvosd, sink, NULL);
+      nvvidconv, nvosd, nvvidconv1, enc, parser1, sink, NULL);
 #endif

   GstPad *sinkpad, *srcpad;
@@ -321,13 +326,13 @@ main (int argc, char *argv[])

 #ifdef PLATFORM_TEGRA
   if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
-      nvvidconv, nvosd, transform, sink, NULL)) {
+      nvvidconv, nvosd, nvvidconv1, enc, parser1, sink, NULL)) {
     g_printerr ("Elements could not be linked: 2. Exiting.\n");
     return -1;
   }
 #else
   if (!gst_element_link_many (streammux, primary_detector, secondary_detector,
-      nvvidconv, nvosd, sink, NULL)) {
+      nvvidconv, nvosd, nvvidconv1, enc, parser1, sink, NULL)) {
     g_printerr ("Elements could not be linked: 2. Exiting.\n");
     return -1;
   }
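
For context on why this is needed on dGPU: nvdsosd outputs RGBA buffers while nvv4l2h264enc expects NV12 input, so the extra nvvideoconvert (nvvidconv1) has to sit between them, and h264parse (parser1) then packages the encoder output so that out.h264 is a well-formed H264 elementary stream.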

@mchi Yes, it works! Did you get the output, I mean, did you try it on a video? The problem is that it doesn't stop running; the video stream is an .mp4 around 6 minutes long. Below is the terminal output:

root@583f2b9892f4:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream_reference_apps/back-to-back-detectors# ./back-to-back-detectors ../video.mp4 
Now playing: ../video.mp4
Creating LL OSD context new
0:00:00.943740954   523 0x55964e76b6d0 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine2> NvDsInferContext[UID 2]:useEngineFile(): Failed to read from model engine file
0:00:00.943767240   523 0x55964e76b6d0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine2> NvDsInferContext[UID 2]:initialize(): Trying to create engine from model files
0:00:07.376881872   523 0x55964e76b6d0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine2> NvDsInferContext[UID 2]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Secondary_FaceDetect/fd_lpd.caffemodel_b1_fp32.engine
0:00:07.829779241   523 0x55964e76b6d0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine1> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:14.554034032   523 0x55964e76b6d0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine1> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...

It should have worked; the video is 6 minutes long, so it shouldn't have taken more than 5 minutes to infer.
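
One way to see whether frames are actually flowing, rather than the pipeline stalling before the sink, is to hang a counting probe on the sink's pad. A minimal sketch, reusing the element names from the listing above; the probe function is illustrative, not part of the sample:

/* Counts buffers arriving at the sink and prints progress every 100 frames. */
static GstPadProbeReturn
sink_counter_probe (GstPad * pad, GstPadProbeInfo * info, gpointer u_data)
{
  static guint frame_count = 0;
  if (++frame_count % 100 == 0)
    g_print ("%u buffers reached the sink\n", frame_count);
  return GST_PAD_PROBE_OK;
}

/* In main (), after the sink element has been created: */
GstPad *probe_pad = gst_element_get_static_pad (sink, "sink");
if (probe_pad) {
  gst_pad_add_probe (probe_pad, GST_PAD_PROBE_TYPE_BUFFER,
      sink_counter_probe, NULL, NULL);
  gst_object_unref (probe_pad);
}

If the counter never advances, the problem is upstream of the encoder (here, most likely h264parse never producing frames from the mp4 container data), not the sink.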

I tried running deepstream-test1 with the same changes to save the output video, but it is the same case there too; I am running it on sample_720p.mp4. Below is the terminal output:

root@583f2b9892f4:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app $DS_SDK_ROOT/samples/streams/sample_720p.mp4 
Now playing: /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.mp4
Creating LL OSD context new
0:00:00.939370610   580 0x556611f416f0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:08.545994735   580 0x556611f416f0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...

This keeps on running… when it should not take this much time. It's weird.

I tried the samples/streams/sample_720p.mp4 sample; it ended successfully and generated out.h264 in the folder.

@mchi Can you please share your main function? I have made the changes, but maybe I missed something; I did crosscheck multiple times. Also, how long did it take?

Oh… sorry, with mp4 I can repro the issue you mentioned.

The back-to-back sample only supports an H264 elementary stream:

$ make
$ ./back-to-back-detectors <h264_elementary_stream>
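
To feed an .mp4 directly, the container has to be demuxed first. The simplest route is to extract the elementary stream up front, e.g. with ffmpeg: ffmpeg -i video.mp4 -c:v copy -bsf:v h264_mp4toannexb -an video.h264. Alternatively, the sample could be adapted with a qtdemux element. Below is a minimal, untested sketch of that idea; the function and element names are illustrative, not part of the original sample:

/* qtdemux exposes its pads dynamically, so h264parse is linked from a
 * "pad-added" callback instead of at pipeline-construction time. */
static void
on_demux_pad_added (GstElement * demux, GstPad * new_pad, gpointer user_data)
{
  GstElement *parser = (GstElement *) user_data;
  GstPad *parser_sink = gst_element_get_static_pad (parser, "sink");
  /* Try to link every new pad; only the video/x-h264 pad is compatible with
   * h264parse, other pads (e.g. audio) fail the caps check and are ignored. */
  if (!gst_pad_is_linked (parser_sink))
    gst_pad_link (new_pad, parser_sink);
  gst_object_unref (parser_sink);
}

/* In main (), instead of linking filesrc straight into h264parse:
 * filesrc -> qtdemux -> h264parse -> nvv4l2decoder -> ... */
GstElement *demux = gst_element_factory_make ("qtdemux", "qtdemux");
gst_bin_add (GST_BIN (pipeline), demux);
if (!gst_element_link (source, demux)) {
  g_printerr ("Could not link filesrc to qtdemux. Exiting.\n");
  return -1;
}
g_signal_connect (demux, "pad-added",
    G_CALLBACK (on_demux_pad_added), h264parser);
/* The rest of the pipeline (h264parse onward) is linked as before. */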

@mchi Okay, I'll try it out with an H264 stream. But this is also the case with deepstream-test1, which should work with the samples/streams/sample_720p.mp4 sample. That also takes forever to execute.

Sorry, I don't get your point.

@mchi So I meant that I made the same changes to save the output to a file in deepstream-test1 in the sample_apps folder, but that also doesn't finish running. Below is the terminal output when I ran deepstream-test1 with the changes:

root@583f2b9892f4:~/deepstream_sdk_v4.0.2_x86_64/sources/apps/sample_apps/deepstream-test1# ./deepstream-test1-app $DS_SDK_ROOT/samples/streams/sample_720p.mp4 
Now playing: /root/deepstream_sdk_v4.0.2_x86_64/samples/streams/sample_720p.mp4
Creating LL OSD context new
0:00:00.939370610   580 0x556611f416f0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
0:00:08.545994735   580 0x556611f416f0 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:generateTRTModel(): Storing the serialized cuda engine to file at /root/deepstream_sdk_v4.0.2_x86_64/samples/models/Primary_Detector/resnet10.caffemodel_b1_int8.engine
Running...

Got it. Yes, it should share the same root cause.

@mchi Hey, thanks, it worked! I just figured out that deepstream-test1 also requires an .h264 stream as input. Thanks!
