Python in DeepStream: error {Internal data stream error} while running deepstream-test1

Hi hosseinzadeh.88,

Please open a new topic with more details, and we will support you there. Thanks.

When I try to replace the sink element as you mentioned, I get the following error:
0:00:15.611165567 22138 0x53ffde0 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:15.611284942 22138 0x53ffde0 WARN nvinfer gstnvinfer.cpp:1975:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1975): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)
Exiting app
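For context: reason error (-5) is GST_FLOW_ERROR, GStreamer's generic "a downstream element returned an error" code. nvinfer is usually just the element reporting it; the actual failure is typically further downstream, for example a display sink that cannot open a window on a headless machine.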

Is unsetting the display necessary to stop the visual output on the screen? And what exactly does unset DISPLAY do?
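unset DISPLAY is a shell built-in that removes the DISPLAY environment variable, which X11/EGL clients read to find the display they should render to; with it unset, window-based sinks have nowhere to draw. It is not strictly necessary once the pipeline ends in a filesink instead of a display sink, but on a headless box it stops such sinks from trying (and failing) to open a window. If you prefer to do it from inside the app rather than the shell, a minimal sketch in C (my own illustration, not part of the sample) would be:

#include <stdlib.h>     /* unsetenv() */
#include <gst/gst.h>

int
main (int argc, char *argv[])
{
  /* Equivalent of running `unset DISPLAY` in the shell before launching:
   * removes DISPLAY from this process's environment so display sinks
   * cannot find a screen. Must happen before gst_init(). */
  unsetenv ("DISPLAY");
  gst_init (&argc, &argv);
  /* ... build and run the pipeline as usual ... */
  return 0;
}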

Thanks!! It worked.


Finally worked, thanks!

Creating Pipeline
Creating Source
Creating H264Parser
Creating Decoder
nvstreamux
elemetfactory
convertor
onscreendisplay
Creating EGLSink
Playing file /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264
Adding elements to Pipeline
Linking elements in the Pipeline
Starting pipeline
Opening in BLOCKING MODE
Opening in BLOCKING MODE
0:00:04.406641673 22810 0x32c8ce10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/myelin/DeepStream/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40

0:00:04.406836914 22810 0x32c8ce10 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/myelin/DeepStream/Primary_Detector/resnet10.caffemodel_b1_gpu0_int8.engine
0:00:04.421854652 22810 0x32c8ce10 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config.txt sucessfully
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
Frame Number=0 Number of Objects=5 Vehicle_count=3 Person_count=2
0:00:04.668020916 22810 0x32c998f0 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: Internal data stream error.
0:00:04.668092535 22810 0x32c998f0 WARN nvinfer gstnvinfer.cpp:1984:gst_nvinfer_output_loop: error: streaming stopped, reason error (-5)
Error: gst-stream-error-quark: Internal data stream error. (1): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1984): gst_nvinfer_output_loop (): /GstPipeline:pipeline0/GstNvInfer:primary-inference:
streaming stopped, reason error (-5)

Please tell me how to solve it.

Can you share your config?

Hi,
I also have a headless DeepStream dev environment. I am coding in C. Here is my code, translated from the Python script in the answer:

The filesink path is shown in create_encode_file_bin(). I have a Python version of it below. This code can replace the main function in deepstream_test_1.py. It is very basic, with configs hardcoded (container = mp4, bitrate = 2000000, sync = 1, output location = ./out.mp4, etc.). You can tweak those to suit your use case. – zhliunycm2

  1. Change the main function in /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1/deepstream_test1_app.c to this:
int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *source = NULL, *h264parser = NULL,
      *decoder = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL, *nvvidconv = NULL,
      *nvosd = NULL, *queue = NULL, *nvvidconv2 = NULL, *capsfilter = NULL,
      *encoder = NULL, *codeparser = NULL, *container = NULL;
  GstCaps *caps = NULL;
  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *osd_sink_pad = NULL;

  /* Query the current CUDA device (prop is not used further in this snippet). */
  int current_device = -1;
  cudaGetDevice (&current_device);
  struct cudaDeviceProp prop;
  cudaGetDeviceProperties (&prop, current_device);
  /* Check input arguments */
  if (argc != 2) {
    g_printerr ("Usage: %s <H264 filename>\n", argv[0]);
    return -1;
  }

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("dstest1-pipeline");
  GST_PIPL_INIT_ERROR(pipeline, "pipeline");
  /* Source element for reading from the file */
  source = gst_element_factory_make ("filesrc", "file-source");
  GST_PIPL_INIT_ERROR(source, "source");
  /* Since the data format in the input file is elementary h264 stream,
   * we need a h264parser */
  h264parser = gst_element_factory_make ("h264parse", "h264-parser");
  GST_PIPL_INIT_ERROR(h264parser, "h264parser");
  /* Use nvdec_h264 for hardware accelerated decode on GPU */
  decoder = gst_element_factory_make ("nvv4l2decoder", "nvv4l2-decoder");
  GST_PIPL_INIT_ERROR(decoder, "decoder");
  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");
  GST_PIPL_INIT_ERROR(streammux, "streammux");
  /* Use nvinfer to run inferencing on decoder's output,
   * behaviour of inferencing is set through config file */
  pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");
  GST_PIPL_INIT_ERROR(pgie, "pgie");
  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");
  GST_PIPL_INIT_ERROR(nvvidconv, "nvvidconv");
  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");
  GST_PIPL_INIT_ERROR(nvosd, "nvosd");
  queue = gst_element_factory_make("queue", "queue");
  GST_PIPL_INIT_ERROR(queue, "queue");
  nvvidconv2 = gst_element_factory_make("nvvideoconvert", "convertor2");
  GST_PIPL_INIT_ERROR(nvvidconv2, "nvvidconv2");
  capsfilter = gst_element_factory_make("capsfilter", "capsfilter");
  GST_PIPL_INIT_ERROR(capsfilter, "capsfilter");
  encoder = gst_element_factory_make("avenc_mpeg4", "encoder");
  GST_PIPL_INIT_ERROR(encoder, "encoder");
  codeparser = gst_element_factory_make("mpeg4videoparse", "mpeg4-parser");
  GST_PIPL_INIT_ERROR(codeparser, "codeparser");
  container = gst_element_factory_make("qtmux", "qtmux");
  GST_PIPL_INIT_ERROR(container, "container");
  sink = gst_element_factory_make ("filesink", "filesink");
  GST_PIPL_INIT_ERROR(sink, "sink");

  /* we set the input filename to the source element */
  g_object_set (G_OBJECT (source), "location", argv[1], NULL);
  g_object_set (G_OBJECT (streammux), "batch-size", 1, NULL);
  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
      MUXER_OUTPUT_HEIGHT, "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);
  /* Set all the necessary properties of the nvinfer element,
   * the necessary ones are : */
  g_object_set (G_OBJECT (pgie), "config-file-path", "dstest1_pgie_config.txt", NULL);
  g_object_set (G_OBJECT (sink), "location", "./output.mp4", NULL);
  g_object_set (G_OBJECT (sink), "sync", 1, NULL);
  g_object_set (G_OBJECT (sink), "async", 0, NULL);
  caps = gst_caps_new_simple ("video/x-raw", "format", G_TYPE_STRING, "I420", NULL);
  g_object_set (G_OBJECT (capsfilter), "caps", caps, NULL);
  g_object_set (G_OBJECT (encoder), "bitrate", 2000000, NULL);
  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
  gst_bin_add_many (GST_BIN (pipeline),
      source, h264parser, decoder, streammux, pgie,
      nvvidconv, nvosd, queue, nvvidconv2, capsfilter, encoder, codeparser, container, sink, NULL);
  /* link decoder to streammux by means of pad*/
  GstPad *strmmuxpad, *decoderpad;
  decoderpad = gst_element_get_static_pad (decoder, "src"); // "src" CANNOT be changed
  if (!decoderpad) {
    g_printerr ("Decoder request src pad failed. Exiting.\n");
    return -1;
  }
  strmmuxpad = gst_element_get_request_pad (streammux, "sink_0"); // "sink_0"  CANNOT be changed
  if (!strmmuxpad) {
    g_printerr ("Streammux request sink pad failed. Exiting.\n");
    return -1;
  }
  if (gst_pad_link (decoderpad, strmmuxpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link decoder to stream muxer. Exiting.\n");
      return -1;
  }
  gst_object_unref (strmmuxpad);
  gst_object_unref (decoderpad);

  /* file-source -> h264-parser -> nvh264-decoder -> streammux
   * nvinfer -> nvvidconv -> nvosd -> queue ->
   * nvvidconv2 -> capsfilter -> encoder -> codeparser ->
   * container -> sink
   */
  if (!gst_element_link_many (source, h264parser, decoder, NULL)) {
    g_printerr ("Elements could not be linked: 1. Exiting.\n");
    return -1;
  }
  // The decoder CANNOT be linked directly to pgie; streammux must sit in between.
  if (!gst_element_link_many (streammux, pgie,
    nvvidconv, nvosd, queue, nvvidconv2, capsfilter, encoder, codeparser, container, sink, NULL)) {
    g_printerr ("Elements could not be linked: 2. Exiting.\n");
    return -1;
  }

  /* Let's add a probe to get informed of the generated metadata; we add it
   * to the sink pad of the osd element, since by that point the buffer
   * will have all the metadata attached. */
  osd_sink_pad = gst_element_get_static_pad (nvosd, "sink");
  if (!osd_sink_pad)
    g_print ("Unable to get sink pad\n");
  else
    gst_pad_add_probe (osd_sink_pad, GST_PAD_PROBE_TYPE_BUFFER,
        osd_sink_pad_buffer_probe, NULL, NULL);
  gst_object_unref (osd_sink_pad);

  /* Set the pipeline to "playing" state */
  g_print ("Now playing: %s\n", argv[1]);
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}

  2. Add this define at the very beginning of /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1/deepstream_test1_app.c:
#define GST_PIPL_INIT_ERROR(g_obj, name) \
  do{ \
    if (!g_obj) { \
      g_printerr ("%s could not be created. Exiting.\n", name); \
      return -1; \
    } \
  } while(0)
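As a side note on the macro: the do { ... } while (0) wrapper is the standard C idiom for statement-like macros. It makes GST_PIPL_INIT_ERROR(pgie, "pgie"); behave as a single statement, so it composes safely with if/else bodies that lack braces.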

You are good to go. Enjoy playing with this demo by saving an mp4 file on the device.
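If your environment matches the stock sample, the build and run steps should be unchanged: run make in the deepstream-test1 directory, then ./deepstream-test1-app /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.h264. With the sink location set above, the annotated video lands in ./output.mp4 in the directory you launch from.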
