ds-yolo pipeline with on-screen display

Hello,

I have been trying to run the deepstream-yolo-app with on-screen display.

https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps#deepstream-yolo-app

The pipeline is source, h264parser, decoder, filter1, nvvidconv, filter2, yolo, nvosd, sink.
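For reference, this is roughly how I understand the elements are assembled (a sketch only; the variable names follow the app source linked below, and the order is the one listed above):

  gst_bin_add_many (GST_BIN (pipeline), source, h264parser, decoder,
      filter1, nvvidconv, filter2, yolo, nvosd, sink, NULL);

  /* Link in the order listed above so the decoded and converted frames
   * reach the yolo element and then the on-screen display before the sink */
  gst_element_link_many (source, h264parser, decoder, filter1, nvvidconv,
      filter2, yolo, nvosd, sink, NULL);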

The code clearly allows for OSD. https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/sources/apps/deepstream-yolo/deepstream-yolo-app.cpp

/* Finally render the osd output */
  if (!g_strcmp0 ("Tesla", argv[1])) {
    sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");
  } else if (!g_strcmp0 ("Tegra", argv[1])) {
    sink = gst_element_factory_make ("nvoverlaysink", "nvvideo-renderer");
  } else {
    g_printerr ("Incorrect platform. Choose between Telsa/Tegra. Exiting.\n");
    return -1;
  }

What should I change to enable the on-screen display?

Thank you.

Hi,

OSD can be enabled in the config file directly.
Have you already tried the command from this page:
https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps#deepstream-yolo-app

Thanks.

Thank you @AastaLLL.
I have tried it as follows:
deepstream-yolo-app Tegra ~/Tools/deepstream_sdk_on_jetson/samples/streams/sample_720p.h264 config/yolov3.txt
The OSD is not showing: the console prints the detected objects, but no video or bounding boxes are displayed.

Thanks.

Hi,

May I know how you launch the deepstream-yolo-app?

Please note that the OSD will only show on a display that is physically connected to the Jetson.
If you access the Xavier over SSH, the OSD is not available on a remote display.

Thanks.

Hi @AastaLLL,

I launch the app like this:
deepstream-yolo-app Tegra ~/Tools/deepstream_sdk_on_jetson/samples/streams/sample_720p.h264 config/yolov3.txt

The Jetson Xavier is connected to a portable monitor through the USB-C connector.

Thank you.

Hi,

Could you check whether this command works for you?

export DISPLAY=:0

Thanks.

Hi, I have a similar question (perhaps a sillier one, as I am quite new to DeepStream and cannot find proper documentation). I am running deepstream-yolo-app on my P4 server with Ubuntu 16.04.

The config file is yolov2.txt with the following options:
--print_prediction_info=true
--print_perf_info=true

but the console only shows:
Now playing: /XXXX/samples/streams/sample_72p.h264
Running…

There is no other text output on the console.

What does on-screen display mean? Showing the video with bounding boxes on the screen? I cannot see any display window being created. How do I achieve it?

Thanks and best regards

Hi,

You will need to physically connect a monitor to the device.
But it looks like you are using an x86 desktop rather than a Jetson platform, is that correct?

Thanks.

Hi, I am using a P4 server, which I access via MobaXterm.
So the only way for me to view the result is to write it to a file?

YES.

Hi, I tried to write the bounding boxes to file while processing your sample video samples/streams/sample_720p.h264, by running
./deepstream-yolo-app /home/xxx/samples/streams/sample_720p.h264 /home/xxx/config/yolov2.txt

But no files were written to the folder "testRlt".
Is there any way to check whether my YOLOv2 is running correctly, or to debug where the error occurs?

I modified the function as follows:

static GstPadProbeReturn
osd_sink_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
  gchar *output_path = "/home/xxx/testRlt";
  gchar bbox_file[1024] = { 0 };
  FILE *bbox_params_dump_file = NULL;
  NvOSD_RectParams *rect_params = NULL;

  GstMeta *gst_meta = NULL;
  NvDsMeta *nvdsmeta = NULL;
  gpointer state = NULL;
  static GQuark _nvdsmeta_quark = 0;
  GstBuffer *buf = (GstBuffer *) info->data;
  NvDsFrameMeta *frame_meta = NULL;
  guint num_rects = 0, rect_index = 0;
  NvDsObjectParams *obj_meta = NULL;
  guint car_count = 0;
  guint person_count = 0;
  guint bicycle_count = 0;
  guint truck_count = 0;

  if (!_nvdsmeta_quark)
    _nvdsmeta_quark = g_quark_from_static_string (NVDS_META_STRING);

  while ((gst_meta = gst_buffer_iterate_meta (buf, &state))) {
    if (gst_meta_api_type_has_tag (gst_meta->info->api, _nvdsmeta_quark)) {

      nvdsmeta = (NvDsMeta *) gst_meta;

      /* We are interested only in intercepting Meta of type
       * "NVDS_META_FRAME_INFO" as they are from our infer elements. */
      if (nvdsmeta->meta_type == NVDS_META_FRAME_INFO) {
        frame_meta = (NvDsFrameMeta *) nvdsmeta->meta_data;
        if (frame_meta == NULL) {
          g_print ("NvDS Meta contained NULL meta \n");
          return GST_PAD_PROBE_OK;
        } else {
          g_snprintf (bbox_file, sizeof (bbox_file) - 1, "%s/%06d.txt",
              output_path, frame_number);
          bbox_params_dump_file = fopen (bbox_file, "w");
        }

        num_rects = frame_meta->num_rects;

        /* This means we have num_rects in frame_meta->obj_params.
         * Now let's iterate through them and count the number of cars,
         * trucks, persons and bicycles in each frame */
        for (rect_index = 0; rect_index < num_rects; rect_index++) {
          obj_meta = (NvDsObjectParams *) &frame_meta->obj_params[rect_index];
          if (!g_strcmp0 (obj_meta->attr_info[YOLO_UNIQUE_ID].attr_label,
                  "car"))
            car_count++;
          else if (!g_strcmp0 (obj_meta->attr_info[YOLO_UNIQUE_ID].attr_label,
                  "person"))
            person_count++;
          else if (!g_strcmp0 (obj_meta->attr_info[YOLO_UNIQUE_ID].attr_label,
                  "bicycle"))
            bicycle_count++;
          else if (!g_strcmp0 (obj_meta->attr_info[YOLO_UNIQUE_ID].attr_label,
                  "truck"))
            truck_count++;

          /* Output bbox location as KITTI file */
          rect_params = &(obj_meta->rect_params);
          if (bbox_params_dump_file) {
            int left = (int) (rect_params->left);
            int top = (int) (rect_params->top);
            int right = left + (int) (rect_params->width);
            int bottom = top + (int) (rect_params->height);
            int class_index = obj_meta->class_id;
            char *text = obj_meta->attr_info[YOLO_UNIQUE_ID].attr_label;
            fprintf (bbox_params_dump_file,
                "%s 0.0 0 0.0 %d.00 %d.00 %d.00 %d.00 0.0 0.0 0.0 0.0 0.0 0.0 0.0\n",
                text, left, top, right, bottom);
          }
        }

        if (bbox_params_dump_file) {
          fclose (bbox_params_dump_file);
          bbox_params_dump_file = NULL;
        }
      }
    }
  }

  g_print ("Frame Number = %d Number of objects = %d "
      "Car Count = %d Person Count = %d "
      "Bicycle Count = %d Truck Count = %d \n",
      frame_number, num_rects, car_count, person_count, bicycle_count,
      truck_count);
  frame_number++;

  return GST_PAD_PROBE_OK;
}
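One thing I am not sure about: fopen() returns NULL if the output directory does not exist or is not writable, and in that case the probe above silently skips writing. A small check like this sketch (same variable names as in the function above) should show whether the files are actually being opened:

    bbox_params_dump_file = fopen (bbox_file, "w");
    if (bbox_params_dump_file == NULL) {
      /* fopen returned NULL, e.g. because output_path does not exist */
      g_printerr ("Failed to open %s for writing\n", bbox_file);
    }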

@AastaLLL, I used the updated code from https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps and now I can run the example fine: https://github.com/NVIDIA-AI-IOT/deepstream_reference_apps/blob/master/yolo/README.md
Also using export DISPLAY=:0 is useful to know. Thank you.