Having trouble with dynamic cropping using nvvideoconvert - deepstream-4.0

Hi,
I have a pipeline that has the following elements (among others):

… “queue” “nvvideoconvert” “nvdsosd” …

Using “nvvideoconvert”, I’m trying to dynamically crop each frame based on the “rect_params” received for a specific “class_id”.

Using a GST_PAD_PROBE_TYPE_BUFFER on the sink pad of “queue”, I’m able to get the (left, top, width, height) values, and then add a GST_PAD_PROBE_TYPE_IDLE on the “src” pad of “nvvideoconvert”.

In the callback for the IDLE probe, I’m able to unlink the pads between “nvvideoconvert” and “nvdsosd”, set the new “src-crop” and “dest-crop” values, and then relink the pads…

then add a GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM on the “sink” of “nvdsosd” to handle and drop the EOS event created from the unlink.

Everything seems to look good when I run my pipeline, and with an EGL sink I can see the crop following the detected object as it moves around.

However, I’m getting a NvMapMemCacheMaint:1075334668 failed [14] error on each frame, so I must be doing something wrong.

Any Ideas?

Thanks,
Robert

It looks like the above might be related to My custom app based on deepstream-test3 gets lots of NvMapMemCacheMaint:1075334668 failed [14] error...

and

NvMapMemCacheMaint:1075334668 failed [14] ==> here 14 is the error number returned from ioctl(), which means EFAULT:

#define EFAULT 14 /* Bad address */

So it seems the NVIDIA GStreamer plugin is receiving a bad address.
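If you want to double-check that mapping yourself, a couple of lines of C on the target will print it (just a sanity check, nothing DeepStream-specific):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main (void)
{
  /* EFAULT is 14 on Linux; strerror() reports it as "Bad address" */
  printf ("errno %d (EFAULT = %d): %s\n", 14, EFAULT, strerror (14));
  return 0;
}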

Maybe you could try splitting the pipeline to find out which plugin reports this error, or share a repro with us.

Thanks!

I believe I’ve narrowed it down to either “nvvideoconvert” or “nvdsosd”, and if I had to guess, it’s the “nvdsosd”. The error occurs at the moment the pads for the two are re-linked while the pipeline is in the PLAYING state. Changing the crop settings without unlinking and relinking resolves the error… but, not surprisingly, it leads to some very strange frame echoing, for lack of a better description.
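To be clear about what I mean by “without unlinking and relinking”: the idle-probe callback reduces to something like the sketch below (it reuses the same statics and NvOSD_RectParams as the full example I post further down in this thread):

/* rough sketch only: update the crop region in place, no unlink/relink.
 * nvvidconv is the static nvvideoconvert element from the full example. */
static GstPadProbeReturn
vidconv_src_pad_idle_probe_no_relink (GstPad * pad, GstPadProbeInfo * info,
    gpointer rect_params)
{
  NvOSD_RectParams *crop_params = (NvOSD_RectParams *) rect_params;
  gchar crop_dim[32] = { };

  /* NvOSD_RectParams stores floats, so cast to unsigned for the "%u" format */
  g_snprintf (crop_dim, sizeof (crop_dim), "%u:%u:%u:%u",
      (guint) crop_params->left, (guint) crop_params->top,
      (guint) crop_params->width, (guint) crop_params->height);
  g_object_set (G_OBJECT (nvvidconv), "src-crop", crop_dim,
      "dest-crop", crop_dim, NULL);

  return GST_PAD_PROBE_REMOVE;
}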

I’m happy to share my repo, but probably easier if I try and reproduce the problem with a patch on one of your smaller examples.

Let me know if the above helps at all, or if I should proceed with the example.

Thanks so much

Hi Rjhowell44,
Yes, a simple repro sample would be good for us.

Thank you so much!

Hi mchi,

I have a hacked-up version of the deepstream_test3.cpp example that approximates what I’m trying to do re: dynamic cropping… and although similar, the actual cropping looks a bit better in my application.

That said, both versions produce the same NvMapMemCacheMaint:1075334668 failed [14] messages.

I’ve been trying to follow the Dynamically changing the pipeline documentation on the GStreamer site as a guide for some help on this.
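For reference, my rough paraphrase of the skeleton in that guide is below (the blocking probe goes on a src pad upstream of the element being changed, and the element is drained with an EOS before the change is made; none of this is DeepStream-specific):

/* paraphrase of the pad-blocking pattern from the GStreamer
 * "Dynamically changing the pipeline" guide */
static GstPadProbeReturn
drain_eos_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) != GST_EVENT_EOS)
    return GST_PAD_PROBE_PASS;

  gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));
  /* the element is drained at this point: safe to relink/reconfigure it,
   * then remove the upstream blocking probe to resume dataflow */
  return GST_PAD_PROBE_DROP;
}

static GstPadProbeReturn
upstream_block_probe_cb (GstPad * pad, GstPadProbeInfo * info, gpointer user_data)
{
  GstElement *element = GST_ELEMENT (user_data);  /* element to be changed */
  GstPad *srcpad = gst_element_get_static_pad (element, "src");
  GstPad *sinkpad = gst_element_get_static_pad (element, "sink");

  /* watch for the EOS to come out of the element we are draining */
  gst_pad_add_probe (srcpad,
      GST_PAD_PROBE_TYPE_BLOCK | GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      drain_eos_probe_cb, user_data, NULL);

  /* push EOS into the element so it flushes any data it is holding */
  gst_pad_send_event (sinkpad, gst_event_new_eos ());

  gst_object_unref (srcpad);
  gst_object_unref (sinkpad);

  /* keep upstream blocked until the drain/change completes */
  return GST_PAD_PROBE_OK;
}

The blocking callback itself gets installed with GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM on the src pad of the element just upstream of the one being changed; in my case that would presumably be the “queue”.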

I’m not exactly sure how you’d like me to share this, as the example is a bit big for a direct paste… but here it is for now. Let me know if there’s another approach you’d prefer.

Thanks,
Robert

P.S. Although deepstream_test3.cpp supports multiple sources, it only makes sense to use one for this example.

/*
 * Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
 * DEALINGS IN THE SOFTWARE.
 */

#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <sys/time.h>

#include "gstnvdsmeta.h"
//#include "gstnvstreammeta.h"
#ifndef PLATFORM_TEGRA
#include "gst-nvmessage.h"
#endif

#define MAX_DISPLAY_LEN 64

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

/* The muxer output resolution must be set if the input streams will be of
 * different resolution. The muxer will scale all the input frames to this
 * resolution. */
#define MUXER_OUTPUT_WIDTH 1280
#define MUXER_OUTPUT_HEIGHT 720

/* Muxer batch formation timeout, for e.g. 40 millisec. Should ideally be set
 * based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 4000000

#define TILED_OUTPUT_WIDTH 1280
#define TILED_OUTPUT_HEIGHT 720

/* NVIDIA Decoder source pad memory feature. This feature signifies that source
 * pads having this capability will push GstBuffers containing cuda buffers. */
#define GST_CAPS_FEATURES_NVMM "memory:NVMM"

gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
  "RoadSign"
};

#define FPS_PRINT_INTERVAL 300
//static struct timeval start_time = { };

//static guint probe_counter = 0;

/* make the elements static for simplicity */
static GstElement *pipeline = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL,
      *queue = NULL, *nvvidconv = NULL, *nvosd = NULL, *tiler = NULL;

/* Rectangle parameters for the most recently detected PERSON object,
 * used as the dynamic crop region for the video converter. */

static NvOSD_RectParams crop_rect_params = {};

/* consume_eos_prop is added to the nvdsosd sink pad after re-linking; it
 * drops the EOS event generated by the unlink so it does not propagate
 * downstream and stop the pipeline. */
static GstPadProbeReturn consume_eos_prop(GstPad* pPad,
  GstPadProbeInfo* pInfo, gpointer u_data)
{
  if (GST_EVENT_TYPE(GST_PAD_PROBE_INFO_DATA(pInfo)) != GST_EVENT_EOS)
  {
    return GST_PAD_PROBE_PASS;
  }
  // remove the probe first
  gst_pad_remove_probe(pPad, GST_PAD_PROBE_INFO_ID(pInfo));
  return GST_PAD_PROBE_DROP;
}

/* vidconv_src_pad_idle_probe is added to the nvvideoconvert src pad once a
 * PERSON object has been detected. It unlinks nvvideoconvert from nvdsosd,
 * updates the src-crop/dest-crop properties, and re-links the pads. */
static GstPadProbeReturn
vidconv_src_pad_idle_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer rect_params)
{
    
    NvOSD_RectParams* crop_params = (NvOSD_RectParams*)rect_params;
    gchar crop_dim[20] = { };

    GstPad* conv_src_pad = gst_element_get_static_pad(nvvidconv, "src");
    GstPad* osd_sink_pad = gst_element_get_static_pad(nvosd, "sink");

    /* unlink the pads between the nvvidconv and the nvosd */   
    gst_pad_unlink(conv_src_pad, osd_sink_pad);

    /* generate the crop dimensions string and update the video-converter's
     * source and destination crop properties. NvOSD_RectParams stores the
     * values as floats, so cast them to unsigned ints for the "%u" format. */
    g_snprintf (crop_dim, 19, "%u:%u:%u:%u",
        (guint) crop_params->left, (guint) crop_params->top,
        (guint) crop_params->width, (guint) crop_params->height);
    g_object_set (G_OBJECT (nvvidconv), "src-crop", crop_dim, "dest-crop", crop_dim, NULL);

    /* Re-link the pads for the new settings to take effect */
    gst_pad_link(conv_src_pad, osd_sink_pad);

    /* The unlink will cause the OSD to generate an EOS which we need to catch and drop */
    gst_pad_add_probe(osd_sink_pad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM, consume_eos_prop, NULL, NULL);

    gst_object_unref(conv_src_pad);
    gst_object_unref(osd_sink_pad);

    return GST_PAD_PROBE_REMOVE;
}

/* tiler_src_pad_buffer_probe extracts the metadata received on the tiler src
 * pad and, on the first PERSON object found in the batch, copies its
 * rect_params and schedules the crop update via an idle probe on the
 * video-converter src pad. */
static GstPadProbeReturn
tiler_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer crop_params)
{
    GstBuffer *buf = (GstBuffer *) info->data;
    guint num_rects = 0; 
    NvDsObjectMeta *obj_meta = NULL;
    guint vehicle_count = 0;
    guint person_count = 0;
    NvDsMetaList * l_frame = NULL;
    NvDsMetaList * l_obj = NULL;
    //NvDsDisplayMeta *display_meta = NULL;

    GstPad* conv_src_pad = gst_element_get_static_pad(nvvidconv, "src");

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);

        for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
                l_obj = l_obj->next) {
            obj_meta = (NvDsObjectMeta *) (l_obj->data);
            if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
                
              memcpy(crop_params, &(obj_meta->rect_params), sizeof(NvOSD_RectParams));

              gst_pad_add_probe(conv_src_pad, GST_PAD_PROBE_TYPE_IDLE, vidconv_src_pad_idle_probe, crop_params, NULL);
              gst_object_unref(conv_src_pad);
                
              /* Done processing the frame after first occurence of PERSON */
              return GST_PAD_PROBE_PASS;
            }
        }
    }
    /* no PERSON object found in this batch; release our pad reference */
    gst_object_unref(conv_src_pad);
    return GST_PAD_PROBE_PASS;
}

static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;
    case GST_MESSAGE_WARNING:
    {
      gchar *debug;
      GError *error;
      gst_message_parse_warning (msg, &error, &debug);
      g_printerr ("WARNING from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      g_free (debug);
      g_printerr ("Warning: %s\n", error->message);
      g_error_free (error);
      break;
    }
    case GST_MESSAGE_ERROR:
    {
      gchar *debug;
      GError *error;
      gst_message_parse_error (msg, &error, &debug);
      g_printerr ("ERROR from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      if (debug)
        g_printerr ("Error details: %s\n", debug);
      g_free (debug);
      g_error_free (error);
      g_main_loop_quit (loop);
      break;
    }
#ifndef PLATFORM_TEGRA
    case GST_MESSAGE_ELEMENT:
    {
      if (gst_nvmessage_is_stream_eos (msg)) {
        guint stream_id;
        if (gst_nvmessage_parse_stream_eos (msg, &stream_id)) {
          g_print ("Got EOS from stream %d\n", stream_id);
        }
      }
      break;
    }
#endif
    default:
      break;
  }
  return TRUE;
}

static void
cb_newpad (GstElement * decodebin, GstPad * decoder_src_pad, gpointer data)
{
  g_print ("In cb_newpad\n");
  GstCaps *caps = gst_pad_get_current_caps (decoder_src_pad);
  const GstStructure *str = gst_caps_get_structure (caps, 0);
  const gchar *name = gst_structure_get_name (str);
  GstElement *source_bin = (GstElement *) data;
  GstCapsFeatures *features = gst_caps_get_features (caps, 0);

  /* Need to check if the pad created by the decodebin is for video and not
   * audio. */
  if (!strncmp (name, "video", 5)) {
    /* Link the decodebin pad only if decodebin has picked nvidia
     * decoder plugin nvdec_*. We do this by checking if the pad caps contain
     * NVMM memory features. */
    if (gst_caps_features_contains (features, GST_CAPS_FEATURES_NVMM)) {
      /* Get the source bin ghost pad */
      GstPad *bin_ghost_pad = gst_element_get_static_pad (source_bin, "src");
      if (!gst_ghost_pad_set_target (GST_GHOST_PAD (bin_ghost_pad),
              decoder_src_pad)) {
        g_printerr ("Failed to link decoder src pad to source bin ghost pad\n");
      }
      gst_object_unref (bin_ghost_pad);
    } else {
      g_printerr ("Error: Decodebin did not pick nvidia decoder plugin.\n");
    }
  }
}

static void
decodebin_child_added (GstChildProxy * child_proxy, GObject * object,
    gchar * name, gpointer user_data)
{
  g_print ("Decodebin child added: %s\n", name);
  if (g_strrstr (name, "decodebin") == name) {
    g_signal_connect (G_OBJECT (object), "child-added",
        G_CALLBACK (decodebin_child_added), user_data);
  }
  if (g_strstr_len (name, -1, "nvv4l2decoder") == name) {
    g_print ("Seting bufapi_version\n");
    g_object_set (object, "bufapi-version", TRUE, NULL);
  }
}

static GstElement *
create_source_bin (guint index, gchar * uri)
{
  GstElement *bin = NULL, *uri_decode_bin = NULL;
  gchar bin_name[20] = { };

  g_snprintf (bin_name, 19, "source-bin-%02d", index);
  /* Create a source GstBin to abstract this bin's content from the rest of the
   * pipeline */
  bin = gst_bin_new (bin_name);

  /* Source element for reading from the uri.
   * We will use decodebin and let it figure out the container format of the
   * stream and the codec and plug the appropriate demux and decode plugins. */
  uri_decode_bin = gst_element_factory_make ("uridecodebin", "uri-decode-bin");

  if (!bin || !uri_decode_bin) {
    g_printerr ("One element in source bin could not be created.\n");
    return NULL;
  }

  /* We set the input uri to the source element */
  g_object_set (G_OBJECT (uri_decode_bin), "uri", uri, NULL);

  /* Connect to the "pad-added" signal of the decodebin which generates a
   * callback once a new pad for raw data has been created by the decodebin */
  g_signal_connect (G_OBJECT (uri_decode_bin), "pad-added",
      G_CALLBACK (cb_newpad), bin);
  g_signal_connect (G_OBJECT (uri_decode_bin), "child-added",
      G_CALLBACK (decodebin_child_added), bin);

  gst_bin_add (GST_BIN (bin), uri_decode_bin);

  /* We need to create a ghost pad for the source bin which will act as a proxy
   * for the video decoder src pad. The ghost pad will not have a target right
   * now. Once the decode bin creates the video decoder and generates the
   * cb_newpad callback, we will set the ghost pad target to the video decoder
   * src pad. */
  if (!gst_element_add_pad (bin, gst_ghost_pad_new_no_target ("src",
              GST_PAD_SRC))) {
    g_printerr ("Failed to add ghost pad in source bin\n");
    return NULL;
  }

  return bin;
}

int
main (int argc, char *argv[])
{
  GMainLoop *loop = NULL;
/*  GstElement *pipeline = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL,
      *nvvidconv = NULL, *nvosd = NULL, *tiler = NULL; */
#ifdef PLATFORM_TEGRA
  GstElement *transform = NULL;
#endif
  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *tiler_src_pad = NULL;
  guint i, num_sources;
  guint tiler_rows, tiler_columns;
  guint pgie_batch_size;
  GstPadProbeType probe_type;
  GstPad *conv_sink_pad = NULL;

  /* Check input arguments */
  if (argc < 2) {
    g_printerr ("Usage: %s <uri1> [uri2] ... [uriN] \n", argv[0]);
    return -1;
  }
  num_sources = argc - 1;

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("dstest3-pipeline");

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");
  
  g_object_set (G_OBJECT (streammux), "live-source", 0, NULL);

  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }
  gst_bin_add (GST_BIN (pipeline), streammux);

  for (i = 0; i < num_sources; i++) {
    GstPad *sinkpad, *srcpad;
    gchar pad_name[16] = { };
    GstElement *source_bin = create_source_bin (i, argv[i + 1]);

    if (!source_bin) {
      g_printerr ("Failed to create source bin. Exiting.\n");
      return -1;
    }

    gst_bin_add (GST_BIN (pipeline), source_bin);

    g_snprintf (pad_name, 15, "sink_%u", i);
    sinkpad = gst_element_get_request_pad (streammux, pad_name);
    if (!sinkpad) {
      g_printerr ("Streammux request sink pad failed. Exiting.\n");
      return -1;
    }

    srcpad = gst_element_get_static_pad (source_bin, "src");
    if (!srcpad) {
      g_printerr ("Failed to get src pad of source bin. Exiting.\n");
      return -1;
    }

    if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link source bin to stream muxer. Exiting.\n");
      return -1;
    }

    gst_object_unref (srcpad);
    gst_object_unref (sinkpad);
  }

  /* Use nvinfer to infer on batched frame. */
  pgie = gst_element_factory_make ("nvinfer", "primary-nvinference-engine");

  /* Use nvtiler to composite the batched frames into a 2D tiled array based
   * on the source of the frames. */
  tiler = gst_element_factory_make ("nvmultistreamtiler", "nvtiler");

  /* Queue for the video converter */
  queue = gst_element_factory_make ("queue", "queue");

  /* Use convertor to convert from NV12 to RGBA as required by nvosd */
  nvvidconv = gst_element_factory_make ("nvvideoconvert", "nvvideo-converter");

  /* Create OSD to draw on the converted RGBA buffer */
  nvosd = gst_element_factory_make ("nvdsosd", "nv-onscreendisplay");

  /* Finally render the osd output */
#ifdef PLATFORM_TEGRA
  transform = gst_element_factory_make ("nvegltransform", "nvegl-transform");
#endif
  sink = gst_element_factory_make ("nveglglessink", "nvvideo-renderer");

  if (!pgie || !tiler || !queue || !nvvidconv || !nvosd || !sink) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

#ifdef PLATFORM_TEGRA
  if(!transform) {
    g_printerr ("One tegra element could not be created. Exiting.\n");
    return -1;
  }
#endif

  g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
      MUXER_OUTPUT_HEIGHT, "batch-size", num_sources,
      "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

  /* Configure the nvinfer element using the nvinfer config file. */
  g_object_set (G_OBJECT (pgie),
      "config-file-path", "dstest3_pgie_config.txt", NULL);

  /* Override the batch-size set in the config file with the number of sources. */
  g_object_get (G_OBJECT (pgie), "batch-size", &pgie_batch_size, NULL);
  if (pgie_batch_size != num_sources) {
    g_printerr
        ("WARNING: Overriding infer-config batch-size (%d) with number of sources (%d)\n",
        pgie_batch_size, num_sources);
    g_object_set (G_OBJECT (pgie), "batch-size", num_sources, NULL);
  }

  tiler_rows = (guint) sqrt (num_sources);
  tiler_columns = (guint) ceil (1.0 * num_sources / tiler_rows);
  /* we set the tiler properties here */
  g_object_set (G_OBJECT (tiler), "rows", tiler_rows, "columns", tiler_columns,
      "width", TILED_OUTPUT_WIDTH, "height", TILED_OUTPUT_HEIGHT, NULL);

  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);

  /* Set up the pipeline */
  /* we add all elements into the pipeline */
#ifdef PLATFORM_TEGRA
  gst_bin_add_many (GST_BIN (pipeline), pgie, tiler, queue, nvvidconv, nvosd, transform, sink,
      NULL);
  /* we link the elements together
   * nvstreammux -> nvinfer -> nvtiler -> queue -> nvvidconv -> nvosd -> transform -> video-renderer */
  if (!gst_element_link_many (streammux, pgie, tiler, queue, nvvidconv, nvosd, transform, sink,
          NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }
#else
  gst_bin_add_many (GST_BIN (pipeline), pgie, tiler, queue, nvvidconv, nvosd, sink,
      NULL);
  /* we link the elements together
   * nvstreammux -> nvinfer -> nvtiler -> queue -> nvvidconv -> nvosd -> video-renderer */
  if (!gst_element_link_many (streammux, pgie, tiler, queue, nvvidconv, nvosd, sink,
          NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }
#endif

  /* Let's add a probe to be informed of the metadata generated. We add the
   * probe to the src pad of the tiler element, since by that time the buffer
   * will have all of the metadata attached. */
  tiler_src_pad = gst_element_get_static_pad (tiler, "src");
  if (!tiler_src_pad){
//  conv_sink_pad = gst_element_get_static_pad (nvvidconv, "sink");
//  if (!conv_sink_pad)
    g_printerr ("Unable to get the src pad of the tiler. Exiting.\n");
    return -1;
  }
      
  /* *************** START MOD *********************
   * Add a blocking buffer probe to the tiler src pad. For each buffer it
   * scans the batch metadata for a PERSON object and, when one is found,
   * schedules the dynamic crop update on the video-converter src pad. */
  probe_type = (GstPadProbeType)(GST_PAD_PROBE_TYPE_BLOCK | GST_PAD_PROBE_TYPE_BUFFER);
  gst_pad_add_probe (tiler_src_pad, probe_type,
      tiler_src_pad_buffer_probe, (gpointer)&crop_rect_params, NULL);

  gst_object_unref(tiler_src_pad);

  /* **************** END MOD *********************
   */

  /* Set the pipeline to "playing" state */
  g_print ("Now playing:");
  for (i = 0; i < num_sources; i++) {
    g_print (" %s,", argv[i + 1]);
  }
  g_print ("\n");
  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}

Hi rjhowell44,
Sorry for the late response!
Maybe this is a bug in the OSD that we recently fixed in DS-5.0, which will be public in the following one or two weeks. The fix is for the use case of cropping dynamically and displaying, which is the same as your case, right? If it isn’t, you could try a workaround of moving the videoconvert before the tiler.
Also, apologies! Due to an urgent issue I haven’t had a chance to debug your code, so I’m providing the information above first, and I will look into your code once I get a chance if the above does not help you.
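
For the workaround, the linking in your example would change to something roughly like the snippet below (a sketch only, reusing the element variables from your code; the converter still needs to negotiate RGBA for nvdsosd):

  /* workaround sketch: move the (cropping) video converter ahead of the tiler
   * nvstreammux -> nvinfer -> nvvideoconvert -> nvtiler -> queue -> nvdsosd -> sink */
  gst_bin_add_many (GST_BIN (pipeline), pgie, nvvidconv, tiler, queue, nvosd, sink, NULL);
  if (!gst_element_link_many (streammux, pgie, nvvidconv, tiler, queue, nvosd, sink, NULL)) {
    g_printerr ("Elements could not be linked. Exiting.\n");
    return -1;
  }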

Thanks for your patience!

Hi mchi,
Thanks for your response, and yes, it sounds like the same use case, “crop dynamically and display”.

Do you know if there will be an example for this case included with the release?

And if there’s anything else you would like to share about DS-5.0 … what’s new, etc… feel free to talk at length :)

Thanks,
Robert.