Custom Gst-nvinferserver post-processing receives a wild pointer, resulting in signal 11

Please provide complete information as applicable to your setup.

• Hardware Platform: Jetson
• DeepStream Version: 6.1
• JetPack Version: 5.0.2
• TensorRT Version: 8.4.1.5 + CUDA 11.4.239
• Issue Type: bug

After gRPC communication through Triton Server and Gst-nvinferserver, the buffer pointer of the NvDsInferLayerInfo entries in the std::vector passed to the YOLOv5 custom post-processing plug-in sometimes becomes a wild pointer, causing the program to crash with signal 11.

Triton Server config (config.pbtxt):

platform: "tensorrt_plan"
  max_batch_size: 1
  default_model_filename: "mutilModelsA.engine"
  input [
    {
      name: "images"
      data_type: TYPE_FP32
      dims: [ 3, 384, 640 ]
    }
  ]
  output [
    {
      name: "output1"
      data_type: TYPE_FP32
      dims: [ 3, 12, 20, 28 ]
    },
    {
      name: "output2"
      data_type: TYPE_FP32
      dims: [ 3, 24, 40, 28 ]
    },
    {
      name: "output3"
      data_type: TYPE_FP32
      dims: [ 3, 48, 80, 28 ]
    }
  ]
instance_group [
    {
      count: 1
      kind: KIND_GPU 
      gpus: [ 0 ]
    }
  ]
dynamic_batching {
  preferred_batch_size: [1]
  max_queue_delay_microseconds: 5000000
  preserve_ordering: true
}
version_policy: { all { }}
optimization { execution_accelerators {
  gpu_execution_accelerator : [ {
    name : "tensorrt"
    parameters { key: "precision_mode" value: "FP16" }
    parameters { key: "max_workspace_size_bytes" value: "1073741824" }
    }]
}}

Gst-nvinferserver config:

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    inputs: [ {
      name: "images"
    }
    ]
    outputs: [
      {name: "output1"},
      {name: "output2"},
      {name: "output3"}
    ]
    triton {
      model_name: "mutilModelsA1"
      version: 1
      grpc {
        url: "localhost:8052"
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    tensor_name: "images"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 0.0039215697906911373
      channel_offsets: [0, 0, 0]
    }
  }
  custom_lib {
    path: "models/plugins/libnvdsinfer_custom_impl_Yolo.so"
  }
  postprocess {
    labelfile_path: "models/Runmodels/mutilModelsA1/labels.txt"
    detection {
      num_detected_classes: 23
      custom_parse_bbox_func: "NvDsInferParseCustomYoloV5_3_Out"
      per_class_params {
        key: 0
        value { pre_threshold: 0.6 }
      }
      nms {
        confidence_threshold: 0.2
        topk: 20
        iou_threshold: 0.6
      }
    }
  }
}

input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 25
}

output_control { 
  detect_control { 
    default_filter { 
      bbox_filter { 
        min_width: 32, 
        min_height: 32 
      } 
    } 
  } 
}
  • Post-processing code (custom parser interface):
static inline std::vector<const NvDsInferLayerInfo*>
SortLayers(const std::vector<NvDsInferLayerInfo> & outputLayersInfo)
{
    std::vector<const NvDsInferLayerInfo*> outLayers;
    for (auto const &layer : outputLayersInfo) {
        outLayers.push_back (&layer);
    }
    std::sort(outLayers.begin(), outLayers.end(),
        [ ](const NvDsInferLayerInfo* a, const NvDsInferLayerInfo* b) {
            return a->inferDims.d[1] < b->inferDims.d[1];
        });
    return outLayers;
}

extern "C" bool NvDsInferParseCustomYoloV5_3_Out(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{   
    const int anchor[3][6] = {{116, 90, 156, 198, 373, 326},{30, 61, 62, 45, 59, 119},{10, 13, 16, 30, 33, 23}};
    assert(outputLayersInfo.size() == 3);  //3 out
    assert(outputLayersInfo[0].inferDims.numDims == 4);  //3*grid_h*grid_w*idScore
    assert(outputLayersInfo[0].inferDims.d[3] == (detectionParams.numClassesConfigured+5));
    const std::vector<const NvDsInferLayerInfo*> sortedLayers =
        SortLayers (outputLayersInfo);
    for (uint idx = 0; idx < outputLayersInfo.size(); ++idx) {
        const NvDsInferLayerInfo &layer = *sortedLayers[idx];
        const uint gridSizeH = layer.inferDims.d[1];
        const uint gridSizeW = layer.inferDims.d[2];
        const uint stride = DIVUP(networkInfo.width, gridSizeW);
        assert(stride == DIVUP(networkInfo.height, gridSizeH));
        size_t size = layer.inferDims.numElements;
        float *pBuf = (float*)(layer.buffer);
        std::cout <<"*pBuf:" <<*pBuf<<std::endl;    //There is a paragraph error (signal 11)   I think it's become a wild pointer
        std::vector<NvDsInferParseObjectInfo> outObjs =
            decodeYoloV5_3_Tensor((const float*)(layer.buffer), anchor[idx], gridSizeW, gridSizeH, stride,
                       detectionParams, networkInfo.width, networkInfo.height);
        objectList.insert(objectList.end(), outObjs.begin(), outObjs.end());
    }
    return true;
}
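
As a debugging aid, here is a minimal sketch of a guard that could be added at the top of the loop body above, before pBuf is dereferenced; it only uses NvDsInferLayerInfo fields that already appear in the parser. Note it cannot catch a dangling non-null pointer (the likely failure here), but it rules out the simpler failure modes:

        // Hypothetical debug guard: skip and log output layers whose buffer
        // is null or whose dimensions are empty, instead of reading them.
        if (layer.buffer == nullptr || layer.inferDims.numElements == 0) {
            std::cerr << "Bad output layer " << (layer.layerName ? layer.layerName : "?")
                      << ": buffer=" << layer.buffer
                      << " numElements=" << layer.inferDims.numElements << std::endl;
            continue;
        }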

I will check.

Please refer to this YOLO Triton sample: Deepstream / Triton Server - YOLOv7

  • As I said, the deployment was successful and the test results did come out. The wild pointer is an intermittent event; it is not triggered 100% of the time.

Will the result be OK if you remove the line std::cout << "*pBuf:" << *pBuf << std::endl; ?

  • If that line is deleted, the same problem occurs later in the processing.

  • In the following function, x = input[offset + SCORE]; is where the problem shows up.

static std::vector<NvDsInferParseObjectInfo>
decodeYoloV5_3_Tensor(const float* input, const int anchor[6], const uint grid_w, const uint grid_h, 
    const uint stride, NvDsInferParseDetectionParams const& detectionParams,const uint& netW,const uint& netH){
    enum BBoxIndex {BOX_X = 0,BOX_Y, BOX_W, BOX_H, SCORE, LABEL}; 
    std::vector<NvDsInferParseObjectInfo> binfo;
    int grid_len = grid_h * grid_w;
    // std::cout<<"anchor="<<anchor[0]<<" grid_w="<<grid_w<<" grid_h="<<grid_h<<" stride"<<stride
    //                  <<" netW="<<netW<<" netH="<<netH<<std::endl;
    int offset;
    float x;
    float box_confidence;
    float box_x;
    float box_y;
    float box_w;
    float box_h;
    float maxClassProbs;
    int maxClassId;

    for (int a = 0; a < 3; ++a)
    {
        for (int i = 0; i < grid_h; ++i)
        {
            for (int j = 0; j < grid_w; ++j)
            {
           
                //cx, cy, w, h, conf, cls...
                offset = (a * grid_len + i * grid_w + j) * (detectionParams.numClassesConfigured + 5); 
                x =  input[offset + SCORE];
                box_confidence = sigmoid(x);
                if (box_confidence >= detectionParams.perClassPreclusterThreshold[0])
                {   
                    NvDsInferParseObjectInfo ObjectInfo;
              
                    box_x = sigmoid(input[offset + BOX_X]) * 2.0 - 0.5;
                    box_y = sigmoid(input[offset + BOX_Y]) * 2.0 - 0.5;
                    box_w = sigmoid(input[offset + BOX_W]) * 2.0;
                    box_h = sigmoid(input[offset + BOX_H]) * 2.0;
                    box_x = (box_x + j) * (float)stride;
                    box_y = (box_y + i) * (float)stride;
                    box_w = box_w * box_w * (float)anchor[a * 2];
                    box_h = box_h * box_h * (float)anchor[a * 2 + 1];
                    box_x -= (box_w / 2.0);
                    box_y -= (box_h / 2.0);

                    maxClassProbs = input[offset + LABEL];
                    maxClassId = 0;
                    for (int k = 0; k < detectionParams.numClassesConfigured; ++k)
                    {
                        float prob = input[offset + LABEL + k];
                        if (prob > maxClassProbs)
                        {
                            maxClassId = k;
                            maxClassProbs = prob;
                        }
                    }

                    box_x = clamp(box_x, 0, netW);
                    box_y = clamp(box_y, 0, netH);
                    box_w = clamp(box_w, 0, netW);
                    box_h = clamp(box_h, 0, netH);

                    ObjectInfo.classId = maxClassId;
                    ObjectInfo.detectionConfidence = box_confidence;
                    ObjectInfo.left = box_x;
                    ObjectInfo.top = box_y;
                    ObjectInfo.width = box_w;
                    ObjectInfo.height = box_h;
                
                    binfo.push_back(ObjectInfo); 
                }
            }
        }
    }
    return binfo;
}
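
For completeness, the parser relies on sigmoid, clamp, and DIVUP, which are not shown in this thread. A minimal sketch of plausible definitions (my assumption of their shape, not necessarily the poster's actual code):

// Hypothetical helpers assumed by the parser above.
#include <algorithm>
#include <cmath>

// Ceiling division, as commonly defined in the DeepStream samples.
#define DIVUP(a, b) (((a) + (b) - 1) / (b))

static inline float sigmoid(float x) {
    return 1.0f / (1.0f + std::exp(-x));
}

static inline float clamp(float val, float minVal, float maxVal) {
    return std::max(minVal, std::min(val, maxVal));
}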

To narrow down this issue, did you try the Triton C API method? The only modification is model_repo; please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app-triton/config_infer_plan_engine_primary.txt
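
For reference, switching from gRPC to the C API amounts to replacing the grpc block inside the triton section with a model_repo block; a minimal sketch, where the root path is a placeholder matching the model repository used later in this thread:

    triton {
      model_name: "mutilModelsA1"
      version: 1
      model_repo {
        root: "./models/Runmodels"
        strict_model_config: true
      }
    }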

  • I tried the C API approach. With only the grpc configuration changed to model_repo, the number of RTSP inference streams the program supports drops from 50 to 3; with more than 3 streams the pipeline gets stuck. That said, after changing to the model_repo configuration, there is no wild pointer problem.

Using the same post-processing code and model, I also had no problems with Gst-nvinfer on a Jetson Xavier NX running JetPack 4.5.1.

I found that the 3-stream limit is related to my program using a cascade of multiple models. After many tests running 50 RTSP streams at the same time with only one model, there was no problem with the C API, while switching to gRPC would occasionally produce wild pointers. My current project requirements still mandate gRPC, so I still need your help with this problem.

  1. Could you describe your deployment? How did you deploy the Triton server? Are the 50 RTSP sources the same?
  2. Could you share simplified nvinfer and nvinferserver code, configuration files, and models? How often does it crash?

1. Triton Server and my application are both deployed on the Jetson edge box; the configuration is what I sent above. Since I need to process multiple models simultaneously (each model corresponds to one gst-nvinferserver plug-in with its own configuration), the 50 RTSP sources are the same (distribution via a streaming server can support thousands of channels, so 50 channels are fine).
2. I'm sorry that my project can't be simplified, but I'll try to reproduce this problem with a DeepStream example and provide the code and environment for debugging.

Maybe this?

• Hardware Platform: Jetson
• DeepStream Version: 6.1
• JetPack Version: 5.0.2
• TensorRT Version: 8.4.1.5 + CUDA 11.4.239

Sorry, I was too busy over the last two days. Today I reproduced this problem by simplifying the code of deepstream-test3. Below I provide my relevant code, model, Triton configuration, and nvinferserver configuration.

  • If the crash doesn't appear immediately, please try several times

models.zip (14.7 MB)

main.cpp
Input: 50 RTSP streams

  • The same RTSP source is reused 50 times
#include <gst/gst.h>
#include <glib.h>
#include <stdio.h>
#include <math.h>
#include <string.h>
#include <sys/time.h>
#include <cuda_runtime_api.h>

#include "gstnvdsmeta.h"
#include "gst-nvmessage.h"

#define MAX_DISPLAY_LEN 64

#define PGIE_CLASS_ID_VEHICLE 0
#define PGIE_CLASS_ID_PERSON 2

/* By default, OSD process-mode is set to CPU_MODE. To change mode, set as:
 * 1: GPU mode (for Tesla only)
 * 2: HW mode (For Jetson only)
 */
#define OSD_PROCESS_MODE 0

/* By default, OSD will not display text. To display text, change this to 1 */
#define OSD_DISPLAY_TEXT 0

/* The muxer output resolution must be set if the input streams will be of
 * different resolution. The muxer will scale all the input frames to this
 * resolution. */
#define MUXER_OUTPUT_WIDTH 1920
#define MUXER_OUTPUT_HEIGHT 1080

/* Muxer batch formation timeout, for e.g. 40 millisec. Should ideally be set
 * based on the fastest source's framerate. */
#define MUXER_BATCH_TIMEOUT_USEC 40000

#define TILED_OUTPUT_WIDTH 1280
#define TILED_OUTPUT_HEIGHT 720

/* NVIDIA Decoder source pad memory feature. This feature signifies that source
 * pads having this capability will push GstBuffers containing cuda buffers. */
#define GST_CAPS_FEATURES_NVMM "memory:NVMM"

gchar pgie_classes_str[4][32] = { "Vehicle", "TwoWheeler", "Person",
  "RoadSign"
};


/* tiler_sink_pad_buffer_probe  will extract metadata received on OSD sink pad
 * and update params for drawing rectangle, object information etc. */

static GstPadProbeReturn
tiler_src_pad_buffer_probe (GstPad * pad, GstPadProbeInfo * info,
    gpointer u_data)
{
    GstBuffer *buf = (GstBuffer *) info->data;
    guint num_rects = 0; 
    NvDsObjectMeta *obj_meta = NULL;
    guint vehicle_count = 0;
    guint person_count = 0;
    NvDsMetaList * l_frame = NULL;
    NvDsMetaList * l_obj = NULL;
    //NvDsDisplayMeta *display_meta = NULL;

    NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);

    for (l_frame = batch_meta->frame_meta_list; l_frame != NULL;
      l_frame = l_frame->next) {
        NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) (l_frame->data);
        //int offset = 0;
        for (l_obj = frame_meta->obj_meta_list; l_obj != NULL;
                l_obj = l_obj->next) {
            obj_meta = (NvDsObjectMeta *) (l_obj->data);
            if (obj_meta->class_id == PGIE_CLASS_ID_VEHICLE) {
                vehicle_count++;
                num_rects++;
            }
            if (obj_meta->class_id == PGIE_CLASS_ID_PERSON) {
                person_count++;
                num_rects++;
            }
        }
          g_print ("Frame Number = %d \n",frame_meta->frame_num);
#if 0
        display_meta = nvds_acquire_display_meta_from_pool(batch_meta);
        NvOSD_TextParams *txt_params  = &display_meta->text_params;
        txt_params->display_text = g_malloc0 (MAX_DISPLAY_LEN);
        offset = snprintf(txt_params->display_text, MAX_DISPLAY_LEN, "Person = %d ", person_count);
        offset = snprintf(txt_params->display_text + offset , MAX_DISPLAY_LEN, "Vehicle = %d ", vehicle_count);

        /* Now set the offsets where the string should appear */
        txt_params->x_offset = 10;
        txt_params->y_offset = 12;

        /* Font , font-color and font-size */
        txt_params->font_params.font_name = "Serif";
        txt_params->font_params.font_size = 10;
        txt_params->font_params.font_color.red = 1.0;
        txt_params->font_params.font_color.green = 1.0;
        txt_params->font_params.font_color.blue = 1.0;
        txt_params->font_params.font_color.alpha = 1.0;

        /* Text background color */
        txt_params->set_bg_clr = 1;
        txt_params->text_bg_clr.red = 0.0;
        txt_params->text_bg_clr.green = 0.0;
        txt_params->text_bg_clr.blue = 0.0;
        txt_params->text_bg_clr.alpha = 1.0;

        nvds_add_display_meta_to_frame(frame_meta, display_meta);
#endif

    }
    return GST_PAD_PROBE_OK;
}

static gboolean
bus_call (GstBus * bus, GstMessage * msg, gpointer data)
{
  GMainLoop *loop = (GMainLoop *) data;
  switch (GST_MESSAGE_TYPE (msg)) {
    case GST_MESSAGE_EOS:
      g_print ("End of stream\n");
      g_main_loop_quit (loop);
      break;
    case GST_MESSAGE_WARNING:
    {
      gchar *debug;
      GError *error;
      gst_message_parse_warning (msg, &error, &debug);
      g_printerr ("WARNING from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      g_free (debug);
      g_printerr ("Warning: %s\n", error->message);
      g_error_free (error);
      break;
    }
    case GST_MESSAGE_ERROR:
    {
      gchar *debug;
      GError *error;
      gst_message_parse_error (msg, &error, &debug);
      g_printerr ("ERROR from element %s: %s\n",
          GST_OBJECT_NAME (msg->src), error->message);
      if (debug)
        g_printerr ("Error details: %s\n", debug);
      g_free (debug);
      g_error_free (error);
      g_main_loop_quit (loop);
      break;
    }
    case GST_MESSAGE_ELEMENT:
    {
      if (gst_nvmessage_is_stream_eos (msg)) {
        guint stream_id;
        if (gst_nvmessage_parse_stream_eos (msg, &stream_id)) {
          g_print ("Got EOS from stream %d\n", stream_id);
        }
      }
      break;
    }
    default:
      break;
  }
  return TRUE;
}

static void
cb_newpad (GstElement * decodebin, GstPad * decoder_src_pad, gpointer data)
{
  GstCaps *caps = gst_pad_get_current_caps (decoder_src_pad);
  if (!caps) {
    caps = gst_pad_query_caps (decoder_src_pad, NULL);
  }
  const GstStructure *str = gst_caps_get_structure (caps, 0);
  const gchar *name = gst_structure_get_name (str);
  GstElement *source_bin = (GstElement *) data;
  GstCapsFeatures *features = gst_caps_get_features (caps, 0);

  /* Need to check if the pad created by the decodebin is for video and not
   * audio. */
  if (!strncmp (name, "video", 5)) {
    /* Link the decodebin pad only if decodebin has picked nvidia
     * decoder plugin nvdec_*. We do this by checking if the pad caps contain
     * NVMM memory features. */
    if (gst_caps_features_contains (features, GST_CAPS_FEATURES_NVMM)) {
      /* Get the source bin ghost pad */
      GstPad *bin_ghost_pad = gst_element_get_static_pad (source_bin, "src");
      if (!gst_ghost_pad_set_target (GST_GHOST_PAD (bin_ghost_pad),
              decoder_src_pad)) {
        g_printerr ("Failed to link decoder src pad to source bin ghost pad\n");
      }
      gst_object_unref (bin_ghost_pad);
    } else {
      g_printerr ("Error: Decodebin did not pick nvidia decoder plugin.\n");
    }
  }
}

static void
decodebin_child_added (GstChildProxy * child_proxy, GObject * object,
    gchar * name, gpointer user_data)
{
  g_print ("Decodebin child added: %s\n", name);
  if (g_strrstr (name, "decodebin") == name) {
    g_signal_connect (G_OBJECT (object), "child-added",
        G_CALLBACK (decodebin_child_added), user_data);
  }
}

static GstElement *
create_source_bin (guint index, gchar * uri)
{
  GstElement *bin = NULL, *uri_decode_bin = NULL;
  gchar bin_name[16] = { };

  g_snprintf (bin_name, 15, "source-bin-%02d", index);
  /* Create a source GstBin to abstract this bin's content from the rest of the
   * pipeline */
  bin = gst_bin_new (bin_name);

  /* Source element for reading from the uri.
   * We will use decodebin and let it figure out the container format of the
   * stream and the codec and plug the appropriate demux and decode plugins. */
 
  uri_decode_bin = gst_element_factory_make ("uridecodebin", "uri-decode-bin");
  if (!bin || !uri_decode_bin) {
    g_printerr ("One element in source bin could not be created.\n");
    return NULL;
  }

  /* We set the input uri to the source element */
  g_object_set (G_OBJECT (uri_decode_bin), "uri", uri, NULL);

  /* Connect to the "pad-added" signal of the decodebin which generates a
   * callback once a new pad for raw data has beed created by the decodebin */
  g_signal_connect (G_OBJECT (uri_decode_bin), "pad-added",
      G_CALLBACK (cb_newpad), bin);
  g_signal_connect (G_OBJECT (uri_decode_bin), "child-added",
      G_CALLBACK (decodebin_child_added), bin);

  gst_bin_add (GST_BIN (bin), uri_decode_bin);

  /* We need to create a ghost pad for the source bin which will act as a proxy
   * for the video decoder src pad. The ghost pad will not have a target right
   * now. Once the decode bin creates the video decoder and generates the
   * cb_newpad callback, we will set the ghost pad target to the video decoder
   * src pad. */
  if (!gst_element_add_pad (bin, gst_ghost_pad_new_no_target ("src",
              GST_PAD_SRC))) {
    g_printerr ("Failed to add ghost pad in source bin\n");
    return NULL;
  }

  return bin;
}

int
main (int argc, char *argv[])
{
  printf("start");
  GMainLoop *loop = NULL;
  GstElement *pipeline = NULL, *streammux = NULL, *sink = NULL, *pgie = NULL,
      *queue1, *queue2;

  GstBus *bus = NULL;
  guint bus_watch_id;
  GstPad *tiler_src_pad = NULL;
  guint i =0;
  guint num_sources = atoi(argv[1]);
  guint pgie_batch_size;
  int current_device = -1;
  cudaGetDevice(&current_device);
  struct cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, current_device);

  /* Standard GStreamer initialization */
  gst_init (&argc, &argv);
  loop = g_main_loop_new (NULL, FALSE);

  /* Create gstreamer elements */
  /* Create Pipeline element that will form a connection of other elements */
  pipeline = gst_pipeline_new ("dstest3-pipeline");

  /* Create nvstreammux instance to form batches from one or more sources. */
  streammux = gst_element_factory_make ("nvstreammux", "stream-muxer");

  if (!pipeline || !streammux) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }
  gst_bin_add (GST_BIN (pipeline), streammux);

  for (i = 0; i < num_sources; i++) {
    GstPad *sinkpad, *srcpad;
    gchar pad_name[16] = { };

    GstElement *source_bin= NULL;
   
    source_bin = create_source_bin (i, argv[2]);
    
    if (!source_bin) {
      g_printerr ("Failed to create source bin. Exiting.\n");
      return -1;
    }

    gst_bin_add (GST_BIN (pipeline), source_bin);

    g_snprintf (pad_name, 15, "sink_%u", i);
    sinkpad = gst_element_get_request_pad (streammux, pad_name);
    if (!sinkpad) {
      g_printerr ("Streammux request sink pad failed. Exiting.\n");
      return -1;
    }

    srcpad = gst_element_get_static_pad (source_bin, "src");
    if (!srcpad) {
      g_printerr ("Failed to get src pad of source bin. Exiting.\n");
      return -1;
    }

    if (gst_pad_link (srcpad, sinkpad) != GST_PAD_LINK_OK) {
      g_printerr ("Failed to link source bin to stream muxer. Exiting.\n");
      return -1;
    }

    gst_object_unref (srcpad);
    gst_object_unref (sinkpad);

  }
  /* Use nvinfer to infer on batched frame. */
  pgie = gst_element_factory_make ("nvinferserver", "primary-nvinference-engine");

  /* Add queue elements between every two elements */
  queue1 = gst_element_factory_make ("queue", "queue1");
  queue2 = gst_element_factory_make ("queue", "queue2");

  sink = gst_element_factory_make ("fakesink", "fakesink1");
  

  if (!pgie || !sink) {
    g_printerr ("One element could not be created. Exiting.\n");
    return -1;
  }

    g_object_set (G_OBJECT (streammux), "batch-size", num_sources, NULL);

    g_object_set (G_OBJECT (streammux), "width", MUXER_OUTPUT_WIDTH, "height",
        MUXER_OUTPUT_HEIGHT,
        "batched-push-timeout", MUXER_BATCH_TIMEOUT_USEC, NULL);

    /* Configure the nvinfer element using the nvinfer config file. */
    g_object_set (G_OBJECT (pgie),
        "config-file-path", "models/Runmodels/mutilModelsA/config_inferserver_0.txt", NULL);

    /* Override the batch-size set in the config file with the number of sources. */
    // g_object_get (G_OBJECT (pgie), "batch-size", 1, NULL);
    g_object_set (G_OBJECT (sink), "sync", 0, "async", false,NULL);
    g_object_set (G_OBJECT (sink), "qos", 0, NULL);



  /* we add a message handler */
  bus = gst_pipeline_get_bus (GST_PIPELINE (pipeline));
  bus_watch_id = gst_bus_add_watch (bus, bus_call, loop);
  gst_object_unref (bus);


 
  gst_bin_add_many (GST_BIN (pipeline), queue1, pgie, queue2,  sink, NULL);
    /* we link the elements together:
    * nvstreammux -> queue -> nvinferserver -> queue -> fakesink */
    if (!gst_element_link_many (streammux, queue1, pgie, queue2, sink, NULL)) {
      g_printerr ("Elements could not be linked. Exiting.\n");
      return -1;
    }
  

  /* Lets add probe to get informed of the meta data generated, we add probe to
   * the sink pad of the osd element, since by that time, the buffer would have
   * had got all the metadata. */
  tiler_src_pad = gst_element_get_static_pad (pgie, "src");
  if (!tiler_src_pad)
    g_print ("Unable to get src pad\n");
  else
    gst_pad_add_probe (tiler_src_pad, GST_PAD_PROBE_TYPE_BUFFER,
        tiler_src_pad_buffer_probe, NULL, NULL);
  gst_object_unref (tiler_src_pad);


  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Wait till pipeline encounters an error or EOS */
  g_print ("Running...\n");
  g_main_loop_run (loop);

  /* Out of the main loop, clean up nicely */
  g_print ("Returned, stopping playback\n");
  gst_element_set_state (pipeline, GST_STATE_NULL);
  g_print ("Deleting pipeline\n");
  gst_object_unref (GST_OBJECT (pipeline));
  g_source_remove (bus_watch_id);
  g_main_loop_unref (loop);
  return 0;
}
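
For reference, given the argv handling in main() (argv[1] is the source count, argv[2] the URI), the app is launched as, e.g., ./deepstream-test3-app 50 rtsp://<camera-url>, where the URL is a placeholder; this matches the 50-stream setup described earlier.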

  • No, I don’t think this is an architectural problem

On Jetson, using your code without any modification, there is an error:
$ ./deepstream-test3-app 10 rtsp://xx
ERROR: Could not open lib: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3/models/Runmodels/mutilModelsA/models, error string: /opt/nvidia/deepstream/deepstream/sources/apps/sample_apps/deepstream-test3/models/Runmodels/mutilModelsA/models: cannot open shared object file: No such file or directory

tritonserver started OK; here is the command:
/opt/tritonserver/bin/tritonserver --model-repository=./models/Runmodels --strict-model-config=false --grpc-infer-allocation-pool-size=16 --log-verbose=1

  • Please check whether your project and model paths are configured correctly, and whether the following nvinferserver configuration file path is correct:
 g_object_set (G_OBJECT (pgie),
        "config-file-path", "models/Runmodels/mutilModelsA/config_inferserver_0.txt", NULL);

I unzipped models.zip in deepstream-test3; tritonserver can run. I did not modify the configuration and rebuilt your deepstream-test3.c.
There is no "models" under models/Runmodels/mutilModelsA.

Can you confirm there are no models under this path?

  • The model path is
models/Runmodels/mutilModelsA/1/mutilModelsA.engine

I mean there is no models/Runmodels/mutilModelsA/models; there is only models/Runmodels/mutilModelsA/1/mutilModelsA.engine.