On Jetson, wild pointers appear in the tensor data exchanged between DeepStream 6.1 and tritonserver in gRPC mode

• Hardware Platform (Jetson )
• DeepStream Version 6.1
• JetPack Version 5.0.2 (tritonserver 2.27.0-jetpack5.0.2)
• TensorRT Version 8.4.1.5 + cuda11.4.239
• Issue Type (bug)

In the YOLOv5 custom post-processing plug-in, after gRPC communication through tritonserver and Gst-nvinferserver, the buffer pointer of the NvDsInferLayerInfo elements in the std::vector becomes a wild (out-of-bounds) pointer, causing the program to crash with signal 11 (SIGSEGV).

tritonserver config:
config.pbtxt

platform: "tensorrt_plan"
  max_batch_size: 1
  default_model_filename: "mutilModelsA.engine"
  input [
    {
      name: "images"
      data_type: TYPE_FP32
      dims: [ 3, 384, 640 ]
    }
  ]
output [
    {
      name: "output1"
      data_type: TYPE_FP32
      dims: [ 3, 12, 20, 28 ]
    },
    {
      name: "output2"
      data_type: TYPE_FP32
      dims: [ 3, 24, 40, 28 ]
    },
    {
      name: "output3"
      data_type: TYPE_FP32
      dims: [ 3, 48, 80, 28 ]
    }
  ]
instance_group [
    {
      count: 1
      kind: KIND_GPU 
      gpus: [ 0 ]
    }
  ]
dynamic_batching {
  preferred_batch_size: [1]
  max_queue_delay_microseconds: 5000000
  preserve_ordering: true
}
version_policy: { all { }}
optimization { execution_accelerators {
  gpu_execution_accelerator : [ {
    name : "tensorrt"
    parameters { key: "precision_mode" value: "FP16" }
    parameters { key: "max_workspace_size_bytes" value: "1073741824" }
    }]
}}

Gst-nvinferserver :

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    inputs: [ {
      name: "images"
    }
    ]
    outputs: [
      {name: "output1"},
      {name: "output2"},
      {name: "output3"}
    ]
    triton {
      model_name: "mutilModelsA1"
      version: 1
      grpc {
        url: "localhost:8052"
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    tensor_name: "images"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 0.0039215697906911373
      channel_offsets: [0, 0, 0]
    }
  }
  custom_lib {
    path: "models/plugins/libnvdsinfer_custom_impl_Yolo.so"
  }
  postprocess {
    labelfile_path: "models/Runmodels/mutilModelsA1/labels.txt"
    detection {
      num_detected_classes: 23
      custom_parse_bbox_func: "NvDsInferParseCustomYoloV5_3_Out"
      per_class_params {
        key: 0
        value { pre_threshold: 0.6 }
      }
      nms {
        confidence_threshold: 0.2
        topk: 20
        iou_threshold: 0.6
      }
    }
  }
}

input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 25
}

output_control { 
  detect_control { 
    default_filter { 
      bbox_filter { 
        min_width: 32, 
        min_height: 32 
      } 
    } 
  } 
}
  • Post-processing code interface
static inline std::vector<const NvDsInferLayerInfo*>
SortLayers(const std::vector<NvDsInferLayerInfo> & outputLayersInfo)
{
    std::vector<const NvDsInferLayerInfo*> outLayers;
    for (auto const &layer : outputLayersInfo) {
        outLayers.push_back (&layer);
    }
    std::sort(outLayers.begin(), outLayers.end(),
        [ ](const NvDsInferLayerInfo* a, const NvDsInferLayerInfo* b) {
            return a->inferDims.d[1] < b->inferDims.d[1];
        });
    return outLayers;
}

extern "C" bool NvDsInferParseCustomYoloV5_3_Out(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{   
    const int anchor[3][6] = {
        {116, 90, 156, 198, 373, 326},
        {30, 61, 62, 45, 59, 119},
        {10, 13, 16, 30, 33, 23}};
    assert(outputLayersInfo.size() == 3);  //3 out
    assert(outputLayersInfo[0].inferDims.numDims == 4);  //3*grid_h*grid_w*idScore
    assert(outputLayersInfo[0].inferDims.d[3] == (detectionParams.numClassesConfigured+5));
    const std::vector<const NvDsInferLayerInfo*> sortedLayers =
        SortLayers (outputLayersInfo);
    for (uint idx = 0; idx < outputLayersInfo.size(); ++idx) {
        const NvDsInferLayerInfo &layer = *sortedLayers[idx];
        const uint gridSizeH = layer.inferDims.d[1];
        const uint gridSizeW = layer.inferDims.d[2];
        const uint stride = DIVUP(networkInfo.width, gridSizeW);
        assert(stride == DIVUP(networkInfo.height, gridSizeH));
        size_t size = layer.inferDims.numElements;
        float *pBuf = (float*)(layer.buffer);
        std::cout <<"*pBuf:" <<*pBuf<<std::endl;    // Segmentation fault (signal 11) here; I think the buffer has become a wild pointer
        std::vector<NvDsInferParseObjectInfo> outObjs =
            decodeYoloV5_3_Tensor((const float*)(layer.buffer), anchor[idx], gridSizeW, gridSizeH, stride,
                       detectionParams, networkInfo.width, networkInfo.height);
        objectList.insert(objectList.end(), outObjs.begin(), outObjs.end());
    }
    return true;
}

Did you use your own trained YOLOv5 model? Can you confirm that the model has three output layers and that all of them contain values? Could you use our demo to test it?
https://github.com/NVIDIA-AI-IOT/yolov5_gpu_optimization

  • Yes, I’m sure. Using the same post-processing and model, I also had no problems with Gst-nvinfer on Jetson Xavier NX with JetPack 4.5.1.

I also posted this under another topic. Can we discuss it under that topic?

Sure, we’ll close this topic, since both topics concern the same wild-pointer issue.
