Unable to parse the YOLOv4 TAO model in Triton Inference Server

@Morganh @fanzh

• Hardware Platform (Jetson / GPU): T4
• DeepStream Version: 6.0.1
• TensorRT Version: 8.2.3
• NVIDIA GPU Driver Version (valid for GPU only): 470

I have a YOLOv4 .etlt model. I generated trt.engine via nvinfer (deepstream-app), and I have built libnvds_infercustomparser_tao.so.

I wish to use the same model in Triton Inference Server.
model-repository:
(screenshot of the model repository layout in the original post)
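For reference, a typical Triton model repository layout for a tensorrt_plan model looks like the sketch below. The directory names here are assumptions based on the config.pbtxt that follows; Triton expects the top-level model directory to match the model name, and the version subdirectory to match the served version:

model_repository/
└── Helmet/
    ├── config.pbtxt
    ├── labels.txt
    └── 1/
        └── trt.engine

Note that config_infer.txt below points its labelfile_path at a Helmet_model directory, so the actual directory name may differ from this sketch.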

Then I wrote a deepstream-app for this using the nvinferserver plugin.

For one model, with this same configuration, I am able to run inference.
But for another model with different classes, I am unable to get any meta info or obj_count.
Both models work with nvinfer, but not with nvinferserver.

Here is my config.pbtxt:

name: "Helmet"
platform: "tensorrt_plan"
max_batch_size: 16
default_model_filename: "trt.engine"
input [
  {
    name: "Input"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 384, 1248 ]
  }
]
output [
  {
    name: "BatchedNMS"
    data_type: TYPE_INT32
    dims: [ 1 ]
  },
  {
    name: "BatchedNMS_1"
    data_type: TYPE_FP32
    dims: [ 200, 4 ]
  },
  {
    name: "BatchedNMS_2"
    data_type: TYPE_FP32
    dims: [ 200 ]
  },
  {
    name: "BatchedNMS_3"
    data_type: TYPE_FP32
    dims: [ 200 ]
  }
]
instance_group [
  {
    kind: KIND_GPU
    count: 1
    gpus: 0
  }
]
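As a sanity check that these output names, dtypes, and dims match what the engine for the failing model actually exposes, the model metadata can be queried from Triton and compared against this config.pbtxt. A minimal sketch using the tritonclient gRPC package (the URL and model name are taken from the configs in this post; swap in the failing model's name):

import tritonclient.grpc as grpcclient

# Connect to the same Triton endpoint that nvinferserver uses
client = grpcclient.InferenceServerClient(url="172.17.0.2:8001")

# Ask Triton for the model's actual input/output names, dtypes, and dims;
# these must line up with config.pbtxt and with the backend {} section
# of config_infer.txt below
print(client.get_model_metadata(model_name="Helmet", model_version="1"))
print(client.get_model_config(model_name="Helmet", model_version="1"))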

Here is my config_infer.txt:

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 16
  
  backend {
    inputs: [ {
      name: "Input"
    }]
    outputs: [
      {name: "BatchedNMS"},
      {name: "BatchedNMS_1"},
      {name: "BatchedNMS_2"},
      {name: "BatchedNMS_3"}
    ]
    triton {
      model_name: "Helmet"
      version: 1
      grpc {
        url: "172.17.0.2:8001"		
      }
    }
  }

  preprocess {
    network_format: MEDIA_FORMAT_NONE
    tensor_order: TENSOR_ORDER_NONE
    tensor_name: "Input"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "../../model_repository/Helmet_model/labels.txt"
    detection {
      num_detected_classes: 2
      custom_parse_bbox_func:"NvDsInferParseCustomBatchedNMSTLT"
      per_class_params {
          key: 0
          value { pre_threshold: 0.4 }
        }
      nms {
        confidence_threshold:0.2
        topk:20
        iou_threshold:0.5
      }
                             
    }
  }

  custom_lib {
    path:"../../.../customLib/libnvds_infercustomparser_tao.so"
  }
}
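To rule out the custom parser itself when no objects come back, it is also worth confirming that custom_lib loads and actually exports the function named in custom_parse_bbox_func. A minimal sketch with Python's ctypes (the path below is a placeholder, since the real one is elided above):

import ctypes

# Load the custom parser library; ctypes raises OSError if dlopen fails
lib = ctypes.CDLL("./libnvds_infercustomparser_tao.so")  # placeholder path

# Attribute access performs dlsym; an AttributeError here means the symbol
# named in custom_parse_bbox_func is not exported by this build of the library
print(lib.NvDsInferParseCustomBatchedNMSTLT)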

I didn't get your point.
What is the "one model", and what is the "other model"?
"All are working with nvinfer but not with nvinferserver." ==> but what you gave is the config.pbtxt for nvinferserver.

And what is your question for this topic?

There has been no update from you for a while, so we are assuming this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
