Triton Inference through Docker

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Tesla T4
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.1
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs):
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file content, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or sample application, and the function description.)

I get this error when running Triton inference with a custom ONNX model:

In cb_newpad

gstname= video/x-raw
features= <Gst.CapsFeatures object at 0x7f0088fbd0a8 (GstCapsFeatures at 0x7f0014037640)>
In cb_newpad

gstname= audio/x-raw
ERROR: infer_preprocess.cpp:569 cudaMemset2DAsync failed to set 0 to scaled padding area, cuda err_no:700, err_str:cudaErrorIllegalAddress
Segmentation fault (core dumped)
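
For context, my cb_newpad follows the standard deepstream_python_apps pattern, so the audio/x-raw pad that shows up just before the crash should never get linked into the pipeline. A minimal sketch of that callback (reconstructed from the sample apps; an assumption, not my exact file):

import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst  # Gst.init() is called elsewhere in the app

def cb_newpad(decodebin, decoder_src_pad, data):
    print("In cb_newpad")
    caps = decoder_src_pad.get_current_caps()
    if not caps:
        caps = decoder_src_pad.query_caps()
    gstname = caps.get_structure(0).get_name()
    features = caps.get_features(0)
    source_bin = data
    print("gstname=", gstname)
    # Only link video pads; audio/x-raw pads are ignored so they never
    # reach nvstreammux / nvinferserver.
    if gstname.find("video") != -1:
        print("features=", features)
        if features.contains("memory:NVMM"):
            # Point the source bin's ghost pad at the decoder's src pad.
            bin_ghost_pad = source_bin.get_static_pad("src")
            if not bin_ghost_pad.set_target(decoder_src_pad):
                sys.stderr.write("Failed to link decoder src pad to source bin ghost pad\n")
        else:
            sys.stderr.write("Error: Decodebin did not pick an NVIDIA decoder plugin\n")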

Below is my config file:


infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 16
  
  backend {
    inputs: [
      { name: "images" }
    ]
    outputs: [
      { name: "690" },
      { name: "742" },
      { name: "794" },
      { name: "output" }
    ]
    triton {
      model_name: "model"
      version: 1
    }
  }


  postprocess {
    #labelfile_path: "../../../../samples/trtis_model_repo/ssd_inception_v2_coco_2018_01_28/labels.txt"
    #labelfile_path: "../../../samples/trtis_model_repo/ssd_inception_v2_coco_2018_01_28/labels.txt"
    labelfile_path: "../../model_repository/PPEKit_model/labels.txt"
    classification {
      threshold: 0.51
    }
    #detection {
    #  num_detected_classes: 4
    #  custom_parse_bbox_func: "/opt/nvidia/deepstream/deepstream-6.0/lib/libnvds_infercustomparser.so"
    #  custom_parse_bbox_func: "yolov3_postprocessor.py"
    #  output-blob-names=BatchedNMS
    #  custom_parse_bbox_func: "../../customLib/libnvds_infercustomparser_tlt.so"
    #  custom_parse_bbox_func: "NvDsInferParseYolo4"
    #}
  }

  custom_lib {
    #path: "/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so"
    path: "/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream-apps/customLib/libnvdsinfer_custom_impl_Yolo.so"
    #path: "/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream-apps/project/tensorrt_demos/plugins/libyolo_layer.so"
    #path: "/opt/nvidia/deepstream/deepstream-6.0/sources/deepstream-apps/TensorRT-8.0.1.6/lib/libnvinfer.so"
  }


  preprocess {
    network_format: IMAGE_FORMAT_RGB
    #tensor_order: TENSOR_ORDER_NONE
    tensor_order: TENSOR_ORDER_NHWC
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    #normalize {
    #  scale_factor: 1.0
    #  channel_offsets: [0, 0, 0]
    #}
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  operate_on_gie_id: -1
  interval: 0
}
output_control {
  output_tensor_meta: true
}
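
For completeness, the Triton side has to agree with the tensor names and max_batch_size above. This is roughly what the config.pbtxt for the "model" entry in my repository looks like; the dims, data types, and 640x640 input are my assumptions for a typical YOLOv5 ONNX export rather than values read from the model (a standard YOLOv5 export is also NCHW, so the TENSOR_ORDER_NHWC setting above is something I am double-checking):

name: "model"
platform: "onnxruntime_onnx"
max_batch_size: 16
input [
  {
    name: "images"
    data_type: TYPE_FP32
    dims: [ 3, 640, 640 ]  # assumed NCHW 640x640; check against the ONNX export
  }
]
output [
  {
    name: "690"
    data_type: TYPE_FP32
    dims: [ -1, -1, -1, -1 ]  # per-scale YOLO head; fill in from the model
  },
  {
    name: "742"
    data_type: TYPE_FP32
    dims: [ -1, -1, -1, -1 ]
  },
  {
    name: "794"
    data_type: TYPE_FP32
    dims: [ -1, -1, -1, -1 ]
  },
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ -1, -1 ]  # concatenated detections; fill in from the model
  }
]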

Sorry for the late response. Have you managed to get the issue resolved, or do you still need support? Thanks.

Nope, I am still facing the issue. Is the YOLOv5 model supported by DeepStream 6.0?

Yes, you can refer to Improved DeepStream for YOLO models.

Or yolov5_triton.tgz - Google Drive, which is a YOLOv5 Triton sample.

Thank you, I will check on it.
