Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1.1
• TensorRT Version: NGC docker
• NVIDIA GPU Driver Version (valid for GPU only): 525
• Issue Type (questions, new requirements, bugs): Question
I’m trying to run a secondary GIE on Triton Server and connect to it from DeepStream with nvinferserver, but I have not been able to find a working example online. I already have YOLOv7 working as the primary GIE.
I have come up with the config file below, but I’m getting an error. I’d appreciate any help.
CONFIG FILE
infer_config {
  unique_id: 3
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    triton {
      model_name: "vehicle-class"
      version: -1
      grpc {
        url: "localhost:8001"
        enable_cuda_buffer_sharing: false
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 1
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 1.0
    }
  }
  postprocess {
    labelfile_path: "/app/model/vehicle-class/labels.txt"
  }
  extra {
    copy_input_to_host_buffers: false
  }
}
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  operate_on_gie_id: 1
  operate_on_class_ids: [2, 7]
  async_mode: true
}
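Rereading the warning, it says asynchronous mode applies to secondary classifiers only, so my guess is the plugin is not recognizing this model as a classifier. If I understand the nvinferserver config schema correctly, a classifier SGIE needs a classification block inside postprocess, roughly like the sketch below (the 0.5 threshold is just a placeholder I picked, not something from my working setup):

postprocess {
  labelfile_path: "/app/model/vehicle-class/labels.txt"
  classification {
    threshold: 0.5
  }
}

Is that what’s missing, or should I also set async_mode: false in input_control?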
ERROR LOG
2023-02-28 11:00:10.499 | SUCCESS | Pipeline started
2023-02-28 11:00:10.500 | WARNING | Warning: gst-library-error-quark: NvInferServer asynchronous mode is applicable for secondaryclassifiers only. Turning off asynchronous mode (5) <<-->> gstnvinferserver_impl.cpp(390): validatePluginConfig (): /GstPipeline:pipeline0/GstBin:infer-1/GstNvInferServer:vehicle-class-infer-1
2023-02-28 11:00:10.500 | ERROR | Error: gst-resource-error-quark: Failed to initialize InferTrtIsContext (1) <<-->> gstnvinferserver_impl.cpp(547): start (): /GstPipeline:pipeline0/GstBin:infer-1/GstNvInferServer:vehicle-class-infer-1:
Config file path: /app/gies/vehicle-class.txt
2023-02-28 11:00:10.500 | WARNING | Pipeline stopped