I was able to train, export, and create the TensorRT engine file for Triton Server. All tests are successful, including converting the engine and visualizing the TensorRT inferences. But it is not working (0 results after inference) when used with DeepStream/Triton Server; I think some configuration is missing in the DeepStream preprocessing.
When exporting a Classification TF1 model, TAO generated the Triton Server and DeepStream configs, but Classification TF2 does not generate them.
SGIE DeepStream Triton Config
infer_config {
  unique_id: 6
  max_batch_size: 50
  backend {
    triton {
      model_name: "vehicletypenet"
      version: -1
      grpc {
        url: "0.0.0.0:8001"
        enable_cuda_buffer_sharing: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_BGR
    tensor_order: TENSOR_ORDER_LINEAR
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 0.017507
      channel_offsets: [123.675, 116.280, 103.53]
      # mean_file: "mean_vehiclemake.ppm"
    }
  }
  postprocess {
    labelfile_path: "label_vehicletype.txt"
    classification {
      threshold: 0.30
    }
  }
}
input_control {
  process_mode: PROCESS_MODE_CLIP_OBJECTS
  operate_on_gie_id: 1
  operate_on_class_ids: [0, 3]
  interval: 0
  async_mode: true
  object_control {
    bbox_filter {
      min_width: 128
      min_height: 128
    }
  }
}
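
To debug the preprocessing suspicion: as I understand the nvinferserver normalize block, it computes tensor = scale_factor * (pixel - channel_offset) per channel after scaling the object crop to the network input size. Below is a minimal NumPy/OpenCV sketch of the tensor this config should hand to Triton; the image path is a placeholder, and the channel order in which the offsets are applied is an assumption on my part:

import numpy as np
import cv2  # assumption: OpenCV available to read/resize a test crop

# Values copied from the preprocess block above
SCALE_FACTOR = 0.017507
CHANNEL_OFFSETS = np.array([123.675, 116.280, 103.53], dtype=np.float32)

def deepstream_like_preprocess(bgr_image):
    """Approximate nvinferserver preprocessing: resize to the network input,
    subtract per-channel offsets, scale, transpose to NCHW (TENSOR_ORDER_LINEAR)."""
    resized = cv2.resize(bgr_image, (256, 256)).astype(np.float32)  # HWC
    # Assumption: offsets are applied in the frame's channel order
    # (IMAGE_FORMAT_BGR here); if the model was trained with RGB-ordered
    # means, that mismatch alone could degrade the classifier scores.
    normalized = (resized - CHANNEL_OFFSETS) * SCALE_FACTOR
    return np.transpose(normalized, (2, 0, 1))[np.newaxis]  # -> 1x3x256x256

# Usage: feed a cropped vehicle image and compare against the tensor used in
# the standalone TensorRT test that worked.
tensor = deepstream_like_preprocess(cv2.imread("vehicle_crop.jpg"))
print(tensor.shape, tensor.min(), tensor.max())
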
Triton Model Config (config.pbtxt)
name: "vehicletypenet"
platform: "tensorrt_plan"
max_batch_size: 128
default_model_filename: "efficientnet-b0.fp16x128_new.engine"
input [
  {
    name: "input:0"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [3, 256, 256]
  }
]
output [
  {
    name: "Identity:0"
    data_type: TYPE_FP32
    dims: [12]
  }
]
instance_group [
  {
    kind: KIND_GPU
    count: 1
    gpus: 0
  }
]
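
Separately, to rule out the Triton/engine side, a direct gRPC request against the server (bypassing DeepStream entirely) can confirm the plan file produces non-degenerate scores. A rough tritonclient sketch; the model name, tensor names, and dims are taken from the config above, and the random input is only a shape/liveness check, not a real vehicle crop:

import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient(url="0.0.0.0:8001")

# Names/dims copied from the Triton config above
inp = grpcclient.InferInput("input:0", [1, 3, 256, 256], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 3, 256, 256).astype(np.float32))
out = grpcclient.InferRequestedOutput("Identity:0")

result = client.infer(model_name="vehicletypenet", inputs=[inp], outputs=[out])
scores = result.as_numpy("Identity:0")
# Expect shape (1, 12); all-zero or NaN output would point at the engine
# itself rather than the DeepStream preprocessing.
print(scores.shape, scores)
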