My model accepts input in NCHW format, but according to the documentation for tensor_order:

enum TensorOrder with order types: TENSOR_ORDER_NONE, TENSOR_ORDER_LINEAR, TENSOR_ORDER_NHWC. It can deduce the value from backend layers info if set to TENSOR_ORDER_NONE.

How can I feed data to my model as NCHW? I am running a custom model on Triton Inference Server.
pgie_config.txt:
infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 1
  backend {
    trt_is {
      model_name: "yolov5_custom_tensorrt"
      version: -1
      model_repo {
        root: "/src/models_repository"
        log_level: 2
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NCHW  # <------ INVALID
    maintain_aspect_ratio: 1
    normalize {
      scale_factor: 0.00392156862745098
      channel_offsets: [0, 0, 0]
    }
  }
  postprocess {
    labelfile_path: "/src/models_repository/yolov5_custom/labels.txt"
    other {}
  }
  extra {
    copy_input_to_host_buffers: true
  }
  custom_lib {
    path: "/opt/nvidia/deepstream/deepstream/lib/libnvds_infercustomparser.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}
model repository config (config.pbtxt):
name: "yolov5_custom_tensorrt"
platform: "tensorrt_plan"
max_batch_size: 1
input [
  {
    name: "data"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 640, 640 ]
  }
]
output [
  {
    name: "prob"
    data_type: TYPE_FP32
    dims: [ 6001, 1, 1 ]
    reshape { shape: [ 6001, 1, 1 ] }
  }
]
# Specify GPU instance.
instance_group {
  kind: KIND_GPU
  count: 1
  gpus: 0
}
version_policy: { all { } }