Preprocess CHANNEL_ORDER_NCHW

My model accepts input in NCHW form, but according to the documentation for tensor_order:

enum TensorOrder with order types: TENSOR_ORDER_NONE, TENSOR_ORDER_LINEAR, TENSOR_ORDER_NHWC. It can deduce the value from backend layers info if set to TENSOR_ORDER_NONE

How can I feed the data as NCHW to my model? I am running a custom model in Triton Server.
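For context, the difference between the two layouts: NHWC stores pixels interleaved (channel-last), while NCHW stores each channel as a separate plane. A quick numpy sketch (illustrative only, not DeepStream code) of converting one to the other:

```python
import numpy as np

# A dummy 640x640 RGB frame in HWC (interleaved) layout,
# as it would typically come out of a video buffer.
frame_hwc = np.zeros((640, 640, 3), dtype=np.float32)

# NCHW expects planar channels: transpose HWC -> CHW,
# then add the batch dimension in front.
frame_chw = np.transpose(frame_hwc, (2, 0, 1))
batch_nchw = frame_chw[np.newaxis, ...]

print(batch_nchw.shape)  # -> (1, 3, 640, 640)
```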

pgie_config.txt :

infer_config {
  unique_id: 5
  gpu_ids: [0]
  max_batch_size: 4
  backend {
    trt_is {
      model_name: "yolov5n-onnx"
      version: -1
      model_repo {
        root: "./"
        log_level: 2
        tf_gpu_memory_fraction: 0.4
        tf_disable_soft_placement: 0
      }
    }
  }

  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NCHW <---- INVALID
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "yolov5n-onnx/labels.txt"
    other {}
  }

  extra {
    copy_input_to_host_buffers: false
  }

  custom_lib {
    path: "libnvdsinfer_custom_impl_Yolo.so"
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
  interval: 0
}
output_control {
  output_tensor_meta: true
}

model repo config:

name: "yolov5n-onnx"
backend: "onnxruntime"

input [
  {
    name: "images"
    data_type: TYPE_FP32
    # format: FORMAT_NCHW
    dims: [3,640,640]
  }
]

output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [1,25200,85]
  }
]

instance_group [{ kind: KIND_CPU }]

As the diagram in the Gst-nvinferserver — DeepStream 6.3 Release documentation shows, in DeepStream you just need to connect the nvinferserver plugin to nvstreammux; the nvinferserver plugin will receive the NV12/RGBA buffers from nvstreammux and convert them to NCHW data for your model.
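That connection can be sketched as a gst-launch pipeline (hypothetical URI and element properties; adjust the source, resolution, and config file path to your setup):

```
gst-launch-1.0 \
  uridecodebin uri=file:///path/to/video.mp4 ! m.sink_0 \
  nvstreammux name=m batch-size=4 width=1920 height=1080 ! \
  nvinferserver config-file-path=pgie_config.txt ! \
  nvdsosd ! nveglglessink
```

nvstreammux batches the decoded frames into NV12/RGBA buffers, and nvinferserver performs the preprocessing (scaling, normalization, tensor-order conversion) described in its config file before sending the tensor to Triton.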

I tried that, but I get this error:

0:00:06.569915679   230      0x3030810 ERROR          nvinferserver gstnvinferserver.cpp:362:gst_nvinfer_server_logger:<primary-inference> nvinferserver[UID 5]: Error in fixateInferenceInfo() <infer_cuda_context.cpp:128> [UID = 5]: InferContext(uid:5) cannot figure out input tensor order, please specify in config file(preprocess.)

Can you please provide the info below? We need it to check whether the issue is caused by an old version:

• Hardware Platform (Jetson / GPU) GPU 2XNvidia T4
• DeepStream Version nvcr.io/nvidia/deepstream:6.0-triton
• TensorRT Version TensorRT 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only) 470.103.01

Here it is:

• Hardware Platform: GPU Nvidia Tesla T4
• DeepStream Version: nvcr.io/nvidia/deepstream:6.0-triton
• TensorRT Version: TensorRT 8.0.1
• NVIDIA GPU Driver Version: 470.82.01
• CUDA Version: 11.4

The log already shows the error. You need to set tensor_order: TENSOR_ORDER_LINEAR for channel orders such as NCHW, CHW, DCHW.
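Applied to the config posted above, the preprocess block would become the following (only tensor_order changes; everything else is kept from the original):

```
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_LINEAR  # covers planar orders such as NCHW/CHW
    maintain_aspect_ratio: 0
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }
```

With TENSOR_ORDER_LINEAR, nvinferserver treats the input as a flat planar layout and fixateInferenceInfo() no longer needs to deduce the order from the backend.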

Please see the valid values in the Gst-nvinferserver — DeepStream 6.1.1 Release documentation.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.