Modify input format for custom model implementation

I am trying to run a custom YOLOv3 model from a .onnx file. The network's input dimensions are in HWC order. When I try to build the engine, I get an error stating that the network's RGB input does not have 3 channels. I noticed that I can set the infer-dims property to the required input shape, but the documentation suggests the channel dimension still has to be listed first. Is there a way to change the input format in the PGIE config so that it accepts my model's HWC layout?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details needed to reproduce the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and a description of the function.)

• Hardware Platform: dGPU (testing will be performed on an RTX 3090)
• DeepStream Version: 6.0
• Issue Type: question

15.2.2 Input Order
1. network-input-order= // 0: NCHW, 1: NHWC
2. infer-dims= // if network-input-order=1, i.e. NHWC, infer-dims must be specified; otherwise nvinfer cannot detect the input dims automatically
3. model-color-format= // 0: RGB, 1: BGR, 2: GRAY
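
For reference, a minimal sketch of the relevant [property] section of the PGIE (nvinfer) config for an NHWC model is shown below. The file name and the 416x416x3 dimensions are placeholders for illustration; match them to your own model, and note that when network-input-order=1 the infer-dims values follow the network's own order.

[property]
# placeholder model file name
onnx-file=yolov3_nhwc.onnx
# 0: NCHW, 1: NHWC
network-input-order=1
# dimensions in the network's own order; example assumes a 416x416x3 NHWC input
infer-dims=416;416;3
# 0: RGB
model-color-format=0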

I have solved this issue by converting the model's input from NHWC to NCHW order with an ONNX converter tool.
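
For anyone who would rather patch the model directly, below is a rough sketch using the onnx Python package (not necessarily the exact tool I used) that prepends a Transpose node so the exported model accepts NCHW input while the original NHWC graph is left untouched. It assumes the first graph input is a static, float [N, H, W, C] image tensor; the file names are placeholders.

import onnx
from onnx import helper, TensorProto

# Placeholder paths -- replace with your own files.
model = onnx.load("yolov3_nhwc.onnx")
graph = model.graph

# Assumes the first graph input is the image tensor with a static [N, H, W, C] shape.
image_input = graph.input[0]
n, h, w, c = [d.dim_value for d in image_input.type.tensor_type.shape.dim]

# Expose a new NCHW input and transpose it back to NHWC,
# so the rest of the graph does not need to change.
orig_name = image_input.name
nchw_name = orig_name + "_nchw"
nchw_input = helper.make_tensor_value_info(nchw_name, TensorProto.FLOAT, [n, c, h, w])
to_nhwc = helper.make_node(
    "Transpose",
    inputs=[nchw_name],
    outputs=[orig_name],
    perm=[0, 2, 3, 1],
    name="nchw_to_nhwc",
)

graph.input.remove(image_input)
graph.input.insert(0, nchw_input)
graph.node.insert(0, to_nhwc)

onnx.checker.check_model(model)
onnx.save(model, "yolov3_nchw.onnx")

The converted model can then be used with the default network-input-order=0 and infer-dims in C;H;W order.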

