DeepStream 6.0 network input order issue (NHWC)

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Nano 2GB
• DeepStream Version 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type( questions, new requirements, bugs) Question/Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)

Using the new nvinfer configuration flag “network-input-order” introduced in DeepStream 6.0 isn’t giving the expected result. I have an ONNX file whose input dimensions are 128x128x1 (grayscale). This is an NHWC input format, so, as one would expect, I set “network-input-order” to 1, which is NHWC. However, DeepStream throws an error:

0:02:58.926432640 15050   0x559293b8d0 INFO                 nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<inference> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:1947> [UID = 1]: serialize cuda engine to file: /home/james/main/models/Segment/model.onnx_b1_gpu0_fp32.engine successfully
INFO: [Implicit Engine Info]: layers num: 2
0   INPUT  kFLOAT img             128x128x1       
1   OUTPUT kFLOAT conv2d_23       128x128x1       

0:02:59.573292270 15050   0x559293b8d0 ERROR                nvinfer gstnvinfer.cpp:632:gst_nvinfer_logger:<inference> NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:973> [UID = 1]: GRAY input format specified but network input channels is not 1.
ERROR: Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED

The same error is thrown if I set “network-input-order” to 0. This error gives me the impression that DeepStream is still trying to use NCHW even when the order is set to NHWC, hence why it says the input channels are not 1 (it reads the leading 128 first). I have also tried setting “input-tensor-meta” to 0 (as the docs say that network-input-order is ignored if this flag is 1). However, this gives the warning: Unknown or legacy key specified “input-tensor-meta” for group [property]
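To illustrate the suspicion above, here is a small sketch (my own illustration, not DeepStream source code) of how the channel count would be derived from the engine’s reported input dims under each input order. With dims 128x128x1, an NCHW interpretation reads 128 as the channel count, which matches the “network input channels is not 1” error:

```python
# Sketch only (assumption, not nvinfer internals): how the channel count
# falls out of the input dims under each interpretation of the layout.
def channels(dims, order):
    """dims: 3-tuple as reported by TensorRT, e.g. (128, 128, 1).
    order: "NCHW" (channels first) or "NHWC" (channels last)."""
    return dims[0] if order == "NCHW" else dims[-1]

dims = (128, 128, 1)  # the "img" input from the engine info above
print(channels(dims, "NCHW"))  # 128 -> triggers "input channels is not 1"
print(channels(dims, "NHWC"))  # 1   -> what GRAY input format expects
```

This is consistent with the error text: a GRAY model requires 1 channel, but an NCHW reading of 128x128x1 yields 128.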

Further, I have tried creating the engine file directly with trtexec. The engine is created successfully, but loading it via the nvinfer config yields the same error. I’ve tried creating the engine with these two commands:

/usr/src/tensorrt/bin/trtexec --onnx=/home/james/main/models/Segment/model.onnx --explicitBatch --saveEngine=/home/james/test.trt

and…

/usr/src/tensorrt/bin/trtexec --onnx=/home/james/main/models/Segment/model.onnx --shapes=img:1x128x128x1 --explicitBatch --saveEngine=/home/james/test.trt

An example of my nvinfer configuration:

[property]
gpu-id=0
net-scale-factor=1.0
model-color-format=2                 # Grayscale
#onnx-file=model.onnx                # The ONNX file for conversion
model-engine-file=test.trt           # The successfully converted engine file
input-tensor-meta=0                  # Disable the input tensor meta (the same error is thrown even if I comment this flag out)
network-input-order=1                # Input as NHWC
segmentation-output-order=1          # Output as NHWC (it gives the same error even if I comment this flag out)
gie-unique-id=1
network-type=2                       # Segmentation
output-blob-names=conv2d_23
segmentation-threshold=0.0

Am I missing something in getting my ONNX file to run correctly with the NHWC input order? Please advise.
Thank you,
James

Please set “infer-dims=1;128;128” or “infer-dims=128;128;1” in the nvinfer configuration and try again.
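For reference, a minimal sketch of the [property] group with the suggested key added, using the paths and layer names from the post above (untested; infer-dims=128;128;1 assumes the NHWC reading of the model):

```ini
[property]
gpu-id=0
net-scale-factor=1.0
model-color-format=2                 # Grayscale
model-engine-file=test.trt
infer-dims=128;128;1                 # explicit dims, channels last (per the suggestion above)
network-input-order=1                # Input as NHWC
segmentation-output-order=1          # Output as NHWC
gie-unique-id=1
network-type=2                       # Segmentation
output-blob-names=conv2d_23
segmentation-threshold=0.0
```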

Thx!

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.