Wrong number of channels in config file

• Hardware Platform (Jetson / GPU)
Jetson AGX Xavier
• DeepStream Version
DeepStream 5.1
• JetPack Version (valid for Jetson only)
4.5
• TensorRT Version
7.1.3
• Issue Type (questions, new requirements, bugs)
Question
I’m trying to use FaceNet as a secondary classifier and have converted the network to an ONNX model, which has successfully been built into an engine file. But then I get this error message:

INFO: [Implicit Engine Info]: layers num: 2
0 INPUT kFLOAT input:0 160x160x3
1 OUTPUT kFLOAT Bottleneck/BatchNorm/batchnorm/add_1:0 128

0:00:05.192311491 16805 0x33baf920 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<face_encodings> NvDsInferContext[UID 2]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 2]: Use deserialized engine model: /home/bounty/repos/bounty_vidproc/nvidia/face_rec/facenet.onnx_b1_gpu0_fp16.engine
0:00:05.192365766 16805 0x33baf920 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<face_encodings> NvDsInferContext[UID 2]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:874> [UID = 2]: RGB/BGR input format specified but network input channels is not 3

It seems like the configuration file does not match the engine file somehow, but I have specified the input dimensions and also specified uff-input-order as NHWC.

What have I missed?

[property]
gpu-id=0
net-scale-factor=1
#uff-file=facenet.uff
onnx-file=facenet.onnx
#model-engine-file=facenet.onnx_b1_gpu0_fp16.engine
uff-input-blob-name=input:0
output-blob-names=Bottleneck/BatchNorm/batchnorm/add_1:0
batch-size=1
infer-dims=160;160;3
uff-input-order=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
# 1=Primary, 2=Secondary
process-mode=2
#input-object-min-width=160
#input-object-min-height=160
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
classifier-async-mode=1

#scaling-filter=0
#scaling-compute-hw=0
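
For reference, a quick way to check which layout the exported facenet.onnx actually declares (a minimal sketch using the onnx Python package; it only prints the declared input names and shapes):

import onnx

# Print the declared input name and shape of the exported model.
# An NHWC export will show a trailing 3, e.g. [1, 160, 160, 3];
# an NCHW export will show [1, 3, 160, 160].
model = onnx.load("facenet.onnx")
for inp in model.graph.input:
    dims = [d.dim_value if d.dim_value > 0 else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)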

• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I downloaded the FaceNet .pb file: github.com/apollo-time/facenet/raw/master/model/resnet/facenet.pb

then converted the model to ONNX:

python3 -m tf2onnx.convert --input facenet.pb --inputs input:0[1,160,160,3] --inputs-as-nchw input_1:0 --outputs Bottleneck/BatchNorm/batchnorm/add_1:0 --output facenet.onnx

and then used this config file as the secondary model:
[property]
gpu-id=0
net-scale-factor=1
#uff-file=facenet.uff
onnx-file=facenet.onnx
#model-engine-file=facenet.onnx_b1_gpu0_fp16.engine
uff-input-blob-name=input:0
output-blob-names=Bottleneck/BatchNorm/batchnorm/add_1:0
batch-size=1
infer-dims=160;160;3
uff-input-order=1
model-color-format=0

# 0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
# 1=Primary, 2=Secondary
process-mode=2
#input-object-min-width=160
#input-object-min-height=160
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
operate-on-class-ids=0
is-classifier=1
classifier-async-mode=1

#scaling-filter=0
#scaling-compute-hw=0
• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or for which sample application, and the function description.)

Currently, DeepStream only supports NCHW input layers with ONNX models. Please change your model’s input layer to NCHW.
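
For reference, a sketch of how the relevant [property] entries could look once the model itself takes NCHW input (an untested sketch, assuming the tensor names stay the same after re-export; note that infer-dims is given in CHW order for nvinfer, and the uff-input-order / uff-input-blob-name keys only apply to UFF models, so they can be dropped):

[property]
onnx-file=facenet.onnx
output-blob-names=Bottleneck/BatchNorm/batchnorm/add_1:0
batch-size=1
# CHW order; may even be omitted, since the ONNX model carries its own dims
infer-dims=3;160;160
# 0=RGB, 1=BGR, 2=GRAY
model-color-format=0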

Which model formats support NHWC? Since I convert it from a .pb model, it might be easier to change to another format than to change the model.

PyTorch normally uses NCHW, but TensorFlow may use NHWC. Thus, it is better to convert the ONNX model to the desired format.
Refer to GitHub - onnx/tensorflow-onnx: Convert TensorFlow models to ONNX; i.e. look for the option “--inputs-as-nchw”.
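
As an example (an untested sketch): the name given to --inputs-as-nchw has to match the graph’s actual input, which here is input:0; the original conversion command passed input_1:0, and with a mismatched name the transpose may not be applied. The re-export would then look like:

python3 -m tf2onnx.convert --input facenet.pb --inputs input:0[1,160,160,3] --inputs-as-nchw input:0 --outputs Bottleneck/BatchNorm/batchnorm/add_1:0 --output facenet.onnx

After rebuilding the engine, the nvinfer log should report the input as 3x160x160 instead of 160x160x3.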
