nvinfer input format issue

I am currently using the nvinfer plugin to perform image classification and ran into a strange error regarding the input format.

Setting model-color-format=0 (RGB) gives:

0:00:04.980314545 20682     0x35f94b90 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<dr> NvDsInferContext[UID 418]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:875> [UID = 418]: RGB/BGR input format specified but network input channels is not 3

Changing it to model-color-format=2 (GRAY) gives:

0:00:05.401674841 21336      0x684f190 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<dr> NvDsInferContext[UID 418]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:884> [UID = 418]: GRAY input format specified but network input channels is not 1.

The model input is as follows:

Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            [(None, 299, 299, 3)] 0                                           
__________________________________________________________________________________________________

Environment

TensorRT Version : 7.1.3.0
JetPack Version : 4.5
CUDA Version : 10.2.89
CUDNN Version : 8.0.0.180

Based on this thread, the ONNX model has to be exported as NCHW instead of NHWC, since TensorRT expects channel-first input.
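That would explain both earlier errors: nvinfer takes the channel count from the first non-batch dimension of the network input, so an NHWC export (1, 299, 299, 3) appears to have 299 channels, matching neither RGB (3) nor GRAY (1). A minimal Python sketch of that validation logic (illustrative only, not the actual DeepStream source):

```python
def preprocess_check(dims, color_format):
    """Sketch of nvinfer's preparePreprocess channel validation.

    dims: network input shape without the batch dimension; TensorRT
    (and hence nvinfer) treats the first entry as the channel count.
    """
    channels = dims[0]
    if color_format in ("RGB", "BGR") and channels != 3:
        return "RGB/BGR input format specified but network input channels is not 3"
    if color_format == "GRAY" and channels != 1:
        return "GRAY input format specified but network input channels is not 1"
    return "OK"

# NHWC export: the non-batch dims are (299, 299, 3), so the "channel"
# slot nvinfer sees is 299 and both settings are rejected.
print(preprocess_check((299, 299, 3), "RGB"))
print(preprocess_check((299, 299, 3), "GRAY"))
# NCHW export: (3, 299, 299) passes the RGB check.
print(preprocess_check((3, 299, 299), "RGB"))
```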

I ran:

python3 -m tf2onnx.convert --input ./dr_model.h5 --inputs input_1:0[1,299,299,3] --inputs-as-nchw input_1:0 --outputs sequential_1/dense_2/sigmoid:0 --opset 13 --fold_const --output dr_test.onnx

Running into the following error:

AssertionError: sequential_1/dense_2/sigmoid is not in graph

The last layer of the model, from Netron and model.summary(), is the following:

mixed10 (Concatenate)           (None, 8, 8, 2048)   0           activation_86[0][0]              
                                                                 mixed9_1[0][0]                   
                                                                 concatenate_2[0][0]              
                                                                 activation_94[0][0]              
__________________________________________________________________________________________________
sequential (Sequential)         (None, 1)            2099201     mixed10[0][0]

I'm not exactly sure what the output node name should be here.

Hi,

You can visualize the ONNX model on the page below; it can help you find the correct output node name:

https://netron.app/

Thanks.

I managed to convert the ONNX model by saving the Keras model in SavedModel format first before running tf2onnx.

0:00:05.620264989 22453     0x18a67390 ERROR                nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger:<dr> NvDsInferContext[UID 418]: Error in NvDsInferContextImpl::preparePreprocess() <nvdsinfer_context_impl.cpp:875> [UID = 418]: RGB/BGR input format specified but network input channels is not 3
ERROR: Infer Context prepare preprocessing resource failed., nvinfer error:NVDSINFER_CONFIG_FAILED

However, the input format error is still there, even though --inputs-as-nchw was used during conversion.

Hi,

Since the DeepStream SDK uses TensorRT as its inference backend, could you try converting the ONNX model with trtexec first?

$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model]

If the issue still occurs, could you share the ONNX file with us?
Thanks.

@AastaLLL

Thanks for your help. I’m working with @kurkur14 on this.

Running the trtexec command above gives:

----------------------------------------------------------------
Input filename:   ./model_name_here.onnx
ONNX IR version:  0.0.7
Opset version:    13
Producer name:    tf2onnx
Producer version: 1.8.5
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[07/06/2021-15:20:20] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of 'std::out_of_range'
  what():  Attribute not found: axes
Aborted (core dumped)

Netron shows that the input shape of that .onnx file is 1x3x299x299.

nvinfer fails with:

ERROR: ModelImporter.cpp:472 In function importModel:
[4] Assertion failed: !_importer_ctx.network()->hasImplicitBatchDimension() && "This version of the ONNX parser only supports TensorRT INetworkDefinitions with an explicit batch dimension. Please ensure the network was created using the EXPLICIT_BATCH NetworkDefinitionCreationFlag."
ERROR: Failed to parse onnx file
ERROR: failed to build network since parsing model errors.
ERROR: failed to build network.

I will ask whether sharing the model is possible.

Hi,

Could you check if adding the --explicitBatch flag helps?
If not, it would help if we could inspect the model directly.

Thanks.

Thanks, @AastaLLL

I’ve been doing DevOps work recently so haven’t had much time, but I will try this and get back to you. I have permission to share one of the models if need be.