trtexec throws an error when trying to convert a model with input type int8 or fp16

Description

I am converting the DeepSORT ReID model to a TensorRT engine. The steps are as follows:

  1. First, convert the .pb model provided with the DeepSORT repository to a .uff file using the convert.py script shipped inside DeepStream (I copied this file out of DeepStream; I am not running anything with DeepStream itself).
  2. Next, use the trtexec command-line tool to convert the .uff file to a TensorRT engine.

I have been able to perform the above successfully and use the DeepSORT ReID model with TensorRT and Triton server.
With the default trtexec options, these conversion steps produce an engine with input type FP32.
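For reference, the baseline FP32 build in step 2 looks roughly like this (a sketch; the .uff filename here is an assumption, and without --inputIOFormats the input defaults to fp32:chw):

```shell
# Baseline FP32 build -- same flags as my fp16/int8 attempts further
# down, just without --inputIOFormats (filename is illustrative):
trtexec --uff=/etc/models/person-reid/deepsort/mars-small128/mars-small128.uff \
        --uffInput=images,128,64,3 \
        --uffNHWC \
        --batch=480 \
        --output=features \
        --saveEngine=model_fp32.plan
```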

Now I want to convert the model with input type int8/fp16 (since uint8 input is not supported by TensorRT yet). Since trtexec allows changing the input data type with the --inputIOFormats argument, I tried the following commands.

trtexec --uff=/etc/models/person-reid/deepsort/mars-small128/mars-small128-fp16.uff --verbose --uffInput=images,128,64,3 --uffNHWC --batch=480 --output=features --inputIOFormats=fp16:hwc --saveEngine=model_inpfp16.plan
trtexec --uff=/etc/models/person-reid/deepsort/mars-small128/mars-small128-int8.uff --verbose --uffInput=images,128,64,3 --uffNHWC --batch=480 --output=features --inputIOFormats=int8:hwc --saveEngine=model_inpint8.plan

This has not been successful; I am seeing the following errors.

[E] Error[3]: images: has DataType Half unsupported by tensor's allowed TensorFormats.
[11/29/2022-04:07:55] [E] Error[4]: [network.cpp::validate::2635] Error Code 4: Internal Error (DataType does not match TensorFormats.)
[11/29/2022-04:07:55] [E] Error[2]: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
[11/29/2022-04:07:55] [E] Engine could not be created from network
[11/29/2022-04:07:55] [E] Building engine failed
[11/29/2022-04:07:55] [E] Failed to create engine from model.
[11/29/2022-04:07:55] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8201] # trtexec --uff=/etc/models/person-reid/deepsort/mars-small128/mars-small128-fp16.uff --verbose --uffInput=images,128,64,3 --uffNHWC --batch=480 --output=features --inputIOFormats=fp16:hwc --saveEngine=model_inpfp16.plan
[E] Error[3]: images: has DataType Int8 unsupported by tensor's allowed TensorFormats.
[11/29/2022-04:02:07] [E] Error[4]: [network.cpp::validate::2635] Error Code 4: Internal Error (DataType does not match TensorFormats.)
[11/29/2022-04:02:07] [E] Error[2]: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )

I have tried with a .uff file which has input type fp32, and also with input types int8 and int16 (by specifying the input node's data type in step 1), but got the same results.

Can someone help me understand the cause of this error, and whether it is possible to convert a model with int8/fp16 input data types using trtexec? Thanks.

Hi,

We recommend that you try the latest TensorRT version, 8.5.1, and let us know if you still face this issue.
The UFF and Caffe parsers have been deprecated since TensorRT 7, so we request that you use the ONNX parser instead.
Please export your model to ONNX format and try converting that to a TensorRT engine.
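One possible path is the following (a sketch, not a verified recipe: it assumes tf2onnx is installed and that the frozen graph's input/output tensors are named images:0 and features:0; please verify the actual tensor names, e.g. with Netron, before running):

```shell
# Convert the frozen TensorFlow graph to ONNX with tf2onnx
# (the tensor names images:0 / features:0 are assumptions --
# check them against your .pb file first):
python -m tf2onnx.convert \
    --graphdef mars-small128.pb \
    --inputs images:0 \
    --outputs features:0 \
    --output mars-small128.onnx

# Build a TensorRT engine from the ONNX model; the I/O format
# request is passed the same way as in the UFF workflow:
trtexec --onnx=mars-small128.onnx \
        --fp16 \
        --inputIOFormats=fp16:chw \
        --saveEngine=model_inpfp16.plan
```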

Thank you.

Any luck with this? I got a similar error with ONNX on TRT 8.5.