Converted TensorFlow ARGMAX may not work properly

Hello,

My programming environment: Win10, VS2017, C++, TF 1.13, tf2ONNX 1.5.3, CUDA 9.0, cuDNN 7.6.3.30, TensorRT 6.0.1.5.

I have created a UNet using TensorFlow 1.13 and exported it to ONNX using the tf2ONNX tool. The model’s final ArgMax is computed over three channels/feature planes (one per class).
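
For reference, the export used a tf2ONNX invocation along these lines (a sketch only; the file paths, node names, and opset value are illustrative, not my exact command):

```
python -m tf2onnx.convert --input frozen_unet.pb --inputs input:0 --outputs argmax:0 --output unet.onnx --opset 9
```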

These are my observations:

  1. The neural net works correctly using TF (both Python and C++).

  2. The neural net works correctly using TRT, but ONLY if my output is the Softmax layer that precedes the ArgMax. In that case I compute the class with the maximum probability myself; the TRT Softmax layer’s output is laid out as Class1, Class2, Class3, Class1, Class2, Class3, … (CHW). A sketch of this post-processing follows after the list.

  3. The neural network doesn’t work correctly using TRT if I use the version of my TF model with ArgMax as the last layer. After conversion, the TRT ArgMax output is always “empty” (all of its elements are zero), even though the model converted to the TRT format without any errors.

  4. There is one more problem that may not be related to this issue. The version of the model where the ArgMax type is tf.int32 (as opposed to the default tf.int64) cannot be converted at all. The TRT converter reports an error saying: “error trt_dtype == nvinfer1::DataType::kHALF && cast_dtype == ::ONNX_NAMESPACE::TensorProto::FLOAT”.
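
Regarding point 2, this is a minimal sketch of the post-processing I mean, assuming the interleaved per-pixel class layout described above (the function name and signature are illustrative, not my actual code):

```cpp
#include <cstdint>
#include <vector>

// Sketch: recover per-pixel class labels from the TRT Softmax output,
// assuming the classes are adjacent for each pixel (Class1, Class2, Class3, ...).
std::vector<int32_t> argmaxFromSoftmax(const float* softmax,
                                       int numPixels, int numClasses)
{
    std::vector<int32_t> labels(numPixels);
    for (int p = 0; p < numPixels; ++p) {
        const float* probs = softmax + p * numClasses; // classes for pixel p
        int32_t best = 0;
        for (int c = 1; c < numClasses; ++c) {
            if (probs[c] > probs[best]) {
                best = c;
            }
        }
        labels[p] = best;
    }
    return labels;
}
```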

Has anyone else encountered these problems?

Thanks.

Hi everyone,

I have found what the problem was with ArgMax. Since I had just started using TensorRT, I assumed it would convert every layer’s type to FP32 during model conversion. When I checked the TensorRT model’s ArgMax layer type, it turned out to be INT32 (TensorFlow’s ArgMax type is INT64). Once I read the output buffer through an INT32 pointer, everything worked as intended.
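
In case it helps anyone else, the fix looks something like this (the binding index and host buffer are placeholders for your own setup, not my exact code):

```cpp
#include <cassert>
#include <cstdint>
#include <NvInfer.h>

// Sketch: verify that the ArgMax output binding really is INT32
// before choosing the pointer type used to read the host buffer.
void readArgmaxOutput(const nvinfer1::ICudaEngine& engine,
                      const void* hostOutput, int outIndex, int numPixels)
{
    // The converted ArgMax outputs INT32, not FP32 (and not TF's INT64).
    assert(engine.getBindingDataType(outIndex) == nvinfer1::DataType::kINT32);

    const int32_t* labels = static_cast<const int32_t*>(hostOutput);
    for (int p = 0; p < numPixels; ++p) {
        const int32_t classId = labels[p]; // per-pixel class index
        (void)classId; // use the label here
    }
}
```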

I still don’t know why the problem in point 4) happens, though.

Hi,

Could you please share your script and model file so we can better help?

Thanks