I am trying to convert an ONNX model exported from TensorFlow to TensorRT. However, it throws the following error:
[ERROR] Min__1082: cannot use precision Int32 with weights of type Float
[ERROR] Layer Min__1082 failed validation
[ERROR] Network validation failed.
terminate called after throwing an instance of 'std::runtime_error'
what(): Failed to create object
Aborted (core dumped)
For the particular node, I tried setting layer->setPrecision(nvinfer1::DataType::kFLOAT), but it still does not work. And yes, the inputs to this op are floats.
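For context, here is roughly how I am setting the precision (a minimal sketch against the TensorRT 7 C++ API; the helper name forceMinLayerToFloat is mine, and the kSTRICT_TYPES flag and setOutputType calls are additions I tried since setPrecision alone is only a hint the builder may ignore):

```cpp
#include <NvInfer.h>
#include <cstring>

// Sketch: pin the failing Min node to FP32. Assumes `network` and `config`
// come from the usual IBuilder::createNetworkV2 / createBuilderConfig calls.
void forceMinLayerToFloat(nvinfer1::INetworkDefinition* network,
                          nvinfer1::IBuilderConfig* config)
{
    // Without strict types, the builder is free to ignore per-layer precisions.
    config->setFlag(nvinfer1::BuilderFlag::kSTRICT_TYPES);

    for (int i = 0; i < network->getNbLayers(); ++i)
    {
        nvinfer1::ILayer* layer = network->getLayer(i);
        if (std::strcmp(layer->getName(), "Min__1082") == 0)  // node from the log
        {
            layer->setPrecision(nvinfer1::DataType::kFLOAT);
            // Pin the output type too, otherwise downstream layers
            // may still be fed Int32.
            for (int j = 0; j < layer->getNbOutputs(); ++j)
                layer->setOutputType(j, nvinfer1::DataType::kFLOAT);
        }
    }
}
```

Even with the strict-types flag and the output type pinned, the validation error above persists.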
It's a private model, so I cannot share it.
TensorRT Version: 7.0
GPU Type :
Nvidia Driver Version: 418.67
CUDA Version: 10.0
CUDNN Version: 7.4.2.24-1+cuda10.0
Operating System + Version: Ubuntu 18.04
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.6.0
Domain:
Model version: 0