When converting an ONNX model to a TensorRT engine, the build fails with the following error:
Building TensorRT engine, FP16 available:0
Max batch size: 32
Max workspace size: 1024 MiB
[BUG] Assertion failed: format.hasValue() && "format not recognized or not supported"
../builder/cudnnPluginV2Builder.cpp:102
Aborting...
[ERROR] ../builder/cudnnPluginV2Builder.cpp (102) - Assertion Error in deduceTensorFormat: 0 (format.hasValue() && "format not recognized or not supported")
terminate called after throwing an instance of 'std::runtime_error'
what(): Failed to create object
Aborted (core dumped)
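This assertion fires inside the builder when it cannot deduce a tensor format for some layer. One way to localize the offending layer is to rebuild with trtexec (the CLI tool shipped with TensorRT 7) in verbose mode, so each layer is logged before the builder aborts. This is a hedged sketch, not the command that produced the log above; "model.onnx" and the workspace size are placeholders:

```shell
# Rebuild with verbose logging; the last layer printed before the
# assertion is usually the one whose format cannot be deduced.
# model.onnx is a placeholder for the actual exported model file.
trtexec --onnx=model.onnx \
        --explicitBatch \
        --workspace=1024 \
        --verbose
```

The per-layer output also makes it easier to check whether the failing node corresponds to an op that the TensorRT 7 ONNX parser only partially supports at opset 11.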
Environment
TensorRT Version: 7.0.0
GPU Type: Quadro P4000
Nvidia Driver Version: 418.67
CUDA Version: 10.0
CUDNN Version: 7.6.3.30
Operating System + Version: Ubuntu 18.04
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.6.0