Description
I am creating a simple network with INetworkDefinition that consists of a single IPluginV2DynamicExt plugin. The network has two inputs: one is a 4-dimensional (NHWC) FP32 tensor, and the other is a 2-dimensional FP32 tensor. In the plugin I implement supportsFormatCombination as follows:
bool ProjectiveTransformPlugin::supportsFormatCombination(
    int pos, const nvinfer1::PluginTensorDesc* inOut,
    int nbInputs, int nbOutputs) {
  if ((pos == 0) || (pos == 2)) {
    // First input and the output: accept HWC or HWC8 layouts.
    auto format = inOut[pos].format;
    return (format == nvinfer1::TensorFormat::kHWC) ||
           (format == nvinfer1::TensorFormat::kHWC8);
  } else if (pos == 1) {
    // Second input: FP32, any format.
    return inOut[pos].type == nvinfer1::DataType::kFLOAT;
  }
  return true;
}
When I try to build the network I get the following error:
(Unnamed Layer* 0) [PluginV2DynamicExt]: could not find any supported formats consistent with input/output data types
../builder/cudnnBuilderGraphNodes.cpp (879) - Misc Error in reportPluginError: 0 (could not find any supported formats consistent with input/output data types)
During debugging I found that TensorRT tries only the kLINEAR and kCHW32 formats, even though the documentation states that kHWC is supported. I observed this behaviour in both version 7.2.2 and 7.2.3.
Environment
TensorRT Version: 7.2.2/7.2.3
GPU Type: GeForce RTX 2080
Nvidia Driver Version: 460.80
CUDA Version: 11.1
CUDNN Version: 8.0.5
Operating System + Version: Ubuntu 18.04