TensorRT plugin got zero-sized dimensions

Description

I am trying to port Attention OCR to a Jetson Nano. I have successfully trained the model and converted it to ONNX. I removed some unsupported data types and layers and ended up with only one unsupported layer: OneHot. Now I am trying to create a C++ OneHot plugin, but when I run trtexec to convert the ONNX model to a TRT engine, invalid dimensions are passed to the plugin:

Expected inputs:

  • input 0: indices (input tensor), kINT32
  • input 1: depth = scalar, kINT32
  • input 2: values [off_value, on_value], kFLOAT

Expected input dimensions:

  • input 0: [??] - depends on the previous layer
  • input 1: [1]? - scalar (the depth of the OneHot, i.e. the size of the output's extra dimension)
  • input 2: [2] - two values in a one-dimensional tensor

(see https://github.com/onnx/onnx/blob/master/docs/Operators.md#OneHot)
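
For clarity, this is what the OneHot operator is supposed to compute for the common axis = -1 case (a plain C++ illustration of the expected behaviour, not the plugin code):

    #include <vector>

    // Illustration only: indices of shape [N] produce an output of shape
    // [N, depth], filled with off_value except for one on_value per row.
    std::vector<float> oneHot(const std::vector<int>& indices, int depth,
                              float off_value, float on_value)
    {
        std::vector<float> out(indices.size() * depth, off_value);
        for (size_t i = 0; i < indices.size(); ++i)
        {
            const int idx = indices[i];
            if (idx >= 0 && idx < depth)
            {
                out[i * depth + idx] = on_value;
            }
        }
        return out;
    }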

Part of C++ code:

void OneHotLayer::configurePlugin(const Dims* inputDims, int nbInputs...

Contents of the parameters at the time of the call:
    nbInputs = 3
    inputDims[0].nbDims: 0
    inputDims[1].nbDims: 0
    inputDims[2].nbDims: 0

The plugin cannot be configured with null inputs. TensorRT behaves the same way in the enqueue() call, and obviously any attempt to access the values in the inputs leads to a SIGSEGV.
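
For illustration, a defensive guard in configurePlugin() (sketched below against the IPluginV2Ext variant's signature, not the actual plugin source) avoids the crash, but obviously cannot configure anything useful:

    #include <iostream>
    #include <NvInfer.h>

    using namespace nvinfer1;

    // Sketch only: bail out instead of dereferencing dimension data that is
    // not there. This is what currently happens - every input reports
    // nbDims == 0.
    void OneHotLayer::configurePlugin(const Dims* inputDims, int nbInputs,
                                      const Dims* outputDims, int nbOutputs,
                                      const DataType* inputTypes, const DataType* outputTypes,
                                      const bool* inputIsBroadcast, const bool* outputIsBroadcast,
                                      PluginFormat floatFormat, int maxBatchSize)
    {
        for (int i = 0; i < nbInputs; ++i)
        {
            if (inputDims[i].nbDims == 0)
            {
                std::cerr << "OneHot: input " << i << " has no dimensions" << std::endl;
                return;                       // nothing to size the output from
            }
        }
        // ... normal configuration (record shapes, data types) would go here ...
    }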

Is there a proper way to run trtexec so that the plugin receives valid dimensions?

Environment

TensorRT Version: 7.1.3
GPU Type: Jetson Nano
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04.5
TensorFlow Version (if applicable): 1.15 (model trained on a different machine)

Relevant Files

https://www.hobrasoft.cz/files/attentionocr.tgz

drwxr-xr-x root/root         0 2021-09-01 09:09 AttentionOcr/
drwxr-xr-x root/root         0 2021-09-01 09:09 AttentionOcr/lib/
-rw-r--r-- root/root       648 2021-08-30 19:58 AttentionOcr/lib/Makefile
-rw-r--r-- root/root      3389 2021-08-31 12:09 AttentionOcr/lib/onehot.h-1
-rw-r--r-- root/root     12518 2021-08-31 12:09 AttentionOcr/lib/onehot.cpp-1
-rw-r--r-- root/root     11710 2021-08-31 16:36 AttentionOcr/lib/onehot.cpp
-rw-r--r-- root/root      3137 2021-08-31 12:25 AttentionOcr/lib/onehot.h
-rwxr-xr-x root/root       205 2021-09-01 09:09 AttentionOcr/konverze-onnx.sh
-rw-r--r-- root/root  11886433 2021-08-31 14:58 AttentionOcr/model.onnx
-rw-r--r-- root/root       108 2021-08-31 15:06 AttentionOcr/check_model.py

Note: there are two versions of the plugin. The files ending with -1 contain a plugin derived from IPluginV2Ext; the other files contain a plugin derived from IPluginV2IOExt. Both plugins behave the same.

Steps To Reproduce

tar -xzvf attentionocr.tgz
cd AttentionOcr/lib
make
cd ..
./konverze-onnx.sh

trtexec output

…
[09/01/2021-09:32:17] [I] Loading supplied plugin library: lib/libonehot.so
----------------------------------------------------------------
Input filename:   model.onnx
ONNX IR version:  0.0.6
Opset version:    11
Producer name:    tf2onnx
Producer version: 1.9.1
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
[09/01/2021-09:32:18] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/01/2021-09:32:18] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/01/2021-09:32:18] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: OneHot. Attempting to import as plugin.
[09/01/2021-09:32:18] [I] [TRT] builtin_op_importers.cpp:3659: Searching for plugin: OneHot, plugin_version: 1, plugin_namespace: 
OneHotLayerCreator::createPlugin() AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/loop_function/OneHotEncoding/one_hot nb:1
OneHotLayer::OneHotLayer(fc) mDepth=36
[09/01/2021-09:32:18] [I] [TRT] builtin_op_importers.cpp:3676: Successfully created plugin: OneHot
OneHotLayer::getOutputDataType(0, inputTypes, 3): kFLOAT
OneHotLayer::OneHotLayer(axis) mDepth=36
OneHotLayer::getOutputDataType(0, inputTypes, 3): kFLOAT
OneHotLayer::getOutputDimensions(0, ([][][]), 3) 
…

Hi,
Please refer to the links below related to custom plugin implementation and samples:

While the IPluginV2 and IPluginV2Ext interfaces are still supported for backward compatibility with TensorRT 5.1 and 6.0.x respectively, we recommend that you write new plugins or refactor existing ones to target the IPluginV2DynamicExt or IPluginV2IOExt interfaces instead.
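
In short, an IPluginV2DynamicExt plugin exposes dynamic-shape-aware entry points. The key overrides look roughly like this (a declaration-only sketch with a placeholder class name; please see the linked sample for a complete implementation):

    #include <NvInfer.h>

    class MyOneHotDynamicPlugin : public nvinfer1::IPluginV2DynamicExt
    {
    public:
        // Shapes are expressed symbolically through IExprBuilder.
        nvinfer1::DimsExprs getOutputDimensions(int outputIndex,
            const nvinfer1::DimsExprs* inputs, int nbInputs,
            nvinfer1::IExprBuilder& exprBuilder) override;

        bool supportsFormatCombination(int pos, const nvinfer1::PluginTensorDesc* inOut,
            int nbInputs, int nbOutputs) override;

        void configurePlugin(const nvinfer1::DynamicPluginTensorDesc* in, int nbInputs,
            const nvinfer1::DynamicPluginTensorDesc* out, int nbOutputs) override;

        size_t getWorkspaceSize(const nvinfer1::PluginTensorDesc* inputs, int nbInputs,
            const nvinfer1::PluginTensorDesc* outputs, int nbOutputs) const override;

        // Actual dimensions arrive here at run time via the tensor descriptors.
        int enqueue(const nvinfer1::PluginTensorDesc* inputDesc,
            const nvinfer1::PluginTensorDesc* outputDesc,
            const void* const* inputs, void* const* outputs,
            void* workspace, cudaStream_t stream) override;

        // ... plus the usual IPluginV2Ext/IPluginV2 boilerplate (clone,
        // getOutputDataType, serialization, plugin name/version, ...) ...
    };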

Thanks!

Thanks for the quick reply.
I looked at the example; most of its steps I have already completed:

  • trained the TF model
  • removed unsupported layers from the model using TF
  • froze the model
  • converted the frozen TF model to ONNX
  • created a OneHot plugin derived from IPluginV2IOExt
  • there is no need to use ONNX GraphSurgeon to change the ONNX model (I suppose)
  • the only unsupported layer is OneHot, and it should be implemented by the plugin

I have not found any information about why trtexec calls my plugin with null dimensions.

The most significant difference between your example and my plugin is the base class:
the example is derived from IPluginV2DynamicExt, while my plugin is derived from IPluginV2IOExt.

Should I derive my plugin from IPluginV2DynamicExt, too?

Yes, some experiments show that IPluginV2DynamicExt is the right way. I now receive the expected values in getOutputDimensions(). I hope I will be able to finish the plugin now. Thank you.

The plugin now gets the expected values in getOutputDimensions() and trtexec can parse the ONNX file properly.
But in the next stage trtexec fails:

 ----- Parsing of ONNX model model.onnx is Done ---- 
OneHotLayer::getOutputDimensions()AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/loop_function/OneHotEncoding/one_hot
    input (nbDims=1) value=1 constant: true
    input (nbDims=1) value=1 constant: true
    input (nbDims=1) value=2 constant: true
[09/02/2021-10:05:05] [E] [TRT] ../builder/cudnnBuilderGraphShapeAnalyzer.cpp (2467) - Assertion Error in updateExtent: 0 (layer validation and shape analyzer disagree about dimensions)
[09/02/2021-10:05:05] [E] Engine creation failed
[09/02/2021-10:05:05] [E] Engine set up failed

I cannot find the cudnnBuilderGraphShapeAnalyzer.cpp source. Is the analyzer checking the dimensions returned by the last call to my plugin? Can I get more information from trtexec about the actual and expected dimensions?

My fault.

https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_expr_builder.html

Quote:
Do not inherit from IExprBuilder class, as doing so will break forward-compatibility of the API and ABI.
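
So the correct pattern is to use the IExprBuilder instance that TensorRT passes into getOutputDimensions(), not to derive from it. Roughly (a sketch assuming the common axis = -1 case; mDepth holds the depth attribute captured by the plugin creator, 36 in this model):

    #include <NvInfer.h>

    // Sketch: output shape = indices shape with one trailing axis of size
    // mDepth appended.
    nvinfer1::DimsExprs OneHotLayer::getOutputDimensions(int outputIndex,
        const nvinfer1::DimsExprs* inputs, int nbInputs,
        nvinfer1::IExprBuilder& exprBuilder)
    {
        nvinfer1::DimsExprs out;
        out.nbDims = inputs[0].nbDims + 1;
        for (int i = 0; i < inputs[0].nbDims; ++i)
        {
            out.d[i] = inputs[0].d[i];                        // copy indices dims
        }
        out.d[out.nbDims - 1] = exprBuilder.constant(mDepth); // append depth
        return out;
    }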

Hi,

Are you still facing this issue?

Thank you.

It is solved. It is not possible to use the static-shape plugin here.
If I use a plugin derived from IPluginV2DynamicExt, the dimensions are passed to the plugin properly.
I now have a working prototype of the OneHot plugin.
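
For completeness, with the dynamic interface the real dimensions finally reach enqueue() through the tensor descriptors. The data flow looks roughly like this (a sketch; launchOneHotKernel is a hypothetical CUDA helper of mine, not part of TensorRT):

    #include <NvInfer.h>

    int OneHotLayer::enqueue(const nvinfer1::PluginTensorDesc* inputDesc,
        const nvinfer1::PluginTensorDesc* outputDesc,
        const void* const* inputs, void* const* outputs,
        void* workspace, cudaStream_t stream)
    {
        int64_t count = 1;                                   // number of indices
        for (int i = 0; i < inputDesc[0].dims.nbDims; ++i)
        {
            count *= inputDesc[0].dims.d[i];
        }

        const int* indices  = static_cast<const int*>(inputs[0]);
        const float* values = static_cast<const float*>(inputs[2]); // [off, on]
        float* output       = static_cast<float*>(outputs[0]);

        // Writes count * mDepth floats: off value everywhere, on value at the
        // position given by each index (hypothetical helper).
        launchOneHotKernel(indices, values, output, count, mDepth, stream);
        return 0;
    }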

Thank you
