Description
I am trying to port Attention OCR to Jetson Nano. I've successfully trained the model and converted it to ONNX. I have removed some unsupported datatypes and layers and ended up with only one unsupported layer: OneHot. Now I'm trying to create a C++ OneHot plugin, but when running trtexec to convert the ONNX model to a TRT engine, invalid dimensions are passed to the plugin:
Expected inputs:
- input 0: indices (input tensor), kINT32
- input 1: depth = scalar, kINT32
- input 2: values [off_value, on_value], kFLOAT
Expected input dimensions:
- input 0: [??] - depending on the previous layer
- input 1: [1] ? - scalar (depth of the OneHot, dimension of output)
- input 2: [2] - two values in one dimensional tensor
(see https://github.com/onnx/onnx/blob/master/docs/Operators.md#OneHot)
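For reference, ONNX OneHot with axis = -1 appends a new innermost dimension of length depth, putting on_value at each index position and off_value everywhere else. A minimal CPU sketch of that semantics (the helper oneHotLastAxis is hypothetical, not part of the plugin sources):

```cpp
#include <cstddef>
#include <vector>

// CPU reference for ONNX OneHot with axis = -1 (hypothetical helper,
// not taken from onehot.cpp): output shape = indices shape + [depth].
std::vector<float> oneHotLastAxis(const std::vector<int>& indices,
                                  int depth, float offValue, float onValue)
{
    std::vector<float> out(indices.size() * depth, offValue);
    for (std::size_t i = 0; i < indices.size(); ++i) {
        int idx = indices[i];
        if (idx < 0) idx += depth;          // ONNX allows negative indices
        if (idx >= 0 && idx < depth)
            out[i * depth + idx] = onValue; // everything else stays off
    }
    return out;
}
```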
Part of C++ code:
void OneHotLayer::configurePlugin(const Dims* inputDims, int nbInputs...
content of parameters:
nbInputs = 3
inputDims[0].nbDims: 0
inputDims[1].nbDims: 0
inputDims[2].nbDims: 0
The plugin cannot be configured with empty input dimensions. TensorRT behaves the same way in the enqueue() call. Obviously, any attempt to access the values in the inputs leads to a SIGSEGV.
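Until the importer delivers real shapes, the plugin can at least fail fast instead of crashing later. A hedged sketch of such a guard (the Dims struct below is a stand-in for nvinfer1::Dims, just enough to show the check; oneHotDimsLookValid is a hypothetical helper):

```cpp
// Stand-in for nvinfer1::Dims, sufficient for the check below;
// the real plugin uses the TensorRT header.
struct Dims {
    int nbDims;
    int d[8];
};

// Sanity check intended for configurePlugin(): the values input should be
// a 1-D tensor of two elements and depth a scalar, but when every input
// reports 0 dimensions the importer has not propagated shapes at all and
// the plugin must not dereference them (that is the SIGSEGV seen here).
bool oneHotDimsLookValid(const Dims* inputDims, int nbInputs)
{
    if (nbInputs != 3)
        return false;
    return !(inputDims[0].nbDims == 0 && inputDims[1].nbDims == 0
             && inputDims[2].nbDims == 0);
}
```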
Is there a proper way to run trtexec so that the plugin receives valid dimensions?
Environment
TensorRT Version: 7.1.3
GPU Type: Jetson Nano
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04.5
TensorFlow Version (if applicable): 1.15 (model trained on a different machine)
Relevant Files
https://www.hobrasoft.cz/files/attentionocr.tgz
drwxr-xr-x root/root 0 2021-09-01 09:09 AttentionOcr/
drwxr-xr-x root/root 0 2021-09-01 09:09 AttentionOcr/lib/
-rw-r--r-- root/root 648 2021-08-30 19:58 AttentionOcr/lib/Makefile
-rw-r--r-- root/root 3389 2021-08-31 12:09 AttentionOcr/lib/onehot.h-1
-rw-r--r-- root/root 12518 2021-08-31 12:09 AttentionOcr/lib/onehot.cpp-1
-rw-r--r-- root/root 11710 2021-08-31 16:36 AttentionOcr/lib/onehot.cpp
-rw-r--r-- root/root 3137 2021-08-31 12:25 AttentionOcr/lib/onehot.h
-rwxr-xr-x root/root 205 2021-09-01 09:09 AttentionOcr/konverze-onnx.sh
-rw-r--r-- root/root 11886433 2021-08-31 14:58 AttentionOcr/model.onnx
-rw-r--r-- root/root 108 2021-08-31 15:06 AttentionOcr/check_model.py
Note: there are two versions of the plugin. The files ending in -1 contain a plugin derived from IPluginV2Ext; the other files contain a plugin derived from IPluginV2IOExt. Both plugins behave the same.
Steps To Reproduce
tar -xzvf attentionocr.tgz
cd AttentionOcr/lib
make
cd ..
./konverze-onnx.sh
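For context, konverze-onnx.sh essentially invokes trtexec; a hypothetical reconstruction of that call (the actual flags are in the script inside the tarball):

```shell
#!/bin/sh
# Hypothetical sketch of the conversion step, not the script's verbatim
# contents. --plugins loads the custom OneHot library so trtexec can
# resolve the otherwise unsupported op while parsing the ONNX model.
trtexec --onnx=model.onnx \
        --plugins=lib/libonehot.so \
        --saveEngine=model.trt
```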
trtexec output
…
[09/01/2021-09:32:17] [I] Loading supplied plugin library: lib/libonehot.so
----------------------------------------------------------------
Input filename: model.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: tf2onnx
Producer version: 1.9.1
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[09/01/2021-09:32:18] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[09/01/2021-09:32:18] [W] [TRT] Tensor DataType is determined at build time for tensors not marked as input or output.
[09/01/2021-09:32:18] [I] [TRT] ModelImporter.cpp:135: No importer registered for op: OneHot. Attempting to import as plugin.
[09/01/2021-09:32:18] [I] [TRT] builtin_op_importers.cpp:3659: Searching for plugin: OneHot, plugin_version: 1, plugin_namespace:
OneHotLayerCreator::createPlugin() AttentionOcr_v1/sequence_logit_fn/SQLR/LSTM/attention_decoder/loop_function/OneHotEncoding/one_hot nb:1
OneHotLayer::OneHotLayer(fc) mDepth=36
[09/01/2021-09:32:18] [I] [TRT] builtin_op_importers.cpp:3676: Successfully created plugin: OneHot
OneHotLayer::getOutputDataType(0, inputTypes, 3): kFLOAT
OneHotLayer::OneHotLayer(axis) mDepth=36
OneHotLayer::getOutputDataType(0, inputTypes, 3): kFLOAT
OneHotLayer::getOutputDimensions(0, ([][][]), 3)
…