TensorRT nvuffparser::IUffParser Parse() reports 'Invalid DataType value!'

Hello,

I have a problem parsing a TensorFlow model using the nvuffparser::IUffParser Parse() operation.

I only have a frozen graph created by the TensorFlow framework.

This is my configuration:
Windows 10
Python - 3.6.8
VS2015 x64
TensorFlow Python/C++ (TF) - 1.9 (C++ version was built from source)
TensorRT C++ (TRT) - 6.0.1.5
cuDNN - 7.6.3
CUDA - 9.0
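
For reference, this is roughly how I invoke the parser. This is only a minimal sketch with error handling stripped; the logger class and the input/output tensor names and dimensions are placeholders, not my real model’s:

#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

    // Placeholder tensor names and dimensions.
    parser->registerInput("input_0", nvinfer1::Dims3(3, 224, 224),
                          nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput("output_0");

    // This call is where "Invalid DataType value!" is reported.
    bool ok = parser->parse("model.uff", *network, nvinfer1::DataType::kFLOAT);
    std::cout << "parse " << (ok ? "succeeded" : "failed") << std::endl;
    return ok ? 0 : 1;
}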

I’m getting the following message - “Invalid DataType value!”.

When I checked which type is the unsupported one, I found that it is DT_BOOL.

I immediately thought about replacing the layer that uses this type with a custom layer (plugin), but the problem is that this type is used by almost all of the model’s layers, sometimes as the type of one of a layer’s input tensors and sometimes as the type of one of its output tensors.

These are my questions:

  1. Is there a plan to update TensorRT to support the DT_BOOL type?
  2. If yes, what is the estimated release date for that version?
  3. If not, what can I do in this situation? Shall I implement a custom layer for almost all of the model’s layers?
  4. Can graphsurgeon help here in any way?

Thanks,

Hi orong13,

I reached out to the engineering team about this. I hope to get back to you with some answers within a couple days.

Hi orong13,

TensorRT 7.0 was released today, which should support the BOOL datatype: http://docs.nvidia.com/deeplearning/sdk/tensorrt-release-notes/index.html

Please try downloading the new release and see if that works for you: https://developer.nvidia.com/nvidia-tensorrt-download

Hello,
OMG!

Great news!

Thanks!

I will try using this version and report back whether the problem described above was solved.

Best regards,

Hello,
I’m still getting the same error after updating the TensorRT version to 7.0.0.11.

Is it possible that, despite the fact that this enum was updated:

enum class DataType : int
{
    kFLOAT = 0, //!< FP32 format.
    kHALF = 1,  //!< FP16 format.
    kINT8 = 2,  //!< quantized INT8 format.
    kINT32 = 3, //!< INT32 format.
    kBOOL = 4   //!< BOOL format.
};

the UFF parser logic still wasn’t updated to support the bool type?

This is the UFF parser field types enum:

enum class FieldType : int
{
    kFLOAT = 0,     //!< FP32 field type.
    kINT32 = 1,     //!< INT32 field type.
    kCHAR = 2,      //!< char field type. String for length>1.
    kDIMS = 4,      //!< nvinfer1::Dims field type.
    kDATATYPE = 5,  //!< nvinfer1::DataType field type.
    kUNKNOWN = 6
};

From the *.pbtxt file that was generated by the UFF converter, this is the node that raised the problem:

nodes {
    id: "bn_conv1/keras_learning_phase"
    operation: "Input"
    fields {
      key: "dtype"
      value {
        dtype: 7
      }
    }
    fields {
      key: "shape"
      value {
        i_list {
        }
      }
    }
  }

Maybe this information will help you verify this issue.

Regards,

Sorry for the delay.

Good catch. Moving forward, the UFF and Caffe parsers are being deprecated per the release notes. You should be able to use the BOOL datatype with the ONNX parser. Please try using tf2onnx to convert your model to ONNX, and then try the ONNX parser or “trtexec --onnx=model.onnx --explicitBatch”.
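
If you prefer the C++ API over trtexec, the rough equivalent is below. This is only an untested sketch; "model.onnx" stands for your converted file:

#include <cstdint>
#include <iostream>
#include "NvInfer.h"
#include "NvOnnxParser.h"

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // The ONNX parser requires an explicit-batch network definition
    // (this is what --explicitBatch does in trtexec).
    const auto explicitBatch = 1U << static_cast<uint32_t>(
        nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);

    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetworkV2(explicitBatch);
    nvonnxparser::IParser* parser = nvonnxparser::createParser(*network, gLogger);

    // Each unsupported node is reported through the logger and parser->getError().
    bool parsed = parser->parseFromFile("model.onnx",
        static_cast<int>(nvinfer1::ILogger::Severity::kWARNING));
    return parsed ? 0 : 1;
}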

Hello,
Thanks for the information.

I successfully used the tf2onnx tool to convert my TensorFlow model to ONNX format, following the guidance at this link:
https://github.com/onnx/tensorflow-onnx
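
The conversion command was along these lines (the graph file name and the input/output tensor names here are placeholders, since the exact ones are specific to my model):

python -m tf2onnx.convert --input frozen_graph.pb --inputs input_tensor:0 --outputs output_tensor:0 --output model.onnx --opset 7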

When I ran the trtexec tool from the downloaded TRT 7 package directory, I got the following failure:

[12/31/2019-07:34:55] [I] === Model Options ===
[12/31/2019-07:34:55] [I] Format: ONNX
[12/31/2019-07:34:55] [I] Model: C:\AAG\HPC\Sensors\ATR\Data\longtrain_22_7_19\Algo\latest_verssion\pb_oct_22\freeze_oct_22.onnx
[12/31/2019-07:34:55] [I] Output:
[12/31/2019-07:34:55] [I] === Build Options ===
[12/31/2019-07:34:55] [I] Max batch: explicit
[12/31/2019-07:34:55] [I] Workspace: 16 MB
[12/31/2019-07:34:55] [I] minTiming: 1
[12/31/2019-07:34:55] [I] avgTiming: 8
[12/31/2019-07:34:55] [I] Precision: FP32
[12/31/2019-07:34:55] [I] Calibration:
[12/31/2019-07:34:55] [I] Safe mode: Disabled
[12/31/2019-07:34:55] [I] Save engine:
[12/31/2019-07:34:55] [I] Load engine:
[12/31/2019-07:34:55] [I] Inputs format: fp32:CHW
[12/31/2019-07:34:55] [I] Outputs format: fp32:CHW
[12/31/2019-07:34:55] [I] Input build shapes: model
[12/31/2019-07:34:55] [I] === System Options ===
[12/31/2019-07:34:55] [I] Device: 0
[12/31/2019-07:34:55] [I] DLACore:
[12/31/2019-07:34:55] [I] Plugins:
[12/31/2019-07:34:55] [I] === Inference Options ===
[12/31/2019-07:34:55] [I] Batch: Explicit
[12/31/2019-07:34:55] [I] Iterations: 10
[12/31/2019-07:34:55] [I] Duration: 3s (+ 200ms warm up)
[12/31/2019-07:34:55] [I] Sleep time: 0ms
[12/31/2019-07:34:55] [I] Streams: 1
[12/31/2019-07:34:55] [I] ExposeDMA: Disabled
[12/31/2019-07:34:55] [I] Spin-wait: Disabled
[12/31/2019-07:34:55] [I] Multithreading: Disabled
[12/31/2019-07:34:55] [I] CUDA Graph: Disabled
[12/31/2019-07:34:55] [I] Skip inference: Disabled
[12/31/2019-07:34:55] [I] Inputs:
[12/31/2019-07:34:55] [I] === Reporting Options ===
[12/31/2019-07:34:55] [I] Verbose: Disabled
[12/31/2019-07:34:55] [I] Averages: 10 inferences
[12/31/2019-07:34:55] [I] Percentile: 99
[12/31/2019-07:34:55] [I] Dump output: Disabled
[12/31/2019-07:34:55] [I] Profile: Disabled
[12/31/2019-07:34:55] [I] Export timing to JSON file:
[12/31/2019-07:34:55] [I] Export output to JSON file:
[12/31/2019-07:34:55] [I] Export profile to JSON file:
[12/31/2019-07:34:55] [I]
----------------------------------------------------------------
Input filename:   C:\AAG\HPC\Sensors\ATR\Data\longtrain_22_7_19\Algo\latest_verssion\pb_oct_22\freeze_oct_22.onnx
ONNX IR version:  0.0.6
Opset version:    7
Producer name:    tf2onnx
Producer version: 1.5.3
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
While parsing node number 7 [If]:
ERROR: ModelImporter.cpp:134 In function parseGraph:
[8] No importer registered for op: If
[12/31/2019-07:34:57] [E] Failed to parse onnx file
[12/31/2019-07:34:57] [E] Parsing model failed
[12/31/2019-07:34:57] [E] Engine creation failed
[12/31/2019-07:34:57] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # trtexec.exe --onnx=C:\AAG\HPC\Sensors\ATR\Data\longtrain_22_7_19\Algo\latest_verssion\pb_oct_22\freeze_oct_22.onnx --explicitBatch

I checked the root cause of the problem here:
https://github.com/onnx/onnx-tensorrt/blob/b7c0840493e72891096771d000d6de26a03aed62/operators.md

And you can see that the If operation isn’t supported by the TRT ONNX parser.

This means that I should implement a plugin for it, which takes me to another topic that I already opened about the TRT ONNX parser and how to register a plugin with it:
https://devtalk.nvidia.com/default/topic/1068292/tensorrt/custom-layer-plugin-tensorrtc-nvuffparser-iuffparser-vs-tensorrt-c-nvonnxparser-iparser/

Any kind of help/example on how to register a plugin with the TRT ONNX parser will be much appreciated.

Thanks,

Hi orong13,

Regarding the ONNX plugin, I responded on your other thread. Hope it helps, but I haven’t done it myself: https://devtalk.nvidia.com/default/topic/1068292/tensorrt/custom-layer-plugin-tensorrtc-nvuffparser-iuffparser-vs-tensorrt-c-nvonnxparser-iparser/
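
At a high level, the mechanism for making a custom plugin visible to TensorRT is the global plugin registry: implement the op as an nvinfer1::IPluginV2 (or IPluginV2DynamicExt) plus an nvinfer1::IPluginCreator, and register the creator before parsing. A very rough, untested sketch of just the registration step, assuming MyIfPluginCreator is a complete creator implementation you provide elsewhere:

#include "NvInfer.h"

// Registers the (assumed, user-provided) creator with the global plugin
// registry at library-load time so it can be looked up by name/version.
REGISTER_TENSORRT_PLUGIN(MyIfPluginCreator);

// ...or register an instance explicitly at runtime before parsing:
// static MyIfPluginCreator gIfCreator;
// getPluginRegistry()->registerCreator(gIfCreator, "");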