I have a problem parsing a TensorFlow model using the nvuffparser::IUffParser Parse operation.
I only have a frozen graph created by the TensorFlow framework.
This is my configuration:
Python - 3.6.8
TensorFlow Python/C++ (TF) - 1.9 (the C++ version was built from source)
TensorRT C++ (TRT) - 22.214.171.124
CuDNN - 7.6.3
CUDA - 9.0
I'm getting the following error message: "Invalid DataType value!".
When I checked which type is the unsupported one, I found that it is DT_BOOL.
I immediately thought about replacing the layers that use this type with custom layers (plugins),
but the problem is that almost all of the model's layers use this type, sometimes for one of a layer's input tensors and sometimes for one of its output tensors.
These are my questions:
- Is there a plan to update TensorRT to support DT_BOOL type?
- If yes, what is the estimated date for this version?
- If no, what can I do in this situation? Should I implement a custom layer for almost every layer in the model?
- Can graphsurgeon help here in any way?
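On the last question: my understanding (not confirmed) is that graphsurgeon can help when the DT_BOOL tensors come from control-flow or assertion ops (e.g. Switch/Merge nodes inserted by tf.cond, or Assert nodes), since DynamicGraph.find_nodes_by_op and DynamicGraph.forward_inputs can strip such nodes before UFF conversion. A stdlib-only sketch of the rewiring idea, not the real graphsurgeon API, with made-up node names:

```python
def forward_inputs(graph, victims):
    """Remove the nodes in `victims` from `graph` (a dict mapping each
    node name to its list of input node names) and splice each victim's
    own inputs into its consumers, mimicking the effect graphsurgeon's
    DynamicGraph.forward_inputs has on a GraphDef.

    Illustrative sketch only; it assumes no victim feeds another victim.
    """
    new_graph = {}
    for node, inputs in graph.items():
        if node in victims:
            continue  # drop the victim node itself
        rewired = []
        for inp in inputs:
            if inp in victims:
                rewired.extend(graph[inp])  # bypass the victim
            else:
                rewired.append(inp)
        new_graph[node] = rewired
    return new_graph
```

With the real package the equivalent would presumably be along the lines of loading the frozen graph into gs.DynamicGraph, collecting the bool-typed control nodes, and calling forward_inputs on them before running the UFF converter, but I have not verified that this removes all DT_BOOL tensors in my model.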