Does the convert_to_uff.py support Conv3D & MaxPool3D?

Hi all,

I have a TensorFlow model that I want to convert to UFF format.
I used the command: python convert_to_uff.py TF.pb

But I got the following warnings:
Warning: No conversion function registered for layer: Conv3D yet.
Warning: No conversion function registered for layer: MaxPool3D yet.

I found that Conv3D is supported in TensorRT, per the support matrix:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-support-matrix/index.html#layers-matrix

Does convert_to_uff.py support Conv3D & MaxPool3D?
If not, is there any plan to support them?


Environment:
TensorRT 7.0
CUDA 9.0

Hi,

We are deprecating Caffe Parser and UFF Parser in TensorRT 7.
https://docs.nvidia.com/deeplearning/sdk/tensorrt-archived/tensorrt-700/tensorrt-release-notes/tensorrt-7.html#tensorrt-7

Try converting your model to ONNX using tf2onnx instead, and then convert it to TensorRT using the ONNX parser. Any layers that are not supported need to be replaced by a custom plugin.
https://github.com/onnx/tensorflow-onnx
https://github.com/onnx/onnx-tensorrt/blob/master/operators.md
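
For reference, a typical tf2onnx invocation on a frozen graph looks roughly like the command below. The file names and the input/output tensor names are placeholders for your model's actual node names, and the opset value is just illustrative:

python -m tf2onnx.convert --input frozen_graph.pb --inputs input:0 --outputs output:0 --output model.onnx --opset 11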

Thanks

Hi SunilJB,

I have tried to convert the TensorFlow frozen file to ONNX with tf2onnx,
but I got the error messages below:

ERROR - tf2onnx.tfonnx: Tensorflow op [max_pooling3d_1/MaxPool3D: MaxPool3D] is not supported

ValueError: kernel rank must be 2* spatial

It seems that tf2onnx does not support Conv3D / MaxPool3D either.

Do you have any suggestions?

BR,
Frankle Yeh

Hi,

As an alternative approach, you can convert the model to TRT using TF-TRT and serialize it to a .plan file. Then deserialize the .plan file using the C++ API (TensorRT's C++ API or the TensorRT Inference Server).
See:
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usage-example
and
https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#tensorrt-plan
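
The guide describes the deserialization step with the C++ API, but as a rough illustration, loading a serialized engine looks something like this with TensorRT's Python runtime (the .plan path is a placeholder):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Read the serialized engine extracted from the TF-TRT graph
# ("model.plan" is a placeholder path).
with open("model.plan", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()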

Thanks

Hi SunilJB,

I tried to load the frozen graph file and parse it to create a deserialized GraphDef,
using the code below, referencing Accelerating Inference In TF-TRT User Guide :: NVIDIA Deep Learning Frameworks Documentation:


import tensorflow as tf
from tensorflow.python.compiler.tensorrt import trt_convert as trt

with tf.Session() as sess:
    # First deserialize your frozen graph:
    with tf.gfile.GFile("/path/to/your/frozen/graph.pb", 'rb') as f:
        frozen_graph = tf.GraphDef()
        frozen_graph.ParseFromString(f.read())
    # Now you can create a TensorRT inference graph from your
    # frozen graph:
    converter = trt.TrtGraphConverter(
        input_graph_def=frozen_graph,
        nodes_blacklist=['logits', 'classes'])  # output nodes
    trt_graph = converter.convert()
    # Import the TensorRT graph into a new graph and run:
    output_node = tf.import_graph_def(
        trt_graph,
        return_elements=['logits', 'classes'])
    sess.run(output_node)

and I got the errors below:
2020-02-07 16:34:31.592606: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.5'; dlerror: libnvinfer.so.5: cannot open shared object file: No such file or directory
2020-02-07 16:34:31.592648: F tensorflow/compiler/tf2tensorrt/stub/nvinfer_stub.cc:49] getInferLibVersion symbol not found.

My TensorFlow version is 1.15, and my TensorRT version is 7.0.

Is this error related to a version mismatch between my TensorFlow and TensorRT installations?

BR,
Frankle Yeh

Hi,

Can you try with the latest TensorFlow version, 2.0?
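
With TensorFlow 2.x, TF-TRT conversion uses TrtGraphConverterV2 on a SavedModel rather than a frozen GraphDef. A minimal sketch, assuming your model is exported as a SavedModel (the paths below are placeholders):

from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Convert a SavedModel with the TF 2.x TF-TRT API; paths are placeholders.
converter = trt.TrtGraphConverterV2(input_saved_model_dir="/path/to/saved_model")
converter.convert()
converter.save("/path/to/saved_model_trt")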

Thanks