My working environment is Ubuntu 16.04 (with TensorRT 4) on a DGX Station.
I have a TensorFlow model that converts to a UFF model successfully, and this UFF model runs smoothly on the Drive PX2 platform (TensorRT 3).
Starting from my original TensorFlow model, I replaced some tf.nn.conv2d() calls with tf.nn.atrous_conv2d(), and warnings appeared when converting the TensorFlow model to a UFF model on the DGX Station with TensorRT 4:
(tf) adas@adas-DGX-Station:~/tf_to_uff$ python tf_to_uff.py
2019-02-13 14:03:15.940875: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-13 14:03:16.316725: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: Tesla V100-DGXS-32GB major: 7 minor: 0 memoryClockRate(GHz): 1.53
pciBusID: 0000:0f:00.0
totalMemory: 31.74GiB freeMemory: 31.32GiB
2019-02-13 14:03:16.316759: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2019-02-13 14:03:16.794658: I tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-02-13 14:03:16.794692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0
2019-02-13 14:03:16.794702: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N
2019-02-13 14:03:16.795323: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30351 MB memory) -> physical GPU (device: 0, name: Tesla V100-DGXS-32GB, pci bus id: 0000:0f:00.0, compute capability: 7.0)
Automatically deduced output nodes: inference_ins/output
Using output node inference_ins/output
Converting to UFF graph
[b]WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions
WARNING: The UFF converter currently only supports 2D dilated convolutions[/b]
No. nodes: 193
UFF Output written to model/px2HRYT_lanenet_pb.uff
UFF Text Output written to model/px2HRYT_lanenet_pb.uff.pbtxt
[HRYT_lanenet] Successfully transfer to UFF model
I'm sure the only thing I changed was replacing conv2d with atrous_conv2d.
This warning does not seem to block the conversion, presumably because tf.nn.atrous_conv2d produces exactly the 2D dilated convolutions that the UFF converter does support.
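For reference, a rate-r atrous convolution is mathematically just a plain convolution with a zero-stuffed ("dilated") kernel, so with stride 1 and SAME padding it should not change any tensor shapes compared to the conv2d it replaced. A minimal NumPy sketch of that equivalence (the helper name is mine, not from any library):

```python
import numpy as np

def dilate_kernel(k, rate):
    """Insert (rate - 1) zeros between kernel taps: the 'atrous' trick.
    A rate-r dilated convolution with a k x k kernel is equivalent to a
    plain convolution with this zero-stuffed kernel."""
    kh, kw = k.shape[:2]
    eh = kh + (kh - 1) * (rate - 1)  # effective kernel height
    ew = kw + (kw - 1) * (rate - 1)  # effective kernel width
    out = np.zeros((eh, ew) + k.shape[2:], dtype=k.dtype)
    out[::rate, ::rate] = k          # original taps, spaced `rate` apart
    return out

# A 3x3 kernel at rate 2 covers a 5x5 receptive field but still has 9 taps.
print(dilate_kernel(np.ones((3, 3)), 2).shape)  # (5, 5)
```

So the dilation itself only widens the receptive field; the spatial dimensions flowing into later layers should be unchanged.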
However, when I tested the converted UFF model with trtexec from TensorRT 4, an error was raised:
adas@adas-DGX-Station:~/TensorRT-188.8.131.52/bin$ ./trtexec --uff='/home/adas/tf_to_uff/model/px2HRYT_lanenet_pb.uff' --uffInput=input_tensor,3,256,256 --output=inference_ins/output --engine=px2lanenet
uff: /home/adas/tf_to_uff/model/px2HRYT_lanenet_pb.uff
uffInput: input_tensor,3,256,256
output: inference_ins/output
engine: px2lanenet
UFFParser: Parser error: net_build/decode/deconv_1/conv2d_transpose: Output shape of UFF ConvTranspose is wrong
Engine could not be created
Engine could not be created
I can't understand this error: why would a warning about 2D dilated convolutions make the output shape of UFF ConvTranspose wrong?
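My guess at what the parser is complaining about: tf.nn.conv2d_transpose bakes an explicit output_shape into the graph, and the parser checks it against the shape implied by the deconv's input, stride, kernel, and padding. If the upstream dilated convolutions were dropped or mis-converted, the input shape the deconv sees would change, and the baked-in output_shape would no longer match. The usual shape rule, as a sketch (the helper name is mine; SAME/VALID formulas follow TensorFlow's convention):

```python
def deconv_output_dim(in_dim, kernel, stride, padding):
    """Spatial output size a conv2d_transpose must declare, per TF's rules:
    SAME pads so the output is exactly in_dim * stride; VALID produces the
    full (in_dim - 1) * stride + kernel extent with no trimming."""
    if padding == "SAME":
        return in_dim * stride
    if padding == "VALID":
        return (in_dim - 1) * stride + kernel
    raise ValueError("unknown padding: %r" % padding)

# e.g. a stride-2 SAME deconv fed a 64x64 tensor must declare 128x128 output
print(deconv_output_dim(64, kernel=3, stride=2, padding="SAME"))  # 128
```

If the deconv declared 128 but the parser derived a different input size upstream, this check would fail even though the deconv itself is unchanged, which would explain why the error surfaces at conv2d_transpose rather than at the dilated conv.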
Or are 2D dilated convolutions in reality not supported by TensorRT 4?