Using UFF from TensorFlow and running a lite TensorRT Engine

Hi everyone, I have a TensorFlow model that consists mainly of the following layers:

  • tf.layers.conv2d
  • tf.layers.batch_normalization
  • tf.layers.dense

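Each Conv2dBatchNorm block is built roughly like this (a sketch; the kernel sizes, strides, and padding are assumptions consistent with the shapes printed further down):

import tensorflow as tf

def conv2d_batch_norm(inputs, filters, kernel_size, strides, training, name):
    # One Conv2dBatchNorm block: convolution -> batch normalization -> ReLU,
    # matching the node names that show up in the UFF parser log below.
    with tf.variable_scope(name):
        x = tf.layers.conv2d(inputs, filters=filters, kernel_size=kernel_size,
                             strides=strides, padding='valid')
        x = tf.layers.batch_normalization(x, training=training, fused=True)
        return tf.nn.relu(x)
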
I was able to generate the UFF file, and it seems valid, but when I try to build the engine I encounter the following error:

[TensorRT] INFO: Detecting Framework
[TensorRT] INFO: Parsing Model from uff
[TensorRT] INFO: UFFParser: parsing input_image
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/conv2d/kernel
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/conv2d/Conv2D
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/conv2d/bias
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/conv2d/BiasAdd
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/batch_normalization/gamma
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/batch_normalization/beta
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/batch_normalization/moving_mean
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/batch_normalization/moving_variance
[TensorRT] INFO: UFFParser: parsing PilotNet/Conv2dBatchNorm/batch_normalization/FusedBatchNorm
python: Network.h:104: virtual nvinfer1::DimsHW nvinfer1::NetworkDefaultConvolutionFormula::compute(nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, nvinfer1::DimsHW, const char*): Assertion `(input.w() + padding.w() * 2) >= dkw && "Image width with padding must always be at least the width of the dilated filter."' failed.

The following is my model:

#######################
Model
#######################
Tensor("input_image:0", shape=(?, 66, 200, 3), dtype=float32)
Tensor("PilotNet/Conv2dBatchNorm/Relu:0", shape=(?, 31, 98, 24), dtype=float32)
Tensor("PilotNet/Conv2dBatchNorm_1/Relu:0", shape=(?, 14, 47, 36), dtype=float32)
Tensor("PilotNet/Conv2dBatchNorm_2/Relu:0", shape=(?, 5, 22, 48), dtype=float32)
Tensor("PilotNet/Conv2dBatchNorm_3/Relu:0", shape=(?, 3, 20, 64), dtype=float32)
Tensor("PilotNet/Conv2dBatchNorm_4/Relu:0", shape=(?, 1, 18, 64), dtype=float32)
Tensor("PilotNet/Flatten/flatten/Reshape:0", shape=(?, 1152), dtype=float32)
Tensor("PilotNet/DenseBatchNorm/Relu:0", shape=(?, 400), dtype=float32)
Tensor("PilotNet/DenseBatchNorm_1/Relu:0", shape=(?, 50), dtype=float32)
Tensor("PilotNet/DenseBatchNorm_2/Relu:0", shape=(?, 10), dtype=float32)
Tensor("PilotNet/dense/MatMul:0", shape=(?, 1), dtype=float32)
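
For reference, the conversion itself is roughly the following (a sketch with the TensorRT 3 Python API; the frozen-graph path and workspace size are placeholders, the output node is taken from the printout above, and my actual script goes through the lite Engine wrapper, which uses the same UFF parser):

import uff
import tensorrt as trt
from tensorrt.parsers import uffparser

# Convert the frozen TensorFlow graph to UFF.
uff_model = uff.from_tensorflow_frozen_model("pilotnet_frozen.pb",
                                             ["PilotNet/dense/MatMul"])

# Register the input in CHW order (TensorRT expects channels first), then build the engine.
parser = uffparser.create_uff_parser()
parser.register_input("input_image", (3, 66, 200), 0)
parser.register_output("PilotNet/dense/MatMul")

G_LOGGER = trt.infer.ConsoleLogger(trt.infer.LogSeverity.INFO)
engine = trt.utils.uff_to_trt_engine(G_LOGGER, uff_model, parser,
                                     1,          # max batch size
                                     1 << 25)    # max workspace size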

Hi,

Looks like you are using a Flatten op in your model.

The Flatten operation is only available starting from TensorRT 3.0 GA (libnvinfer 4.0.1).
The newest TensorRT package for Jetson is 3.0 RC (libnvinfer 4.0.0), which doesn’t support this op yet.

Please run your model with the x86 Linux-based package, or wait for our next JetPack release.
Thanks.

Thank you, but it seems to me that the error description says it is failing in a convolutional layer. We also reproduced the error on an x86-64 machine. Any ideas?

Hi,

TensorRT requires ( input_width + padding*2 ) >= dilated_filter_width, as the assertion message states.
Please check that each convolution's input (with padding) is at least as wide as its filter to avoid this error.
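
For example, a quick check of that condition for each convolution (kernel sizes and strides below are assumptions read off the shapes you posted: three 5x5/stride-2 convolutions followed by two 3x3/stride-1 convolutions, all with VALID padding):

# (input_h, input_w, kernel, stride, padding) per convolution, inferred from the posted shapes.
layers = [
    (66, 200, 5, 2, 0),
    (31,  98, 5, 2, 0),
    (14,  47, 5, 2, 0),
    ( 5,  22, 3, 1, 0),
    ( 3,  20, 3, 1, 0),
]

for h, w, k, s, p in layers:
    ok = (w + 2 * p) >= k and (h + 2 * p) >= k
    out_h = (h + 2 * p - k) // s + 1
    out_w = (w + 2 * p - k) // s + 1
    print("input %dx%d, kernel %d: ok=%s -> output %dx%d" % (h, w, k, ok, out_h, out_w))

If every layer passes with the shapes as printed, it is also worth verifying that the input is registered to the UFF parser in CHW order; if the channel dimension gets interpreted as the width, this same assertion can fire even though the model itself is fine.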

Thanks.