I am trying to port a MobileNet model to TensorRT 5. I can get through the UFF conversion without any warnings (so the conversion looks correct to me). However, when I load the file and parse it to build the network, I get an error about a dimension mismatch between two inputs.
Error : ERROR: add_1/add: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [13,40,63] and [13,40,62])
Has anyone managed to convert MobileNet models with TensorRT? How can I resolve this error?
(env) :~$ convert-to-uff -o sample_problem_model.uff --input-file sample_problem_model.pb -O add_1/add
env/lib/python3.5/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
WARNING:tensorflow:From env/lib/python3.5/site-packages/uff/converters/tensorflow/conversion_helpers.py:185: FastGFile.__init__ (from tensorflow.python.platform.gfile) is deprecated and will be removed in a future version.
Instructions for updating:
UFF Version 0.5.5
=== Automatically deduced input nodes ===
Using output node add_1/add
Converting to UFF graph
No. nodes: 88
UFF Output written to sample_problem_model.uff
Environment: UFF 0.5.5 on Ubuntu 16.04; TensorRT 5 (C++ API) on Windows.
I managed to solve this issue. The root cause seems to be that the Conv2D layer support in TensorRT only handles padding=same. My model used padding=valid in a Conv2D layer, which worked fine in Keras with the TensorFlow backend; but when I converted the model's pb file to UFF, the output dimensions of that layer came out wrong. The resulting mismatch between the two inputs of the add layer is what causes the parsing error when the UFF file is loaded to build the network.
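For anyone hitting the same error, the 63-vs-62 mismatch follows directly from the TF/Keras output-size formulas for the two padding modes. The kernel size, stride, and input width below are hypothetical, chosen only to reproduce the numbers in the error message; check your own layer's parameters against the same arithmetic:

```python
import math

def conv_out_size(in_size, kernel, stride, padding):
    """Output size of a Conv2D along one axis, TF/Keras convention."""
    if padding == "same":
        # 'same' ignores the kernel size: ceil(in / stride)
        return math.ceil(in_size / stride)
    elif padding == "valid":
        # 'valid' only uses fully-covered positions
        return math.ceil((in_size - kernel + 1) / stride)
    raise ValueError("unknown padding: %r" % padding)

# Assumed example: a 125-wide feature map through a 3x3, stride-2 Conv2D.
print(conv_out_size(125, 3, 2, "same"))   # -> 63
print(conv_out_size(125, 3, 2, "valid"))  # -> 62
```

If one branch of a residual add goes through a 'valid' convolution and the other does not (or the converter silently treats 'valid' as 'same'), the two branches end up one pixel apart, exactly as in the `add_1/add` error above.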
I would like to ask NVIDIA: are there any plans to add support for padding=valid in TensorRT?