Description
I think the UFF parser might have a bug that sometimes switches dimensions back to [H,W,C].
Here’s my problem:
I use build_engine.py to convert my ssd_mobilenet_v3 model to UFF. During this conversion, all unsupported operations are replaced by other operations or plugins.
When parsing the uff file I encounter the following error:
[TensorRT] ERROR: FeatureExtractor/MobilenetV3/expanded_conv/squeeze_excite/mul: elementwise inputs must have same dimensions or follow broadcast rules (input dimensions were [38,38,8] and [8,1,1]).
But this is not simply a dimension mismatch in my model:
The output of the layer FeatureExtractor/MobilenetV3/expanded_conv/depthwise/Relu has the dimensions [8,38,38] (see logfile line 114). However, the next layer (the mul layer from the error above), which takes the Relu layer as input, is passed the dimensions [38,38,8] instead (see logfile lines 163 and 164). It therefore fails with a dimension mismatch.
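For illustration, here is a minimal NumPy sketch of the broadcast behaviour, using the shapes from the error message above (NumPy follows the same broadcast rules TensorRT complains about):

```python
import numpy as np

relu_chw = np.zeros((8, 38, 38))  # [C,H,W], as logged for the Relu output
scale    = np.zeros((8, 1, 1))    # the squeeze-excite multiplier

# In CHW order the elementwise mul broadcasts fine:
out = relu_chw * scale            # OK -> shape (8, 38, 38)

# With the channel axis moved to the back, as the parser passed it on,
# [38,38,8] cannot be broadcast against [8,1,1]:
relu_hwc = np.zeros((38, 38, 8))
try:
    out = relu_hwc * scale
except ValueError as e:
    print(e)                      # operands could not be broadcast together
```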
I believe it is a bug in the UFF parser that the dimensions get switched in this way, since the conversion usually works.
As a workaround, I added a reshape node to bring the dimensions back into the expected order. This silences the error, but the network is no longer usable (bad predictions), because a reshape is of course not the inverse of the transposition that has taken place, as the sketch below shows.
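To make that point concrete, a small NumPy sketch (tiny shapes for readability): a reshape only reinterprets the flat memory layout, while only the inverse transpose actually moves the channel axis back:

```python
import numpy as np

x_chw = np.arange(3 * 2 * 2).reshape(3, 2, 2)  # a tiny [C,H,W] tensor
x_hwc = np.transpose(x_chw, (1, 2, 0))         # what the parser apparently produced

# Reshaping back to [C,H,W] restores the shape but scrambles the values:
assert not np.array_equal(x_hwc.reshape(3, 2, 2), x_chw)

# Only the inverse transpose recovers the original tensor:
assert np.array_equal(np.transpose(x_hwc, (2, 0, 1)), x_chw)
```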
It would be great to get this solved!
BTW: I know that UFF will be deprecated after the next major release and that we should switch to ONNX.
But UFF is convenient and easy to work with, for example for replacing NMS with the TensorRT NMS plugin (which is a lot harder in ONNX, see the sketch below). So since the UFF parser is not deprecated yet, it would be great to find a solution.
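For context, this is the kind of graphsurgeon-based replacement I mean. It is only a sketch following the conventions of the sampleUffSSD config; the node names and plugin parameter values are placeholders and would need to be adapted to the concrete model:

```python
import graphsurgeon as gs

# Hypothetical replacement of TensorFlow's post-processing subgraph with the
# TensorRT NMS plugin (parameter values follow the SSD sample, not my model):
NMS = gs.create_plugin_node(
    name="NMS",
    op="NMS_TRT",
    shareLocation=1,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=91,
)

graph = gs.DynamicGraph("frozen_inference_graph.pb")
graph.collapse_namespaces({"Postprocessor": NMS})
```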
Environment
TensorRT Version: 7.1
GPU Type: Tegra
Platform: Jetson AGX Xavier
CUDA Version: 10.2.89
CUDNN Version: 8.0.0.145
Operating System + Version: Ubuntu 18.04 for arm
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.2
Baremetal or Container (if container which image + tag): Baremetal
Relevant Files
log_reverse_order.txt (25.0 KB)
build_engine.py.txt (15.6 KB)
Steps To Reproduce
The bug can also be reproduced with the original ssd_mobilenet_v3_coco models:
- Download the frozen model .pb (e.g. from GitHub)
- In build_engine.py specify the path to the frozen_graph.pb in the MODEL_SPECS for ssd_mobilenet_v3_small_coco
- Run python3 build_engine.py -v ssd_mobilenet_v3_small_coco (a minimal sketch of the convert-and-parse sequence follows below)
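For completeness, here is a minimal sketch of the sequence in which the error shows up. Paths, node names, and the input shape are placeholders; build_engine.py does the same, plus the plugin replacements:

```python
import uff
import tensorrt as trt

# Convert the frozen graph to UFF (output node name is a placeholder):
uff.from_tensorflow_frozen_model(
    "frozen_inference_graph.pb",
    output_nodes=["NMS"],
    output_filename="ssd_mobilenet_v3_small_coco.uff",
)

logger = trt.Logger(trt.Logger.VERBOSE)
trt.init_libnvinfer_plugins(logger, "")  # needed for the NMS plugin

with trt.Builder(logger) as builder, builder.create_network() as network, \
        trt.UffParser() as parser:
    parser.register_input("Input", (3, 320, 320))  # CHW input, placeholder shape
    parser.register_output("NMS")
    # The elementwise-dimension error quoted above is raised here:
    parser.parse("ssd_mobilenet_v3_small_coco.uff", network)
```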