I’m trying to convert a TensorFlow detection model (MobileNetV2) into a TensorRT model. The input size is rectangular (640x360 [w×h]).
In the release notes for TensorRT 7.2.1 OSS it is stated that the GridAnchorRect_TRT plugin with rectangular feature maps has been re-enabled.
(TensorRT OSS release v7.2.1 by rajeevsrao · Pull Request #835 · NVIDIA/TensorRT · GitHub)
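For context, this is why the rectangular variant matters here: with a 640x360 input the SSD feature maps are no longer square. A quick sketch, assuming the usual Mobilenet-SSD stride schedule (the strides are my assumption, not read from the model):

```python
import math

# Sketch: SSD feature-map sizes for a rectangular input, assuming the
# usual Mobilenet-SSD stride schedule (16, 32, ..., 512). GridAnchor_TRT
# expects square maps (H == W); GridAnchorRect_TRT handles H != W.
def feature_map_shapes(width, height, strides=(16, 32, 64, 128, 256, 512)):
    return [(math.ceil(height / s), math.ceil(width / s)) for s in strides]

print(feature_map_shapes(300, 300))  # square: (19, 19), (10, 10), ...
print(feature_map_shapes(640, 360))  # rectangular: (23, 40), (12, 20), ...
```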
Unfortunately, the conversion fails: while building the TRT engine, the UFF parser can’t find the GridAnchorRect_TRT plugin (more information below).
TensorRT Version: 7.2.1
GPU Type: GTX1070
Nvidia Driver Version: 460.32.03
CUDA Version: 11.1
CUDNN Version: 8.0.4
Operating System + Version: Ubuntu18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.3
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorrt:20.10-py3
The conversion works with these files for a square input (300x300).
This file contains the config information for the UFF parser:
uff_converter.py (3.7 KB)
This file converts the UFF file into a TRT file:
trt_engine_build_from_uff.py (1.0 KB)
frozen_inference_graph.pb (18.2 MB)
frozen_inference_graph.uff (17.9 MB)
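For reference, here is a sketch of what I understand the plugin node parameters would have to look like for the rectangular case. The field names follow the standard GridAnchor_TRT sample configs, and the shape values are my assumption for a 640x360 input; none of this is taken verbatim from my working 300x300 config:

```python
# Sketch of GridAnchorRect_TRT node parameters for the UFF-parser config
# (field names modeled on the GridAnchor_TRT sample configs; the concrete
# values are assumptions for a 640x360 input):
grid_anchor_rect = {
    "op": "GridAnchorRect_TRT",
    "numLayers": 6,
    "minSize": 0.2,
    "maxSize": 0.95,
    "aspectRatios": [1.0, 2.0, 0.5, 3.0, 1.0 / 3.0],
    "variance": [0.1, 0.1, 0.2, 0.2],
    # H, W per feature map -> 2 * numLayers entries, since maps are rectangular
    "featureMapShapes": [23, 40, 12, 20, 6, 10, 3, 5, 2, 3, 1, 2],
}

# sanity check: the rectangular plugin needs an H,W pair per layer
assert len(grid_anchor_rect["featureMapShapes"]) == 2 * grid_anchor_rect["numLayers"]
```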
Steps To Reproduce
- Pull and use the TensorRT 7.2.1 Docker container.
- Start the Docker container (with a volume mounted to the host), enter /opt/tensorrt/python, and run the python_setup.sh script (installs e.g. uff, graphsurgeon, etc.).
- Clone TensorRT OSS into the /workspace directory inside the container. This directory is mounted as a volume to the host.
git clone -b master https://github.com/nvidia/TensorRT TensorRT
git submodule update --init --recursive
- Create a build directory inside /workspace/TensorRT:
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=$(pwd)
Now I have some folders inside /build, e.g. /plugin, /samples, etc.
Every /plugin folder contains a Makefile, but running make there does nothing.
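As far as I understand, the OSS build is meant to be driven from the top-level build directory, not from the per-plugin folders (the Makefiles there are CMake-generated). A sketch of the sequence I believe is intended, assuming the paths from the steps above; the library destination is an assumption for the 20.10 container:

```shell
cd /workspace/TensorRT/build
cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=$(pwd)/out
make -j$(nproc)
# Replace the stock plugin library with the freshly built one so that
# GridAnchorRect_TRT gets registered (assumed install location):
cp out/libnvinfer_plugin.so* /usr/lib/x86_64-linux-gnu/
ldconfig
```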
- Transform my frozen_inference_graph.pb into .uff (working) and start building the TRT engine (not working). The model files and conversion scripts are inside the mounted volume.
root@8f250d1c3328:/workspace/uff/test5# python3 trt_engine_build_from_uff.py
[TensorRT] ERROR: UffParser: Validator error: GridAnchor: Unsupported operation _GridAnchorRect_TRT
Building TensorRT engine, this may take a few minutes…
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
Traceback (most recent call last):
  File "trt_engine_build_from_uff.py", line 27, in <module>
    buf = trt_engine.serialize()
AttributeError: 'NoneType' object has no attribute 'serialize'
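From what I understand, the AttributeError is only a symptom: the engine is None because parsing failed, and the parser can only resolve GridAnchorRect_TRT if the rebuilt libnvinfer_plugin.so is loaded and the plugin registry initialized before parsing. A sketch of how I would do that at the top of trt_engine_build_from_uff.py (the library path is an assumption from my build setup):

```python
import ctypes

# Assumed path to the OSS-built plugin library; adjust to your TRT_OUT_DIR.
PLUGIN_LIB = "/workspace/TensorRT/build/out/libnvinfer_plugin.so"

def load_trt_plugins(path=PLUGIN_LIB):
    """Load the rebuilt plugin library and register its plugins with
    TensorRT, so the UFF parser can resolve GridAnchorRect_TRT."""
    try:
        ctypes.CDLL(path)  # makes the rebuilt plugins available
    except OSError as exc:
        return False, str(exc)
    import tensorrt as trt  # imported late, so the path check above runs first
    trt.init_libnvinfer_plugins(trt.Logger(trt.Logger.WARNING), "")
    return True, "plugins registered"
```

This would be called once, before creating the builder and the UFF parser.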
How can I use the GridAnchorRect_TRT plugin to convert my model into a TRT model?
Is my approach correct, or is there an easier way? (ONNX conversion isn’t working for me.) I don’t want to install TensorRT directly on my computer.
Unfortunately, the conversion also fails when I follow the instructions from the TensorRT OSS GitHub page (building the Docker image works, but the conversion still doesn’t). (GitHub - NVIDIA/TensorRT at release/7.2)
Thanks for any help