How to use the TensorRT_OSS plugins (e.g. GridAnchorPlugin) inside TensorRT Docker Container

Description

I'm trying to convert a TensorFlow detection model (MobileNetV2) into a TensorRT model. The input size is rectangular (640x360 [w x h]).
The release notes for TensorRT 7.2.1 OSS state that the GridAnchorRect_TRT plugin with rectangular feature maps has been re-enabled.
(TensorRT OSS release v7.2.1 by rajeevsrao · Pull Request #835 · NVIDIA/TensorRT · GitHub)
Unfortunately the conversion does not work, because the GridAnchorRect_TRT plugin cannot be found while the TRT engine is being built (more information below).
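
As a quick sanity check, the plugin registry can be listed from Python inside the container to see whether a creator named GridAnchorRect_TRT is registered at all. This is a minimal sketch using the standard TensorRT Python API, not taken from the attached scripts:

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)
    # Register all plugin creators from the libnvinfer_plugin that is loaded at runtime.
    trt.init_libnvinfer_plugins(TRT_LOGGER, "")

    # Print the names of all plugin creators known to this TensorRT build.
    registry = trt.get_plugin_registry()
    print(sorted(c.name for c in registry.plugin_creator_list))

If GridAnchorRect_TRT is missing from that list, the plugin library currently loaded in the container does not provide it, and the OSS build described below is needed.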

Environment

TensorRT Version: 7.2.1
GPU Type: GTX1070
Nvidia Driver Version: 460.32.03
CUDA Version: 11.1
CUDNN Version: 8.0.4
Operating System + Version: Ubuntu18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable): 1.15.3
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag): nvcr.io/nvidia/tensorrt:20.10-py3

Relevant Files

The conversion works with these files for a square input (300x300):

  1. This file contains the configuration for the UFF parser (a sketch of such a config follows this list)
    uff_converter.py (3.7 KB)

  2. This file converts the UFF file into a TRT engine
    trt_engine_build_from_uff.py (1.0 KB)

  3. frozen_inference_graph.pb (18.2 MB)

  4. frozen_inference_graph.uff (17.9 MB)
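
For reference, a UFF converter config for a rectangular input would typically collapse the TensorFlow anchor-generator namespace onto a GridAnchorRect_TRT plugin node with graphsurgeon. The sketch below is an assumption based on the standard SSD UFF sample config, not the attached uff_converter.py; the attribute names, values, and feature-map shapes must be checked against plugin/gridAnchorPlugin in the OSS repo and against the actual model.

    import graphsurgeon as gs
    import uff

    # Plugin node replacing the TF anchor generator. Parameter names and
    # values are illustrative assumptions for a 640x360 SSD-style model.
    GridAnchor = gs.create_plugin_node(
        name="GridAnchor",
        op="GridAnchorRect_TRT",
        numLayers=6,
        minSize=0.2,
        maxSize=0.95,
        aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
        variance=[0.1, 0.1, 0.2, 0.2],
        # For the rectangular variant the feature maps are given per layer as
        # H/W pairs (exact encoding is an assumption; see the plugin creator).
        featureMapShapes=[23, 40, 12, 20, 6, 10, 3, 5, 2, 3, 1, 2],
    )

    # Namespace name is an assumption based on TF Object Detection API SSD graphs.
    namespace_plugin_map = {"MultipleGridAnchorGenerator": GridAnchor}

    graph = gs.DynamicGraph("frozen_inference_graph.pb")
    graph.collapse_namespaces(namespace_plugin_map)
    uff.from_tensorflow(
        graph.as_graph_def(),
        output_nodes=["NMS"],            # output node name is an assumption
        output_filename="frozen_inference_graph.uff",
        text=False,
    )

A full config also has to map the input, concat, and postprocessor/NMS nodes, as in the SSD UFF sample; only the anchor part is shown here.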

Steps To Reproduce

Workflow

  1. Use TensorRT 7.2.1 Docker Container (pull)
  2. Start the Docker container (with a volume mounted from the host), enter /opt/tensorrt/python, and run the python_setup.sh script (installs e.g. uff, graphsurgeon, etc.)
  3. Clone TensorRT OSS into the /workspace directory inside the container. This directory is mounted as a volume on the host.
    git clone -b master https://github.com/nvidia/TensorRT TensorRT
    cd TensorRT
    git submodule update --init --recursive
    export TRT_SOURCE=`pwd`
  4. Create a build directory inside /workspace/TensorRT and build:
    mkdir -p build && cd build
    cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_OUT_DIR=`pwd`/out
    make -j$(nproc)
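
The build above produces a libnvinfer_plugin shared library under the out directory that contains the GridAnchorRect_TRT creator. For the UFF parser to find it, that library has to be the one actually loaded at runtime: one option is to replace the container's stock libnvinfer_plugin.so with the rebuilt one, another is to load it explicitly from Python before parsing. The snippet below is a sketch of the second option; the library path is an assumption based on the -DTRT_OUT_DIR setting above, and the exact .so file name (version suffix) should be checked in the out directory.

    import ctypes

    # Load the rebuilt OSS plugin library so its creators end up in the
    # global plugin registry (path and version suffix are assumptions).
    ctypes.CDLL("/workspace/TensorRT/build/out/libnvinfer_plugin.so",
                mode=ctypes.RTLD_GLOBAL)

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)
    trt.init_libnvinfer_plugins(TRT_LOGGER, "")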

Now there are several folders inside /build, e.g. /plugin, /samples, etc.
Each plugin folder contains a Makefile, but running make there does nothing.

  5. Convert my frozen_inference_graph.pb into .uff (working) and start building the TRT engine (not working); a sketch of the engine-build step follows below. The model files and conversion scripts are inside the mounted volume.
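
For context, a minimal engine-build script of this kind might look as follows. This is a sketch under assumptions (input/output node names, shapes, and file names are guesses, not taken from the attached trt_engine_build_from_uff.py); the important detail is that the plugin library must be loaded and init_libnvinfer_plugins called before the parser runs, otherwise _GridAnchorRect_TRT cannot be resolved.

    import tensorrt as trt

    TRT_LOGGER = trt.Logger(trt.Logger.INFO)
    # Register all plugin creators (including any library pre-loaded via ctypes)
    # before parsing the UFF file.
    trt.init_libnvinfer_plugins(TRT_LOGGER, "")

    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network() as network, \
            trt.UffParser() as parser:
        parser.register_input("Input", (3, 360, 640))  # CHW; shape is an assumption
        parser.register_output("NMS")                   # output node name is an assumption
        if not parser.parse("frozen_inference_graph.uff", network):
            raise RuntimeError("UFF parsing failed; see parser errors above.")

        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 30
        engine = builder.build_cuda_engine(network)
        if engine is None:
            raise RuntimeError("Engine build failed.")
        with open("model.trt", "wb") as f:
            f.write(engine.serialize())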

Error
root@8f250d1c3328:/workspace/uff/test5# python3 trt_engine_build_from_uff.py
[TensorRT] ERROR: UffParser: Validator error: GridAnchor: Unsupported operation _GridAnchorRect_TRT
Building TensorRT engine, this may take a few minutes…
[TensorRT] ERROR: Network must have at least one output
[TensorRT] ERROR: Network validation failed.
Traceback (most recent call last):
File “trt_engine_build_from_uff.py”, line 27, in
buf = trt_engine.serialize()
AttributeError: ‘NoneType’ object has no attribute ‘serialize’

Question:

  • How can I use the GridAnchorRect_TRT plugin to convert my model into a TRT model?

  • Is my approach correct, or is there an easier way? (ONNX conversion isn't working for me.) I don't want to install TensorRT directly on my computer.

  • Unfortunately the conversion also doesn't work when I follow the description on the TensorRT OSS GitHub page (building the Docker image works, but the conversion still fails). (GitHub - NVIDIA/TensorRT at release/7.2)

Thanks for any help

Hi, please check the reference link below for a custom plugin implementation.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/sampleOnnxMnistCoordConvAC

Thanks!