Custom SSD_v2 model does not convert to TRT engine

Dear Team,

Kindly help me resolve this problem.

[TensorRT] INFO: UFFParser: Applying order forwarding to: Squeeze
[TensorRT] INFO: UFFParser: parsing GridAnchor
[libprotobuf FATAL /home/erisuser/p4sw/sw/gpgpu/MachineLearning/DIT/externals/protobuf/aarch64/10.0/include/google/protobuf/repeated_field.h:1408] CHECK failed: (index) < (current_size_)

Note:
The custom SSD_v2 model was trained with tensorflow_v1.12.0. Now I am trying to convert the frozen_graph.pb to a TRT engine (.bin) and I get the error above.

If I convert the pre-trained SSD_v2 model, it converts to the TRT engine format without any problem. So why does the problem occur when I convert the custom SSD_v2 model?

Hi,

This is a known limitation.

A workaround is to re-train the model with this change in multiple_grid_anchor_generator.py:

diff --git a/multiple_grid_anchor_generator.py b/multiple_grid_anchor_generator.py
index 86007c9..12da3bc 100644
--- a/multiple_grid_anchor_generator.py
+++ b/multiple_grid_anchor_generator.py
@@ -95,7 +95,8 @@ class MultipleGridAnchorGenerator(anchor_generator.AnchorGenerator):
       raise ValueError('box_specs_list is expected to be a '
                        'list of lists of pairs')
     if base_anchor_size is None:
-      base_anchor_size = [256, 256]
+      base_anchor_size = [256., 256.]
+    base_anchor_size = tf.constant(base_anchor_size, dtype=tf.float32)
     self._base_anchor_size = base_anchor_size
     self._anchor_strides = anchor_strides
     self._anchor_offsets = anchor_offsets

You can find more information in this topic:
https://devtalk.nvidia.com/default/topic/1069027/tensorrt/parsing-gridanchor-op-gridanchor_trt-protobuf-repeated_field-h-1408-check-failed-index-lt-current_size-/?offset=3#5415537

Thanks.

Hello AastaLLL,

After doing that, I got this error:

[TensorRT] ERROR: UffParser: Parser error: image_tensor: Invalid DataType value!
[TensorRT] ERROR: Network must have at least one output

Hi,

Sorry for the late reply.

Have you re-trained the model with the patch in comment #2?
If yes, would you mind sharing the re-trained model with us for debugging?

Thanks.

Hi,

Yes, I re-trained the model with the changes you suggested in comment #2.

FYI, the model is shared below.

Thanks.

Hi,

We cannot download the model; we don't have permission.
Could you enable access for us?

Thanks.

Hi,

Check the update.

Hi,

The protobuf error can be fixed by appending a dummy constant tensor to the GridAnchor_TRT layer.
However, there is another issue in the concatenate layer that is still under investigation.

The change we made for this error is attached for your reference: topic_112757.txt (1.5 KB)
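
For context, here is a minimal graphsurgeon sketch of the dummy-constant idea; the node name is illustrative, and the authoritative change is the attached file:

import graphsurgeon as gs
import numpy as np
import tensorflow as tf

# Sketch only: a dummy Const node fed into the GridAnchor_TRT plugin node
# so the UFF parser does not hit the empty repeated_field CHECK.
dummy_const = gs.create_node(name="dummy_const", op="Const",
                             dtype=tf.float32,
                             value=np.array([1, 1], dtype=np.float32))

def preprocess(dynamic_graph):
    # ... the usual collapse_namespaces / remove steps go here ...
    dynamic_graph.append(dummy_const)
    dynamic_graph.find_nodes_by_op("GridAnchor_TRT")[0].input.append("dummy_const")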

Thanks.

Thanks Aasta for the update.
Kindly keep us updated once the problem has been resolved.

Regards.

Hi,

Thanks for your patience.
We are still checking this issue.

To give a further suggestion, could you tell us how many classes your model is trained for?
Thanks.

Hi,

There are three classes.

Thanks.

Hi,

Sorry that it took us some time to fix this issue.

We confirmed that your .pb model can be converted to a TensorRT engine with the following config.py:

#
# Copyright 1993-2019 NVIDIA Corporation.  All rights reserved.
#
# NOTICE TO LICENSEE:
#
# This source code and/or documentation ("Licensed Deliverables") are
# subject to NVIDIA intellectual property rights under U.S. and
# international Copyright laws.
#
# These Licensed Deliverables contained herein is PROPRIETARY and
# CONFIDENTIAL to NVIDIA and is being provided under the terms and
# conditions of a form of NVIDIA software license agreement by and
# between NVIDIA and Licensee ("License Agreement") or electronically
# accepted by Licensee.  Notwithstanding any terms or conditions to
# the contrary in the License Agreement, reproduction or disclosure
# of the Licensed Deliverables to any third party without the express
# written consent of NVIDIA is prohibited.
#
# NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
# LICENSE AGREEMENT, NVIDIA MAKES NO REPRESENTATION ABOUT THE
# SUITABILITY OF THESE LICENSED DELIVERABLES FOR ANY PURPOSE.  IT IS
# PROVIDED "AS IS" WITHOUT EXPRESS OR IMPLIED WARRANTY OF ANY KIND.
# NVIDIA DISCLAIMS ALL WARRANTIES WITH REGARD TO THESE LICENSED
# DELIVERABLES, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY,
# NONINFRINGEMENT, AND FITNESS FOR A PARTICULAR PURPOSE.
# NOTWITHSTANDING ANY TERMS OR CONDITIONS TO THE CONTRARY IN THE
# LICENSE AGREEMENT, IN NO EVENT SHALL NVIDIA BE LIABLE FOR ANY
# SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, OR ANY
# DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
# WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
# ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
# OF THESE LICENSED DELIVERABLES.
#
# U.S. Government End Users.  These Licensed Deliverables are a
# "commercial item" as that term is defined at 48 C.F.R. 2.101 (OCT
# 1995), consisting of "commercial computer software" and "commercial
# computer software documentation" as such terms are used in 48
# C.F.R. 12.212 (SEPT 1995) and is provided to the U.S. Government
# only as a commercial end item.  Consistent with 48 C.F.R.12.212 and
# 48 C.F.R. 227.7202-1 through 227.7202-4 (JUNE 1995), all
# U.S. Government End Users acquire the Licensed Deliverables with
# only those rights set forth herein.
#
# Any use of the Licensed Deliverables in individual and commercial
# software must include, in the user documentation and internal
# comments to the code, the above Disclaimer and U.S. Government End
# Users Notice.
#

import graphsurgeon as gs
import tensorflow as tf
import numpy as np

Input = gs.create_node("Input",
    op="Placeholder",
    dtype=tf.float32,
    shape=[1, 3, 300, 300])
PriorBox = gs.create_plugin_node(name="GridAnchor", op="GridAnchor_TRT",
    numLayers=6,
    minSize=0.2,
    maxSize=0.95,
    aspectRatios=[1.0, 2.0, 0.5, 3.0, 0.33],
    variance=[0.1,0.1,0.2,0.2],
    featureMapShapes=[19, 10, 5, 3, 2, 1])
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,
    varianceEncodedInTarget=0,
    backgroundLabelId=0,
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=3,
    inputOrder= [0, 2, 1],
    confSigmoid=1,
    isNormalized=1)
concat_priorbox = gs.create_node(name="concat_priorbox", op="ConcatV2", dtype=tf.float32, axis=2)
concat_box_loc = gs.create_plugin_node("concat_box_loc", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
concat_box_conf = gs.create_plugin_node("concat_box_conf", op="FlattenConcat_TRT", dtype=tf.float32, axis=1, ignoreBatch=0)
dummy_const = gs.create_node(name="dummy_const", op="Const", dtype=tf.float32, value=np.array([1, 1], dtype=np.float32))

namespace_plugin_map = {
    "Concatenate": concat_priorbox,
    "MultipleGridAnchorGenerator": PriorBox,
    "Postprocessor": NMS,
    "image_tensor": Input,
    "Cast": Input,
    "ToFloat": Input,
    "Preprocessor": Input,
    "concat": concat_box_loc,
    "concat_1": concat_box_conf
}

namespace_remove = {
    "ToFloat",
    "image_tensor",
    "Preprocessor/map/TensorArrayStack_1/TensorArrayGatherV3",
}

def preprocess(dynamic_graph):
    dynamic_graph.remove(dynamic_graph.find_nodes_by_path(namespace_remove), remove_exclusive_dependencies=False)
    # Now create a new graph by collapsing namespaces
    dynamic_graph.collapse_namespaces(namespace_plugin_map)
    # Remove the outputs, so we just have a single output node (NMS).
    dynamic_graph.remove(dynamic_graph.graph_outputs, remove_exclusive_dependencies=False)
    # Append a dummy constant tensor as an extra input to the GridAnchor_TRT
    # node to avoid the protobuf repeated_field CHECK failure.
    dynamic_graph.append(dummy_const)
    dynamic_graph.find_nodes_by_op("GridAnchor_TRT")[0].input.append("dummy_const")

Then convert the frozen graph to UFF and build the TensorRT engine:

$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o sample_ssd_relu6.uff -O NMS -p config.py
$ /usr/src/tensorrt/bin/trtexec --uff=sample_ssd_relu6.uff --uffInput=Input,3,300,300 --output=NMS
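
If you prefer to build and serialize the engine from Python instead of trtexec, a rough sketch with the TensorRT Python API is below (file paths, batch size, and workspace size are assumptions):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
# Register the built-in TensorRT plugins (GridAnchor_TRT, NMS_TRT, ...)
# before parsing the UFF graph.
trt.init_libnvinfer_plugins(TRT_LOGGER, '')

with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("Input", (3, 300, 300))
    parser.register_output("NMS")
    parser.parse("sample_ssd_relu6.uff", network)

    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 28  # assumed 256 MB workspace

    engine = builder.build_cuda_engine(network)
    with open("sample_ssd_relu6.bin", "wb") as f:
        f.write(engine.serialize())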

Please let us know your results.
Thanks.

Thank you AastaLLL for your response.
I’m not able to test right now because of the COVID-19 pandemic. I will update you after testing.

Sure. Stay safe!