Unable to deploy TLT pruned models on Nano properly

I have trained a DetectNet_v2 ResNet-10 model, pruned it, and then exported it via the tlt-export command. I used the following command to export:

!tlt-export $USER_EXPERIMENT_DIR/experiment_dir_retrain/${MODEL_FILE} \
            -o $USER_EXPERIMENT_DIR/experiment_dir_final/${FINAL_MODEL} \
            --outputs output_cov/Sigmoid,output_bbox/BiasAdd \
            --enc_key $KEY \
            --data_type fp16 \
            --input_dims c,h,w\
            --export_module detectnet_v2

The model got exported to .etlt format. Then I copied the .etlt model to the Nano and downloaded the tlt-converter tool for the Nano from https://developer.nvidia.com/tlt-converter-trt60.

I ran the tlt-converter command as follows:

./tlt-converter  -k ${KEY} -o output_cov/Sigmoid,output_bbox/BiasAdd -d c,h,w -i nchw -m 16 -e engines/${ENGINE_FILE} ${INPUT_MODEL_FILE}


I didn’t get any output on the terminal, but the engine file was generated. I made the necessary path changes in the spec file and tried running deepstream-app. The app runs, but the following error is thrown:

gstnvtracker: Batch processing is OFF
0:00:04.780522987 11044   0x5587b01320 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): INVALID_ARGUMENT: Can not find binding of given name
0:00:04.780599551 11044   0x5587b01320 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:checkEngineParams(): Could not find output layer 'conv2d_bbox' in engine
0:00:04.780641948 11044   0x5587b01320 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): INVALID_ARGUMENT: Can not find binding of given name
0:00:04.780675021 11044   0x5587b01320 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[

I have followed the steps exactly as mentioned in the documentation.
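As an aside, a detail that is easy to miss here: tlt-converter's -o flag takes a comma-separated list, while DeepStream's output-blob-names property takes a semicolon-separated one, and the names passed to -o become the engine's output bindings. A minimal sketch of the converter invocation as an argv list (the key, dims, and file names below are hypothetical placeholders, not values from this thread):

```python
# Hypothetical values for illustration only -- substitute your own key,
# input dims (C,H,W of the training input), and model/engine paths.
key = "MY_TLT_KEY"
dims = "3,368,640"

cmd = [
    "./tlt-converter",
    "-k", key,
    # Comma-separated here; the DeepStream config later needs the SAME
    # names, but joined with ';' in output-blob-names.
    "-o", "output_cov/Sigmoid,output_bbox/BiasAdd",
    "-d", dims,
    "-i", "nchw",
    "-m", "16",
    "-e", "engines/resnet10_detector.engine",
    "resnet10_detector.etlt",
]
print(" ".join(cmd))
```

Whatever names follow -o are the only output bindings the generated engine exposes, which is why a mismatched output-blob-names in the DeepStream config produces the "Can not find binding of given name" errors above.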

Hi neophyte1,
What is your TensorRT version on your Nano?

Hi neophyte1,

We haven’t heard back from you in a couple of weeks, so we are marking this topic closed.
Please open a new forum topic when you are ready and we’ll pick it up there.

I have a similar problem. I ran tlt-converter on a Jetson Nano using this command:

tlt-converter resnet18_detector.etlt \
               -k $KEY \
               -c calibration.bin \
               -o output_cov/Sigmoid,output_bbox/BiasAdd \
               -d 3,384,1248 \
               -i nchw \
               -m 64 \
               -t int8 \
               -e resnet18_detector.trt \
               -b 4

This created a .trt engine, which I put in the samples folder. I changed my config file to:

################################################################################
# Copyright (c) 2018-2019, NVIDIA CORPORATION. All rights reserved.
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
################################################################################

# Following properties are mandatory when engine files are not specified:
#   int8-calib-file(Only in INT8)
#   Caffemodel mandatory properties: model-file, proto-file, output-blob-names
#   UFF: uff-file, input-dims, uff-input-blob-name, output-blob-names
#   ONNX: onnx-file
#
# Mandatory properties for detectors:
#   num-detected-classes
#
# Optional properties for detectors:
#   enable-dbscan(Default=false), interval(Primary mode only, Default=0)
#   custom-lib-path
#   parse-bbox-func-name
#
# Mandatory properties for classifiers:
#   classifier-threshold, is-classifier
#
# Optional properties for classifiers:
#   classifier-async-mode(Secondary mode only, Default=false)
#
# Optional properties in secondary mode:
#   operate-on-gie-id(Default=0), operate-on-class-ids(Defaults to all classes),
#   input-object-min-width, input-object-min-height, input-object-max-width,
#   input-object-max-height
#
# Following properties are always recommended:
#   batch-size(Default=1)
#
# Other optional properties:
#   net-scale-factor(Default=1), network-mode(Default=0 i.e FP32),
#   model-color-format(Default=0 i.e. RGB) model-engine-file, labelfile-path,
#   mean-file, gie-unique-id(Default=0), offsets, gie-mode (Default=1 i.e. primary),
#   custom-lib-path, network-mode(Default=0 i.e FP32)
#
# The values in the config file are overridden by values set through GObject
# properties.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-engine-file=../../../../samples/models/Primary_Detector_pc/resnet18_detector.trt
labelfile-path=../../../../samples/models/Primary_Detector_pc/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector_pc/calibration.bin
batch-size=1
process-mode=1
model-color-format=0
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid

[class-attrs-all]
threshold=0.2
eps=0.2
group-threshold=1
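For what it's worth, output-blob-names is parsed by nvinfer as a semicolon-separated list, so each name between the semicolons must match an engine output binding exactly. A small stdlib-only sketch (the embedded config text is just an excerpt mirroring the file above, not a complete DeepStream config):

```python
import configparser

# Minimal excerpt of an nvinfer config, as in the post above.
CONFIG_TEXT = """
[property]
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG_TEXT)

# Split the property on ';' to get the individual layer names nvinfer
# will look up as engine bindings.
blobs = parser["property"]["output-blob-names"].split(";")
print(blobs)  # ['conv2d_bbox', 'conv2d_cov/Sigmoid']
```

Each of these names is looked up verbatim in the engine; the error log below shows what happens when the lookup fails.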

When I ran the model using:

./deepstream-test3-app \
file:///home/dlinano/deepstream_sdk_v4.0.2_jetson/samples/streams/sample.mp4

it generates:
** DeepStream: Launched RTSP Streaming at rtsp://localhost:8554/ds-test ***

Now playing: file:///home/dlinano/deepstream_sdk_v4.0.2_jetson/samples/streams/sample.mp4,
Opening in BLOCKING MODE 
Creating LL OSD context new
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_nvdcf.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is ON
[NvDCF] Initialized
0:00:08.120127810 10912   0x55a3e5aef0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:log(): INVALID_ARGUMENT: Can not find binding of given name
0:00:08.120201820 10912   0x55a3e5aef0 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:checkEngineParams(): Could not find output layer 'conv2d_bbox' in engine
0:00:08.120232341 10912   0x55a3e5aef0 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:log(): INVALID_ARGUMENT: Can not find binding of given name
0:00:08.120268695 10912   0x55a3e5aef0 WARN                 nvinfer gstnvinfer.cpp:515:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]:checkEngineParams(): Could not find output layer 'conv2d_cov/Sigmoid' in engine

Hi marco,
What is your TensorRT version on your Nano?

Hi Morganh,

I used TensorRT version 6.0.1.10-1

Please change the output-blob-names line in your DeepStream config file to

output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd

reference: https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#deepstream_deployment
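To make the failure mode concrete: the layer names passed to tlt-export/tlt-converter via --outputs / -o are the engine's only output bindings, so any name in output-blob-names that is not among them triggers "Could not find output layer ... in engine". A small illustrative check (not part of TLT or DeepStream, just a sketch of the matching rule):

```python
# Output bindings the engine exposes, i.e. the names given to -o above.
exporter_outputs = {"output_cov/Sigmoid", "output_bbox/BiasAdd"}

def missing_blobs(output_blob_names: str) -> set:
    """Return configured blob names that the engine does not expose."""
    return set(output_blob_names.split(";")) - exporter_outputs

# The original config used the Caffe-style names -> both are missing:
print(missing_blobs("conv2d_bbox;conv2d_cov/Sigmoid"))
# After the suggested fix, everything resolves:
print(missing_blobs("output_cov/Sigmoid;output_bbox/BiasAdd"))  # set()
```

With the corrected line, the set of unresolved names is empty and nvinfer finds both bindings.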
