tlt-converter on Jetson Nano

I want to convert the .etlt file to a TensorRT engine file (trt/plan/engine) for Jetson Nano.

Description

I trained DetectNet-v2 (ResNet-18) on KITTI with the docker image nvcr.io/nvidia/tlt-streamanalytics:v1.0_py2, and I completed all the steps to export the trained model to .etlt.

Environment

TensorRT Version : 5.1.6.1-1+cuda10.0
GPU Type : Jetson Nano
Nvidia Driver Version : 4.2.2 [L4T 32.2.1]
CUDA Version : 10.0.326
CUDNN Version : 7.5.0.56-1+cuda10.0
Operating System + Version : Ubuntu 18.04, Linux kernel 4.9.140
Python Version (if applicable) : 3.6.9

Question

1- To run on Jetson Nano, I need to do the conversion on the Nano itself. Do I need to do this step with DeepStream or with TLT on the Jetson Nano? Is it possible to run this step with the docker image nvcr.io/nvidia/tlt-streamanalytics:v1.0_py2 on the Jetson Nano? If so, do I also need to download tlt-converter on the Jetson Nano?

I want to run :

tlt-converter /workspace/tmp/experiment_dir_final/resnet18_detector.etlt \
               -k <key> \
               -c /workspace/tmp/experiment_dir_final/calibration.bin \
               -o output_cov/Sigmoid,output_bbox/BiasAdd \
               -d 3,384,1248 \
               -i nchw \
               -m 64 \
               -t int8 \
               -e /workspace/tmp/experiment_dir_final/resnet18_detector.trt \
               -b 4

If I download tlt-converter for the Jetson Nano, since TLT already has a built-in tlt-converter, how do I do this step so the two don't conflict?

I suggest you read the TLT user guide first: https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html#gen_eng_tlt_converter
Or see my explanation here.

  1. There are two ways to run on the Nano: 1) use the .etlt model directly, or 2) use tlt-converter to generate a TRT engine. See the TLT user guide.

Option 1: Integrate the model (.etlt) with the encryption key directly in the DeepStream app. The model file is generated by tlt-export.

Option 2: Generate a device specific optimized TensorRT engine, using tlt-converter. The TensorRT engine file can also be ingested by DeepStream.
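As a sketch of Option 1, the nvinfer config only needs to point at the .etlt and carry the export key. Paths and the key below are placeholders; on the first start-up DeepStream builds the engine from the .etlt (which can take several minutes on a Nano) and caches it, and the engine file name shown follows the pattern DeepStream typically uses when it serializes that cache.

```
[property]
tlt-encoded-model=models/resnet18_detector.etlt
tlt-model-key=<key used at tlt-export time>
# Optional: if this file exists, DeepStream loads it instead of
# rebuilding from the .etlt on every start-up.
model-engine-file=models/resnet18_detector.etlt_b1_gpu0_fp16.engine
```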

  2. The .etlt model is generated when you run tlt-export inside the docker container.
    The TRT engine can be generated inside the docker container or on the Jetson Nano. If you run inference on the Nano, you must use the Jetson-platform build of tlt-converter to generate the TRT engine on the Nano. So yes, you need to download the Jetson-platform version of tlt-converter.
    For Jetson platform and trt 5.1, please download from https://developer.nvidia.com/tlt-converter-trt51
    For Jetson platform and trt 6 , please download from https://developer.nvidia.com/tlt-converter-trt60
    For Jetson platform and trt 7.1, please download from https://developer.nvidia.com/tlt-converter-trt71
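The download-and-prepare steps on the Nano can be sketched as below. This is a sketch under assumptions: the exact archive name and layout vary by release, and the libssl dependency plus the TRT_LIB_PATH/TRT_INC_PATH exports follow the TLT deployment guide linked above — adjust to what the download page actually serves.

```shell
# On the Nano: fetch the Jetson build that matches the installed
# TensorRT (TRT 5.1 here, per the environment listed above).
wget -O tlt-converter.zip https://developer.nvidia.com/tlt-converter-trt51
unzip tlt-converter.zip -d tlt-converter-pkg && cd tlt-converter-pkg

# Dependencies and header/library paths per the TLT deployment guide.
sudo apt-get install -y libssl-dev
export TRT_LIB_PATH=/usr/lib/aarch64-linux-gnu
export TRT_INC_PATH=/usr/include/aarch64-linux-gnu

chmod +x tlt-converter
./tlt-converter -h    # sanity check: should print the usage text
```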

Hi @Morganh,
What is the correct way to do Option 1?
I'm trying with this primary-gie section in the config file:

[primary-gie]
enable=1
gpu-id=0
model-engine-file=/opt/nvidia/deepstream/deepstream/controlflow/models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt.engine
batch-size=8
#Required by the app for OSD, not a plugin property
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=10
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_controlflow.txt 

where the config_infer_controlflow.txt is

# Copyright (c) 2020 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=<Of course I'm not posting my key on the internet>
#tlt-encoded-model=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain_pf16.etlt
tlt-encoded-model=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt
labelfile-path=../models/Controlflow_tlt/labels.txt
#int8-calib-file=../models/Controlflow_tlt/dashcamnet_int8.txt
#model-engine-file=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain_fp16.etlt.engine
model-engine-file=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt.engine
#input-dims=3;384;1248;0
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size= 1 #8 #3
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=7
interval=2
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

[class-attrs-all]
pre-cluster-threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0

Am I missing something, or is it more likely that my model was not exported correctly (as I suspect, given the errors referred to in the post below)?
Error while executing the fastest RCNN example on the tlt officialy provided docker in my intel computer

Thank you in advance

Please refer to the files under /opt/nvidia/deepstream/deepstream/samples/configs/tlt_pretrained_models

sudo ./tlt-converter /opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.etlt \
               -k Nmk4Mzl1ZW4zcXNqZHZqM3AwbmRoOThlcGI6OGRiYjViOWUtYWIzMS00MWFkLWFkY2ItZjM1NTZiN2U4MmFj \
               -c /opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/calibration.bin \
               -o output_bbox/BiasAdd,output_cov/Sigmoid \
               -d 3,1072,1920 \
               -i nchw \
               -m 64 \
               -t int8 \
               -e /opt/nvidia/deepstream/deepstream-5.0/samples/trtis_model_repo/barnet_models/resnet18_detector.trt \
               -b 4

bash: tlt-converter command not found.

I installed tlt-converter according to https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/deploying_to_deepstream.html#generating-an-engine-using-tlt-converter.
I didn't get any errors during the installation, but when I try to call tlt-converter I get the "command not found" error.

Please run
$ chmod +x tlt-converter
and invoke it from its directory as ./tlt-converter (the downloaded binary is not on your PATH).
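The symptom can be reproduced without the real binary. The stub script below is a stand-in for tlt-converter (an assumption purely for illustration); the PATH and permission behavior is the same for the real binary.

```shell
# Minimal reproduction of the "command not found" symptom.
mkdir -p /tmp/tlt-demo && cd /tmp/tlt-demo
# Stand-in for the downloaded tlt-converter binary.
printf '#!/bin/sh\necho "usage: tlt-converter"\n' > tlt-converter

# 1) The bare name fails: the current directory is not on $PATH.
tlt-converter 2>/dev/null || echo "tlt-converter: not found"

# 2) Mark it executable and call it with an explicit ./ path.
chmod +x tlt-converter
./tlt-converter
```

The first call prints "tlt-converter: not found"; after chmod, the explicit `./tlt-converter` invocation works.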
