I want to convert the etlt file to a TensorRT engine file (trt/plan/engine) for Jetson Nano.
Description
I trained DetectNet-v2 (ResNet-18) on KITTI with the docker image nvcr.io/nvidia/tlt-streamanalytics:v1.0_py2, and I also completed all the steps to export the trained model to .etlt.
Environment
TensorRT Version: 5.1.6.1-1+cuda10.0
GPU Type: Jetson Nano
Nvidia Driver Version: 4.2.2 [L4T 32.2.1]
CUDA Version: 10.0.326
CUDNN Version: 7.5.0.56-1+cuda10.0
Operating System + Version: Ubuntu 18.04, Linux kernel 4.9.140
Python Version (if applicable): 3.6.9
Question
1- To run on Jetson Nano I need to do the conversion on the Nano itself. Do I need to do this step with DeepStream or TLT on the Jetson Nano? Is it possible to run this step with the docker image nvcr.io/nvidia/tlt-streamanalytics:v1.0_py2 on the Jetson Nano? If so, do I also need to download tlt-converter on the Jetson Nano?
For running on the Nano, there are two ways: 1) use the etlt model directly, or 2) use tlt-converter to generate a TensorRT engine. See the TLT user guide.
Option 1: Integrate the model (.etlt), along with the encryption key, directly into the DeepStream app. The model file is generated by tlt-export.
Option 2: Generate a device specific optimized TensorRT engine, using tlt-converter. The TensorRT engine file can also be ingested by DeepStream.
The etlt model is generated when you run tlt-export inside docker.
The TensorRT engine can be generated inside docker or on the Jetson Nano. If you run inference on the Nano, you must use tlt-converter (the Jetson platform version) to generate the engine on the Nano. So yes, you need to download the Jetson platform version of tlt-converter.
For the Jetson platform and TensorRT 5.1, please download from https://developer.nvidia.com/tlt-converter-trt51
For the Jetson platform and TensorRT 6, please download from https://developer.nvidia.com/tlt-converter-trt60
For the Jetson platform and TensorRT 7.1, please download from https://developer.nvidia.com/tlt-converter-trt71
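As a concrete sketch of option 2, a tlt-converter invocation on the Nano looks roughly like the following. The key, file names, input dims, and output blob names here are assumptions (taken from the config later in this thread); check them against your exported model. Since the Jetson binary has to be downloaded first, this script only assembles and prints the command rather than executing it.

```shell
# Sketch only: assembles the tlt-converter command line for the Nano.
# KEY, model names, -d dims and -o blob names are assumptions; adjust them.
KEY="<your-ngc-encode-key>"
ETLT_MODEL="frcnn_kitti_resnet18_retrain.etlt"
OUT_ENGINE="frcnn_kitti_resnet18_retrain.engine"

# -k: encode key used at tlt-export time
# -d: input dims C,H,W    -o: comma-separated output blob names
# -t: precision (fp16 suits the Nano)    -m: max batch size
CMD="./tlt-converter -k $KEY \
  -d 3,544,960 \
  -o output_bbox/BiasAdd,output_cov/Sigmoid \
  -t fp16 -m 1 \
  -e $OUT_ENGINE \
  $ETLT_MODEL"

echo "$CMD"
```

Running the printed command on the Nano (with the Jetson tlt-converter binary in the current directory) writes the device-specific engine to the path given by -e.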
Hi @Morganh.
What is the correct way to do option 1?
I’m trying with this [primary-gie] section in the config file:
[primary-gie]
enable=1
gpu-id=0
model-engine-file=/opt/nvidia/deepstream/deepstream/controlflow/models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt.engine
batch-size=8
#Required by the app for OSD, not a plugin property
## 0=FP32, 1=INT8, 2=FP16 mode
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=10
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_controlflow.txt
where config_infer_controlflow.txt is:
# Copyright (c) 2020 NVIDIA Corporation. All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=<Of course I'm not posting my key on the internet>
#tlt-encoded-model=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain_pf16.etlt
tlt-encoded-model=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt
labelfile-path=../models/Controlflow_tlt/labels.txt
#int8-calib-file=../models/Controlflow_tlt/dashcamnet_int8.txt
#model-engine-file=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain_fp16.etlt.engine
model-engine-file=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt.engine
#input-dims=3;384;1248;0
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size= 1 #8 #3
process-mode=1
model-color-format=0
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
num-detected-classes=7
interval=2
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
[class-attrs-all]
pre-cluster-threshold=0.2
group-threshold=1
## Set eps=0.7 and minBoxes for cluster-mode=1(DBSCAN)
eps=0.2
#minBoxes=3
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
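For reference, a minimal [property] sketch for option 1, assuming the same paths, key, and blob names as in the config above: if model-engine-file is omitted (or points to a file that does not exist yet), nvinfer builds the engine from the .etlt on its first run and serializes it for later runs, so no separate tlt-converter step is needed.

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
tlt-model-key=<your key>
tlt-encoded-model=../models/Controlflow_tlt/frcnn_kitti_resnet18_retrain.etlt
labelfile-path=../models/Controlflow_tlt/labels.txt
input-dims=3;544;960;0
uff-input-blob-name=input_1
batch-size=1
network-mode=0
num-detected-classes=7
gie-unique-id=1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
```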