Description
I have an .etlt model trained in the TLT container on a GTX 1660. I also saved the .trt, .trt.int8, and calibration.bin files. Now I want to deploy this model on a Jetson Nano.
I would like to know the steps and resources to convert the .etlt model to a .trt engine compatible with Jetson.
Environment Jetson Nano
TensorRT Version : 7.1.3
GPU Type :
Nvidia Driver Version :
CUDA Version : 10.2
SunilJB
November 24, 2020, 6:26am
Request you to raise this issue in the Jetson Nano forum:
Thanks
tlt-converter did the magic.
Steps:
$ sudo apt-get install libssl-dev
$ export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
$ export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"
Move the .etlt file and calibration.bin file to the trt_model_repo directory.
Download the tlt-converter for Jetson Nano (JetPack SDK | NVIDIA Developer); the correct file depends on your TensorRT version.
Unzip the file and move tlt-converter and the Readme file to trt_model_repo.
Give execute permissions to tlt-converter:
$ sudo chmod a+rwx tlt-converter
tlt-converter [-h] -k <encryption_key>
              -d <input_dimensions>
              -o <comma-separated output node names>
              [-c <path to calibration cache file>]
              [-e <path to output engine>]
              [-b <calibration batch size>]
              [-m <maximum batch size of the TRT engine>]
              [-t <engine data type>]
              [-w <maximum workspace size of the TRT engine>]
              [-i <input dimension ordering>]
              input_file
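As a concrete illustration of the synopsis above, an INT8 engine using the saved calibration.bin might be generated like this. The key, output node names, input dimensions, and file names below are placeholders (the node names shown are typical of a DetectNet_v2 model); substitute the values from your own training run:

```shell
# Hypothetical example invocation on the Nano — adjust every placeholder
# to match your model before running.
./tlt-converter \
    -k <your_model_key> \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,384,1248 \
    -t int8 \
    -c calibration.bin \
    -b 8 \
    -m 16 \
    -e resnet18_detector.trt.int8 \
    resnet18_detector.etlt
```

Note that INT8 engines must be built on the target device (or one with the same GPU architecture), which is why the conversion is run on the Nano itself rather than reusing the engine built on the GTX 1660.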
https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/deploying_to_deepstream.html#generating-an-engine-using-tlt-converter