Steps to convert etlt model trt engine for jetson nano

Description

I have an .etlt model trained in the TLT container on a GTX 1660. I also saved the .trt, .trt.int8, and calibration.bin files. Now I want to deploy this model on a Jetson Nano.
I would like to know the steps and resources for converting the .etlt model to a .trt engine compatible with Jetson.

Environment Jetson Nano

TensorRT Version: 7.1.3
GPU Type:
Nvidia Driver Version:
CUDA Version: 10.2

Request you to raise this issue in the Jetson Nano forum.

Thanks

tlt-converter did the trick.
Steps:

  1. $ sudo apt-get install libssl-dev

  2. $ export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"

  3. $ export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"

  4. Move the .etlt file and the calibration.bin file to the trt_model_repo directory

  5. Download the tlt-converter for Jetson Nano (JetPack SDK | NVIDIA Developer); the file varies according to the TensorRT version

  6. Unzip the archive and move tlt-converter and the Readme file to trt_model_repo

  7. Give execute permission to tlt-converter
    $ chmod +x tlt-converter

  8. tlt-converter [-h] -k <encryption_key>
        -d <input_dimensions>
        -o <comma-separated output node names>
        [-c <path to calibration cache file>]
        [-e <path to output engine>]
        [-b <calibration batch size>]
        [-m <maximum batch size of the TRT engine>]
        [-t <engine data type>]
        [-w <maximum workspace size of the TRT engine>]
        [-i <input dimension ordering>]
        input_file
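As a sketch, an INT8 conversion might look like the following. The encryption key, input dimensions, output node names, and file names here are placeholders that depend on your model (the node names shown are the ones documented for DetectNet_v2; other networks use different ones):

```shell
# Hypothetical example -- substitute your own key, dims, node names, and paths.
./tlt-converter -k $NGC_API_KEY \
    -d 3,384,1248 \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -c calibration.bin \
    -e model.trt.int8 \
    -b 8 \
    -m 16 \
    -t int8 \
    -w 1073741824 \
    model.etlt
```

The resulting engine (model.trt.int8) is built for the Nano's GPU and TensorRT version, so it should be generated on the Nano itself rather than copied from the training machine.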

https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/text/deploying_to_deepstream.html#generating-an-engine-using-tlt-converter
