I want to convert the .etlt file to a TensorRT engine file (.trt/.plan/.engine) for the Jetson Nano.
I trained DetectNet-v2 (ResNet-18) on KITTI with the docker image nvcr.io/nvidia/tlt-streamanalytics:v1.0_py2, and I have completed all the steps to export the trained model to .etlt.
TensorRT Version : 5.1.6.1-1+cuda10.0
GPU Type : Jetson Nano
JetPack Version : 4.2.2 [L4T 32.2.1]
CUDA Version : 10.0.326
CUDNN Version : 7.5.0.56-1+cuda10.0
Operating System + Version : Ubuntu 18.04, Linux kernel 4.9.140
Python Version (if applicable) : 3.6.9
1- To run on the Jetson Nano, I need to do the conversion on the Jetson Nano itself. Should this step be done with DeepStream or with TLT on the Jetson Nano? Is it possible to run this step with the docker image nvcr.io/nvidia/tlt-streamanalytics:v1.0_py2 on the Jetson Nano? If so, do I also need to download tlt-converter on the Jetson Nano?
I want to run :
tlt-converter /workspace/tmp/experiment_dir_final/resnet18_detector.etlt \
              -k <key> \
              -c /workspace/tmp/experiment_dir_final/calibration.bin \
              -o output_cov/Sigmoid,output_bbox/BiasAdd \
              -d 3,384,1248 \
              -i nchw \
              -m 64 \
              -t int8 \
              -e /workspace/tmp/experiment_dir_final/resnet18_detector.trt \
              -b 4
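For reference, the flags in that command can be assembled and annotated programmatically; this is only a sketch of the invocation above (the helper name `build_tlt_converter_cmd` is hypothetical, and all paths, node names, and dimensions are copied from the command itself):

```python
# Sketch: assemble the tlt-converter invocation above as an argument list,
# with each flag annotated. The binary is assumed to be the aarch64
# tlt-converter build when run on the Nano.

def build_tlt_converter_cmd(etlt_path, key, cal_file, engine_path):
    """Return the tlt-converter argument list for an INT8 DetectNet-v2 engine."""
    return [
        "tlt-converter", etlt_path,
        "-k", key,                                       # encryption key used during TLT export
        "-c", cal_file,                                  # INT8 calibration cache
        "-o", "output_cov/Sigmoid,output_bbox/BiasAdd",  # output node names
        "-d", "3,384,1248",                              # input dims C,H,W
        "-i", "nchw",                                    # input layout
        "-m", "64",                                      # max batch size of the engine
        "-t", "int8",                                    # target precision
        "-e", engine_path,                               # where to write the engine
        "-b", "4",                                       # calibration batch size
    ]

cmd = build_tlt_converter_cmd(
    "/workspace/tmp/experiment_dir_final/resnet18_detector.etlt",
    "<key>",
    "/workspace/tmp/experiment_dir_final/calibration.bin",
    "/workspace/tmp/experiment_dir_final/resnet18_detector.trt",
)
print(" ".join(cmd))
```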
If I download tlt-converter for the Jetson Nano, since TLT already has a built-in tlt-converter, how do I do this step so the two don't conflict?
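One way to check which converter binary would actually run is to inspect what is on PATH and its architecture; a minimal diagnostic sketch, assuming the Jetson-side download was unpacked to ~/tlt-converter (a hypothetical path). The built-in converter inside the TLT container is an x86-64 binary, while the Jetson download is aarch64, so they live on different machines and PATHs:

```shell
# Find the tlt-converter that would run, falling back to an assumed
# download location (~/tlt-converter) if nothing is on PATH.
CONVERTER="$(command -v tlt-converter || echo "$HOME/tlt-converter/tlt-converter")"
echo "using: $CONVERTER"
# Confirm the architecture before running: expect "ARM aarch64" on the
# Nano, "x86-64" inside the tlt-streamanalytics container.
file "$CONVERTER" 2>/dev/null || true
```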