tlt-converter with a YOLOv3 model fails on Xavier NX

I'm trying to get tlt-converter to work on my Xavier NX with a YOLOv3 model built with TLT.

When I run this:
tlt-converter -k YajdqdnVicTU4Mm0wcGg0OWoyMDI0NmJrMTQ6Y2UzNTk0Y2MtNGY5YS00YzM4LThmNjktNGI0M2VhY2ZjNzM2 -d 3,384,1248 -o BatchedNMS -e /opt/nvidia/deepstream/deepstream-5.0/samples/export/trt.engine -m 1 -t fp16 -i nchw /opt/nvidia/deepstream/deepstream-5.0/samples/export/yolo_resnet18_epoch_100.etlt

I get this:
bash: tlt-converter: command not found

When I run this:
./tlt-converter -k YajdqdnVicTU4Mm0wcGg0OWoyMDI0NmJrMTQ6Y2UzNTk0Y2MtNGY5YS00YzM4LThmNjktNGI0M2VhY2ZjNzM2 -d 3,384,1248 -o BatchedNMS -e /opt/nvidia/deepstream/deepstream-5.0/samples/export/trt.engine -m 1 -t fp16 -i nchw /opt/nvidia/deepstream/deepstream-5.0/samples/export/yolo_resnet18_epoch_100.etlt

I get this:
bash: ./tlt-converter: Permission denied

When I run this:
sudo ./tlt-converter -k YajdqdnVicTU4Mm0wcGg0OWoyMDI0NmJrMTQ6Y2UzNTk0Y2MtNGY5YS00YzM4LThmNjktNGI0M2VhY2ZjNzM2 -d 3,384,1248 -o BatchedNMS -e /opt/nvidia/deepstream/deepstream-5.0/samples/export/trt.engine -m 1 -t fp16 -i nchw /opt/nvidia/deepstream/deepstream-5.0/samples/export/yolo_resnet18_epoch_100.etlt

I get this:
sudo: ./tlt-converter: command not found
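The three errors above fit a single cause: the first fails because the current directory is not on PATH, and the second and third fail because the file lost its executable bit (binaries unzipped from an archive often do; sudo reports a non-executable file as "command not found"). A minimal sketch that reproduces and fixes the behavior with a stand-in script (demo-converter is a hypothetical name, not the real binary):

```shell
# Create a stand-in script so the behavior can be tried anywhere.
printf '#!/bin/sh\necho ok\n' > demo-converter

# Without the executable bit, running it fails just like ./tlt-converter did.
chmod -x demo-converter
./demo-converter 2>/dev/null || echo "denied, as above"

# Restoring the executable bit fixes it.
chmod +x demo-converter
./demo-converter
```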

I have run these install commands successfully:

TensorRT OSS on Jetson (ARM64)

  1. Install CMake (>= 3.13)

Note: TensorRT OSS requires cmake >= v3.13, while the default cmake on Jetson/Ubuntu 18.04 is cmake 3.10.2.

Upgrade cmake using:

sudo apt remove --purge --auto-remove cmake
wget https://github.com/Kitware/CMake/releases/download/v3.13.5/cmake-3.13.5.tar.gz
tar xvf cmake-3.13.5.tar.gz
cd cmake-3.13.5/
./configure
make -j$(nproc)
sudo make install
sudo ln -s /usr/local/bin/cmake /usr/bin/cmake

  2. Get the GPU arch for your platform. The GPU_ARCHS values for the different Jetson platforms are given in the following table.

  Jetson Platform         GPU_ARCHS
  Nano/TX1                53
  TX2                     62
  AGX Xavier/Xavier NX    72
  3. Build TensorRT OSS

git clone -b release/7.0 https://github.com/nvidia/TensorRT
cd TensorRT/
git submodule update --init --recursive
export TRT_SOURCE=`pwd`
cd $TRT_SOURCE
mkdir -p build && cd build

Note: The -DGPU_ARCHS=72 below is for AGX Xavier or Xavier NX; for other Jetson platforms, change "72" to the GPU_ARCHS value from step 2.

/usr/local/bin/cmake .. -DGPU_ARCHS=72 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu/ -DCMAKE_C_COMPILER=/usr/bin/gcc -DTRT_BIN_DIR=`pwd`/out
make nvinfer_plugin -j$(nproc)

After the build finishes successfully, libnvinfer_plugin.so* will be generated under `pwd`/out/.
  4. Replace libnvinfer_plugin.so* with the newly generated one.

sudo mv /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y ${HOME}/libnvinfer_plugin.so.7.x.y.bak   # back up the original libnvinfer_plugin.so.7.x.y
sudo cp `pwd`/out/libnvinfer_plugin.so.7.m.n /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y
sudo ldconfig
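The backup-and-replace pattern in step 4 can be sketched with dummy files so it can be tried without touching /usr/lib (the demo_lib directory and libdemo_plugin names are stand-ins, not the real library; on the Jetson the commands need sudo and should be followed by ldconfig):

```shell
# Scratch layout standing in for /usr/lib/aarch64-linux-gnu and the OSS out/ dir.
mkdir -p demo_lib/out
echo "original" > demo_lib/libdemo_plugin.so.7.1.0        # stock library
echo "rebuilt"  > demo_lib/out/libdemo_plugin.so.7.1.0    # freshly built plugin

# 1) Back up the original before overwriting it.
mv demo_lib/libdemo_plugin.so.7.1.0 demo_lib/libdemo_plugin.so.7.1.0.bak
# 2) Copy the rebuilt library into place under the original's name.
cp demo_lib/out/libdemo_plugin.so.7.1.0 demo_lib/libdemo_plugin.so.7.1.0

cat demo_lib/libdemo_plugin.so.7.1.0
```

Keeping the backup means the stock plugin can be restored with a single mv if the rebuilt one misbehaves.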

And this:

For the Jetson platform, the tlt-converter is available to download in the dev zone here. Once the tlt-converter is downloaded, please follow the instructions below to generate a TensorRT engine.

  1. Unzip tlt-converter-trt7.1.zip on the target machine.
  2. Install the OpenSSL package using the command: sudo apt-get install libssl-dev
  3. Export the following environment variables:

export TRT_LIB_PATH="/usr/lib/aarch64-linux-gnu"
export TRT_INC_PATH="/usr/include/aarch64-linux-gnu"

  4. For Jetson devices, TensorRT 7.1 comes pre-installed with https://developer.nvidia.com/embedded/jetpack. If you are using an older JetPack, upgrade to JetPack 4.4.
  5. If you are deploying a FasterRCNN, SSD, DSSD, YOLOv3, or RetinaNet model, you need to build the TensorRT Open Source Software on the machine. If you are using DetectNet_v2 or image classification, you can skip this step. Instructions to build TensorRT OSS on Jetson can be found in the TensorRT OSS on Jetson (ARM64) section above or in this GitHub repo.
  6. Run the tlt-converter using the sample command below and generate the engine.

Note: Make sure to follow the output node names as mentioned in Exporting the model.

I have JetPack 4.4,
DeepStream 5.0,
and TLT 7.1.

Could you please try:
$ chmod +x tlt-converter


Yes, team!
Success.
Thank you!