Jetson Xavier - error running tlt-converter

(Jetson Xavier, JetPack 4.4, CUDA 10.2, TensorRT 7.1.3)

I downloaded tlt-converter from:
https://developer.nvidia.com/tlt-converter

When I tried to run it, I got:
./tlt-converter: error while loading shared libraries: libnvinfer.so.5: cannot open shared object file: No such file or directory

When I run sudo find / | grep "nvinfer.so", I get:
/usr/lib/aarch64-linux-gnu/libnvinfer.so.7
/usr/lib/aarch64-linux-gnu/libnvinfer.so
/usr/lib/aarch64-linux-gnu/libnvinfer.so.7.1.3

Any ideas?

Thanks for the help!

Please download the TensorRT 7.1 version of tlt-converter. The binary you have was built against TensorRT 5 (it looks for libnvinfer.so.5), while JetPack 4.4 ships TensorRT 7.1.3.

See https://developer.nvidia.com/tlt-getting-started. For deployment on NVIDIA Jetson, download the matching tlt-converter to convert the model from UFF to a TensorRT engine.
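For reference, a conversion command for a TLT classification export typically looks roughly like this (a sketch only: the key, the 3x224x224 input dimensions, the predictions/Softmax output node, and the file names are placeholders to adapt to your own model):

./tlt-converter -k $KEY -d 3,224,224 -o predictions/Softmax -t fp16 -e resnet34_fp16.engine resnet34.etlt

Here -k is the encryption key used when the model was exported, -d is the input dimensions in CHW order, -o is the output blob name, -t is the engine precision, and -e is where to write the generated engine.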

Hi,

It works perfectly. But I do have another question:
I trained a ResNet-34 classification model and would like to run it on the DLA (Jetson Xavier).

When converting, I get:

[WARNING] Default DLA is enabled but layer conv1/kernel is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/moving_variance is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/Reshape_1/shape is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/batchnorm/add/y is not supported on DLA, falling back to GPU.
[WARNING] Default DLA is enabled but layer bn_conv1/gamma is not supported on DLA, falling back to GPU.


...

(the same message repeats for many block_XXX layers)

followed by:
[INFO] --------------- Layers running on DLA:
[INFO] {conv1/convolution,bn_conv1/batchnorm/mul_1,bn_conv1/batchnorm/add_1,activation_1/Relu,block_1a_conv_1/convolution,block_1a_bn_1/batchnorm/mul_1,block_1a_bn_1/batchnorm/add_1,block_1a_relu_1/Relu,block_1a_conv_2/convolution,block_1a_bn_2/batchnorm/mul_1,block_1a_bn_2/batchnorm/add_1,block_1a_conv_shortcut/convolution,block_1a_bn_shortcut/batchnorm/mul_1,block_1a_bn_shortcut/batchnorm/add_1,add_1/add,block_1a_relu/Relu,block_1b_conv_1/convolution,block_1b_bn_1/batchnorm/mul_1,block_1b_bn_1/batchnorm/add_1,block_1b_relu_1/Relu,block_1b_conv_2/convolution,block_1b_bn_2/batchnorm/mul_1,block_1b_bn_2/batchnorm/add_1,block_1b_conv_shortcut/convolution,block_1b_bn_shortcut/batchnorm/mul_1,block_1b_bn_shortcut/batchnorm/add_1,add_2/add,block_1b_relu/Relu,block_1c_conv_1/convolution,block_1c_bn_1/batchnorm/mul_1,block_1c_bn_1/batchnorm/add_1,block_1c_relu_1/Relu,block_1c_conv_2/convolution,block_1c_bn_2/batchnorm/mul_1,block_1c_bn_2/batchnorm/add_1,block_1c_conv_shortcut/convolution,block_1c_bn_shortcut/batchnorm/mul_1,block_1c_bn_shortcut/batchnorm/add_1,add_3/add,block_1c_relu/Relu,block_2a_conv_1/convolution,block_2a_bn_1/batchnorm/mul_1,block_2a_bn_1/batchnorm/add_1,block_2a_relu_1/Relu,block_2a_conv_2/convolution,block_2a_bn_2/batchnorm/mul_1,block_2a_bn_2/batchnorm/add_1,block_2a_conv_shortcut/convolution,block_2a_bn_shortcut/batchnorm/mul_1,block_2a_bn_shortcut/batchnorm/add_1,add_4/add,block_2a_relu/Relu,block_2b_conv_1/convolution,block_2b_bn_1/batchnorm/mul_1,block_2b_bn_1/batchnorm/add_1,block_2b_relu_1/Relu,block_2b_conv_2/convolution,block_2b_bn_2/batchnorm/mul_1,block_2b_bn_2/batchnorm/add_1,block_2b_conv_shortcut/convolution,block_2b_bn_shortcut/batchnorm/mul_1,block_2b_bn_shortcut/batchnorm/add_1,add_5/add,block_2b_relu/Relu,block_2c_conv_1/convolution,block_2c_bn_1/batchnorm/mul_1,block_2c_bn_1/batchnorm/add_1,block_2c_relu_1/Relu,block_2c_conv_2/convolution,block_2c_bn_2/batchnorm/mul_1,block_2c_bn_2/batchnorm/add_1,block_2c_conv_shortcut/convolution,block_2c_bn_shortcut/batchnorm/mul_1,block_2c_bn_shortcut/batchnorm/add_1,add_6/add,block_2c_relu/Relu,block_2d_conv_1/convolution,block_2d_bn_1/batchnorm/mul_1,block_2d_bn_1/batchnorm/add_1,block_2d_relu_1/Relu,block_2d_conv_2/convolution,block_2d_bn_2/batchnorm/mul_1,block_2d_bn_2/batchnorm/add_1,block_2d_conv_shortcut/convolution,block_2d_bn_shortcut/batchnorm/mul_1,block_2d_bn_shortcut/batchnorm/add_1,add_7/add,block_2d_relu/Relu,block_3a_conv_1/convolution,block_3a_bn_1/batchnorm/mul_1,block_3a_bn_1/batchnorm/add_1,block_3a_relu_1/Relu,block_3a_conv_2/convolution,block_3a_bn_2/batchnorm/mul_1,block_3a_bn_2/batchnorm/add_1,block_3a_conv_shortcut/convolution,block_3a_bn_shortcut/batchnorm/mul_1,block_3a_bn_shortcut/batchnorm/add_1,add_8/add,block_3a_relu/Relu,block_3b_conv_1/convolution,block_3b_bn_1/batchnorm/mul_1,block_3b_bn_1/batchnorm/add_1,block_3b_relu_1/Relu,block_3b_conv_2/convolution,block_3b_bn_2/batchnorm/mul_1,block_3b_bn_2/batchnorm/add_1,block_3b_conv_shortcut/convolution,block_3b_bn_shortcut/batchnorm/mul_1,block_3b_bn_shortcut/batchnorm/add_1,add_9/add,block_3b_relu/Relu,block_3c_conv_1/convolution,block_3c_bn_1/batchnorm/mul_1,block_3c_bn_1/batchnorm/add_1,block_3c_relu_1/Relu,block_3c_conv_2/convolution,block_3c_bn_2/batchnorm/mul_1,block_3c_bn_2/batchnorm/add_1,block_3c_conv_shortcut/convolution,block_3c_bn_shortcut/batchnorm/mul_1,block_3c_bn_shortcut/batchnorm/add_1,add_10/add,block_3c_relu/Relu,block_3d_conv_1/convolution,block_3d_bn_1/batchnorm/mul_1,block_3d_bn_1/batchnorm/add_1,block_3d_relu_1/Relu,block_3d_conv_2/convolution,block_3d_bn_2/batchnorm/mul_1,block_3d_bn_2/batchnorm/add_1,block_3d_conv_shortcut/convolution,block_3d_bn_shortcut/batchnorm/mul_1,block_3d_bn_shortcut/batchnorm/add_1,add_11/add,block_3d_relu/Relu,block_3e_conv_1/convolution,block_3e_bn_1/batchnorm/mul_1,block_3e_bn_1/batchnorm/add_1,block_3e_relu_1/Relu,block_3e_conv_2/convolution,block_3e_bn_2/batchnorm/mul_1,block_3e_bn_2/batchnorm/add_1,block_3e_conv_shortcut/convolution,block_3e_bn_shortcut/batchnorm/mul_1,block_3e_bn_shortcut/batchnorm/add_1,add_12/add,block_3e_relu/Relu,block_3f_conv_1/convolution,block_3f_bn_1/batchnorm/mul_1,block_3f_bn_1/batchnorm/add_1,block_3f_relu_1/Relu,block_3f_conv_2/convolution,block_3f_bn_2/batchnorm/mul_1,block_3f_bn_2/batchnorm/add_1,block_3f_conv_shortcut/convolution,block_3f_bn_shortcut/batchnorm/mul_1,block_3f_bn_shortcut/batchnorm/add_1,add_13/add,block_3f_relu/Relu,block_4a_conv_1/convolution,block_4a_bn_1/batchnorm/mul_1,block_4a_bn_1/batchnorm/add_1,block_4a_relu_1/Relu,block_4a_conv_2/convolution,block_4a_bn_2/batchnorm/mul_1,block_4a_bn_2/batchnorm/add_1,block_4a_conv_shortcut/convolution,block_4a_bn_shortcut/batchnorm/mul_1,block_4a_bn_shortcut/batchnorm/add_1,add_14/add,block_4a_relu/Relu,block_4b_conv_1/convolution,block_4b_bn_1/batchnorm/mul_1,block_4b_bn_1/batchnorm/add_1,block_4b_relu_1/Relu,block_4b_conv_2/convolution,block_4b_bn_2/batchnorm/mul_1,block_4b_bn_2/batchnorm/add_1,block_4b_conv_shortcut/convolution,block_4b_bn_shortcut/batchnorm/mul_1,block_4b_bn_shortcut/batchnorm/add_1,add_15/add,block_4b_relu/Relu,block_4c_conv_1/convolution,block_4c_bn_1/batchnorm/mul_1,block_4c_bn_1/batchnorm/add_1,block_4c_relu_1/Relu,block_4c_conv_2/convolution,block_4c_bn_2/batchnorm/mul_1,block_4c_bn_2/batchnorm/add_1,block_4c_conv_shortcut/convolution,block_4c_bn_shortcut/batchnorm/mul_1,block_4c_bn_shortcut/batchnorm/add_1,add_16/add,block_4c_relu/Relu}, {predictions/MatMul,predictions/BiasAdd},
[INFO] --------------- Layers running on GPU:
[INFO] flatten/Reshape, predictions/Softmax,

My question: at inference time, are all the layers except flatten/Reshape and predictions/Softmax running on the DLA? And what about the layers in the warning lines?

Thanks for the help!

That means some layers are not supported on the DLA; they will fall back to the GPU. Note that the warned names (conv1/kernel, bn_conv1/gamma, and so on) appear to be constant weight tensors rather than compute layers, which is why the corresponding convolution and batchnorm operations still show up in the "Layers running on DLA" list.
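If you want the engine to target the DLA at conversion time, tlt-converter also exposes a DLA-core option. A sketch, reusing the placeholder names from the command above; as far as I know, -u selects the DLA core, requires -t fp16 or int8, and lets unsupported layers fall back to the GPU automatically:

./tlt-converter -k $KEY -d 3,224,224 -o predictions/Softmax -t fp16 -u 0 -e resnet34_dla0.engine resnet34.etlt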

Hi,

I understand that, but I was under the impression that ResNet-34 can run on the DLA. For example, https://developer.nvidia.com/transfer-learning-toolkit shows ResNet-34 (as PeopleNet) running on the DLA; I'm referring to the performance table in the middle of the page.

Is this the case? I took the ResNet-34 TLT model as-is, just re-trained and exported it following the developer guide, and I still see lots of layers that are not supported. So I am a bit confused…

What do you think?

Thanks for the help!

See more info in the TensorRT documentation: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation (in particular, the section on working with DLA, which lists the layer types the DLA supports).

I have read that resource a few times. Unfortunately, it does not help.

Let me ask this in a different way:

NVIDIA publishes (see, for example, https://developer.nvidia.com/transfer-learning-toolkit) that ResNet-18/34 runs on the DLA, and shows nice spec numbers.

I tried to run ResNet-18/34 on the DLA and cannot (see above): many layers fall back to the GPU.
The only samples in the documentation are MNIST and AlexNet, which are very basic and less relevant to real-world scenarios.

So again I wonder: can I run ResNet, or any other classification model that is a bit more than AlexNet and MNIST, from TLT on the DLA without falling back to the GPU?

Thanks for the help!

Not all layers can run on the DLA. The link you mentioned does not claim that all layers run on the DLA.

Hi,

So, just verifying:

  1. The numbers mentioned on that page (specifically, for Xavier) are for running on GPU + 2x DLA, with some layers falling back to the GPU?
  2. Other than basic networks (AlexNet, MNIST), no classification network in TLT can run on the DLA without falling back to the GPU?

Thanks for the help!

  1. For the DLA numbers, some layers fall back to the GPU while the rest stay on the DLA. If you use trtexec to test, just set in the command which DLA core to run, then record its result (see the example command below).
  2. Whether it is a classification or a detection model, it can run on the DLA. The same as in 1), some layers fall back to the GPU while the rest stay on the DLA.
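For example, a hypothetical timing run with trtexec might look like this (resnet34_dla0.engine is a placeholder for the engine you built with DLA enabled; --useDLACore selects the core, and you can repeat the run with --useDLACore=1 to measure the second DLA):

/usr/src/tensorrt/bin/trtexec --loadEngine=resnet34_dla0.engine --useDLACore=0 --iterations=100 --avgRuns=10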