TLT-Converter

Hi,

I would like to convert an .etlt model into a .engine file using tlt-converter, without using any Docker containers; however, I’m running into a number of issues…

I’m working on an x86 machine with CUDA 11.4 and TensorRT 8.2.4.
This configuration is not included in the table shown here ( TensorRT — Transfer Learning Toolkit 3.0 documentation ).
So I do not know what I’m supposed to do… Is it even possible to run inference on the models without Docker?

Could anyone help ?

Yes, it is possible.

For tao-converter, could you use https://developer.nvidia.com/cuda112-cudnn80-trt72 ?
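Once you have a converter binary that matches your CUDA/TensorRT stack, a typical invocation looks roughly like the sketch below. All flag values are placeholders for illustration (the key, dimensions, and output node names depend on your exported model); check `tao-converter -h` on your build for the exact options it supports.

```shell
# Hedged sketch of a tao-converter call (values are placeholders):
#   -k  encryption key used when the .etlt model was exported
#   -d  input dimensions as C,H,W
#   -o  output node name(s) of the network (model-dependent)
#   -e  path for the generated TensorRT engine
#   -t  precision to build the engine with
./tao-converter model.etlt \
    -k "$MODEL_KEY" \
    -d 3,544,960 \
    -o output_bbox/BiasAdd,output_cov/Sigmoid \
    -e model.engine \
    -t fp16
```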

I already tried that, and I get the following error:

error while loading shared libraries: libnvinfer.so.7: cannot open shared object file: No such file or directory

However, that’s because my version of TensorRT is 8.2.4, not 7.x.x.
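A quick way to confirm this kind of mismatch is to ask the dynamic linker which libnvinfer versions it can actually find, and which shared objects the binary expects. This is a generic diagnostic sketch, not specific to any one converter build:

```shell
# List the libnvinfer libraries visible to the dynamic linker.
# On a TensorRT 8.x install this typically shows libnvinfer.so.8,
# not the libnvinfer.so.7 a TensorRT-7 build of the converter needs.
ldconfig -p | grep libnvinfer

# Show which shared libraries the converter binary cannot resolve:
ldd ./tao-converter | grep "not found"
```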

You can copy the converter out of the Docker container.
Its location is as below.

root@f2e487b0b17a:/workspace# which converter
/opt/nvidia/tools/converter
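Copying the binary to the host can be done with `docker cp`. The image name and tag below are only an example of a TAO Toolkit image; substitute whichever TAO image you already have pulled:

```shell
# Start a throwaway container from the TAO image (image tag is an
# assumed example -- use the one you actually have), copy the
# converter to the host, then clean up.
docker run -d --name tao-tmp \
    nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3 sleep infinity
docker cp tao-tmp:/opt/nvidia/tools/converter ./tao-converter
docker rm -f tao-tmp
chmod +x ./tao-converter
```

Note that the copied binary is still linked against the TensorRT version inside the container, so it must match the TensorRT libraries installed on the host.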

There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.