I would like to convert an .etlt model into an .engine file using tlt-converter, without using Docker in any form, but I'm running into a number of issues.
I'm working on an x86 machine with CUDA 11.4 and TensorRT 8.2.4.
This configuration is not listed in the compatibility table shown here (TensorRT — Transfer Learning Toolkit 3.0 documentation).
So I do not know what I'm supposed to do. Is it even possible to run inference with these models without Docker?
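For reference, a typical tlt-converter invocation looks like the sketch below. This is a hypothetical example, not taken from this thread: the encryption key, input dimensions, and output node names are placeholders that depend on the specific exported model (the node names shown follow the DetectNet_v2 convention).

```shell
# Hypothetical sketch -- key, dimensions, and node names are placeholders.
# -k : encryption key used when the .etlt model was exported
# -d : input dimensions as C,H,W
# -o : comma-separated output node names of the model
# -t : engine precision to build
# -e : path where the serialized TensorRT engine is written
tlt-converter model.etlt \
  -k "$NGC_KEY" \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -t fp16 \
  -e model.engine
```

Note that the tlt-converter binary itself is built against a specific CUDA/cuDNN/TensorRT combination, which is why the compatibility table matters: the binary must match the versions installed on the target machine, and the generated .engine file is only valid on the GPU/TensorRT version it was built with.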
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one.
Thanks