Welcome to the Transfer Learning Toolkit Forum

Thank you so much Andrey1984, I will try to use tlt-converter

Hello Andrey1984, I want to ask: how do I set up tlt-converter on my x86_64 PC?

Hi m.billson16,
For detailed information on the tlt-converter tool, please see Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation.

There are two versions of tlt-converter.
For deployment platforms with an x86-based CPU and discrete GPUs, the tlt-converter is distributed inside the TLT Docker container.

For the Jetson platform, the tlt-converter is available for download from the dev zone: https://developer.nvidia.com/tlt-converter
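For reference, a typical tlt-converter invocation looks roughly like the sketch below. All of the values are illustrative placeholders: the encryption key, input dimensions, and output node name depend entirely on the model you exported.

```shell
# Sketch of converting an exported .etlt model into a TensorRT engine
# with tlt-converter. All values below are illustrative placeholders:
#   -k : encryption key used when exporting the model
#   -d : input dimensions (C,H,W) of the model
#   -o : output node name(s), model-dependent
#   -t : target precision
#   -m : maximum batch size
#   -e : path for the generated engine file
./tlt-converter model.etlt \
    -k "$ENCRYPTION_KEY" \
    -d 3,384,1248 \
    -o NMS \
    -t fp16 \
    -m 1 \
    -e model.engine
```

The generated engine file is specific to the GPU and TensorRT version it was built on, so run the converter on the target device.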

Hi,

I trained a Faster R-CNN ResNet-50 model with the NVIDIA Transfer Learning Toolkit, and now I want to deploy it with TensorRT and C++.

I have the model and weights saved in the .tlt and .tltw formats. How can I generate a .pb file from these files? I can only get an .etlt file …

I would like to develop a C++ application and build the engine myself using the UFFParser (e.g., SampleUffFasterRCNN), without using the tlt-* tools.

What I’m trying to do is go from .tlt/.etlt → .pb → .uff → .trt, with the engine file built inside the C++ application, without using the tlt-export tool.
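If you do obtain a .uff file, the engine-building step in C++ can be sketched roughly as below, along the lines of SampleUffFasterRCNN. This is only a sketch: the input/output blob names and dimensions are placeholders that depend on your exported model, and it assumes the UFF parser shipped with TensorRT 7.x (the UFF path is deprecated in later releases).

```cpp
// Sketch: build and serialize a TensorRT engine from a .uff file.
// Blob names and dimensions below are placeholders, not the real ones.
#include <NvInfer.h>
#include <NvUffParser.h>
#include <fstream>
#include <iostream>

using namespace nvinfer1;

class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    IBuilder* builder = createInferBuilder(gLogger);
    // UFF models use the implicit-batch network definition (flags = 0).
    INetworkDefinition* network = builder->createNetworkV2(0U);
    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();

    // Placeholder blob names -- take the real ones from your exported model.
    parser->registerInput("input_image", Dims3(3, 384, 1248),
                          nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput("output_blob");

    if (!parser->parse("model.uff", *network, DataType::kFLOAT)) {
        std::cerr << "Failed to parse UFF model" << std::endl;
        return 1;
    }

    IBuilderConfig* config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1 << 28);  // 256 MiB, adjust as needed
    builder->setMaxBatchSize(1);

    ICudaEngine* engine = builder->buildEngineWithConfig(*network, *config);
    if (!engine) return 1;

    // Serialize the engine so the application can deserialize it at runtime.
    IHostMemory* serialized = engine->serialize();
    std::ofstream out("model.trt", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()),
              serialized->size());
    return 0;
}
```

Note that this starts from a .uff file; whether you can get from the encrypted .etlt back to .pb/.uff at all is a separate question, which is why a dedicated topic is the right place to ask.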

Thanks for helping.

Please create a specific topic instead of posting in this release thread. Thanks, all.
