Nvidia Transfer Learning Toolkit .tlt to .pb/.uff file - Deployment

Hi,

I trained a Faster R-CNN ResNet-50 model using the NVIDIA Transfer Learning Toolkit, and now I want to deploy it with TensorRT and C++.

I have the model and weights saved in the .tlt and .tltw formats. How can I generate a .pb file from these files? I can only get an .etlt file …

I would like to develop a C++ application and build the engine myself using the UFF parser (e.g. SampleUffFasterRCNN), without using the tlt-* tools.

What I’m trying to do is go from .tlt/.etlt → .pb → .uff → .trt (with the engine file built inside the C++ application, without using the tlt-export tool).
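For reference, the last step of that pipeline in C++ would look roughly like the sketch below. It is only a minimal outline based on the TensorRT UFF parser API as I understand it; the UFF file name, input/output node names, and input dimensions are placeholders rather than values from my actual model, and a real Faster R-CNN model would additionally need the custom plugin used in SampleUffFasterRCNN:

```cpp
#include <NvInfer.h>
#include <NvUffParser.h>
#include <fstream>
#include <iostream>

// Minimal logger that TensorRT requires.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Placeholders -- replace with the real UFF file and the model's node names/dims.
    const char* uffFile    = "faster_rcnn_resnet50.uff";
    const char* inputNode  = "input_image";
    const char* outputNode = "output";

    auto builder = nvinfer1::createInferBuilder(gLogger);
    // UFF models use the implicit-batch network definition.
    auto network = builder->createNetworkV2(0U);
    auto parser  = nvuffparser::createUffParser();

    parser->registerInput(inputNode, nvinfer1::Dims3(3, 544, 960),
                          nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput(outputNode);

    if (!parser->parse(uffFile, *network, nvinfer1::DataType::kFLOAT))
    {
        std::cerr << "Failed to parse " << uffFile << std::endl;
        return 1;
    }

    auto config = builder->createBuilderConfig();
    config->setMaxWorkspaceSize(1ULL << 30); // 1 GiB of builder workspace
    builder->setMaxBatchSize(1);

    auto engine = builder->buildEngineWithConfig(*network, *config);

    // Serialize the engine so the application can reload it on later runs.
    auto serialized = engine->serialize();
    std::ofstream out("faster_rcnn_resnet50.engine", std::ios::binary);
    out.write(static_cast<const char*>(serialized->data()), serialized->size());

    return 0;
}
```

(That sketch links against nvinfer and nvparsers.)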

Thanks for helping.

Moving this to the Transfer Learning Toolkit forum so that the TLT team can take a look.

Hi yh01,
In the TLT workflow, the TensorRT engine can be built directly from the .etlt model.
Please refer to TRT engine deployment for more info.

What if I don’t want to build it directly with TLT? I would like to build my own C++ application, and the engine should be built within that application. Also, building the engine with TLT happens inside Docker, which requires the nvidia-docker runtime, and that is not supported on Windows.

So there is no way I can use TLT to output a frozen .pb file? Then TLT isn’t the solution for me …
Thanks.

Hi yh01,
Usually we build the TensorRT engine directly on Nano, Xavier, or other boards via the tlt-converter tool. That runs outside of the TLT docker.

Hi Morganh,
TLT is a very nice tool for training networks, but not having the option to export to .pb or .uff is unfortunate. Having the trained model in such formats is useful for other use cases …

Yes, TLT can only export the model in the .etlt format. Users can then generate a TensorRT engine from it.
Usually:
Download/copy the tlt-converter tool to the Nano or other board
Copy the .etlt model onto the Nano or other board
Run tlt-converter against the .etlt model (a sample command is shown below)
The TensorRT engine will be built directly
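For example, on a Jetson board the invocation looks roughly like this. The key, input dimensions, and output node names are only placeholders for illustration; check tlt-converter -h and the faster_rcnn section of the TLT documentation for the exact flags and node names for your model and version:

```
./tlt-converter -k $NGC_API_KEY \
                -d 3,544,960 \
                -o dense_class_td/Softmax,dense_regress_td/BiasAdd,proposal \
                -t fp16 \
                -m 1 \
                -e frcnn_resnet50.fp16.engine \
                frcnn_resnet50.etlt
```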

What about Windows? How can I build an engine for an NVIDIA graphics card on Windows?

See Integrating TAO Models into DeepStream — TAO Toolkit 3.22.05 documentation; the tlt-converter tool is included inside the TLT docker by default. It can also generate the TensorRT engine.

For deployment platforms with an x86-based CPU and discrete GPUs, the tlt-converter is distributed within the TLT docker. Therefore, it is suggested to use the docker to generate the engine.

I checked that link, and I am already using tlt-converter within the docker on my Ubuntu remote server. What I want is to generate an engine file for a GPU on a Windows machine.

In order to use an NVIDIA GPU with Docker containers, the runtime has to be nvidia-docker; however, nvidia-docker isn’t supported on Windows.

So how can I use tlt-converter to build the engine on Windows?

The tlt-converter is not compatible with Windows systems.

Hi yh01,
A TensorRT engine generated on Ubuntu can also be used on Windows, as long as the TensorRT version is the same.
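Loading that pre-built engine inside the C++ application is then only a deserialization step, along these lines (a rough sketch assuming a TensorRT 7-style API; the file name is a placeholder, and a TLT FasterRCNN engine also needs its custom plugins available at runtime):

```cpp
#include <NvInfer.h>
#include <NvInferPlugin.h>
#include <fstream>
#include <iostream>
#include <vector>

class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) noexcept override
    {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // Register plugins; a TLT FasterRCNN engine relies on custom layers
    // (e.g. the proposal plugin) that must be loadable before deserialization.
    initLibNvInferPlugins(&gLogger, "");

    // Read the serialized engine produced by tlt-converter (placeholder path).
    std::ifstream file("frcnn_resnet50.fp16.engine", std::ios::binary | std::ios::ate);
    std::streamsize size = file.tellg();
    file.seekg(0, std::ios::beg);
    std::vector<char> blob(size);
    file.read(blob.data(), size);

    // Deserialization only succeeds when the TensorRT version (and GPU) match
    // the environment the engine was built in.
    auto runtime = nvinfer1::createInferRuntime(gLogger);
    auto engine  = runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    if (!engine)
    {
        std::cerr << "Failed to deserialize engine (TensorRT/GPU mismatch?)" << std::endl;
        return 1;
    }

    auto context = engine->createExecutionContext();
    // ... allocate device buffers and run inference with the execution context ...
    return 0;
}
```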

Hi Morganh,
The generated engine is optimized for the GPU architecture it was generated on, right? Would it still work well on a different GPU architecture?

Machine-specific optimizations are done as part of the engine creation process, so a distinct engine should be generated for each environment and hardware configuration.
If the inference environment’s TensorRT or CUDA libraries are updated (including minor version updates), new engines should be generated.
Running an engine that was generated with a different version of TensorRT and CUDA is not supported; it will cause unknown behavior that affects inference speed, accuracy, and stability, or it may fail to run altogether.
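A quick way to guard against that in the application is to compare the TensorRT version the binary was compiled against with the library actually loaded at runtime. A sketch, assuming the NV_TENSORRT_* macros from NvInferVersion.h and getInferLibVersion() from the TensorRT runtime API:

```cpp
#include <NvInfer.h>   // pulls in NvInferVersion.h with NV_TENSORRT_MAJOR/MINOR/PATCH
#include <cstdint>
#include <iostream>

int main()
{
    // Version the application was compiled against (header macros).
    const int32_t headerVersion =
        NV_TENSORRT_MAJOR * 1000 + NV_TENSORRT_MINOR * 100 + NV_TENSORRT_PATCH;

    // Version reported by the nvinfer library loaded at runtime,
    // encoded with the same formula for this TensorRT generation.
    const int32_t libVersion = getInferLibVersion();

    std::cout << "Compiled against TensorRT " << headerVersion
              << ", runtime library reports " << libVersion << std::endl;

    // Engines are not portable across TensorRT versions, so stop
    // (or rebuild the engine) when the two disagree.
    if (headerVersion != libVersion)
    {
        std::cerr << "TensorRT version mismatch - regenerate the engine." << std::endl;
        return 1;
    }
    return 0;
}
```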

The TRT engine plan does depend on the compute capability and the TensorRT version. To target a different GPU architecture, you would need to build a new engine on that hardware, using the same TensorRT version you built with.
