Transfer Learning Toolkit v3.0 trtexec loading


Hi Team,

We have trained an object detection model (RetinaNet) using the NGC Transfer Learning Toolkit (TLT) v3.0 and then generated the engine file using tlt-converter.
We are now trying to load the engine file with trtexec, built from the TensorRT OSS 7.2 release, using the same configuration as TLT. Below are the specifications from the TLT toolkit.

Ubuntu 18.04

TensorRT built using the Docker image below

The error is below:

Error:[F] [TRT] Assertion failed: d == a + length
Aborted (core dumped)

Can you please advise us on how to load the engine file generated by the TLT toolkit into the local TensorRT installation for inference?


TensorRT Version: 7.2
GPU Type: RTX 3000
Nvidia Driver Version: 460.73.01
CUDA Version: 11.2
CUDNN Version: 8.0.4
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files


Steps To Reproduce

  1. Train a RetinaNet model with the Transfer Learning Toolkit and retrieve the .tlt file
  2. Convert the .tlt file to a .engine file using tlt-converter (3.0)
  3. Download and build TensorRT OSS on branch release/7.2
  4. Load the engine using trtexec
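For reference, steps 2 and 4 above might look like the following on the command line. This is a sketch only: the file names, the NGC key, the input dimensions, and the output node name are placeholders/assumptions, not values taken from this thread.

```shell
# Sketch of steps 2 and 4; every path, the key, and the dims
# are placeholders -- substitute your own values.

# 2. Convert the exported .etlt model to a TensorRT engine on the
#    target device (the tlt-converter binary must match the local
#    GPU, CUDA, and TensorRT versions).
./tlt-converter retinanet.etlt \
    -k <your-ngc-key> \
    -d 3,384,1248 \
    -e retinanet.engine

# 4. Load the generated engine with trtexec from the OSS 7.2 build.
./trtexec --loadEngine=retinanet.engine
```

Note that an engine file is specific to the GPU and TensorRT version it was built with, which is why the conversion and the trtexec run need to happen in the same environment.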

I am facing the same issue. Please help.

Please copy the .etlt model to the device where you want to run inference, and then generate the engine file on that device.

Hi Morganh,
Thanks for the reply. We converted the model on the same device we tested on, but in two Docker images: the first was the TLT toolkit image, and the second was a TensorRT Docker image.


Where did you run trtexec against the TRT engine? Inside the TensorRT docker, right?

If yes, please copy the .etlt model into the TensorRT docker container:
$ docker cp xxx.etlt <tensorrt-container-id>:<dest-path>

Then, inside the TensorRT container, download tlt-converter and generate the TRT engine.
Finally, run trtexec.
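Putting the advice above together, the full sequence might look like this. The container ID, paths, and model key below are placeholders, and the tlt-converter flags are a minimal sketch; check the TLT documentation for the options your model needs.

```shell
# All names below are placeholders; adjust to your setup.

# On the host: copy the exported .etlt model into the running
# TensorRT container.
docker cp xxx.etlt <tensorrt-container-id>:/workspace/

# Inside the TensorRT container: regenerate the engine with a
# tlt-converter build that matches this container's TensorRT and
# CUDA versions, then load the result with trtexec.
./tlt-converter /workspace/xxx.etlt \
    -k <your-ngc-key> \
    -e /workspace/model.engine
./trtexec --loadEngine=/workspace/model.engine
```

The key point is that the engine is generated and consumed by the same TensorRT build; mixing engines across the TLT container and the TensorRT container is what triggers the `Assertion failed: d == a + length` deserialization error.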


Thank you for your recommendation; trtexec ran successfully.


Thanks Morganh, this worked.