We trained an object detection model (RetinaNet) using the NGC Transfer Learning Toolkit (TLT) v3.0 and then generated the engine file with tlt-converter.
We are now trying to load the engine file in trtexec, built from the TensorRT OSS release/7.2 branch, using the same configuration as TLT. Below are the specifications from the TLT toolkit.
TensorRT was built using the Docker image below.
The error is below:

[F] [TRT] Assertion failed: d == a + length
/opt/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp:64
Aborting...
Aborted (core dumped)
Could you please advise us on how to load the engine file generated by the TLT toolkit into our local TensorRT installation for inference?
TensorRT Version: 7.2
GPU Type: RTX 3000
Nvidia Driver Version: 460.73.01
CUDA Version: 11.2
CUDNN Version: 8.0.4
Operating System + Version: Ubuntu 18.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
- Train a RetinaNet model with the Transfer Learning Toolkit and retrieve the .tlt file
- Convert the .tlt file to a .engine file using tlt-converter (3.0)
- Download and build TensorRT OSS on branch release/7.2
- Load the engine using trtexec
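For reference, the steps above amount to roughly the following commands. This is a sketch, not the exact invocation used: the encryption key, input dimensions, and file paths are placeholders, and the exported model name (`retinanet.etlt`) is assumed; `-d`/`-o` must match your own export settings.

```shell
# Sketch of the pipeline above; the key, input dims, and file names are
# placeholders. -d is the C,H,W input size from the training spec and
# -o NMS is the RetinaNet detection output node.
KEY="<your-ngc-encryption-key>"

./tlt-converter retinanet.etlt -k "$KEY" -d 3,384,1248 -o NMS -e retinanet.engine

# Load the engine with trtexec, pointing --plugins at the OSS build of
# libnvinfer_plugin.so so the NMS plugin version matches the one the
# engine was serialized with.
./trtexec --loadEngine=retinanet.engine \
          --plugins=/path/to/TensorRT-OSS/build/libnvinfer_plugin.so
```

If trtexec silently picks up the stock libnvinfer_plugin.so from the system install instead of the OSS build, the deserialization length check in nmsPlugin.cpp can fail exactly as shown above, so making the plugin library explicit is worth verifying.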