Run TLT model on CUDA 11 and TensorRT 7.2 container

I created a TensorRT engine of the PeopleNet model using the tlt-converter command within this container nvcr.io/nvidia/tlt-streamanalytics:v2.0_py3 (CUDA 10 and TensorRT 7.0).

Is there any way I can load the engine in Python running in the nvcr.io/nvidia/tensorrt:20.10-py3 container (CUDA 11 and TensorRT 7.2)? My Python code works fine if I use nvcr.io/nvidia/tensorrt:20.03-py3, which runs CUDA 10 and TensorRT 7.0. However, it does not work in the other container.
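For context, loading a serialized engine in Python with the TensorRT runtime typically looks like the sketch below (the engine filename is an assumption); deserialization is exactly where a version mismatch between the TensorRT that built the engine and the TensorRT in the container shows up:

```python
# Minimal sketch of deserializing a serialized TensorRT engine in Python.
# The engine path "peoplenet.engine" is an assumption; adjust as needed.
status = "no-trt"
try:
    import tensorrt as trt
except ImportError:
    # tensorrt is only available inside the NGC TensorRT containers
    print("tensorrt not installed in this environment")
else:
    TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
    print("TensorRT version:", trt.__version__)
    with open("peoplenet.engine", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        # deserialize_cuda_engine returns None (or raises) when the engine
        # was built with a different TensorRT version than the one installed
        engine = runtime.deserialize_cuda_engine(f.read())
    status = "loaded" if engine is not None else "failed"
```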

My ultimate goal is to run the TLT models inside a nvcr.io/nvidia/tensorrt:20.10-py3 container, but I don’t know if there is a way to export the TLT models using the same CUDA and TensorRT versions as the nvcr.io/nvidia/tensorrt:20.10-py3 TensorRT container.
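Since TensorRT engines are tied to the TensorRT version (and GPU) they were built with, one option is to run a tlt-converter build that matches the target container's CUDA/TensorRT versions inside that container and regenerate the engine from the .etlt file there. A sketch of the invocation, following the usual TLT PeopleNet export pattern (the key, input dimensions, output node names, and file paths are assumptions; check the PeopleNet model card for the exact values):

```shell
# Inside nvcr.io/nvidia/tensorrt:20.10-py3, with a tlt-converter binary
# built for CUDA 11 / TensorRT 7.2 (paths and key are hypothetical):
#   -k  encryption key used when the model was exported from TLT
#   -d  input dimensions as C,H,W
#   -o  output node names of the network
#   -e  path for the generated engine
./tlt-converter resnet34_peoplenet.etlt \
  -k $TLT_KEY \
  -d 3,544,960 \
  -o output_cov/Sigmoid,output_bbox/BiasAdd \
  -e peoplenet.engine
```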

A new version of the TLT container is underway. Please stay tuned.

Great! Do you have a release date?

Not sure yet.

Any news?

The new version of the TLT container is still underway. Please stay tuned.