Convert TensorRT engine from version 7 to 8

Please generate trt engine again under your conda environment.

OK, sure. How can I convert an .etlt model to a TRT engine under the conda environment?
Is there any script?

Please download the corresponding tlt-converter and use it to generate trt engine.
See Overview — TAO Toolkit 3.22.05 documentation

TLT Converter Support Matrix for x86

|CUDA/cuDNN|TensorRT|Platform|
|---|---|---|
|10.2/8.0|7.2|cuda102-cudnn80-trt72|
|11.0/8.0|7.2|cuda110-cudnn80-trt72|
|11.1/8.0|7.2|cuda111-cudnn80-trt72|
|10.2/8.0|7.1|cuda102-cudnn80-trt71|
|11.0/8.0|7.1|cuda110-cudnn80-trt71|
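As a rough sketch, the rows above map version pairs to a platform/package name. The snippet below (a bash illustration, not an official tool) builds that name from your installed versions:

```shell
#!/usr/bin/env bash
# Build the tlt-converter platform name from CUDA/cuDNN/TensorRT versions,
# following the naming pattern in the support matrix above.
CUDA=11.1; CUDNN=8.0; TRT=7.2   # adjust to your installation
PKG="cuda${CUDA/./}-cudnn${CUDNN/./}-trt${TRT/./}"
echo "$PKG"   # prints cuda111-cudnn80-trt72
```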

Thank you very much. I will try.

Hi Morganh,
I downloaded cuda11.1_cudnn8.0_trt7.2-20210304T191646Z-001. Then I tried to run this command:
./tlt-converter -h
Here I got an error message like this:
./tlt-converter: symbol lookup error: ./tlt-converter: undefined symbol: initLibNvInferPlugins

Did you download it successfully?
wget https://developer.nvidia.com/cuda111-cudnn80-trt72

Yes, I downloaded it successfully. The size of the file is 33.3 KB.

Can you provide the details about how you set up the conda environment?
Do you mean you only meet the issue when running tlt-converter under the conda environment?

Before this I got an error like this:
./tlt-converter: error while loading shared libraries:
libnvinfer.so.7: cannot open shared object file: No such file or directory

libnvonnxparser.so.7
libnvparsers.so.7
libcudnn.so.8
libmyelin.so.1
libnvrtc.so.11.1
libcublasLt.so.11
libcublas.so.11

So I copied all the libraries from the conda environment to the path /usr/lib/x86_64-linux-gnu.
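An alternative to copying files into the system directory is to extend the dynamic loader's search path. A minimal sketch, assuming the env is named `env` and lives under `~/miniconda3` (adjust the prefix to your installation):

```shell
#!/usr/bin/env bash
# Prepend the conda env's lib directory so the loader finds libnvinfer.so.7
# and the other shared objects without copying them into /usr/lib/x86_64-linux-gnu.
CONDA_LIB="$HOME/miniconda3/envs/env/lib"   # assumed path; adjust as needed
export LD_LIBRARY_PATH="$CONDA_LIB${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```

This keeps the system library directory clean and is easy to undo by unsetting the variable.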

I created the conda environment with this:
conda create --name env python=3.6

Then I installed all the libraries:
pip3 install nvidia-pyindex
pip3 install nvidia-tensorrt
pip install pycuda

Do you mean you only meet the issue when running tlt-converter under the conda environment?

yes.

Please share all the logs with me, beginning with setting up the conda environment, then how you installed the libraries, how you ran it, etc.
If the log is long, please save it as a txt file and attach it here.


Ok

Hi @Morganh
Here I attached my installation log: installationlog.txt (8.7 KB)

Can you install the TLT launcher Python package called nvidia-tlt?
See TLT Launcher — Transfer Learning Toolkit 3.0 documentation

$ pip3 install nvidia-tlt

$ tlt info --verbose

Then run the tlt tlt-converter xxx command to generate the trt engine.
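The full `tlt tlt-converter` invocation is not shown in the thread; the snippet below only assembles an illustrative command line (`-k`, `-d`, and `-e` are real tlt-converter options, but the key and file names are placeholders):

```shell
#!/usr/bin/env bash
# Assemble an illustrative tlt-converter command string; nothing is executed here.
KEY="your_ngc_key"        # placeholder encryption key
DIMS="3,384,1248"         # placeholder input dimensions (C,H,W)
CMD="tlt tlt-converter -k $KEY -d $DIMS -e model.engine model.etlt"
echo "$CMD"
```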

Yes, it is installed. That is how I created the TensorRT engine.

tlt info --verbose
Configuration of the TLT Instance

dockers:
nvcr.io/nvidia/tlt-streamanalytics:
docker_tag: v3.0-dp-py3
tasks:
1. augment
2. classification
3. detectnet_v2
4. dssd
5. emotionnet
6. faster_rcnn
7. fpenet
8. gazenet
9. gesturenet
10. heartratenet
11. lprnet
12. mask_rcnn
13. retinanet
14. ssd
15. unet
16. yolo_v3
17. yolo_v4
18. tlt-converter
nvcr.io/nvidia/tlt-pytorch:
docker_tag: v3.0-dp-py3
tasks:
1. speech_to_text
2. text_classification
3. question_answering
4. token_classification
5. intent_slot_classification
6. punctuation_and_capitalization
format_version: 1.0
tlt_version: 3.0
published_date: 02/02/2021

So, the easier way for you to load the trt engine is that:
tlt yolo_v3 run yourscript ...

If you find that inconvenient, you can also log in to the docker container and run the code inside it.
$ tlt yolo_v3 run /bin/bash
Then run your script.

Oh, but I want to run it in real time without Docker.

Can you double check which versions of CUDA/TRT are installed on your current host PC?
$ dpkg -l | grep cuda
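To pull just the version column out of `dpkg -l` output, something like the following works (the sample line is illustrative, not taken from the thread):

```shell
#!/usr/bin/env bash
# Parse the version field (third column) from a dpkg -l style line.
LINE="ii  libnvinfer7  7.2.3-1+cuda11.1  amd64  TensorRT runtime libraries"
VERSION=$(echo "$LINE" | awk '{print $3}')
echo "$VERSION"   # prints 7.2.3-1+cuda11.1
```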