Does `tao-converter` support CUDA 11.6?

Please provide the following information when requesting support.

• Hardware (T4/V100/Xavier/Nano/etc)
Ubuntu, x64, RTX3060-12g
• Network Type (Detectnet_v2/Faster_rcnn/Yolo_v4/LPRnet/Mask_rcnn/Classification/etc)
• TLT Version (Please run “tlt info --verbose” and share “docker_tag” here)
• Training spec file(If have, please share here)
• How to reproduce the issue ? (This is for errors. Please share the command line and the detailed log here.)

My machine info:

NVIDIA-SMI 510.47.03 Driver Version: 510.47.03 CUDA Version: 11.6

I’ve started the tritonserver:22.02-py3 container with Docker, and I am trying to convert my TAO-retrained model to a .plan file.
I checked https://docs.nvidia.com/tao/tao-toolkit/text/tensorrt.html#installing-the-tao-converter and only see the:

I downloaded and ran ./tao-converter, and it shows this error:

./tao-converter: error while loading shared libraries: libnvinfer.so.8: cannot open shared object file: No such file or directory

Could you help?

Can you check whether it is TensorRT 8 inside your current environment (tritonserver:22.02-py3)?
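One quick way to check (a sketch, assuming a Linux environment where `ldconfig` is on the PATH; the helper name `have_lib` is made up for illustration) is to ask the dynamic linker whether the TensorRT 8 runtime library is visible, since `libnvinfer.so.8` only ships with TensorRT 8.x:

```shell
# have_lib: report whether the dynamic linker can see a given shared library.
# Usage: have_lib libnvinfer.so.8
have_lib() {
  if ldconfig -p | grep -q "$1"; then
    echo "found $1"
  else
    echo "missing $1"
  fi
}

# Inside tritonserver:22.02-py3 this should report "found"; on a host
# without TensorRT it reports "missing", which matches the
# "cannot open shared object file" error above.
have_lib libnvinfer.so.8
```

If the library is missing on the host but present in the container, that points to running tao-converter inside the container (or installing TensorRT 8 on the host) rather than on the bare host.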

Sorry, that was my mistake; I should run tao-converter inside the tritonserver Docker container.

The target tao-converter (marked with the red rectangle in the original post) works in tritonserver:22.02-py3.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.