TAO converter


I installed the TAO converter on an x86 platform. CUDA 12.1 is installed on the system, and I downloaded v3.22.05_trt8.4_x86 from the NVIDIA website. While running the converter I get the following error:

./tao-converter: error while loading shared libraries: libnvrtc.so.11.2: cannot open shared object file: No such file or directory
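A quick way to list every unresolved shared library is a diagnostic sketch like the following (the binary path is the one from the error above; note that libnvrtc.so.11.2 ships with CUDA 11.x, not CUDA 12.1):

```shell
# Show which shared libraries the binary cannot resolve.
# libnvrtc.so.11.2 belongs to CUDA 11.x, so a system with only
# CUDA 12.1 installed will not provide it.
ldd ./tao-converter | grep "not found"
```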

I want to convert an .etlt model to a TensorRT engine using the TAO converter.

Any insights on this will be helpful.

Thanks in Advance

Hi there @gayathri4!

I think you might want to ask in the TAO-specific forum category. I am sure these kinds of installation/usage issues can be addressed there.


Could you please download v4.0.0_trt8.5.2.2_x86 from
TAO Converter | NVIDIA NGC and retry? Thanks.

Hi @Morganh

I tried this 8.5 version. The engine file was generated, but when I try to run inference with the engine I get a TensorRT version conflict, even though the etlt-to-engine conversion and the Python inference script both run on the same host device.

The error :
The engine plan file is not compatible with this version of TensorRT, expecting library version got, please rebuild.
[08/11/2023-10:18:18] [TRT] [E] 2: [engine.cpp::deserializeEngine::951] Error Code 2: Internal Error (Assertion engine->deserialize(start, size, allocator, runtime) failed. )

The error I got is prompting me to rebuild the engine file.
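One quick sanity check before rebuilding (a sketch, assuming the standard `tensorrt` Python package is what the inference script imports): print the runtime version in the same interpreter the script uses, and compare it with the TensorRT version of the tao-converter build (trt8.5.2.2 here). A major/minor mismatch between the two produces exactly this deserialization error.

```python
# Print the TensorRT version the inference script will actually load.
# The engine must have been serialized with a matching version.
import tensorrt as trt

print(trt.__version__)
```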

Thanks in advance

Could you try again inside a TAO docker?

  • pull a TAO docker
  • “docker run” in it
  • copy .etlt file and tao-converter
  • run tao-converter
  • run your python script
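The steps above can be sketched roughly like this (the image tag, mount paths, and script name are placeholders, not taken from the thread):

```shell
# 1. Pull a TAO docker image (tag is an example).
docker pull nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5

# 2. Start a container, mounting a local directory that holds
#    the .etlt file and the tao-converter binary.
docker run --runtime=nvidia -it --rm \
  -v /path/on/host:/workspace \
  nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash

# 3-5. Inside the container: run tao-converter on the .etlt file,
#      then run the Python inference script against the engine.
```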

Can you provide the steps for running this inside docker?

Please check which network you are running.

                    docker_registry: nvcr.io
                        1. classification_tf2
                        2. efficientdet_tf2
                    docker_registry: nvcr.io
                        1. bpnet
                        2. classification_tf1
                        3. converter
                        4. detectnet_v2
                        5. dssd
                        6. efficientdet_tf1
                        7. faster_rcnn
                        8. fpenet
                        9. lprnet
                        10. mask_rcnn
                        11. multitask_classification
                        12. retinanet
                        13. ssd
                        14. unet
                        15. yolo_v3
                        16. yolo_v4
                        17. yolo_v4_tiny
                    docker_registry: nvcr.io
                        1. action_recognition
                        2. classification_pyt
                        3. deformable_detr
                        4. dino
                        5. mal
                        6. ml_recog
                        7. ocdnet
                        8. ocrnet
                        9. optical_inspection
                        10. pointpillars
                        11. pose_classification
                        12. re_identification
                        13. segformer

For example, if using yolo_v4, we suggest you log in with
$ docker run --runtime=nvidia -it --rm -v your_local_dir:docker_dir nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
Then, you can run something inside the docker.
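Inside the container, a typical tao-converter invocation looks roughly like this (a sketch: the key, input dimensions, and paths are placeholders, and the exact flags depend on the network):

```shell
# -k : encryption key the model was exported with
# -d : input dimensions (C,H,W) -- network dependent
# -e : output path for the serialized TensorRT engine
./tao-converter -k <encryption_key> \
  -d 3,544,960 \
  -e /workspace/model.engine \
  /workspace/model.etlt
```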

I am using ess stereo dnn model.

The ESS stereo DNN model is not a model provided by TAO.

Please try with below.
$ docker run --runtime=nvidia -it --rm -v your_local_dir:docker_dir nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
Then, you can run something inside the docker.

The TAO converter is used to convert the .etlt model to an engine.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

The .etlt model is an encrypted ONNX model. Please try to decode the .etlt model to a .onnx model.
Refer to Access model before conversion to .tlt OR decode .tlt to .hdf5/.pb

Then you can use trtexec to generate a TensorRT engine from the ONNX file.
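For example (a sketch; the file names are placeholders, and trtexec is typically found under the TensorRT installation's bin directory):

```shell
# Build a TensorRT engine from the decoded ONNX model.
# --fp16 is optional and enables half precision where supported.
trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```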

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.