Installed the TAO converter on an x86 platform. CUDA 12.1 is installed on the system. Downloaded v3.22.05_trt8.4_x86 from the NVIDIA website. While running the converter I get the following error:
./tao-converter: error while loading shared libraries: libnvrtc.so.11.2: cannot open shared object file: No such file or directory
I want to convert an .etlt model to a TensorRT engine using the TAO converter.
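One likely cause, as an assumption from the error text rather than anything confirmed here: the v3.22.05_trt8.4 converter build is linked against CUDA 11.x libraries, so it looks for libnvrtc.so.11.2, which a CUDA 12.1 installation does not provide. A minimal sketch for diagnosing and working around it, assuming a CUDA 11.x toolkit is also installed on the machine (the cuda-11.4 path is hypothetical; adjust it to your install):

$ ldd ./tao-converter | grep "not found"
# point the loader at a CUDA 11.x lib directory that contains libnvrtc.so.11.2
$ export LD_LIBRARY_PATH=/usr/local/cuda-11.4/lib64:$LD_LIBRARY_PATH
$ ./tao-converter -h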
I tried the 8.5 version. The engine file was generated, but when I try to run inference on the engine I get a TensorRT version conflict, even though both the .etlt-to-engine conversion and the Python inference script run on the same host device.
The error:
The engine plan file is not compatible with this version of TensorRT, expecting library version 8.6.1.6 got 8.6.0.12, please rebuild.
[08/11/2023-10:18:18] [TRT] [E] 2: [engine.cpp::deserializeEngine::951] Error Code 2: Internal Error (Assertion engine->deserialize(start, size, allocator, runtime) failed. )
The error I got is prompting me to rebuild the engine file.
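Even on a single host, that message means the plan file was serialized with TensorRT 8.6.0.12 while the runtime deserializing it is 8.6.1.6, so two different TensorRT builds are in play (for example, a pip-installed tensorrt wheel used by the Python script versus the TensorRT that the converter or container was built with). A quick way to compare the two, sketched under the assumption of an Ubuntu host with the TensorRT Python bindings installed:

$ dpkg -l | grep -i nvinfer
$ python3 -c "import tensorrt; print(tensorrt.__version__)"

If the two report different versions, rebuild the engine with the same TensorRT build that the inference script imports.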
For example, if using yolo_v4, we suggest logging in with
$ docker run --runtime=nvidia -it --rm -v your_local_dir:docker_dir nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
Then, you can run the commands you need inside the docker.
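As an illustration of the volume mapping, with a hypothetical host directory and mount point (substitute your own paths), anything under the mounted directory is visible inside the container, and the TAO entrypoints such as yolo_v4 can be run from there:

$ docker run --runtime=nvidia -it --rm -v /home/user/tao_experiments:/workspace/tao nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
# inside the container:
$ ls /workspace/tao
$ yolo_v4 --help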
The ESS stereo DNN model is not introduced by TAO.
Please try the following.
$ docker run --runtime=nvidia -it --rm -v your_local_dir:docker_dir nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
Then, you can run something inside the docker.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks