I tried this with the 8.5 version. The engine file was generated, but when I try to run inference with the engine I get a TensorRT version conflict, even though the .etlt-to-engine conversion and the Python inference script both run on the same host device.
The error:
The engine plan file is not compatible with this version of TensorRT, expecting library version 22.214.171.124 got 126.96.36.199, please rebuild.
[08/11/2023-10:18:18] [TRT] [E] 2: [engine.cpp::deserializeEngine::951] Error Code 2: Internal Error (Assertion engine->deserialize(start, size, allocator, runtime) failed. )
The error message prompts me to rebuild the engine file.
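For context, a serialized TensorRT engine plan is tied to the exact library build that produced it: the runtime refuses to deserialize a plan whose recorded version differs from its own, which is what the "please rebuild" error reports. A minimal sketch of that compatibility rule (the version strings below are illustrative, not the ones from the error; on a real system the installed version comes from `tensorrt.__version__`):

```python
def engine_compatible(built_with: str, runtime_version: str) -> bool:
    """TensorRT deserializes an engine plan only if it was serialized
    by the exact same library version (all components must match)."""
    return built_with.split(".") == runtime_version.split(".")

# Illustrative version values only; the real ones come from the build
# environment and from: python3 -c "import tensorrt; print(tensorrt.__version__)"
print(engine_compatible("8.5.3.1", "8.5.3.1"))  # -> True: deserialization succeeds
print(engine_compatible("8.5.3.1", "8.6.1.6"))  # -> False: "please rebuild" error
```

So the fix is to rebuild the engine with the same TensorRT version that the inference script loads, not to copy an engine between environments.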
For example, if using yolo_v4, I suggest you log in to the TAO container with:
$ docker run --runtime=nvidia -it --rm -v your_local_dir:docker_dir nvcr.io/nvidia/tao/tao-toolkit:5.0.0-tf1.15.5 /bin/bash
Then you can run the commands inside the docker container, so the engine is built with the same TensorRT version you will use for inference.
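Before rebuilding, it can help to confirm which TensorRT version each environment actually uses, so the build and inference sides match. A small sketch, assuming the environment's Python ships the tensorrt bindings (run it both inside the container and on the host where your inference script runs):

```shell
# Print the TensorRT version visible to Python in this environment;
# fall back to a message if the bindings are not installed here.
python3 -c "import tensorrt; print(tensorrt.__version__)" \
  || echo "tensorrt bindings not found in this environment"
```

If the two printed versions differ, the engine must be rebuilt in (or for) the environment that will run inference.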