createInferRuntime slow

Description

The createInferRuntime API call blocks for ~8 seconds with 100% CPU load on a single core.

Environment

TensorRT Version: nvcr.io/nvidia/tensorrt:20.06-py3 container
GPU Type: Tesla T4
Nvidia Driver Version: 418.87.00
CUDA Version: 11.0
CUDNN Version:
Operating System + Version: Linux (Docker container nvcr.io/nvidia/tensorrt:20.06-py3; host is an Amazon ECS GPU AMI)

Steps To Reproduce

Write a C++ program that does nothing but call createInferRuntime(gLogger) via the C++ API (see the sketch below).
The call takes ~8 seconds to finish.
Expected to be much faster.
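
A minimal sketch of such a repro, assuming the TensorRT 7.x headers that ship in the container (the Logger class and the timing code are illustrative, not taken from the original report):

```cpp
// repro.cpp - times nvinfer1::createInferRuntime() in isolation.
#include <NvInfer.h>
#include <chrono>
#include <iostream>

// Minimal ILogger implementation; prints warnings and errors only.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING)
            std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    auto start = std::chrono::steady_clock::now();
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    auto end = std::chrono::steady_clock::now();

    std::cout << "createInferRuntime took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count()
              << " ms" << std::endl;

    runtime->destroy();  // destroy() is valid in TRT 7.x (deprecated later)
    return 0;
}
```

Built with something like `g++ repro.cpp -o repro -lnvinfer`, this isolates the runtime-creation time from everything else in the program.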

Hi @ek9852,
Could you try reproducing the issue after upgrading the NVIDIA driver? The latest release is r450.
Please check the support matrix to confirm the supported version combinations.

Support Matrix :: NVIDIA Deep Learning TensorRT Documentation

Please share your code if the issue persists.

Thanks!

Tried again on the Tesla T4:
NVIDIA-SMI 450.51.05 Driver Version: 450.51.05 CUDA Version: 11.0

Same problem: createInferRuntime takes more than 10 seconds on the Tesla T4,
while on a TITAN X it takes ~1-2 seconds.
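
One way to narrow this down (a hypothetical diagnostic, not something posted in the thread) is to time CUDA context creation on its own. If a bare cudaFree(0) accounts for most of the delay, the cost is in driver/context initialization rather than in TensorRT itself:

```cpp
// diag.cpp - times lazy CUDA context initialization separately.
#include <cuda_runtime_api.h>
#include <chrono>
#include <iostream>

int main() {
    auto t0 = std::chrono::steady_clock::now();
    cudaFree(0);  // harmless call that forces CUDA context creation
    auto t1 = std::chrono::steady_clock::now();

    std::cout << "CUDA context init took "
              << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
              << " ms" << std::endl;
    return 0;
}
```

Compiled with `g++ diag.cpp -o diag -lcudart` and run on both the T4 and the TITAN X machine, this would show whether the per-GPU difference comes from context setup or from createInferRuntime proper.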
