Hi! I'm experiencing some issues with TensorRT inference.
With driver version 473.81 the inference times are low and stable; after upgrading the driver to a recent version (e.g. 537.13) while keeping the CUDA Toolkit and cuDNN versions fixed, inference times are 4x higher and not stable at all. I run inference with a YOLOv5 model exported as a TensorRT engine, loaded via torch.hub.load.
Could this be a driver-related issue?
TensorRT Version: 18.104.22.168
GPU Type: RTX Quadro T1000 4GB
Nvidia Driver Version: 473.81 and 537.13
CUDA Version: 11.6
CUDNN Version: 8.5
Operating System + Version: Windows 10
Python Version (if applicable): 3.10.10
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.13.1 + cu116
Baremetal or Container (if container which image + tag):
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
import torch

session = torch.hub.load('ultralytics/yolov5', 'custom', model_path)
result_inference = session(images, size=(256, 256))
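In case it helps, here is a minimal timing harness (a sketch; `session` and `images` stand in for the loaded model and input batch from the snippet above, which are not defined here) that reports the mean and standard deviation of per-inference latency. Running it under each driver version should make the 4x slowdown and the jitter easy to quantify:

```python
import statistics
import time

def measure_latency(infer, n_warmup=10, n_runs=100):
    """Time a zero-argument callable, skipping warm-up iterations.

    Returns (mean_ms, stdev_ms). Note: for GPU inference you should
    call torch.cuda.synchronize() inside `infer` (or before reading
    the clock) so the timing reflects completed GPU work.
    """
    for _ in range(n_warmup):
        infer()
    samples = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        infer()
        samples.append((time.perf_counter() - t0) * 1000.0)  # milliseconds
    return statistics.mean(samples), statistics.stdev(samples)

# Hypothetical usage with the session/images from the snippet above:
# mean_ms, std_ms = measure_latency(lambda: session(images, size=(256, 256)))
# print(f"mean {mean_ms:.2f} ms, stdev {std_ms:.2f} ms")
```

A large standard deviation relative to the mean on 537.13 (but not on 473.81) would support the driver-regression hypothesis.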