Description
Recently we have been testing an RTX 4090 by running a YOLOv5 TensorRT INT8 engine, and found that inference is slower than on an RTX 3090 Ti. We can't figure out what is wrong. Which TensorRT version is the first to support the RTX 4090?
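For a like-for-like comparison, the benchmark on each card might be run with TensorRT's bundled trtexec tool, along these lines (a sketch, not the exact commands used here; the ONNX file name is a placeholder). Note that a TensorRT engine is specific to the GPU architecture it was built on, so the engine should be rebuilt on the RTX 4090 rather than reusing one built on the RTX 3090 Ti:

```shell
# Build an INT8 engine on the target GPU (engines are not portable
# across GPU architectures, so rebuild on each card).
# yolov5s.onnx is a placeholder for the exported model.
trtexec --onnx=yolov5s.onnx --int8 --saveEngine=yolov5s_int8.engine

# Benchmark the engine: warm up first, then time many iterations.
trtexec --loadEngine=yolov5s_int8.engine --warmUp=500 --iterations=200
```

Comparing the reported latency/throughput from trtexec on both cards rules out differences coming from the surrounding application code.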
Environment
TensorRT Version: TensorRT-8.5.1.7
GPU Type: RTX 4090
Nvidia Driver Version: 522.06 (DCH) / Windows 10 64-bit
CUDA Version: 11.8
CUDNN Version: 8.6.0
Operating System + Version: Win10
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered