Hello! I've recently been using TensorRT 7.0 to accelerate a YOLOv5 model.
The project I use is: https://github.com/wang-xinyu/tensorrtx/tree/master/yolov
Following the tutorial, I can run accelerated inference with the model normally.
However, when I measured the time consumption, I found that running the inference timing test repeatedly on the same frame of image gave inconsistent results: the latency differed between runs and fluctuated greatly, as shown in the figure below:
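Part of the variance may come from how the timing itself is done (no warm-up iterations, no synchronization before reading the clock). Below is a minimal sketch of a more robust measurement loop; `infer` here is a hypothetical stand-in for the actual TensorRT execution call, and with a real GPU you would also need to synchronize the CUDA stream inside the timed region:

```python
import time
import statistics

def benchmark(infer, warmup=10, iters=100):
    """Time a callable with warm-up runs and report summary statistics in ms."""
    # Warm-up: lets caches, lazy initialization, and GPU clocks settle
    # before any measurement is recorded.
    for _ in range(warmup):
        infer()

    samples = []
    for _ in range(iters):
        t0 = time.perf_counter()
        infer()  # with TensorRT, synchronize the stream here before stopping the clock
        samples.append((time.perf_counter() - t0) * 1000.0)

    return {
        "median_ms": statistics.median(samples),
        "mean_ms": statistics.fmean(samples),
        "stdev_ms": statistics.stdev(samples),
    }

# Example with a dummy CPU workload standing in for inference:
stats = benchmark(lambda: sum(i * i for i in range(10000)))
print(stats)
```

Reporting the median rather than a single run makes the numbers much more stable; a large stdev even after warm-up would point at the GPU (clock throttling, contention) rather than the measurement.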
TensorRT Version: 7.0.0
GPU Type: Tesla V100-SXM2-32GB
Nvidia Driver Version: 418.67
CUDA Version: 10.0
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): Python 3.6
TensorFlow Version (if applicable): /
PyTorch Version (if applicable): 1.4
Baremetal or Container (if container which image + tag): /
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered