Description
Here is my problem: I am running YOLOv4 detection. I use the GPU to decode the video stream into frames, then pass each frame to TensorRT for inference. At that point I get errors like the following:
inference elasped time:0.5551ms
post elasped time:0.0714ms
pre elasped time:1.0675ms
ERROR: C:\source\rtSafe\cuda\cudaElementWiseRunner.cpp (164) - Cuda Error in nvinfer1::rt::cuda::ElementWiseRunner::execute: 400 (invalid resource handle)
ERROR: FAILED_EXECUTION: Unknown exception
inference elasped time:0.5422ms
post elasped time:0.0723ms
pre elasped time:1.0797ms
ERROR: C:\source\rtSafe\cuda\cudaElementWiseRunner.cpp (164) - Cuda Error in nvinfer1::rt::cuda::ElementWiseRunner::execute: 400 (invalid resource handle)
ERROR: FAILED_EXECUTION: Unknown exception
However, when I capture/decode the frames on the CPU instead, everything works fine. I searched around (on Baidu) and someone suggested that the GPU/CUDA context should only be initialized once, but I tried that and it did not help.
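For reference, my per-frame loop is structured roughly like the sketch below. This is only a simplified sketch, not my full program: engine deserialization, buffer allocation, and the actual GPU decoder are omitted, and the buffers/stream names are placeholders for my own code.

// Minimal sketch of the per-frame inference step, assuming `buffers[]` are
// device bindings already allocated with cudaMalloc and `context` was created
// from the deserialized engine. GPU decode and pre/post-processing are only
// indicated by comments.
#include <NvInfer.h>
#include <cuda_runtime_api.h>

bool inferFrame(nvinfer1::IExecutionContext* context, void** buffers, cudaStream_t stream)
{
    // 1. Frame is decoded on the GPU and preprocessed into buffers[0]
    //    ("pre elasped time" in the log above).

    // 2. Run inference asynchronously on the same stream
    //    ("inference elasped time"); this is where
    //    "Cuda Error ... 400 (invalid resource handle)" is reported.
    if (!context->enqueueV2(buffers, stream, nullptr))
        return false;   // "ERROR: FAILED_EXECUTION" shows up here

    // 3. Copy results back and decode the boxes ("post elasped time").
    cudaStreamSynchronize(stream);
    return true;
}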
Environment
TensorRT Version: TensorRT-7.1.3.4
GPU Type: GTX 1080 (8 GB)
Nvidia Driver Version: 451
CUDA Version: 11.0
CUDNN Version: 8.0.1 (cudnn-11.0-windows-x64-v8.0.1.13)
Operating System + Version: Windows 10
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):
Relevant Files
Please attach or include links to any models, data, files, or scripts necessary to reproduce your issue. (Github repo, Google Drive, Dropbox, etc.)
Steps To Reproduce
Please include:
- Exact steps/commands to build your repro
- Exact steps/commands to run your repro
- Full traceback of errors encountered