I want to use the NVIDIA Video Codec SDK to decode a video file and then run real-time detection on the decoded frames with TensorRT. However, when I do this, TensorRT reports an error:
c:\p4sw\sw\gpgpu\MachineLearning\DIT\release.0\engine\cuda\caskConvolutionLayer.cpp (256) - Cuda Error in nvinfer1::rt::task::caskConvolutionLayer::execute: 33
This is very strange: either component works fine on its own, but using them together produces this error. I would appreciate help from the NVIDIA team.
My environment:
GTX 1050 Ti
Windows 10
CUDA 10
cuDNN 7.3
TensorRT 5.0.4
NVIDIA Video Codec SDK 8.2
CUDA error 33 is an invalid resource handle error: it indicates that a resource handle passed to an API call was not valid. Can you double-check that you are appropriately managing CUDA resources when decoding video and running inference together?
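A common cause of invalid-handle errors when combining NVDEC and TensorRT is that the decoder and the inference engine end up on different CUDA contexts, so device pointers from one are not valid in the other. Below is a minimal sketch, assuming the CUDA driver API and the `NvDecoder` class from the Video Codec SDK 8.2 samples (its constructor arguments here follow those samples and may differ in other SDK versions); it is illustrative, not the poster's actual code:

```cpp
// Sketch: share ONE CUDA context between the NVDEC decoder and TensorRT.
// Assumes the CUDA driver API and the Video Codec SDK sample class NvDecoder;
// the enqueue call shown in the comment is the usual TensorRT pattern.
#include <cstdlib>
#include <cuda.h>
#include "NvDecoder/NvDecoder.h"   // from the Video Codec SDK samples

#define CK(call) do { CUresult r = (call); if (r != CUDA_SUCCESS) std::abort(); } while (0)

int main()
{
    CK(cuInit(0));
    CUdevice dev;
    CK(cuDeviceGet(&dev, 0));

    // One context for the whole pipeline.
    CUcontext ctx;
    CK(cuCtxCreate(&ctx, 0, dev));

    // The decoder is constructed on this context (SDK 8.2 sample constructor;
    // width/height 0 lets the decoder size itself from the stream).
    NvDecoder dec(ctx, /*nWidth*/ 0, /*nHeight*/ 0,
                  /*bUseDeviceFrame*/ true, cudaVideoCodec_H264);

    // TensorRT uses whatever context is current on the calling thread, so make
    // the SAME context current before engine creation, buffer allocation, and
    // every enqueue:
    CK(cuCtxPushCurrent(ctx));
    // ... trtContext->enqueue(batchSize, buffers, stream, nullptr); ...
    CK(cuCtxPopCurrent(nullptr));

    CK(cuCtxDestroy(ctx));
    return 0;
}
```

The key point is that frames decoded by NVDEC live in the decoder's context; passing their device pointers into a TensorRT engine that was created on a different context yields exactly this kind of invalid-resource-handle (error 33) failure.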
This is certainly unexpected. To help us debug, can you share a small repro that demonstrates that using the Video Codec SDK and TensorRT together produces the CUDA error 33, while using either one alone does not?
I have uploaded a Faster R-CNN sample based on TensorRT to GitHub. It contains NVIDIA decoder code that I added, which also comes from the NVIDIA decode demo: https://github.com/hset911/tensorrt_decode_bug_Report.git — you can git clone it.
In sampleFasterRCNN.cpp, lines 352-394 are the code I added. If you change line 352 from #if 0 to #if 1 and run it, you get the original sampleFasterRCNN code.
By the way, if I run my added code, TensorRT's doInference crashes and reports: "ERROR: c:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.0\engine\cuda\cudaPoolingLayer.cpp (133) - Cudnn Error in nvinfer1::rt::cuda::PoolingLayer::execute: 8", similar to the bug I reported above.
I hope you can solve the problem as soon as possible.
Thank you.