TensorRT doesn't work with NVIDIA VIDEO CODEC SDK

I want to use the NVIDIA Video Codec SDK to decode a video file, and then run real-time detection with TensorRT.
But if I do this, TensorRT reports an error:

c:\p4sw\sw\gpgpu\MachineLearning\DIT\release.0\engine\cuda\caskConvolutionLayer.cpp (256) - Cuda Error in nvinfer1::rt::task::caskConvolutionLayer::execute: 33

This behavior is very strange.
I can use either of them alone without any problems, but if I use them together I get this error.

I hope to get official help.

my environment:
GTX 1050 Ti
Windows 10
CUDA 10
cuDNN 7.3
TensorRT 5.0.4
NVIDIA Video Codec SDK 8.2

Hello,

CUDA error 33 is a resource handle error: it indicates that a resource handle passed to the API call was not valid. Can you double-check that you are managing CUDA resources appropriately when decoding video and running inference together?
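A quick way to check this (a hedged diagnostic sketch, not from the original thread; it assumes the CUDA driver API headers are available and a hypothetical helper name) is to log the current driver context at decode time and again just before `execute()`. If the two pointers differ, handles created in one context are invalid in the other, which surfaces exactly as error 33.

```cpp
#include <cuda.h>
#include <cstdio>

// Call this once at decode time and once right before
// test_context->execute(); if the printed pointers differ, the decoder
// and TensorRT are running in different CUDA contexts, and CUDA error 33
// (invalid resource handle) is the expected symptom.
void logCurrentCudaContext(const char* where) {
    CUcontext ctx = nullptr;
    cuCtxGetCurrent(&ctx);
    printf("[%s] current CUcontext = %p\n", where, (void*)ctx);
}
```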

Hello,

I have confirmed that the correct data is obtained from NVDEC decoding,
but when I call:

test_context->execute(BATCH_SIZE, test_buffers);

I get this error.
Before execution, I have finished:

test_context = test_engine->createExecutionContext();
test_context->setProfiler(&gProfiler);

I really don’t understand what could cause a resource handle error here.

Could you give more detail?

Many thanks.

Hello,

this is certainly unexpected. To help us debug, can you share a small repro that demonstrates the CUDA error 33 when using the Video Codec SDK and TensorRT together, and no error when using them separately?

Hello,

I would like to share it, but in what form should I report it to you?
The forum can’t host a Visual Studio project.
Could you tell me in detail?

Hello,

you can either upload the files to GitHub, Dropbox, or Google Drive and share the project with me, or attach the files to DevTalk (see this post: https://devtalk.nvidia.com/default/topic/1043356/tensorrt/attaching-files-to-forum-topics-posts/).

regards,
NVIDIA Enterprise Support

Hello,

I have uploaded it to GitHub: a Faster R-CNN sample based on TensorRT that contains NVIDIA decoder code (added by me); the decoder code also comes from the NVIDIA decode demo. You can clone it from https://github.com/hset911/tensorrt_decode_bug_Report.git.
In sampleFasterRCNN.cpp, lines 352-394 were added by me. If you change line 352 from #if 0 to #if 1, it runs the original sampleFasterRCNN code instead.
By the way, when my added code runs, TensorRT’s doInference crashes and reports: “ERROR: c:\p4sw\sw\gpgpu\MachineLearning\DIT\release\5.0\engine\cuda\cudaPoolingLayer.cpp (133) - Cudnn Error in nvinfer1::rt::cuda::PoolingLayer::execute: 8”, similar to the bug I reported.

I hope you can solve this problem as soon as possible.
Thank you.

Hello,

  Did you follow up on this issue?

ERROR: cuda/caskConvolutionLayer.cpp (256) - Cuda Error in execute: 33

CUcontext cuContext = nullptr;
ck(cuCtxCreate(&cuContext, 0, cuDevice));

context = engine->createExecutionContext();
context->setProfiler(&gProfiler);

Could these two contexts be conflicting?
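That is a plausible cause: `cuCtxCreate` creates and pushes a brand-new driver context, while TensorRT's execution context is bound to whatever CUDA context was current when the engine and device buffers were created (for the runtime API, the device's primary context). Handles from one context are invalid in the other. A minimal sketch of one workaround, assuming the decoder (as in the NVDEC samples) accepts a `CUcontext`: retain the device's primary context instead of creating a fresh one, so the decoder and TensorRT share the same context. This is a sketch under those assumptions, not a verified fix from the thread.

```cpp
#include <cuda.h>

// Return the device's *primary* context, made current on this thread.
// This is the same context the CUDA runtime (and therefore TensorRT)
// uses implicitly, so decoder handles and TensorRT buffers coexist.
CUcontext getSharedContext(int deviceOrdinal) {
    cuInit(0);
    CUdevice cuDevice = 0;
    cuDeviceGet(&cuDevice, deviceOrdinal);

    CUcontext cuContext = nullptr;
    // Unlike cuCtxCreate, this does NOT make a new context; it retains
    // the primary one shared with the runtime API.
    cuDevicePrimaryCtxRetain(&cuContext, cuDevice);
    cuCtxSetCurrent(cuContext);
    return cuContext;
}
```

Pass this context to the decoder instead of one from `cuCtxCreate()`; alternatively, keep the created context but `cuCtxPushCurrent`/`cuCtxPopCurrent` around the decode calls so the inference thread's context is restored before `execute()`.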

guys,
any update?
I have the same issue…ERROR: cuda/caskConvolutionLayer.cpp (256) - Cuda Error in execute: 33

thx

@NVES come on!