Hi,
I have converted my PyTorch model to ONNX and then to a TensorRT engine, TRT(1), with custom plugins.
I am also using a second TensorRT model, TRT(2), whose output is fed as the input to the TRT(1) model above (the inference loop is sketched further below). While running inference with these two TRT models chained together, I get the following error:
[TensorRT] ERROR: …/rtSafe/cuda/reformat.cu (925) - Cuda Error in NCHWToNCHHW2: 400 (invalid resource handle)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception
Note that while converting the ONNX model to the TRT(1) engine, two ops were not supported, so I implemented custom plugins for those unsupported ops, built the engine with them, and I load the same plugins again at inference time.
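For reference, this is roughly how I load the plugin library before deserializing the engine (the library and engine file names here are illustrative placeholders, not my real paths):

import ctypes
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Load the shared library containing my custom plugin implementations
# (path is illustrative). The plugins register themselves via
# REGISTER_TENSORRT_PLUGIN when the library is loaded.
ctypes.CDLL("./libcustom_plugins.so")

# Register TensorRT's built-in plugins plus the custom ones loaded above.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Deserialize the engine that uses the custom plugins.
with open("trt1_engine.plan", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine1 = runtime.deserialize_cuda_engine(f.read())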
I only hit this error when running inference on a video, i.e. frame by frame in a loop; inference on a single frame works fine.
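Here is a simplified sketch of my per-frame inference loop. The tensor shapes, buffer names, and video_frames iterator are illustrative placeholders; engine1 and engine2 are the deserialized engines, both running on one CUDA stream, with PyCUDA managing device memory:

import numpy as np
import pycuda.autoinit  # creates the CUDA context on import
import pycuda.driver as cuda

# One execution context per engine (TRT(2) feeds TRT(1)).
context2 = engine2.create_execution_context()
context1 = engine1.create_execution_context()
stream = cuda.Stream()

frame_shape = (1, 3, 224, 224)   # illustrative input shape of TRT(2)
mid_shape = (1, 256, 56, 56)     # illustrative TRT(2) output / TRT(1) input
out_shape = (1, 1000)            # illustrative TRT(1) output shape

# Device buffers (float32, hence the * 4).
d_in2 = cuda.mem_alloc(int(np.prod(frame_shape)) * 4)
d_mid = cuda.mem_alloc(int(np.prod(mid_shape)) * 4)
d_out = cuda.mem_alloc(int(np.prod(out_shape)) * 4)
h_out = cuda.pagelocked_empty(out_shape, dtype=np.float32)

for frame in video_frames:  # yields preprocessed NCHW float32 arrays
    cuda.memcpy_htod_async(d_in2, np.ascontiguousarray(frame), stream)
    # Run TRT(2) first ...
    context2.execute_async_v2([int(d_in2), int(d_mid)], stream.handle)
    # ... then feed its output directly into TRT(1).
    context1.execute_async_v2([int(d_mid), int(d_out)], stream.handle)
    cuda.memcpy_dtoh_async(h_out, d_out, stream)
    stream.synchronize()

The first frame (or a standalone single-frame run) goes through fine; the error above appears once I run this loop over the video.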
I am not sure what is going wrong or what is causing the issue here.
Can you please assist me in resolving this error?
Environment:
CUDA: 10.2
TensorRT: 7.1.3
cuDNN: 7.6.5
GPU: RTX 2060
Thanks,
Darshan