[TensorRT] ERROR: ../rtSafe/cuda/reformat.cu (925) - Cuda Error in NCHWToNCHHW2: 400 (invalid resource handle)

Hi,

I have converted my PyTorch model to ONNX and then to TensorRT (with custom plugins).
I am also using a second TensorRT model, TRT(2), whose output serves as the input to the above TRT(1) model. While running inference with these two TRT models, I get the following error:

[TensorRT] ERROR: ../rtSafe/cuda/reformat.cu (925) - Cuda Error in NCHWToNCHHW2: 400 (invalid resource handle)
[TensorRT] ERROR: FAILED_EXECUTION: std::exception

Note that while converting the ONNX model to the TRT(1) model, two ops were not supported, so I implemented custom plugins for those unsupported ops, converted the model, and am using those plugins for inference.

I only get this error while running inference on a video, not on a single frame.

I am not sure what is going wrong or what is causing the issue here.
Could you please assist me in resolving this error?

Environment:
CUDA: 10.2
TRT version: 7.1.3
CUDNN: 7.6.5
GPU: RTX 2060

Thanks,
Darshan

Hi @darshancganji12,

Hope the following post will help you. Could you please let us know which platform you are running on?

Thank you.

Hi @spolisetty,
Thanks for your reply.

For the first frame of the video, it produces output, but for the next frame it throws an error. I am not sure what is going wrong.

Model Info:

  1. Yolov3-spp-ultralytics.
  2. SlowFast (with two custom plugins).

The bounding boxes output by the 1st model (YOLOv3) are fed as input to the 2nd model (SlowFast).

As I said, for the first frame of the video it produces output, but for the next frame it throws an error.
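A "first frame works, second frame fails" pattern with CUDA error 400 (invalid resource handle) is often a sign that a CUDA stream, handle, or execution context is being used outside the context it was created in, or is torn down between frames. Below is a minimal sketch of an inference loop for the two engines, assuming the TensorRT 7 C++ API with explicit-batch engines (enqueueV2); `preprocess`, `cropBoxes`, `engine1`/`engine2`, and the binding arrays are hypothetical names, and this is illustrative only (it will not compile without TensorRT and CUDA):

```cpp
// Sketch: create the execution contexts and the stream ONCE, and reuse
// them for every frame. Recreating/destroying them per frame, or mixing
// streams from different CUDA contexts, can work on frame 1 and then
// fail with "invalid resource handle" on frame 2.
cudaStream_t stream;
cudaStreamCreate(&stream);                        // one stream for the whole run

nvinfer1::IExecutionContext* ctx1 = engine1->createExecutionContext(); // YOLOv3
nvinfer1::IExecutionContext* ctx2 = engine2->createExecutionContext(); // SlowFast

for (Frame& frame : video)
{
    preprocess(frame, bindings1, stream);         // hypothetical helper
    ctx1->enqueueV2(bindings1, stream, nullptr);  // detector

    cropBoxes(bindings1, bindings2, stream);      // hypothetical helper
    ctx2->enqueueV2(bindings2, stream, nullptr);  // SlowFast with custom plugins

    cudaStreamSynchronize(stream);                // single sync point per frame
}

cudaStreamDestroy(stream);                        // cleanup after the loop
```

If your loop instead creates a fresh stream or context per frame, or the two engines live in different CUDA contexts (e.g. created on different threads without sharing the context), that would be worth ruling out first.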

I am attaching the plugin code that I have implemented below:
Einsum Plugin Code: Einsum - Google Drive
RoI Align Plugin Code: RoI_Align - Google Drive

Thanks,
Darshan

Hi @darshancganji12,

This is complex; you may need to isolate whether it is a plugin implementation issue. Try bypassing the enqueue function in the plugin, then debug whether the plugin implementation is causing the error.
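For example, a bypassed enqueue for an IPluginV2DynamicExt plugin (the TensorRT 7 dynamic-shape plugin interface) could simply zero-fill its output and return, so the rest of the pipeline still executes. This is a debugging sketch only; the class name `EinsumPlugin` and the size helper `outputBytes` are placeholders for whatever your plugin actually uses:

```cpp
// Debugging stub: skip the real kernel and just zero the output buffer.
// If the error disappears with this stub in place, the original enqueue
// body (or a stream/handle it uses) is the likely culprit.
int EinsumPlugin::enqueue(const nvinfer1::PluginTensorDesc* inputDesc,
                          const nvinfer1::PluginTensorDesc* outputDesc,
                          const void* const* inputs, void* const* outputs,
                          void* workspace, cudaStream_t stream)
{
    size_t bytes = outputBytes(outputDesc[0]);    // placeholder size helper
    // Use the stream TensorRT passes in -- launching work on a cached or
    // default stream from inside a plugin is a common source of
    // "invalid resource handle" errors.
    cudaMemsetAsync(outputs[0], 0, bytes, stream);
    return 0;                                     // report success to TensorRT
}
```

If the error persists even with the stub, the problem is more likely in how the streams and contexts are shared between your two engines than in the plugin kernels themselves.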

Thank you.