Inference with 5D input data

Description

1. While preparing to use TensorRT in nnUNet, I converted the .pth checkpoint into a .onnx model whose input is 5D.

2. While building an engine from this ONNX model on TensorRT 6, I got the warning below, but a .trt file was still generated (a minimal build-and-inference sketch is included at the end of this description for context):


3. I searched the forum, and some answers suggested it might be caused by the TensorRT version, so I tried TensorRT 7.0.0 but still got the same warning.

4. When I tried to run inference with this .trt, the following error occurred:
pycuda._driver.LogicError: cuMemcpyDtoHAsync failed: an illegal memory access was encountered
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: an illegal memory access was encountered
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuMemFree failed: an illegal memory access was encountered
PyCUDA WARNING: a clean-up operation failed (dead context maybe?)
cuStreamDestroy failed: an illegal memory access was encountered

How can I deal with this?
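For context, here is a minimal sketch of the kind of build-and-inference flow involved (not my exact script; it assumes TensorRT 7 with an explicit-batch network, the 1x1x96x160x160 input shape and data/prediction binding names from my export, and that binding 0 is the input and binding 1 the output):

import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates the CUDA context

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
EXPLICIT_BATCH = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)

def build_engine(onnx_path):
    # Parse the ONNX file and build an engine (TensorRT 7, explicit batch).
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network(EXPLICIT_BATCH) as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 30  # 1 GiB
        with open(onnx_path, "rb") as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        return builder.build_cuda_engine(network)

def infer(engine, input_array):
    # Size every host/device buffer from the engine bindings, so the 5D
    # output (e.g. 1 x num_classes x 96 x 160 x 160) gets a buffer of the right length.
    stream = cuda.Stream()
    bindings, host_bufs, dev_bufs = [], [], []
    for binding in engine:
        shape = engine.get_binding_shape(binding)
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host_mem = cuda.pagelocked_empty(trt.volume(shape), dtype)
        dev_mem = cuda.mem_alloc(host_mem.nbytes)
        host_bufs.append(host_mem)
        dev_bufs.append(dev_mem)
        bindings.append(int(dev_mem))
        if engine.binding_is_input(binding):
            np.copyto(host_mem, input_array.ravel())
    with engine.create_execution_context() as context:
        cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)   # binding 0 = input
        context.execute_async_v2(bindings=bindings, stream_handle=stream.handle)
        cuda.memcpy_dtoh_async(host_bufs[1], dev_bufs[1], stream)   # binding 1 = output
        stream.synchronize()
    return host_bufs[1]

engine = build_engine("tensorrt/model_final_checkpoint.onnx")
dummy = np.zeros((1, 1, 96, 160, 160), dtype=np.float32)
output = infer(engine, dummy)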

Environment

TensorRT Version: 6
GPU Type: GTX 1070
Nvidia Driver Version: driver-440
CUDA Version: 10.1
CUDNN Version: 7.6.5
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.7.6
TensorFlow Version (if applicable): none
PyTorch Version (if applicable): 1.5.1

Hi @joevaen,
Can you please share your ONNX model and the script so that I can help you better?
Thanks!

1. This is my script for generating the .onnx from the .pth:

import torch
import onnx
from onnxsim import simplify

# Export the network with a 5D dummy input (batch, channel, depth, height, width).
img = torch.zeros([1, 1, 96, 160, 160]).to("cuda")
torch.onnx.export(self.network, (img,),
                  'tensorrt/model_final_checkpoint.onnx',
                  input_names=["data"], output_names=["prediction"],
                  verbose=True, opset_version=11,
                  operator_export_type=torch.onnx.OperatorExportTypes.ONNX)

# Simplify the exported graph and overwrite it in place.
model = onnx.load('tensorrt/model_final_checkpoint.onnx')
model_simp, check = simplify(model)
assert check, "Simplified ONNX model could not be validated"
onnx.save(model_simp, 'tensorrt/model_final_checkpoint.onnx')

My input is 5D because I am making TensorRT work with nnUNet; this input shape works correctly in PyTorch inference.

2. The .onnx file is fairly large (over 100 MB). Could you tell me how to upload such a big file? I wanted to show you the onnx and pth graphs in Netron, but I am new here and not permitted to upload images. What should I do?

3. The result of inference with this .trt is completely wrong and mostly filled with zeros. I really want to figure this out. SOS!

4. OK, I found that the parameters in the onnx model differ from the ones in the pth checkpoint. What should I do? (A rough comparison sketch is below.)
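For reference, a rough way to compare the two sets of weights is sketched here. It is only a sketch: the checkpoint filename and the 'state_dict' key are assumptions based on how nnUNet checkpoints are usually saved, and ONNX initializer names will not match the PyTorch keys one-to-one, so the comparison is by shape and value rather than by name.

import numpy as np
import onnx
import torch
from onnx import numpy_helper

# Collect the weights stored in the ONNX graph.
onnx_model = onnx.load("tensorrt/model_final_checkpoint.onnx")
onnx_weights = {init.name: numpy_helper.to_array(init)
                for init in onnx_model.graph.initializer}

# Load the PyTorch checkpoint (nnUNet usually wraps the weights in 'state_dict').
checkpoint = torch.load("model_final_checkpoint.pth", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)
torch_arrays = [v.cpu().numpy() for v in state_dict.values()]

# For each ONNX initializer, check whether some checkpoint tensor has the
# same shape and (nearly) the same values.
for name, w in onnx_weights.items():
    matched = any(t.shape == w.shape and np.allclose(t, w, atol=1e-5)
                  for t in torch_arrays)
    print(name, tuple(w.shape), "matched_in_pth =", matched)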

Hi @joevaen,
You can DM your model, or upload it to Drive and share the link.
Also, I would recommend trying the latest TRT release, as 3D support is limited in TRT 6.
https://developer.nvidia.com/nvidia-tensorrt-7x-download

Thanks!