Crash during TensorRT conversion of ONNX model with Conv3D layer

Description

Hello,
I’m trying to convert an ONNX model with Conv3D layers to TensorRT format. The conversion crashes with a CUDA memory error when the input dimensions are (24, 160, 160, 16), but works fine with (16, 160, 160, 16).

I submitted an issue in the TensorRT GitHub repository; the full description and steps to reproduce can be found there: Crash during model conversion with Conv3D layer · Issue #1816 · NVIDIA/TensorRT · GitHub
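For reference, below is a minimal sketch of the kind of repro script involved. The exact script is in the linked GitHub issue; the layer parameters, opset, and tensor name here are assumptions. It builds a single-Conv3D Keras model with the problematic input shape and exports it to ONNX with tf2onnx:

import tensorflow as tf
import tf2onnx

# Toy model with a single Conv3D layer; input shape (24, 160, 160, 16) is the
# one that crashed during conversion, while (16, 160, 160, 16) converted fine.
# Filter count, kernel size, and padding are illustrative assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Conv3D(filters=16, kernel_size=3, padding="same",
                           input_shape=(24, 160, 160, 16)),
])

# Export to ONNX with a dynamic batch dimension (the input name is an assumption).
spec = (tf.TensorSpec((None, 24, 160, 160, 16), tf.float32, name="Conv3D_input"),)
tf2onnx.convert.from_keras(model, input_signature=spec, opset=13,
                           output_path="toy_conv3d.onnx")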

Environment

Using NGC TensorFlow container version 22.02

TensorRT Version: 8.2.3.0
GPU Type: Tesla T4 (multi-GPU)
Nvidia Driver Version: 510.39.01
CUDA Version: 11.6
Operating System + Version: Ubuntu 20.04.3 LTS
Python Version (if applicable): 3.8.10
TensorFlow Version (if applicable): 2.7.0

Hi,
The TensorRT support matrix linked below might help with your query. Kindly check it for the list of supported 3D layers:

Thanks!

Hi,
The support matrix doesn’t seem to help resolve the issue. The layer in question is a generic 3D convolution, which is supported and works with other input shapes. Could anybody please try to reproduce the issue? It should be easy to do; all the necessary steps are in the linked GitHub issue. I’d like to know at least whether the problem is reproducible or whether something is wrong on our side.

Hi @AlexM4,

Using the latest TensorRT version 8.4 EA, we could successfully generate a TRT engine.
We recommend that you use the latest version.

#polygraphy convert toy_conv3d.onnx -o model.engine --workspace 4G
[W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored
[W] Input tensor: Conv3D_input (dtype=DataType.FLOAT, shape=(-1, 24, 160, 160, 16)) | No shapes provided; Will use shape: [1, 24, 160, 160, 16] for min/opt/max in profile.
[W] This will cause the tensor to have a static shape. If this is incorrect, please set the range of shapes for this input tensor.
[I] Configuring with profiles: [Profile().add(Conv3D_input, min=[1, 24, 160, 160, 16], opt=[1, 24, 160, 160, 16], max=[1, 24, 160, 160, 16])]
[I] Building engine with configuration:
Workspace | 4294967296 bytes (4096.00 MiB)
Precision | TF32: False, FP16: False, INT8: False, Obey Precision Constraints: False, Strict Types: False
Tactic Sources | ['CUBLAS', 'CUBLAS_LT', 'CUDNN']
Safety Restricted | False
Profiles | 1 profile(s)
[03/10/2022-12:22:48] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.7.3
[I] Finished engine building in 1.674 seconds
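For anyone who prefers the Python API over the polygraphy CLI, a rough equivalent of the command above looks like the sketch below. This is only an illustration: the input name and 4 GiB workspace come from the log above, and set_memory_pool_limit assumes TensorRT 8.4+.

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse the ONNX model.
with open("toy_conv3d.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("ONNX parsing failed")

config = builder.create_builder_config()
# 4 GiB workspace, matching --workspace 4G (TensorRT 8.4+ API).
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 4 << 30)

# The batch dimension is dynamic (-1), so an optimization profile is required.
profile = builder.create_optimization_profile()
profile.set_shape("Conv3D_input",
                  min=(1, 24, 160, 160, 16),
                  opt=(1, 24, 160, 160, 16),
                  max=(1, 24, 160, 160, 16))
config.add_optimization_profile(profile)

engine_bytes = builder.build_serialized_network(network, config)
with open("model.engine", "wb") as f:
    f.write(engine_bytes)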

Thank you.

Thank you for the reply.
It turned out to be a hardware issue with the GPU: a memory test returned some errors, and switching to a different GPU solved the problem.