Hello,
I’m trying to convert an ONNX model containing Conv3D layers to a TensorRT engine. The conversion crashes with a CUDA memory error when the input dimensions are (24, 160, 160, 16), but works fine with (16, 160, 160, 16).
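For context, the raw input tensors involved are fairly modest; a quick back-of-the-envelope check (plain Python, assuming FP32 and batch size 1, which may not match your actual setup) shows the failing shape is only about 37.5 MiB:

```python
# Rough input-tensor sizes for the two shapes from the report,
# assuming FP32 (4 bytes per element) and a batch size of 1.
def tensor_mib(shape, bytes_per_elem=4):
    n = 1
    for d in shape:
        n *= d
    return n * bytes_per_elem / (1024 ** 2)

failing = (24, 160, 160, 16)   # crashes during conversion
working = (16, 160, 160, 16)   # converts fine

print(f"failing input: {tensor_mib(failing):.1f} MiB")  # 37.5 MiB
print(f"working input: {tensor_mib(working):.1f} MiB")  # 25.0 MiB
```

So the input itself is small; whatever runs out of memory would have to be the builder’s workspace or intermediate activations, not the input tensor.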
Hi,
The support matrix doesn’t seem to help with resolving the issue. The layer in question is just a generic 3D convolution, which is supported and works with other input shapes. Could anybody please try to reproduce the issue? It should be easy to do; all the necessary steps are on the linked GitHub page. I’d at least like to know whether the problem is reproducible or whether something is wrong on our side.
Using the latest TensorRT version, 8.4 EA, we could successfully build a TRT engine.
We recommend upgrading to the latest version.
#polygraphy convert toy_conv3d.onnx -o model.engine --workspace 4G
[W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored
[W] Input tensor: Conv3D_input (dtype=DataType.FLOAT, shape=(-1, 24, 160, 160, 16)) | No shapes provided; Will use shape: [1, 24, 160, 160, 16] for min/opt/max in profile.
[W] This will cause the tensor to have a static shape. If this is incorrect, please set the range of shapes for this input tensor.
[I] Configuring with profiles: [Profile().add(Conv3D_input, min=[1, 24, 160, 160, 16], opt=[1, 24, 160, 160, 16], max=[1, 24, 160, 160, 16])]
[I] Building engine with configuration:
Workspace | 4294967296 bytes (4096.00 MiB)
Precision | TF32: False, FP16: False, INT8: False, Obey Precision Constraints: False, Strict Types: False
Tactic Sources | ['CUBLAS', 'CUBLAS_LT', 'CUDNN']
Safety Restricted | False
Profiles | 1 profile(s)
[03/10/2022-12:22:48] [TRT] [W] TensorRT was linked against cuBLAS/cuBLAS LT 11.8.0 but loaded cuBLAS/cuBLAS LT 11.7.3
[I] Finished engine building in 1.674 seconds
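Also note the warnings above: because the model’s batch dimension is dynamic, Polygraphy defaulted to a static [1, 24, 160, 160, 16] profile. If you need other batch sizes, you can pass an explicit shape range on the command line. The command below is just an illustrative sketch (the tensor name is taken from the log above, and the 1/4/8 batch range is an arbitrary example, not a recommendation):

```shell
# Hypothetical example: supply an explicit optimization profile so the
# engine accepts batch sizes 1 through 8 instead of a fixed batch of 1.
polygraphy convert toy_conv3d.onnx -o model.engine --workspace 4G \
    --trt-min-shapes Conv3D_input:[1,24,160,160,16] \
    --trt-opt-shapes Conv3D_input:[4,24,160,160,16] \
    --trt-max-shapes Conv3D_input:[8,24,160,160,16]
```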
Thank you for the reply.
It turned out to be a hardware issue with the GPU: a memory test returned errors, and switching to a different GPU solved the problem.