Encountered known unsupported method torch.max_pool3d

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
T4
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
NA
• TensorRT Version
7.0.0.11
• NVIDIA GPU Driver Version (valid for GPU only)
440.64.00
• Issue Type (questions, new requirements, bugs)

import torch
from torch2trt import torch2trt
x = torch.ones((1, 3, 64, 224, 224)).cuda()  # dummy NCDHW input for the I3D model `net`
model_trt = torch2trt(net, [x], max_batch_size=64)

I am using the above code to convert an I3D model to TRT, and I am seeing the errors below:

Warning: Encountered known unsupported method torch.max_pool3d
Warning: Encountered known unsupported method torch.nn.functional.max_pool3d

...


<ipython-input-66-544e069cecf1> in <module>
      1 x = torch.ones((1, 3, 64, 224, 224)).cuda()
      2 print(x.shape)
----> 3 model_trt = torch2trt(net, [x], max_batch_size=64)

/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6.egg/torch2trt/torch2trt.py in torch2trt(module, inputs, input_names, output_names, log_level, max_batch_size, fp16_mode, max_workspace_size, strict_type_constraints, keep_network, int8_mode, int8_calib_dataset, int8_calib_algorithm, int8_calib_batch_size, use_onnx)
    538             if not isinstance(outputs, tuple) and not isinstance(outputs, list):
    539                 outputs = (outputs,)
--> 540             ctx.mark_outputs(outputs, output_names)
    541
    542     builder.max_workspace_size = max_workspace_size

/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6.egg/torch2trt/torch2trt.py in mark_outputs(self, torch_outputs, names)
    404
    405         for i, torch_output in enumerate(torch_outputs):
--> 406             trt_tensor = torch_output._trt
    407             trt_tensor.name = names[i]
    408             trt_tensor.location = torch_device_to_trt(torch_output.device)

AttributeError: 'dict' object has no attribute '_trt'

The documentation (Support Matrix :: NVIDIA Deep Learning TensorRT Documentation) seems to indicate that 3D conv is supported.

Hi @b.kowshik,
Yes, TRT supports 3D conv and 3D pool, but this failure has nothing to do with TRT support.
3D conv and 3D pool being supported by TRT means that TRT can parse them from an ONNX model and run them in inference.
But the failure you are facing ('dict' object has no attribute '_trt') happens inside the converter tool, torch2trt, while it traces the PyTorch model, so it is a problem of torch2trt.
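As a side note, that AttributeError usually means the model's forward() returns a dict, while torch2trt can only mark plain tensor outputs. A minimal sketch of a workaround, assuming the logits sit under a dict key such as "logits" (a hypothetical name, please check your model):

import torch
from torch2trt import torch2trt

class TensorOutputWrapper(torch.nn.Module):
    """Unwrap a dict-returning model so torch2trt sees a plain tensor output."""
    def __init__(self, model, key="logits"):  # "logits" is an assumed key
        super().__init__()
        self.model = model
        self.key = key

    def forward(self, x):
        out = self.model(x)
        return out[self.key] if isinstance(out, dict) else out

# usage, with net and x from the snippet above:
# model_trt = torch2trt(TensorOutputWrapper(net).cuda().eval(), [x], max_batch_size=64)

Note this only fixes the dict error; the max_pool3d warnings mean those layers still have no torch2trt converter.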

Thanks!

Thanks! I am now trying out the ONNX method and I am facing this error.

AssertionError: In node 0 (importPad): UNSUPPORTED_NODE: Assertion failed: onnx_padding.size() == 8 && onnx_padding[0] == 0 && onnx_padding[1] == 0 && onnx_padding[4] == 0 && onnx_padding[5] == 0 && "This version of TensorRT only supports padding on the outer two dimensions on 4D tensors!"
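For reference, the export was along these lines (reconstructed, so the exact arguments may differ):

import torch

dummy = torch.ones((1, 3, 64, 224, 224)).cuda()  # same NCDHW shape as before
torch.onnx.export(net, dummy, "i3d-legacy-pytorch.onnx",
                  opset_version=9,          # the opset trtexec reports below
                  input_names=["input"],    # assumed tensor names
                  output_names=["output"])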

How do I circumvent this issue?

You have got the ONNX model, right?
Could you try trtexec to run it, e.g.

$ trtexec --onnx=input.onnx --explicitBatch --workspace=4096 --fp16
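
It may also be worth checking that the ONNX file itself is well formed before building the engine; a small sketch, assuming the onnx Python package is installed:

import onnx

model = onnx.load("input.onnx")
onnx.checker.check_model(model)                  # raises if the graph is malformed
print(onnx.helper.printable_graph(model.graph))  # dump the ops, including any Pad nodes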

Thanks!

Same issue with trtexec too.

root@2d4d4e894670:/opt/nvidia/deepstream# /usr/src/tensorrt/bin/trtexec --onnx=i3d-legacy-pytorch.onnx --explicitBatch --workspace=4096 --fp16
&&&& RUNNING TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=i3d-legacy-pytorch.onnx --explicitBatch --workspace=4096 --fp16
[12/15/2020-13:48:57] [I] === Model Options ===
[12/15/2020-13:48:57] [I] Format: ONNX
[12/15/2020-13:48:57] [I] Model: i3d-legacy-pytorch.onnx
[12/15/2020-13:48:57] [I] Output:
[12/15/2020-13:48:57] [I] === Build Options ===
[12/15/2020-13:48:57] [I] Max batch: explicit
[12/15/2020-13:48:57] [I] Workspace: 4096 MB
[12/15/2020-13:48:57] [I] minTiming: 1
[12/15/2020-13:48:57] [I] avgTiming: 8
[12/15/2020-13:48:57] [I] Precision: FP16
[12/15/2020-13:48:57] [I] Calibration:
[12/15/2020-13:48:57] [I] Safe mode: Disabled
[12/15/2020-13:48:57] [I] Save engine:
[12/15/2020-13:48:57] [I] Load engine:
[12/15/2020-13:48:57] [I] Inputs format: fp32:CHW
[12/15/2020-13:48:57] [I] Outputs format: fp32:CHW
[12/15/2020-13:48:57] [I] Input build shapes: model
[12/15/2020-13:48:57] [I] === System Options ===
[12/15/2020-13:48:57] [I] Device: 0
[12/15/2020-13:48:57] [I] DLACore:
[12/15/2020-13:48:57] [I] Plugins:
[12/15/2020-13:48:57] [I] === Inference Options ===
[12/15/2020-13:48:57] [I] Batch: Explicit
[12/15/2020-13:48:57] [I] Iterations: 10
[12/15/2020-13:48:57] [I] Duration: 3s (+ 200ms warm up)
[12/15/2020-13:48:57] [I] Sleep time: 0ms
[12/15/2020-13:48:57] [I] Streams: 1
[12/15/2020-13:48:57] [I] ExposeDMA: Disabled
[12/15/2020-13:48:57] [I] Spin-wait: Disabled
[12/15/2020-13:48:57] [I] Multithreading: Disabled
[12/15/2020-13:48:57] [I] CUDA Graph: Disabled
[12/15/2020-13:48:57] [I] Skip inference: Disabled
[12/15/2020-13:48:57] [I] Inputs:
[12/15/2020-13:48:57] [I] === Reporting Options ===
[12/15/2020-13:48:57] [I] Verbose: Disabled
[12/15/2020-13:48:57] [I] Averages: 10 inferences
[12/15/2020-13:48:57] [I] Percentile: 99
[12/15/2020-13:48:57] [I] Dump output: Disabled
[12/15/2020-13:48:57] [I] Profile: Disabled
[12/15/2020-13:48:57] [I] Export timing to JSON file:
[12/15/2020-13:48:57] [I] Export output to JSON file:
[12/15/2020-13:48:57] [I] Export profile to JSON file:
[12/15/2020-13:48:57] [I]
----------------------------------------------------------------
Input filename:   i3d-legacy-pytorch.onnx
ONNX IR version:  0.0.6
Opset version:    9
Producer name:    pytorch
Producer version: 1.6
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
[12/15/2020-13:48:58] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/15/2020-13:48:58] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
While parsing node number 0 [Pad]:
ERROR: builtin_op_importers.cpp:2086 In function importPad:
[8] Assertion failed: onnx_padding.size() == 8 && onnx_padding[0] == 0 && onnx_padding[1] == 0 && onnx_padding[4] == 0 && onnx_padding[5] == 0 && "This version of TensorRT only supports padding on the outer two dimensions on 4D tensors!"
[12/15/2020-13:48:58] [E] Failed to parse onnx file
[12/15/2020-13:48:58] [E] Parsing model failed
[12/15/2020-13:48:58] [E] Engine creation failed
[12/15/2020-13:48:58] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=i3d-legacy-pytorch.onnx --explicitBatch --workspace=4096 --fp16

Could you install TensorRT 7.2.1 and try to run trtexec again?
If that version works well, you can wait for the next release of DeepStream.

I followed the instructions given in the link above.

dpkg -i nv-tensorrt-repo-ubuntu1804-cuda10.2-trt7.2.1.6-ga-20201006_1-1_amd64.deb
apt-get update
apt-get install tensorrt libcudnn8

I am getting the following errors:

Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 tensorrt : Depends: libnvinfer7 (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvinfer-plugin7 (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvparsers7 (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvonnxparsers7 (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvinfer-bin (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvinfer-dev (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvinfer-plugin-dev (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvparsers-dev (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvonnxparsers-dev (= 7.2.1-1+cuda10.2) but 7.0.0-1+cuda10.2 is to be installed
            Depends: libnvinfer-samples (= 7.2.1-1+cuda10.2) but it is not going to be installed
            Depends: libnvinfer-doc (= 7.2.1-1+cuda10.2) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

I am currently using the nvcr.io/nvidia/deepstream:5.0.1-20.09-triton docker image. The currently installed TensorRT is 7.0.0. I am not sure why apt is trying to install the 7.0.0 versions of the libnvinfer packages.

You can use the TRT tar package.

Thanks for the information and the quick help. I tried it out, and I see the same failure with a slightly different error message.

root@e8b2b496fdee:/opt/nvidia/deepstream/TensorRT-7.2.1.6# targets/x86_64-linux-gnu/bin/trtexec  --onnx=/opt/nvidia/deepstream/i3d-legacy-pytorch.onnx  --explicitBatch --workspace=4096 --fp16

Input filename:   /opt/nvidia/deepstream/i3d-legacy-pytorch.onnx
ONNX IR version:  0.0.6
Opset version:    9
Producer name:    pytorch
Producer version: 1.6
Domain:
Model version:    0
Doc string:
----------------------------------------------------------------
[12/17/2020-13:39:06] [W] [TRT] onnx2trt_utils.cpp:220: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
ERROR: builtin_op_importers.cpp:2263 In function importPad:
[8] Assertion failed: convertOnnxPadding(onnxPadding, &begPadding, &endPadding) && "This version of TensorRT only supports padding on the outer two dimensions!"
[12/17/2020-13:39:06] [E] Failed to parse onnx file
[12/17/2020-13:39:06] [E] Parsing model failed
[12/17/2020-13:39:06] [E] Engine creation failed
[12/17/2020-13:39:06] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec # targets/x86_64-linux-gnu/bin/trtexec --onnx=/opt/nvidia/deepstream/i3d-legacy-pytorch.onnx --explicitBatch --workspace=4096 --fp16

I am now using TensorRT 7.2.1 installed via tarball.

I faced cuDNN issues since TensorRT 7.2.1 seems to assume cuDNN 8.0, while the version in the DeepStream docker image is cuDNN 7.6.5. I then fixed up TRT_LIB_DIR, CUDNN_INSTALL_DIR, CUDA_INSTALL_DIR, and LD_LIBRARY_PATH, and I was able to run the TensorRT example MNIST program.

Then I ran the ONNX-to-TRT conversion command and that failed.

For reference my LD_LIBRARY_PATH is below.

root@e8b2b496fdee:/opt/nvidia/deepstream/TensorRT-7.2.1.6# echo $LD_LIBRARY_PATH
/opt/nvidia/deepstream/cuda/lib64:/opt/nvidia/deepstream/tensorrt-7.2.1.6/lib:/opt/nvidia/deepstream/TensorRT-7.2.1.6/targets/x86_64-linux-gnu/lib

I checked the source code and found that the latest TRT also requires that, if onnx_padding.size() == 8,
onnx_padding[0] == 0
onnx_padding[1] == 0
onnx_padding[4] == 0
onnx_padding[5] == 0

What are the values of onnx_padding.size() and onnx_padding[*] on your side?
(onnx_padding is attrs.get<std::vector<int>>("pads");)

Hello leif, I am not sure I follow.

Are you asking me to make some code change and print the results? Is this change to be made in TensorRT or elsewhere? Or can I enable some extra debugging and get this info?

Hello b.kowshik,
Sorry for the delay.
What I mean is that the Pad importer asserts that
onnx_padding.size() == 8
onnx_padding[0] == 0
onnx_padding[1] == 0
onnx_padding[4] == 0
onnx_padding[5] == 0
If the parameters of your padding layer cannot meet these conditions, TRT will not run it.
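
You can also read those values directly from your ONNX file without touching the TRT source. A small sketch with the onnx Python package (your model is opset 9, so pads is stored as a node attribute):

import onnx

model = onnx.load("i3d-legacy-pytorch.onnx")
for node in model.graph.node:
    if node.op_type == "Pad":
        for attr in node.attribute:
            if attr.name == "pads":
                values = list(attr.ints)
                # ONNX layout: [dim0_begin, dim1_begin, ..., dim0_end, dim1_end, ...]
                print(node.name or "<unnamed>", "size =", len(values), "pads =", values)

For a 5D NCDHW tensor the exporter emits a pads vector of size 10, which can never satisfy the size-8 assertion, so such a Pad node (likely coming from F.pad or ceil-mode pooling inside the I3D model) has to be removed or folded into the neighbouring conv/pool before TRT 7.x can parse the model.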