Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
T4
• DeepStream Version
5.0
• JetPack Version (valid for Jetson only)
NA
• TensorRT Version
7.0.0.11
• NVIDIA GPU Driver Version (valid for GPU only)
440.64.00
• Issue Type (questions, new requirements, bugs)
import torch
from torch2trt import torch2trt  # https://github.com/NVIDIA-AI-IOT/torch2trt
x = torch.ones((1, 3, 64, 224, 224)).cuda()  # dummy input: (batch, channels, frames, H, W)
model_trt = torch2trt(net, [x], max_batch_size=64)  # net is the I3D model (torch.nn.Module)
I am using the code above to convert an I3D model to TensorRT with torch2trt, and I am seeing the warnings and error below:
Warning: Encountered known unsupported method torch.max_pool3d
Warning: Encountered known unsupported method torch.nn.functional.max_pool3d
...
<ipython-input-66-544e069cecf1> in <module>
1 x = torch.ones((1, 3, 64, 224, 224)).cuda()
2 print(x.shape)
----> 3 model_trt = torch2trt(net, [x], max_batch_size=64)
/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6.egg/torch2trt/torch2trt.py in torch2trt(module, inputs, input_names, output_names, log_level, max_batch_size, fp16_mode, max_workspace_size, strict_type_constraints, keep_network, int8_mode, int8_calib_dataset, int8_calib_algorithm, int8_calib_batch_size, use_onnx)
538 if not isinstance(outputs, tuple) and not isinstance(outputs, list):
539 outputs = (outputs,)
--> 540 ctx.mark_outputs(outputs, output_names)
541
542 builder.max_workspace_size = max_workspace_size
/usr/local/lib/python3.6/dist-packages/torch2trt-0.1.0-py3.6.egg/torch2trt/torch2trt.py in mark_outputs(self, torch_outputs, names)
404
405 for i, torch_output in enumerate(torch_outputs):
--> 406 trt_tensor = torch_output._trt
407 trt_tensor.name = names[i]
408 trt_tensor.location = torch_device_to_trt(torch_output.device)
AttributeError: 'dict' object has no attribute '_trt'
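
From the last frame, the AttributeError appears to come from the I3D forward() returning a dict; torch2trt can only mark plain tensors (or tuples/lists of tensors) as outputs, so there is no _trt attribute for it to read. Would wrapping the model so it returns a tensor be the right approach? A minimal sketch (the output key "logits" below is an assumption; it depends on the I3D implementation):

import torch
from torch2trt import torch2trt

class I3DWrapper(torch.nn.Module):
    # Wraps the I3D model so forward() returns a plain tensor instead of a dict.
    def __init__(self, model, key="logits"):  # "logits" is an assumed key name
        super().__init__()
        self.model = model
        self.key = key

    def forward(self, x):
        out = self.model(x)
        # Unpack the dict so torch2trt can attach a _trt tensor to the output.
        return out[self.key] if isinstance(out, dict) else out

wrapped = I3DWrapper(net).cuda().eval()
x = torch.ones((1, 3, 64, 224, 224)).cuda()
model_trt = torch2trt(wrapped, [x], max_batch_size=64)
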
The TensorRT documentation (Support Matrix :: NVIDIA Deep Learning TensorRT Documentation) seems to indicate that 3D convolution is supported.
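
My understanding is that the Support Matrix describes what the TensorRT builder itself supports, while the warnings above come from torch2trt 0.1.0 not having converters registered for torch.max_pool3d / torch.nn.functional.max_pool3d. Would going through ONNX instead be the recommended route? A rough sketch of what I have in mind (file name, opset version, and trtexec flags are my assumptions; torch2trt's own use_onnx=True argument, visible in the signature above, might be another way around the missing converters):

import torch

# Export the wrapped model (from the sketch above) to ONNX, then build a TensorRT
# engine with trtexec, e.g.: trtexec --onnx=i3d.onnx --explicitBatch --fp16
x = torch.ones((1, 3, 64, 224, 224)).cuda()
torch.onnx.export(
    wrapped,                 # wrapper that returns a plain tensor
    x,                       # dummy input (1, 3, 64, 224, 224)
    "i3d.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=11,        # assumed; 3D pooling needs a recent opset
)
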