Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson TX2
• DeepStream Version: DeepStream 5.1
• JetPack Version (valid for Jetson only): JetPack 4.5
• TensorRT Version: TensorRT 7.1.3
I have a PyTorch model converted to ONNX (.onnx). When I try to run inference with nvinfer in DeepStream, I get the error "Assertion failed: node.output().size() == 1 && "TensorRT does not support the indices output in MaxPool!"" and the pipeline stops. Before this error, I get the warning "Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32". Please help me resolve this issue.
The same issue persists even after upgrading the JetPack version and running `$ /usr/src/tensorrt/bin/trtexec --onnx=[your/model]`. The error occurs at a MaxPool2d layer, which appears to be supported in TensorRT. Is there any alternative way to debug this, please?
Based on our document below, 2D pooling is supported, but unpooling is not.
Does your model meet the following requirements?
Conditions And Limitations
The number of input kernel dimensions determines whether pooling is 2D or 3D. For 2D pooling, input and output tensors must have three or more dimensions. For 3D pooling, input and output tensors must have four or more dimensions.