Description
I have a PyTorch neural network which takes an image and produces a multi-dimensional output (e.g. of size 30x15x15).
I managed to run inference in TensorRT, starting from the sampleOnnxMNIST sample on GitHub.
What I can't do is extract the network outputs from the buffer:
float* output = static_cast<float*>(buffers.getHostBuffer(mParams.outputTensorNames[0]));
The pointer output gives me 30 valid values, which I read in a loop, but as soon as I index past 30 I get random numbers:
for (int i = 0; i < 30; i++)
{
std::cout << output[i] << "\n";
}
Environment
TensorRT Version : 8.4.1.5
GPU Type : Nvidia
Nvidia Driver Version :
CUDA Version : 11.5
CUDNN Version :
Operating System + Version : Ubuntu 20.04
Python Version (if applicable) : 3.7
TensorFlow Version (if applicable) :
PyTorch Version (if applicable) : 1.10.1
Baremetal or Container (if container which image + tag) :
NVES (July 1, 2022, 9:50am):
Hi,
Can you try running your model with the trtexec command and share the --verbose log in case the issue persists?
You can refer to the link below for the list of supported operators; if any operator is not supported, you need to create a custom plugin to support that operation.
# Supported ONNX Operators
TensorRT 8.4 supports operators up to opset 17. The latest information on ONNX operators can be found [here](https://github.com/onnx/onnx/blob/master/docs/Operators.md).
TensorRT supports the following ONNX data types: DOUBLE, FLOAT32, FLOAT16, INT8, and BOOL.
> Note: There is limited support for INT32, INT64, and DOUBLE types. TensorRT will attempt to cast down INT64 to INT32 and DOUBLE down to FLOAT, clamping values to `+-INT_MAX` or `+-FLT_MAX` if necessary.
See below for the support matrix of ONNX operators in ONNX-TensorRT.
## Operator Support Matrix
| Operator | Supported | Supported Types | Restrictions |
|---------------------------|------------|-----------------|------------------------------------------------------------------------------------------------------------------------|
| Abs | Y | FP32, FP16, INT32 | |
| Acos | Y | FP32, FP16 | |
| Acosh | Y | FP32, FP16 | |
| Add | Y | FP32, FP16, INT32 | |
(This file has been truncated; see the link above for the full operator list.)
Also, please share your model and script if you haven't already, so that we can help you better.
Meanwhile, for some common errors and queries, please refer to the link below:
Thanks!