TensorRT Support for 5D input tensor


I am trying to load and serialize an ONNX model with a 5D input [batch, a, x, y, z], and there should be no problem since the dynamic batch dimension is supported. Unfortunately, nvinfer1::Dims does not provide a Dims5 convenience class, so I added a new Dims5 class inside NvInferLegacyDims.h (no need to recompile libnvinfer*.so since no dependencies change). I then encountered the following error during serialization:

[E] [TRT] 4: [network.cpp::validate::2716] Error Code 4: Internal Error (input_tensor_name: number of dimensions is 5 but profile 0 has 4.)
[E] [TRT] 2: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)

and it seems the profile within buildSerializedNetwork only reads up to 4 input dimensions even though there could be 5, hence the serialization fails as shown above. Since builder.cpp is not part of the TensorRT OSS release, I cannot investigate any further.
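For reference, the Dims5 class I added looks roughly like this. It is written by analogy to the existing Dims4 in NvInferLegacyDims.h; the Dims struct below is only a stand-in mimicking nvinfer1::Dims so the sketch compiles without the TensorRT headers.

```cpp
#include <cstdint>

// Minimal stand-in for nvinfer1::Dims so this sketch is self-contained;
// the real class carries the same nbDims/d members with a maximum rank of 8.
struct Dims
{
    static constexpr int32_t MAX_DIMS = 8; // TensorRT's fixed maximum rank
    int32_t nbDims;                        // actual rank of the tensor
    int32_t d[MAX_DIMS];                   // extent of each dimension
};

// Dims5 convenience class, written by analogy to Dims4 in NvInferLegacyDims.h.
class Dims5 : public Dims
{
public:
    Dims5()
    {
        nbDims = 5;
        for (int32_t i = 0; i < MAX_DIMS; ++i)
        {
            d[i] = 0;
        }
    }

    Dims5(int32_t d0, int32_t d1, int32_t d2, int32_t d3, int32_t d4)
    {
        nbDims = 5;
        d[0] = d0;
        d[1] = d1;
        d[2] = d2;
        d[3] = d3;
        d[4] = d4;
        for (int32_t i = 5; i < MAX_DIMS; ++i)
        {
            d[i] = 0;
        }
    }
};
```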


  1. Does TensorRT really support 5D input tensors? If yes, why is there no Dims5 available, given that the maximum number of dimensions is already fixed at 8?
  2. Is there a way, or an alternative, to properly load/serialize a model with 5 or more input dimensions?

Thank you.


TensorRT Version: 8.0.1-6
GPU Type: RTX2070
Nvidia Driver Version: 460.91.03
CUDA Version: 11.3.0
CUDNN Version: 8.2.1
Operating System + Version: Ubuntu 20.04
Python Version (if applicable):
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):


No code or model for now, since I would first like to get more explanation on the topic asked about above.

Could you please share the ONNX model and the script, if not already shared, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.


import onnx
filename = yourONNXmodel
model = onnx.load(filename)
onnx.checker.check_model(model)
  2. Try running your model with the trtexec command.
In case you are still facing the issue, please share the trtexec --verbose log for further debugging.


There is no problem with the model, its validation, and so on. Parsing with the ONNXParser in TensorRT works, and the output from trtexec --verbose is as follows.

[I] Finish parsing network model
[I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 181, GPU 592 (MiB)
[TRT] [MemUsageSnapshot] Builder begin: CPU 181 MiB, GPU 592 MiB
[E] Error[2]: [standardEngineBuilder.cpp::buildEngine::2302] Error Code 2: Internal Error (Builder failed while analyzing shapes.)
[E] Error[2]: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)

It really seems to have something to do with the shapes/dimensions of the model: the profile is being set only up to 4 dimensions even though the model requires a 5D input tensor.

ONNX-Model: model_with_5d_input.onnx (14.6 MB)


trtexec --onnx=model.onnx --verbose

Thank you.

Hi @mibra,

This looks like a known issue. Please allow us some time to work on this.

Thank you.

If we want to make a 5D tensor, we can just use the Dims class directly.
Dims2, Dims3, and Dims4 are convenience classes for very common cases.
If the ONNX model is 4D, that is what TensorRT is going to build. It is not possible to construct a TensorRT engine whose input has variable rank.
It is, however, legal to import a model, set the input dimensions (or the input optimization profile) to 5D, and then build it.
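Concretely, a 5D shape can be expressed with the base Dims class alone. In this sketch the Dims struct is a minimal stand-in for nvinfer1::Dims so the snippet is self-contained, and make5dDims is an illustrative helper, not TensorRT API:

```cpp
#include <cstdint>

// Stand-in mirroring nvinfer1::Dims (rank count + fixed-size extent array).
struct Dims
{
    static constexpr int32_t MAX_DIMS = 8; // maximum supported rank
    int32_t nbDims;                        // actual rank of the tensor
    int32_t d[MAX_DIMS];                   // extent of each dimension
};

// Build a 5D shape with the generic Dims class -- no Dims5 subclass needed.
inline Dims make5dDims(int32_t batch, int32_t a, int32_t x, int32_t y, int32_t z)
{
    // Aggregate initialization: first the rank, then the extents.
    return Dims{5, {batch, a, x, y, z}};
}
```

The same aggregate form, Dims{5, {1, 2, 3, 4, 5}}, can be passed directly to IOptimizationProfile::setDimensions.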

Hi @spolisetty,

That is exactly what I did: I created a Dims5 class, set up the optimization profile for the input tensor using Dims5, and tried to build the network.

auto profile = builder->createOptimizationProfile();
profile->setDimensions("input_name", OptProfileSelector::kOPT, Dims5{1, 2, 3, 4, 5});

From the error as shown in the first post, during the building process,

[network.cpp::validate::2716] Error Code 4: Internal Error (input_tensor_name: number of dimensions is 5 but profile 0 has 4.)

you can see that the number of input dimensions is 5, but somehow the profile has only 4. There were no other errors while creating the optimization profile or anywhere else.

If TensorRT does support 5D input tensors, what could have caused this error?

Thank you.

Hi @mibra,
Can you please try changing Dims5{1, 2, 3, 4, 5} to Dims{5, {1, 2, 3, 4, 5}}?


Hi @AakankshaS,

Thanks for the suggestion. The dimensions seem to be read correctly this time, but the serialization still fails.

[E] [TRT] 2: [standardEngineBuilder.cpp::buildEngine::2302] Error Code 2: Internal Error (Builder failed while analyzing shapes.)
[E] [TRT] 2: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)

The error is still about shapes, but this time at a different point: line 2302 in standardEngineBuilder.cpp instead of line 2716 in network.cpp as in the first post.

Is there any way to investigate this?
Thank you.

Could you please share the complete verbose logs with us?

Hi @spolisetty,

The verbose logs are not really interesting beyond the errors. Nevertheless, the rough code is shown below.

// Builder and explicit-batch network
auto builder = TrtUniquePtr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger.getTRTLogger()));
const auto explicitBatch = 1U << static_cast<uint32_t>(nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
auto network = TrtUniquePtr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(explicitBatch));

// Builder config and 5D optimization profile
auto config = TrtUniquePtr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
auto profile = builder->createOptimizationProfile();
profile->setDimensions("input_name", nvinfer1::OptProfileSelector::kMIN, nvinfer1::Dims{5, {1, 2, 3, 4, 5}});
profile->setDimensions("input_name", nvinfer1::OptProfileSelector::kOPT, nvinfer1::Dims{5, {1, 2, 3, 4, 5}});
profile->setDimensions("input_name", nvinfer1::OptProfileSelector::kMAX, nvinfer1::Dims{5, {1, 2, 3, 4, 5}});
config->addOptimizationProfile(profile); // register the profile with the builder config

// Parse the ONNX model and build the serialized engine plan
auto parser = TrtUniquePtr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger.getTRTLogger()));
auto parsed = parser->parseFromFile(modelfile.c_str(), static_cast<int>(logger.getReportableSeverity()));
TrtUniquePtr<nvinfer1::IHostMemory> plan{builder->buildSerializedNetwork(*network, *config)};

// Deserialize the plan into an engine
TrtUniquePtr<nvinfer1::IRuntime> runtime{nvinfer1::createInferRuntime(logger.getTRTLogger())};
auto mEngine = std::shared_ptr<nvinfer1::ICudaEngine>(runtime->deserializeCudaEngine(plan->data(), plan->size()), InferDeleter());

and the “complete” verbose logs are as follows

 [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
 [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
 [W] [TRT] ShapedWeights.cpp:173: Weights sequential/output_name/dense/MatMul/ReadVariableOp:0 has been transposed with permutation of (1, 0)! If you plan on overwriting the weights with the Refitter API, the new weights must be pre-transposed.
 [E] [TRT] 2: [standardEngineBuilder.cpp::buildEngine::2302] Error Code 2: Internal Error (Builder failed while analyzing shapes.)
 [E] [TRT] 2: [builder.cpp::buildSerializedNetwork::417] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed.)

There are no more logs that I can show you, since those are the only logs I got. The error occurs during deserializeCudaEngine (the last line in the code shown above). I am quite baffled by these errors, since I believe I have done everything correctly, following sampleOnnxMNIST in the samples folder.
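For what it's worth, the crash in deserializeCudaEngine may be secondary: buildSerializedNetwork returns a null plan when it fails, so the plan->data() call afterwards dereferences null. Below is a self-contained mock of that failure mode; HostMemoryMock, buildSerializedNetworkMock, and tryDeserialize are my own illustrative names, not TensorRT API.

```cpp
#include <cstddef>
#include <cstdio>
#include <memory>

// Mock of IHostMemory: just enough surface for the guard below.
struct HostMemoryMock
{
    const void* data() const { return nullptr; }
    size_t size() const { return 0; }
};

// Mock of the builder call: the real IBuilder::buildSerializedNetwork
// returns nullptr on failure, simulated here via the flag.
std::unique_ptr<HostMemoryMock> buildSerializedNetworkMock(bool buildSucceeds)
{
    return buildSucceeds ? std::make_unique<HostMemoryMock>() : nullptr;
}

// Guard the plan pointer before touching plan->data()/plan->size(),
// so a build failure is reported instead of crashing in deserialization.
bool tryDeserialize(bool buildSucceeds)
{
    auto plan = buildSerializedNetworkMock(buildSucceeds);
    if (!plan)
    {
        std::fprintf(stderr, "build failed: plan is null, skipping deserialization\n");
        return false;
    }
    // ... runtime->deserializeCudaEngine(plan->data(), plan->size()) ...
    return true;
}
```

Guarding the plan pointer this way should make it clear that the real failure is in the build step, not in deserialization.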
Thank you.