TensorRT from ONNX ERR: at least 4 dimensions are required for input

When trying to convert ONNX to TensorRT, I get the following error on a Jetson Nano with JetPack 4.3 and TensorRT 6.0.1:

[TensorRT] ERROR: (unnamed layer* 0) [Convolution]: at least 4 dimensions are required for input
[TensorRT] ERROR: (unnamed layer* 0) [Activation]: at least 1 dimensions are required for input

Link to my ONNX model:
https://drive.google.com/open?id=1AYpqwT2uJvd1GZx_Yqkc7kIHGsIV4zvS

I also tried with the onnx-tensorrt backend and got the same error.

How can I solve this issue?

Hi,

This specific issue arises because the ONNX parser isn’t currently compatible with ONNX models exported from PyTorch 1.3. If you downgrade to PyTorch 1.2, this issue should go away.

TRT 7 supports PyTorch 1.3, but the latest JetPack 4.3 only supports TRT 6.
Please refer to the link below for more details:
https://docs.nvidia.com/deeplearning/sdk/tensorrt-release-notes/tensorrt-7.html#tensorrt-7

Thanks

Hi,

I ran the following:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

with trt.Builder(TRT_LOGGER) as builder, \
        builder.create_network() as network, \
        trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("trk2.onnx", "rb") as model:
        config = builder.create_builder_config()
        sd = parser.parse(model.read())  # returns False if parsing fails
        print("sd = ", sd)

I got the following error on a Jetson Nano with JetPack 4.3:

[TensorRT] VERBOSE: 696:Transpose -> (1, 4, 13, 13)
[TensorRT] VERBOSE: 697:Exp -> (1, 4, 13, 13)
[TensorRT] VERBOSE: 698:ReduceSum -> (1, 4, 13, 13)
[TensorRT] VERBOSE: 699:Div -> (1, 4, 13, 13)
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
[TensorRT] VERBOSE: /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:1028: Using Gather axis: 0
[TensorRT] VERBOSE: 701:Gather -> (4, 13, 13)
[TensorRT] VERBOSE: /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:1981: Unsqueezing from (4, 13, 13) to (4, 13, 13, 0)
[TensorRT] ERROR: (Unnamed Layer* 154) [Shuffle]: uninferred dimensions are not an exact divisor of input dimensions, so inferred dimension cannot be calculated
sd =  False

How can I solve this issue?
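For anyone else hitting this: the Shuffle error comes from TensorRT trying to infer the one unspecified ("uninferred") reshape dimension. A rough pure-Python sketch of the divisibility rule it enforces (my approximation, not the actual parser code) shows how a bad input shape makes that inference fail:

```python
from functools import reduce
from operator import mul

def infer_reshape(input_shape, target_shape):
    # Resolve a single -1 ("uninferred") dimension in a reshape target:
    # the known target dimensions must divide the input volume exactly.
    total = reduce(mul, input_shape, 1)
    known = reduce(mul, (d for d in target_shape if d != -1), 1)
    if -1 not in target_shape:
        return list(target_shape)
    if known == 0 or total % known != 0:
        raise ValueError("inferred dimension cannot be calculated")
    return [total // known if d == -1 else d for d in target_shape]

print(infer_reshape((1, 4, 13, 13), (-1, 13, 13)))  # [4, 13, 13]

# 1*4*13*13 = 676 is not divisible by 7, so the -1 cannot be resolved:
try:
    infer_reshape((1, 4, 13, 13), (-1, 7))
except ValueError as e:
    print(e)
```

Here the broken Unsqueeze produced a zero-sized dimension, (4, 13, 13, 0), leaving the downstream Shuffle with dimensions it cannot divide evenly.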

I converted the ONNX model with torch 1.2. When I run it with TRT, I get the following error:

[TensorRT] VERBOSE: /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 512
[TensorRT] VERBOSE: /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (512, 10, 18)
[TensorRT] VERBOSE: 652:Conv -> (512, 10, 18)
[TensorRT] VERBOSE: 653:Concat -> (536, 10, 18)
[TensorRT] VERBOSE: 654:Slice -> (24, 10, 18)
[TensorRT] VERBOSE: 655:Slice -> (0, 10, 18)
[TensorRT] VERBOSE: 656:Constant -> 
[TensorRT] VERBOSE: 657:Shape -> (4)
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
sd =  False
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
  File "tst_trt.py", line 21, in <module>
    context = engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
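Note that the AttributeError is a secondary failure: parse() returned False (sd = False), so no output was marked, the engine build returned None, and the script then called a method on that None. A plain-Python sketch of the guard pattern that surfaces the real error first (build_fn is my placeholder for the TRT build call, not a real API):

```python
def create_engine(parsed_ok, build_fn):
    # Fail loudly at the step that actually went wrong, instead of
    # letting a None engine surface later as an AttributeError.
    if not parsed_ok:
        raise RuntimeError("ONNX parse failed; inspect the parser errors")
    engine = build_fn()
    if engine is None:
        raise RuntimeError("engine build failed (e.g. no output marked)")
    return engine
```

With TRT 6's parser, checking the return value of parse() before building avoids the misleading NoneType traceback.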

any update?

Hi,

The model seems to work on TRT 7. We recommend using the latest TRT release.

Also, it seems the model was produced using PyTorch 1.3, which is not supported in TRT 6.
If you want to use TRT 6, please try downgrading to PyTorch 1.2 and regenerating the model.

Thanks

Hi,

I tried with PyTorch 1.2 on the Jetson Nano, but it doesn’t work. I got the same error; I also tried with opset_version=9 and 10.

----------------------------------------------------------------
Input filename:  new_trkk1.onnx
ONNX IR version:  0.0.4
Opset version:    10
Producer name:    pytorch
Producer version: 1.2
Domain:           
Model version:    0
Doc string:       
----------------------------------------------------------------
WARNING: ONNX model has a newer ir_version (0.0.4) than this parser was built against (0.0.3).
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
While parsing node number 204 [Gather]:
ERROR: onnx2trt_utils.hpp:347 In function convert_axis:
[8] Assertion failed: axis >= 0 && axis < nbDims
[01/07/2020-21:51:44] [E] Failed to parse onnx file
[01/07/2020-21:51:44] [E] Parsing model failed
[01/07/2020-21:51:44] [E] Engine could not be created
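For context on the assertion: TRT 6 works in implicit-batch mode, and the ONNX parser has to remap axes before checking them. A rough pure-Python sketch of the range check behind the message (my approximation of convert_axis in onnx2trt_utils.hpp, not the actual code):

```python
def convert_axis(axis, nb_dims):
    # ONNX allows negative axes; normalize first, then verify the
    # result lies in [0, nb_dims) -- the check that fails above.
    if axis < 0:
        axis += nb_dims
    if not (0 <= axis < nb_dims):
        raise AssertionError("axis >= 0 && axis < nbDims")
    return axis

print(convert_axis(-1, 3))  # 2
```

One common trigger on TRT 6 is an op such as Gather operating on the implicit batch dimension (for example, downstream of a Shape node), which would be consistent with the same model parsing fine on TRT 7's explicit-batch networks.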

Model links:

https://drive.google.com/open?id=1AYpqwT2uJvd1GZx_Yqkc7kIHGsIV4zvS opset_version 10

https://drive.google.com/open?id=155HFtuv9xS35RyLKw1ga52x6WQiciNUL opset_version 9

any update?

Hi,

The model seems to work on TRT 7 but fails on TRT 6 with the error below:

Unsqueezing from (4, 13, 13) to (4, 13, 13, 0)
Floating point exception

This seems similar to the issue below; could you please try the resolution suggested in the link:
https://github.com/NVIDIA/TensorRT/issues/190

Thanks

Hi,

It would be clearer if you pointed out which comment in https://github.com/NVIDIA/TensorRT/issues/190
contains the solution.
Do I need to install the OSS components on the Jetson Nano?

Hi,

Please refer to the link below for steps to install the OSS components on a Jetson Nano:
https://devtalk.nvidia.com/default/topic/1067542/jetson-agx-xavier/indexerror-list-index-out-of-range-object-detection-and-instance-segmentations-with-a-tensorflow-ma-/post/5408940/#5408940

Thanks

Hi,
I am not able to install it on the Jetson Nano; I get the error below:

-- ******** Summary ********
--   CMake version         : 3.13.0
--   CMake command         : /usr/local/bin/cmake
--   System                : Linux
--   C++ compiler          : /usr/bin/g++
--   C++ compiler version  : 7.4.0
--   CXX flags             : -Wno-deprecated-declarations  -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wnon-virtual-dtor
--   Build type            : Release
--   Compile definitions   : _PROTOBUF_INSTALL_DIR=/home/install/tensorrt/TensorRT-6.0/build/third_party.protobuf;ONNX_NAMESPACE=onnx2trt_onnx
--   CMAKE_PREFIX_PATH     : 
--   CMAKE_INSTALL_PREFIX  : /home/install/tensorrt/TensorRT-6.0/build/..
--   CMAKE_MODULE_PATH     : 
-- 
--   ONNX version          : 1.6.0
--   ONNX NAMESPACE        : onnx2trt_onnx
--   ONNX_BUILD_TESTS      : OFF
--   ONNX_BUILD_BENCHMARKS : OFF
--   ONNX_USE_LITE_PROTO   : OFF
--   ONNXIFI_DUMMY_BACKEND : OFF
--   ONNXIFI_ENABLE_EXT    : OFF
-- 
--   Protobuf compiler     : 
--   Protobuf includes     : 
--   Protobuf libraries    : 
--   BUILD_ONNX_PYTHON     : OFF
-- Found TensorRT headers at /home/install/tensorrt/TensorRT-6.0/include
-- Find TensorRT libs at /usr/lib/aarch64-linux-gnu/libnvinfer.so;/usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so;TENSORRT_LIBRARY_MYELIN-NOTFOUND
-- Could NOT find TENSORRT (missing: TENSORRT_LIBRARY) 
ERRORCannot find TensorRT library.
-- Adding new sample: sample_char_rnn
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_dynamic_reshape
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_fasterRCNN
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_googlenet
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_int8
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_int8_api
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mlp
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_mnist_api
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_movielens_mps
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_nmt
--     - Parsers Used: none
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_onnx_mnist
--     - Parsers Used: onnx
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_plugin
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_reformat_free_io
--     - Parsers Used: caffe
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_ssd
--     - Parsers Used: caffe
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_fasterRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_maskRCNN
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: sample_uff_mnist
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_plugin_v2_ext
--     - Parsers Used: uff
--     - InferPlugin Used: OFF
--     - Licensing: opensource
-- Adding new sample: sample_uff_ssd
--     - Parsers Used: uff
--     - InferPlugin Used: ON
--     - Licensing: opensource
-- Adding new sample: trtexec
--     - Parsers Used: caffe;uff;onnx
--     - InferPlugin Used: ON
--     - Licensing: opensource
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
TENSORRT_LIBRARY_MYELIN
    linked by target "nvonnxparser_static" in directory /home/install/tensorrt/TensorRT-6.0/parsers/onnx
    linked by target "nvonnxparser" in directory /home/install/tensorrt/TensorRT-6.0/parsers/onnx

-- Configuring incomplete, errors occurred!

Hi,
TENSORRT_LIBRARY_MYELIN is from TRT 7.0.
You need to check out the TRT 6.0 branch.
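For reference, a sketch of the checkout and configure steps (branch and flag names are from memory of the OSS README, so please verify them against the release/6.0 branch before running):

```shell
# Build the OSS ONNX parser against the TRT 6 libraries from JetPack 4.3.
git clone -b release/6.0 https://github.com/NVIDIA/TensorRT.git
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu -DTRT_BIN_DIR=$(pwd)/out
make -j$(nproc)
```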
Thanks