Hi,
I ran the following code:

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.VERBOSE)

with trt.Builder(TRT_LOGGER) as builder, builder.create_network() as network, trt.OnnxParser(network, TRT_LOGGER) as parser:
    with open("trk2.onnx", 'rb') as model:
        config = builder.create_builder_config()
        sd = parser.parse(model.read())
        print("sd = ", sd)
I got the following error on a Jetson Nano with JetPack 4.3:
[TensorRT] VERBOSE: 696:Transpose -> (1, 4, 13, 13)
[TensorRT] VERBOSE: 697:Exp -> (1, 4, 13, 13)
[TensorRT] VERBOSE: 698:ReduceSum -> (1, 4, 13, 13)
[TensorRT] VERBOSE: 699:Div -> (1, 4, 13, 13)
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
[TensorRT] VERBOSE: /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:1028: Using Gather axis: 0
[TensorRT] VERBOSE: 701:Gather -> (4, 13, 13)
[TensorRT] VERBOSE: /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:1981: Unsqueezing from (4, 13, 13) to (4, 13, 13, 0)
[TensorRT] ERROR: (Unnamed Layer* 154) [Shuffle]: uninferred dimensions are not an exact divisor of input dimensions, so inferred dimension cannot be calculated
sd = False
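So far this only tells me sd = False. For more detail, the parser's own error records can be dumped as well (a minimal sketch, placed inside the with-block above after parser.parse(); it assumes the parser.num_errors / parser.get_error API of the TensorRT Python bindings):

    # inside the with-block above, right after parser.parse(...)
    if not sd:
        for i in range(parser.num_errors):
            print(parser.get_error(i))   # one record per failed ONNX node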
How can I solve this issue?
I converted the ONNX model with torch 1.2. When I ran it through TensorRT I got the following error:
[TensorRT] VERBOSE: /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:840: Using kernel: (3, 3), strides: (1, 1), padding: (1, 1), dilations: (1, 1), numOutputs: 512
[TensorRT] VERBOSE: /home/jenkins/workspace/TensorRT/helpers/rel-6.0/L1_Nightly/build/source/parsers/onnxOpenSource/builtin_op_importers.cpp:841: Convolution output dimensions: (512, 10, 18)
[TensorRT] VERBOSE: 652:Conv -> (512, 10, 18)
[TensorRT] VERBOSE: 653:Concat -> (536, 10, 18)
[TensorRT] VERBOSE: 654:Slice -> (24, 10, 18)
[TensorRT] VERBOSE: 655:Slice -> (0, 10, 18)
[TensorRT] VERBOSE: 656:Constant ->
[TensorRT] VERBOSE: 657:Shape -> (4)
WARNING: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
Successfully casted down to INT32.
sd = False
[TensorRT] ERROR: Network must have at least one output
Traceback (most recent call last):
File "tst_trt.py", line 21, in <module>
context = engine.create_execution_context()
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
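The AttributeError itself is just a consequence of the engine being None. A minimal sketch of how I could guard the build step so the real failure is reported instead (assuming TensorRT 6's network.num_outputs, network.mark_output, and builder.build_engine(network, config) APIs; variable names are the ones from the snippet above):

    # still inside the with-block above, after parsing
    if network.num_outputs == 0:
        # diagnostic fallback for "Network must have at least one output":
        # mark the last layer's output so the build can proceed
        last_layer = network.get_layer(network.num_layers - 1)
        network.mark_output(last_layer.get_output(0))

    engine = builder.build_engine(network, config)  # returns None when the build fails
    if engine is None:
        raise SystemExit("engine build failed, see the TensorRT log above")
    context = engine.create_execution_context()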