Converting ONNX to TRT format using trtexec

Description

I tried to convert an ONNX file to TensorRT (a .trt file) using the trtexec program.
I ran into some strange problems, so I am reporting these bugs.

When I set the opset version to 10 while exporting the ONNX file, the following message is printed:
UserWarning: You are trying to export the model with onnx:Resize for ONNX opset version 10. This operator might cause results to not match the expected results by PyTorch.
ONNX’s Upsample/Resize operator did not match Pytorch’s Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch’s behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator.

Then, when I tried to convert the ONNX file to TRT using trtexec, I got these warning messages:
[08/05/2021-14:16:17] [W] [TRT] Can't fuse pad and convolution with same pad mode
[08/05/2021-14:16:17] [W] [TRT] Can't fuse pad and convolution with caffe pad mode

The resulting .trt file is generated, but I suspect there are some problems with layer optimization.
So I changed the opset version from 10 to 11, and the warning printed during ONNX export disappeared.
But when converting the opset-11 ONNX file to a .trt file, I got the following error message and no .trt file is generated:
[08/05/2021-14:23:04] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
terminate called after throwing an instance of 'std::out_of_range'
  what(): Attribute not found: pads
Aborted (core dumped)

Are there any solutions for this error?

Environment

TensorRT Version: 7.0.0.11
GPU Type: Geforce RTX 2080
Nvidia Driver Version: GeForce RTX 2080 Ti
CUDA Version: 10.2.89
CUDNN Version: 7.6.5
Operating System + Version: ubuntu 18.04
Python Version (if applicable): 3.6
TensorFlow Version (if applicable):
PyTorch Version (if applicable): 1.6
Additionally:
ONNX IR version: 0.0.6
Opset version: 11

Relevant Files

Model being converted: the depth_decoder of monodepth2

Hi,
Could you share the ONNX model and the export script, if not shared already, so that we can assist you better?
In the meantime, you can try a few things:

  1. Validate your model with the snippet below.

check_model.py

import onnx

filename = "your_model.onnx"  # path to your ONNX model
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
https://github.com/NVIDIA/TensorRT/tree/master/samples/opensource/trtexec

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Hi,

I already use the onnx.checker.check_model(model) method in my extract_onnx.py code:

f0_onnx = torch.ones((1, 64, 160, 256)).cuda()
f1_onnx = torch.ones((1, 64, 80, 128)).cuda()
f2_onnx = torch.ones((1, 128, 40, 64)).cuda()
f3_onnx = torch.ones((1, 256, 20, 32)).cuda()
f4_onnx = torch.ones((1, 512, 10, 16)).cuda()

torch.onnx.export(depth_decoder,
                  (f0_onnx, f1_onnx, f2_onnx, f3_onnx, f4_onnx),
                  "md2_decoder.onnx",
                  export_params=True,
                  do_constant_folding=True,
                  opset_version=11,
                  input_names=['encoder_output_0', 'encoder_output_1', 'encoder_output_2', 'encoder_output_3', 'encoder_output_4'],
                  output_names=['decoder_output_0', 'decoder_output_1', 'decoder_output_2', 'decoder_output_final'],
                  dynamic_axes={'encoder_output_0': {0: 'batch_size'},
                                'encoder_output_1': {0: 'batch_size'},
                                'encoder_output_2': {0: 'batch_size'},
                                'encoder_output_3': {0: 'batch_size'},
                                'encoder_output_4': {0: 'batch_size'},
                                'decoder_output_0': {0: 'batch_size'},
                                'decoder_output_1': {0: 'batch_size'},
                                'decoder_output_2': {0: 'batch_size'},
                                'decoder_output_final': {0: 'batch_size'}})

onnx_decoder = onnx.load("md2_decoder.onnx")
onnx.checker.check_model(onnx_decoder)
print("Done: converting decoder to onnx format!")

And there is no error message. I also ran a fresh test on the ONNX file using check_model.py, and again there was no warning or error message.

The message printed by trtexec with the --verbose option is as follows:

[08/05/2021-14:53:14] [I] === Model Options ===
[08/05/2021-14:53:14] [I] Format: ONNX
[08/05/2021-14:53:14] [I] Model: /home/jinho-sesol/monodepth2_trt/md2_decoder.onnx
[08/05/2021-14:53:14] [I] Output:
[08/05/2021-14:53:14] [I] === Build Options ===
[08/05/2021-14:53:14] [I] Max batch: explicit
[08/05/2021-14:53:14] [I] Workspace: 16 MB
[08/05/2021-14:53:14] [I] minTiming: 1
[08/05/2021-14:53:14] [I] avgTiming: 8
[08/05/2021-14:53:14] [I] Precision: FP16
[08/05/2021-14:53:14] [I] Calibration:
[08/05/2021-14:53:14] [I] Safe mode: Disabled
[08/05/2021-14:53:14] [I] Save engine: /home/jinho-sesol/monodepth2_trt/md2_decoder.trt
[08/05/2021-14:53:14] [I] Load engine:
[08/05/2021-14:53:14] [I] Inputs format: fp32:CHW
[08/05/2021-14:53:14] [I] Outputs format: fp32:CHW
[08/05/2021-14:53:14] [I] Input build shape: encoder_output_1=1x64x80x128+1x64x80x128+1x64x80x128
[08/05/2021-14:53:14] [I] Input build shape: encoder_output_0=1x64x160x256+1x64x160x256+1x64x160x256
[08/05/2021-14:53:14] [I] Input build shape: encoder_output_4=1x512x10x16+1x512x10x16+1x512x10x16
[08/05/2021-14:53:14] [I] Input build shape: encoder_output_2=1x128x40x64+1x128x40x64+1x128x40x64
[08/05/2021-14:53:14] [I] Input build shape: encoder_output_3=1x256x20x32+1x256x20x32+1x256x20x32
[08/05/2021-14:53:14] [I] === System Options ===
[08/05/2021-14:53:14] [I] Device: 0
[08/05/2021-14:53:14] [I] DLACore:
[08/05/2021-14:53:14] [I] Plugins:
[08/05/2021-14:53:14] [I] === Inference Options ===
[08/05/2021-14:53:14] [I] Batch: Explicit
[08/05/2021-14:53:14] [I] Iterations: 10
[08/05/2021-14:53:14] [I] Duration: 3s (+ 200ms warm up)
[08/05/2021-14:53:14] [I] Sleep time: 0ms
[08/05/2021-14:53:14] [I] Streams: 1
[08/05/2021-14:53:14] [I] ExposeDMA: Disabled
[08/05/2021-14:53:14] [I] Spin-wait: Disabled
[08/05/2021-14:53:14] [I] Multithreading: Disabled
[08/05/2021-14:53:14] [I] CUDA Graph: Disabled
[08/05/2021-14:53:14] [I] Skip inference: Disabled
[08/05/2021-14:53:14] [I] Inputs:
[08/05/2021-14:53:14] [I] === Reporting Options ===
[08/05/2021-14:53:14] [I] Verbose: Enabled
[08/05/2021-14:53:14] [I] Averages: 10 inferences
[08/05/2021-14:53:14] [I] Percentile: 99
[08/05/2021-14:53:14] [I] Dump output: Disabled
[08/05/2021-14:53:14] [I] Profile: Disabled
[08/05/2021-14:53:14] [I] Export timing to JSON file:
[08/05/2021-14:53:14] [I] Export output to JSON file:
[08/05/2021-14:53:14] [I] Export profile to JSON file:
[08/05/2021-14:53:14] [I]
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::GridAnchor_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::NMS_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::Reorg_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::Region_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::Clip_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::LReLU_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::PriorBox_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::Normalize_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::RPROI_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::BatchedNMS_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::FlattenConcat_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::CropAndResize
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::DetectionLayer_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::Proposal
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::ProposalLayer_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::PyramidROIAlign_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::ResizeNearest_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::Split
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::SpecialSlice_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ::InstanceNormalization_TRT
----------------------------------------------------------------
Input filename: /home/jinho-sesol/monodepth2_trt/md2_decoder.onnx
ONNX IR version: 0.0.6
Opset version: 11
Producer name: pytorch
Producer version: 1.6
Domain:
Model version: 0
Doc string:
----------------------------------------------------------------
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::GridAnchor_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::NMS_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Reorg_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Region_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Clip_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::LReLU_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::PriorBox_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Normalize_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::RPROI_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::BatchedNMS_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::FlattenConcat_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::CropAndResize
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::DetectionLayer_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Proposal
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::ProposalLayer_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::PyramidROIAlign_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::ResizeNearest_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::Split
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::SpecialSlice_TRT
[08/05/2021-14:53:14] [V] [TRT] Plugin creator registration succeeded - ONNXTRT_NAMESPACE::InstanceNormalization_TRT
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:203: Adding network input: encoder_output_0 with dtype: float32, dimensions: (-1, 64, 160, 256)
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: encoder_output_0 for ONNX tensor: encoder_output_0
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:203: Adding network input: encoder_output_1 with dtype: float32, dimensions: (-1, 64, 80, 128)
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: encoder_output_1 for ONNX tensor: encoder_output_1
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:203: Adding network input: encoder_output_2 with dtype: float32, dimensions: (-1, 128, 40, 64)
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: encoder_output_2 for ONNX tensor: encoder_output_2
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:203: Adding network input: encoder_output_3 with dtype: float32, dimensions: (-1, 256, 20, 32)
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: encoder_output_3 for ONNX tensor: encoder_output_3
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:203: Adding network input: encoder_output_4 with dtype: float32, dimensions: (-1, 512, 10, 16)
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: encoder_output_4 for ONNX tensor: encoder_output_4
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 463
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 464
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 469
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 473
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 474
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 478
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 479
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 484
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 488
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 489
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 493
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 494
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 498
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 499
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 504
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 508
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 509
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 513
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 514
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 518
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 519
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 524
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 528
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 529
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 533
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 534
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 538
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 539
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 544
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 548
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 549
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 553
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: 554
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.0.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.0.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.1.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.1.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.10.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.10.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.11.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.11.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.12.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.12.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.13.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.13.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.2.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.2.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.3.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.3.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.4.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.4.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.5.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.5.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.6.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.6.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.7.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.7.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.8.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.8.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.9.conv.conv.bias
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:90: Importing initializer: decoder.9.conv.conv.weight
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: ConstantOfShape_0 [ConstantOfShape]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 463
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: ConstantOfShape_0 [ConstantOfShape] inputs: [463 → (1)],
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: ConstantOfShape_0 for ONNX node: ConstantOfShape_0
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 42 for ONNX tensor: 42
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: ConstantOfShape_0 [ConstantOfShape] outputs: [42 → (-1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Concat_1 [Concat]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 464
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 42
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Concat_1 [Concat] inputs: [464 → (4)], [42 → (-1)],
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: Concat_1 for ONNX node: Concat_1
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 43 for ONNX tensor: 43
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Concat_1 [Concat] outputs: [43 → (-1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Constant_2 [Constant]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Constant_2 [Constant] inputs:
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Constant_2 [Constant] outputs: [44 → (2)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Reshape_3 [Reshape]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 43
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 44
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Reshape_3 [Reshape] inputs: [43 → (-1)], [44 → (2)],
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: Reshape_3 for ONNX node: Reshape_3
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 45 for ONNX tensor: 45
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Reshape_3 [Reshape] outputs: [45 → (-1, 2)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Constant_4 [Constant]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Constant_4 [Constant] inputs:
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Constant_4 [Constant] outputs: [46 → (1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Constant_5 [Constant]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Constant_5 [Constant] inputs:
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Constant_5 [Constant] outputs: [47 → (1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Constant_6 [Constant]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Constant_6 [Constant] inputs:
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] onnx2trt_utils.cpp:212: Weight at index 0: -9223372036854775807 is out of range. Clamping to: -2147483648
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:222: One or more weights outside the range of INT32 was clamped
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Constant_6 [Constant] outputs: [48 → (1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Constant_7 [Constant]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Constant_7 [Constant] inputs:
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Constant_7 [Constant] outputs: [49 → (1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Slice_8 [Slice]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 45
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 47
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 48
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 46
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 49
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Slice_8 [Slice] inputs: [45 → (-1, 2)], [47 → (1)], [48 → (1)], [46 → (1)], [49 → (1)],
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: Slice_8 for ONNX node: Slice_8
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 50 for ONNX tensor: 50
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Slice_8 [Slice] outputs: [50 → (-1, -1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Transpose_9 [Transpose]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 50
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Transpose_9 [Transpose] inputs: [50 → (-1, -1)],
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: Transpose_9 for ONNX node: Transpose_9
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 51 for ONNX tensor: 51
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Transpose_9 [Transpose] outputs: [51 → (-1, -1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Constant_10 [Constant]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Constant_10 [Constant] inputs:
[08/05/2021-14:53:14] [W] [TRT] onnx2trt_utils.cpp:198: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Constant_10 [Constant] outputs: [52 → (1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Reshape_11 [Reshape]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 51
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 52
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Reshape_11 [Reshape] inputs: [51 → (-1, -1)], [52 → (1)],
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: Reshape_11 for ONNX node: Reshape_11
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 53 for ONNX tensor: 53
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Reshape_11 [Reshape] outputs: [53 → (-1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Cast_12 [Cast]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 53
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Cast_12 [Cast] inputs: [53 → (-1)],
[08/05/2021-14:53:14] [V] [TRT] builtin_op_importers.cpp:315: Casting to type: int32
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:122: Registering layer: Cast_12 for ONNX node: Cast_12
[08/05/2021-14:53:14] [V] [TRT] ImporterContext.hpp:97: Registering tensor: 54 for ONNX tensor: 54
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Cast_12 [Cast] outputs: [54 → (-1)],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Constant_13 [Constant]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Constant_13 [Constant] inputs:
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:180: Constant_13 [Constant] outputs: [55 → ()],
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:107: Parsing node: Pad_14 [Pad]
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: encoder_output_4
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 54
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:123: Searching for input: 55
[08/05/2021-14:53:14] [V] [TRT] ModelImporter.cpp:129: Pad_14 [Pad] inputs: [encoder_output_4 → (-1, 512, 10, 16)], [54 → (-1)], [55 → ()],
terminate called after throwing an instance of 'std::out_of_range'
  what(): Attribute not found: pads

Aborted (core dumped)

Thanks!!!

Hi @pjhkb083gak9,

We recommend that you try the latest TensorRT version, 8.0.1. If you still face this issue, please share the ONNX model with us so we can try it from our end and assist you better.

Thank you.

Hi, @spolisetty ,
Is it possible to check the decoder model with the weight parameter file instead of the ONNX file?
Because of my company's security policy, it is hard to take a file out of the company…

Thank you!!

Hi,

It will be hard to say anything based on the weight parameters alone, without the ONNX file. You can also use the Polygraphy tool (see the Polygraphy 0.38.0 documentation) for better debugging.

Thank you.