Int8 TensorCores for Jetson

Hello,

I would like to ask a question about the Tensor Cores of the NVIDIA Jetson AGX Xavier Developer Kit. I performed INT8 quantization with the pytorch_quantization library, converted the calibrated .pt model to ONNX, and then built a .engine from it. I hoped this would fuse the fake-quantization layers I saw in the ONNX graph in Netron. However, inference with the INT8 engine was slower than inference with the same model in FP16 format. So my question is the following: does the NVIDIA Jetson AGX Xavier Developer Kit provide the "cheap 8-bit Tensor Cores" mentioned here (1), or does it not? I know that the Xavier has a 512-core NVIDIA Volta GPU with 64 Tensor Cores, but are those 64 Tensor Cores able to process INT8 models faster?

If they are able to, then my quantization potentially failed. But I don't believe it did, because the model size was reduced to half of the FP16 model size. Is there a guide for quantization besides the documentation here (2)?
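For context, the fake-quantization nodes I see in Netron (QuantizeLinear/DequantizeLinear pairs) compute roughly the following. This is a minimal sketch with an illustrative scale of 0.1; the real per-tensor scales come from calibration:

```python
def fake_quantize(x, scale, zero_point=0, qmin=-128, qmax=127):
    """Simulate an INT8 quantize -> dequantize (Q/DQ) node pair.

    When building with --int8, TensorRT is expected to fuse these
    Q/DQ pairs into real INT8 kernels instead of executing them.
    """
    # Quantize: scale into the int8 range, round, and clamp
    q = max(qmin, min(qmax, round(x / scale) + zero_point))
    # Dequantize: map back to float, losing sub-scale precision
    return (q - zero_point) * scale

# With scale 0.1, values snap to the nearest multiple of 0.1
print(fake_quantize(0.1234, 0.1))  # snaps to 0.1
print(fake_quantize(100.0, 0.1))   # clamped to qmax: ~12.7
```

If the builder cannot fuse these pairs, the extra quantize/dequantize work runs in floating point, which would explain an INT8 engine being slower than FP16.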

Thanks for your help.
Patrick

1: Achieving FP32 Accuracy for INT8 Inference Using Quantization Aware Training with NVIDIA TensorRT | NVIDIA Technical Blog
2: Developer Guide :: NVIDIA Deep Learning TensorRT Documentation

Hi,

Xavier does have Tensor Cores.
Could you try the model with trtexec first?

$ /usr/src/tensorrt/bin/trtexec --onnx=[model] --fp16
$ /usr/src/tensorrt/bin/trtexec --onnx=[model] --int8

Thanks.

Hello,

I tried running the trtexec commands on the ONNX models. To be more precise: I ran `trtexec --onnx=[model] --int8` on the INT8-calibrated ONNX model and `trtexec --onnx=[model] --fp16` on the FP16-trained ONNX model. TensorRT failed to build the INT8 version but passed the FP16 test. Do you have any idea why?

Thanks for your help.

Yours
Patrick

FP16 output when running trtexec:

/usr/src/tensorrt/bin/trtexec --onnx=/media/edp-agx/SDC/Models/best.onnx --fp16
&&&& RUNNING TensorRT.trtexec [TensorRT v8201] # /usr/src/tensorrt/bin/trtexec --onnx=/media/edp-agx/SDC/Models/best.onnx --fp16
[02/17/2023-17:41:50] [I] === Model Options ===
[02/17/2023-17:41:50] [I] Format: ONNX
[02/17/2023-17:41:50] [I] Model: /media/edp-agx/SDC/Models/best.onnx
[02/17/2023-17:41:50] [I] Output:
[02/17/2023-17:41:50] [I] === Build Options ===
[02/17/2023-17:41:50] [I] Max batch: explicit batch
[02/17/2023-17:41:50] [I] Workspace: 16 MiB
[02/17/2023-17:41:50] [I] minTiming: 1
[02/17/2023-17:41:50] [I] avgTiming: 8
[02/17/2023-17:41:50] [I] Precision: FP32+FP16
[02/17/2023-17:41:50] [I] Calibration: 
[02/17/2023-17:41:50] [I] Refit: Disabled
[02/17/2023-17:41:50] [I] Sparsity: Disabled
[02/17/2023-17:41:50] [I] Safe mode: Disabled
[02/17/2023-17:41:50] [I] DirectIO mode: Disabled
[02/17/2023-17:41:50] [I] Restricted mode: Disabled
[02/17/2023-17:41:50] [I] Save engine: 
[02/17/2023-17:41:50] [I] Load engine: 
[02/17/2023-17:41:50] [I] Profiling verbosity: 0
[02/17/2023-17:41:50] [I] Tactic sources: Using default tactic sources
[02/17/2023-17:41:50] [I] timingCacheMode: local
[02/17/2023-17:41:50] [I] timingCacheFile: 
[02/17/2023-17:41:50] [I] Input(s)s format: fp32:CHW
[02/17/2023-17:41:50] [I] Output(s)s format: fp32:CHW
[02/17/2023-17:41:50] [I] Input build shapes: model
[02/17/2023-17:41:50] [I] Input calibration shapes: model
[02/17/2023-17:41:50] [I] === System Options ===
[02/17/2023-17:41:50] [I] Device: 0
[02/17/2023-17:41:50] [I] DLACore: 
[02/17/2023-17:41:50] [I] Plugins:
[02/17/2023-17:41:50] [I] === Inference Options ===
[02/17/2023-17:41:50] [I] Batch: Explicit
[02/17/2023-17:41:50] [I] Input inference shapes: model
[02/17/2023-17:41:50] [I] Iterations: 10
[02/17/2023-17:41:50] [I] Duration: 3s (+ 200ms warm up)
[02/17/2023-17:41:50] [I] Sleep time: 0ms
[02/17/2023-17:41:50] [I] Idle time: 0ms
[02/17/2023-17:41:50] [I] Streams: 1
[02/17/2023-17:41:50] [I] ExposeDMA: Disabled
[02/17/2023-17:41:50] [I] Data transfers: Enabled
[02/17/2023-17:41:50] [I] Spin-wait: Disabled
[02/17/2023-17:41:50] [I] Multithreading: Disabled
[02/17/2023-17:41:50] [I] CUDA Graph: Disabled
[02/17/2023-17:41:50] [I] Separate profiling: Disabled
[02/17/2023-17:41:50] [I] Time Deserialize: Disabled
[02/17/2023-17:41:50] [I] Time Refit: Disabled
[02/17/2023-17:41:50] [I] Skip inference: Disabled
[02/17/2023-17:41:50] [I] Inputs:
[02/17/2023-17:41:50] [I] === Reporting Options ===
[02/17/2023-17:41:50] [I] Verbose: Disabled
[02/17/2023-17:41:50] [I] Averages: 10 inferences
[02/17/2023-17:41:50] [I] Percentile: 99
[02/17/2023-17:41:50] [I] Dump refittable layers:Disabled
[02/17/2023-17:41:50] [I] Dump output: Disabled
[02/17/2023-17:41:50] [I] Profile: Disabled
[02/17/2023-17:41:50] [I] Export timing to JSON file: 
[02/17/2023-17:41:50] [I] Export output to JSON file: 
[02/17/2023-17:41:50] [I] Export profile to JSON file: 
[02/17/2023-17:41:50] [I] 
[02/17/2023-17:41:50] [I] === Device Information ===
[02/17/2023-17:41:50] [I] Selected Device: Xavier
[02/17/2023-17:41:50] [I] Compute Capability: 7.2
[02/17/2023-17:41:50] [I] SMs: 8
[02/17/2023-17:41:50] [I] Compute Clock Rate: 1.377 GHz
[02/17/2023-17:41:50] [I] Device Global Memory: 31920 MiB
[02/17/2023-17:41:50] [I] Shared Memory per SM: 96 KiB
[02/17/2023-17:41:50] [I] Memory Bus Width: 256 bits (ECC disabled)
[02/17/2023-17:41:50] [I] Memory Clock Rate: 1.377 GHz
[02/17/2023-17:41:50] [I] 
[02/17/2023-17:41:50] [I] TensorRT version: 8.2.1
[02/17/2023-17:41:52] [I] [TRT] [MemUsageChange] Init CUDA: CPU +362, GPU +0, now: CPU 381, GPU 17774 (MiB)
[02/17/2023-17:41:52] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 381 MiB, GPU 17775 MiB
[02/17/2023-17:41:52] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 486 MiB, GPU 17880 MiB
[02/17/2023-17:41:52] [I] Start parsing network model
[02/17/2023-17:41:53] [I] [TRT] ----------------------------------------------------------------
[02/17/2023-17:41:53] [I] [TRT] Input filename:   /media/edp-agx/SDC/Models/best.onnx
[02/17/2023-17:41:53] [I] [TRT] ONNX IR version:  0.0.6
[02/17/2023-17:41:53] [I] [TRT] Opset version:    13
[02/17/2023-17:41:53] [I] [TRT] Producer name:    pytorch
[02/17/2023-17:41:53] [I] [TRT] Producer version: 1.8
[02/17/2023-17:41:53] [I] [TRT] Domain:           
[02/17/2023-17:41:53] [I] [TRT] Model version:    0
[02/17/2023-17:41:53] [I] [TRT] Doc string:       
[02/17/2023-17:41:53] [I] [TRT] ----------------------------------------------------------------
[02/17/2023-17:41:53] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/17/2023-17:41:53] [W] [TRT] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[02/17/2023-17:41:53] [W] [TRT] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[02/17/2023-17:41:53] [W] [TRT] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[02/17/2023-17:41:53] [I] Finish parsing network model
[02/17/2023-17:41:54] [I] [TRT] ---------- Layers Running on DLA ----------
[02/17/2023-17:41:54] [I] [TRT] ---------- Layers Running on GPU ----------
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 478
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 491
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 544
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 557
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 610
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 623
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_0
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_1), Mul_2)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_3
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_4), Mul_5)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_6 || Conv_23
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_7), Mul_8)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_24), Mul_25)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_9
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_10), Mul_11)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_12
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_13), Mul_14), Add_15)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_16
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_17), Mul_18)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_19
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_20), Mul_21), Add_22)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 188 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_27
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_28), Mul_29)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_30
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_31), Mul_32)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_33 || Conv_64
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_34), Mul_35)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_65), Mul_66)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_36
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_37), Mul_38)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_39
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_40), Mul_41), Add_42)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_43
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_44), Mul_45)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_46
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_47), Mul_48), Add_49)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_50
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_51), Mul_52)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_53
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_54), Mul_55), Add_56)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_57
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_58), Mul_59)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_60
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_61), Mul_62), Add_63)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_68
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_69), Mul_70)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_71
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_72), Mul_73)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_74 || Conv_119
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_75), Mul_76)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_120), Mul_121)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_77
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_78), Mul_79)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_80
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_81), Mul_82), Add_83)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_84
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_85), Mul_86)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_87
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_88), Mul_89), Add_90)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_91
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_92), Mul_93)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_94
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_95), Mul_96), Add_97)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_98
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_99), Mul_100)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_101
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_102), Mul_103), Add_104)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_105
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_106), Mul_107)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_108
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_109), Mul_110), Add_111)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_112
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_113), Mul_114)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_115
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_116), Mul_117), Add_118)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_123
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_124), Mul_125)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_126
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_127), Mul_128)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_129 || Conv_146
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_130), Mul_131)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_147), Mul_148)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_132
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_133), Mul_134)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_135
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_136), Mul_137), Add_138)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_139
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_140), Mul_141)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_142
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(Sigmoid_143), Mul_144), Add_145)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_150
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_151), Mul_152)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_153
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_154), Mul_155)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] MaxPool_156
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] MaxPool_157
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] MaxPool_158
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 321 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 322 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 323 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 324 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_160
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_161), Mul_162)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_163
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_164), Mul_165)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Resize_166
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 336 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_168 || Conv_183
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_169), Mul_170)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_184), Mul_185)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_171
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_172), Mul_173)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_174
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_175), Mul_176)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_177
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_178), Mul_179)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_180
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_181), Mul_182)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_187
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_188), Mul_189)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_190
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_191), Mul_192)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Resize_193
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 367 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_195 || Conv_210
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_196), Mul_197)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_211), Mul_212)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_198
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_199), Mul_200)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_201
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_202), Mul_203)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_204
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_205), Mul_206)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_207
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_208), Mul_209)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_214
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_215), Mul_216)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_217
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_269
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_218), Mul_219)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 362 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_221 || Conv_236
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Reshape_286 + Transpose_287
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_222), Mul_223)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_237), Mul_238)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_224
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(Sigmoid_288)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_293
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_306
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_317
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_225), Mul_226)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_227
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(487 + (Unnamed Layer* 302) [Shuffle] + Mul_308, PWN(489 + (Unnamed Layer* 305) [Shuffle], Pow_310)), Mul_312)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_228), Mul_229)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(474 + (Unnamed Layer* 290) [Shuffle] + Mul_295, PWN(476 + (Unnamed Layer* 293) [Shuffle], Sub_297)), Add_299), 480 + (Unnamed Layer* 298) [Shuffle] + Mul_301)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_230
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 481 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 492 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 497 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Reshape_322
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_231), Mul_232)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_233
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_234), Mul_235)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_240
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_241), Mul_242)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_243
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_323
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_244), Mul_245)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 331 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_247 || Conv_262
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Reshape_340 + Transpose_341
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_248), Mul_249)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_263), Mul_264)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_250
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(Sigmoid_342)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_347
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_360
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_371
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_251), Mul_252)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_253
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(553 + (Unnamed Layer* 349) [Shuffle] + Mul_362, PWN(555 + (Unnamed Layer* 352) [Shuffle], Pow_364)), Mul_366)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_254), Mul_255)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(540 + (Unnamed Layer* 337) [Shuffle] + Mul_349, PWN(542 + (Unnamed Layer* 340) [Shuffle], Sub_351)), Add_353), 546 + (Unnamed Layer* 345) [Shuffle] + Mul_355)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_256
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 547 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 558 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 563 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Reshape_376
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_257), Mul_258)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_259
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_260), Mul_261)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_266
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_267), Mul_268)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Conv_377
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Reshape_394 + Transpose_395
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(Sigmoid_396)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_401
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_414
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Slice_425
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(619 + (Unnamed Layer* 396) [Shuffle] + Mul_416, PWN(621 + (Unnamed Layer* 399) [Shuffle], Pow_418)), Mul_420)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] PWN(PWN(PWN(606 + (Unnamed Layer* 384) [Shuffle] + Mul_403, PWN(608 + (Unnamed Layer* 387) [Shuffle], Sub_405)), Add_407), 612 + (Unnamed Layer* 392) [Shuffle] + Mul_409)
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 613 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 624 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 629 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] Reshape_430
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 508 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 574 copy
[02/17/2023-17:41:54] [I] [TRT] [GpuLayer] 640 copy
[02/17/2023-17:41:54] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +227, GPU +154, now: CPU 802, GPU 18278 (MiB)
[02/17/2023-17:41:56] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +307, GPU +308, now: CPU 1109, GPU 18586 (MiB)
[02/17/2023-17:41:56] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[02/17/2023-17:43:42] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[02/17/2023-18:00:31] [I] [TRT] Detected 1 inputs and 7 output network tensors.
[02/17/2023-18:00:31] [I] [TRT] Total Host Persistent Memory: 206176
[02/17/2023-18:00:31] [I] [TRT] Total Device Persistent Memory: 42741760
[02/17/2023-18:00:31] [I] [TRT] Total Scratch Memory: 0
[02/17/2023-18:00:31] [I] [TRT] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 48 MiB, GPU 156 MiB
[02/17/2023-18:00:31] [I] [TRT] [BlockAssignment] Algorithm ShiftNTopDown took 80.5574ms to assign 9 blocks to 182 nodes requiring 25958401 bytes.
[02/17/2023-18:00:31] [I] [TRT] Total Activation Memory: 25958401
[02/17/2023-18:00:31] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1631, GPU 18555 (MiB)
[02/17/2023-18:00:31] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1631, GPU 18555 (MiB)
[02/17/2023-18:00:31] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +40, GPU +64, now: CPU 40, GPU 64 (MiB)
[02/17/2023-18:00:31] [I] [TRT] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1615, GPU 18580 (MiB)
[02/17/2023-18:00:31] [I] [TRT] Loaded engine size: 44 MiB
[02/17/2023-18:00:32] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1629, GPU 18580 (MiB)
[02/17/2023-18:00:32] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1629, GPU 18580 (MiB)
[02/17/2023-18:00:32] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +41, now: CPU 0, GPU 41 (MiB)
[02/17/2023-18:00:32] [I] Engine built in 1121.2 sec.
[02/17/2023-18:00:32] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 1398, GPU 18535 (MiB)
[02/17/2023-18:00:32] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1398, GPU 18535 (MiB)
[02/17/2023-18:00:32] [I] [TRT] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +65, now: CPU 0, GPU 106 (MiB)
[02/17/2023-18:00:32] [I] Using random values for input images
[02/17/2023-18:00:32] [I] Created input binding for images with dimensions 1x3x640x640
[02/17/2023-18:00:32] [I] Using random values for output 467
[02/17/2023-18:00:32] [I] Created output binding for 467 with dimensions 1x3x80x80x7
[02/17/2023-18:00:32] [I] Using random values for output 533
[02/17/2023-18:00:32] [I] Created output binding for 533 with dimensions 1x3x40x40x7
[02/17/2023-18:00:32] [I] Using random values for output 599
[02/17/2023-18:00:32] [I] Created output binding for 599 with dimensions 1x3x20x20x7
[02/17/2023-18:00:32] [I] Using random values for output output
[02/17/2023-18:00:32] [I] Created output binding for output with dimensions 1x25200x7
[02/17/2023-18:00:32] [I] Starting inference
[02/17/2023-18:00:35] [I] Warmup completed 10 queries over 200 ms
[02/17/2023-18:00:35] [I] Timing trace has 182 queries over 3.02537 s
[02/17/2023-18:00:35] [I] 
[02/17/2023-18:00:35] [I] === Trace details ===
[02/17/2023-18:00:35] [I] Trace averages of 10 runs:
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 16.3738 ms - Host latency: 16.5792 ms (end to end 16.803 ms, enqueue 3.80698 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 15.3128 ms - Host latency: 15.5031 ms (end to end 15.5116 ms, enqueue 3.91868 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 15.659 ms - Host latency: 15.8507 ms (end to end 15.8601 ms, enqueue 3.69341 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 15.3408 ms - Host latency: 15.5322 ms (end to end 15.5412 ms, enqueue 3.77231 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 16.7342 ms - Host latency: 16.929 ms (end to end 16.9402 ms, enqueue 3.88034 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 16.5907 ms - Host latency: 16.7866 ms (end to end 16.9033 ms, enqueue 3.70605 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 15.3008 ms - Host latency: 15.4916 ms (end to end 15.5003 ms, enqueue 3.55458 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 16.3745 ms - Host latency: 16.566 ms (end to end 16.5737 ms, enqueue 2.95776 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 15.8855 ms - Host latency: 16.0812 ms (end to end 16.1165 ms, enqueue 2.4389 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 15.2876 ms - Host latency: 15.4792 ms (end to end 15.4878 ms, enqueue 2.26376 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 15.9842 ms - Host latency: 16.1748 ms (end to end 16.1837 ms, enqueue 2.36772 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 15.9489 ms - Host latency: 16.1402 ms (end to end 16.1798 ms, enqueue 2.39794 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 18.5327 ms - Host latency: 18.7275 ms (end to end 18.7927 ms, enqueue 2.84158 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 17.3019 ms - Host latency: 17.4985 ms (end to end 17.5424 ms, enqueue 2.46084 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 17.3921 ms - Host latency: 17.5882 ms (end to end 17.6677 ms, enqueue 2.61152 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 16.8009 ms - Host latency: 16.9915 ms (end to end 17.0077 ms, enqueue 2.47915 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 17.0613 ms - Host latency: 17.2539 ms (end to end 17.3023 ms, enqueue 2.38882 ms)
[02/17/2023-18:00:35] [I] Average on 10 runs - GPU latency: 16.9947 ms - Host latency: 17.1867 ms (end to end 17.1963 ms, enqueue 2.37808 ms)
[02/17/2023-18:00:35] [I] 
[02/17/2023-18:00:35] [I] === Performance summary ===
[02/17/2023-18:00:35] [I] Throughput: 60.1579 qps
[02/17/2023-18:00:35] [I] Latency: min = 15.3619 ms, max = 23.6719 ms, mean = 16.5816 ms, median = 16.6302 ms, percentile(99%) = 21.4287 ms
[02/17/2023-18:00:35] [I] End-to-End Host Latency: min = 15.3732 ms, max = 23.6792 ms, mean = 16.6229 ms, median = 16.6866 ms, percentile(99%) = 21.4717 ms
[02/17/2023-18:00:35] [I] Enqueue Time: min = 2.01562 ms, max = 4.52274 ms, mean = 2.98968 ms, median = 2.63354 ms, percentile(99%) = 4.31274 ms
[02/17/2023-18:00:35] [I] H2D Latency: min = 0.133057 ms, max = 0.177734 ms, mean = 0.13982 ms, median = 0.138397 ms, percentile(99%) = 0.162476 ms
[02/17/2023-18:00:35] [I] GPU Compute Time: min = 15.172 ms, max = 23.4729 ms, mean = 16.3881 ms, median = 16.4409 ms, percentile(99%) = 21.22 ms
[02/17/2023-18:00:35] [I] D2H Latency: min = 0.048584 ms, max = 0.059021 ms, mean = 0.053661 ms, median = 0.0534668 ms, percentile(99%) = 0.0588379 ms
[02/17/2023-18:00:35] [I] Total Host Walltime: 3.02537 s
[02/17/2023-18:00:35] [I] Total GPU Compute Time: 2.98264 s
[02/17/2023-18:00:35] [I] Explanations of the performance metrics are printed in the verbose logs.
[02/17/2023-18:00:35] [I] 
&&&& PASSED TensorRT.trtexec [TensorRT v8201] # /usr/src/tensorrt/bin/trtexec --onnx=/media/edp-agx/SDC/Models/best.onnx --fp16

INT8 output when running trtexec:

/usr/src/tensorrt/bin/trtexec --onnx=/media/edp-agx/SDC/Models/INT8/best_bs32_int8.onnx --int8
&&&& RUNNING TensorRT.trtexec [TensorRT v8201] # /usr/src/tensorrt/bin/trtexec --onnx=/media/edp-agx/SDC/Models/INT8/best_bs32_int8.onnx --int8
[02/17/2023-17:38:35] [I] === Model Options ===
[02/17/2023-17:38:35] [I] Format: ONNX
[02/17/2023-17:38:35] [I] Model: /media/edp-agx/SDC/Models/INT8/best_bs32_int8.onnx
[02/17/2023-17:38:35] [I] Output:
[02/17/2023-17:38:35] [I] === Build Options ===
[02/17/2023-17:38:35] [I] Max batch: explicit batch
[02/17/2023-17:38:35] [I] Workspace: 16 MiB
[02/17/2023-17:38:35] [I] minTiming: 1
[02/17/2023-17:38:35] [I] avgTiming: 8
[02/17/2023-17:38:35] [I] Precision: FP32+INT8
[02/17/2023-17:38:35] [I] Calibration: Dynamic
[02/17/2023-17:38:35] [I] Refit: Disabled
[02/17/2023-17:38:35] [I] Sparsity: Disabled
[02/17/2023-17:38:35] [I] Safe mode: Disabled
[02/17/2023-17:38:35] [I] DirectIO mode: Disabled
[02/17/2023-17:38:35] [I] Restricted mode: Disabled
[02/17/2023-17:38:35] [I] Save engine: 
[02/17/2023-17:38:35] [I] Load engine: 
[02/17/2023-17:38:35] [I] Profiling verbosity: 0
[02/17/2023-17:38:35] [I] Tactic sources: Using default tactic sources
[02/17/2023-17:38:35] [I] timingCacheMode: local
[02/17/2023-17:38:35] [I] timingCacheFile: 
[02/17/2023-17:38:35] [I] Input(s)s format: fp32:CHW
[02/17/2023-17:38:35] [I] Output(s)s format: fp32:CHW
[02/17/2023-17:38:35] [I] Input build shapes: model
[02/17/2023-17:38:35] [I] Input calibration shapes: model
[02/17/2023-17:38:35] [I] === System Options ===
[02/17/2023-17:38:35] [I] Device: 0
[02/17/2023-17:38:35] [I] DLACore: 
[02/17/2023-17:38:35] [I] Plugins:
[02/17/2023-17:38:35] [I] === Inference Options ===
[02/17/2023-17:38:35] [I] Batch: Explicit
[02/17/2023-17:38:35] [I] Input inference shapes: model
[02/17/2023-17:38:35] [I] Iterations: 10
[02/17/2023-17:38:35] [I] Duration: 3s (+ 200ms warm up)
[02/17/2023-17:38:35] [I] Sleep time: 0ms
[02/17/2023-17:38:35] [I] Idle time: 0ms
[02/17/2023-17:38:35] [I] Streams: 1
[02/17/2023-17:38:35] [I] ExposeDMA: Disabled
[02/17/2023-17:38:35] [I] Data transfers: Enabled
[02/17/2023-17:38:35] [I] Spin-wait: Disabled
[02/17/2023-17:38:35] [I] Multithreading: Disabled
[02/17/2023-17:38:35] [I] CUDA Graph: Disabled
[02/17/2023-17:38:35] [I] Separate profiling: Disabled
[02/17/2023-17:38:35] [I] Time Deserialize: Disabled
[02/17/2023-17:38:35] [I] Time Refit: Disabled
[02/17/2023-17:38:35] [I] Skip inference: Disabled
[02/17/2023-17:38:35] [I] Inputs:
[02/17/2023-17:38:35] [I] === Reporting Options ===
[02/17/2023-17:38:35] [I] Verbose: Disabled
[02/17/2023-17:38:35] [I] Averages: 10 inferences
[02/17/2023-17:38:35] [I] Percentile: 99
[02/17/2023-17:38:35] [I] Dump refittable layers:Disabled
[02/17/2023-17:38:35] [I] Dump output: Disabled
[02/17/2023-17:38:35] [I] Profile: Disabled
[02/17/2023-17:38:35] [I] Export timing to JSON file: 
[02/17/2023-17:38:35] [I] Export output to JSON file: 
[02/17/2023-17:38:35] [I] Export profile to JSON file: 
[02/17/2023-17:38:35] [I] 
[02/17/2023-17:38:35] [I] === Device Information ===
[02/17/2023-17:38:35] [I] Selected Device: Xavier
[02/17/2023-17:38:35] [I] Compute Capability: 7.2
[02/17/2023-17:38:35] [I] SMs: 8
[02/17/2023-17:38:35] [I] Compute Clock Rate: 1.377 GHz
[02/17/2023-17:38:35] [I] Device Global Memory: 31920 MiB
[02/17/2023-17:38:35] [I] Shared Memory per SM: 96 KiB
[02/17/2023-17:38:35] [I] Memory Bus Width: 256 bits (ECC disabled)
[02/17/2023-17:38:35] [I] Memory Clock Rate: 1.377 GHz
[02/17/2023-17:38:35] [I] 
[02/17/2023-17:38:35] [I] TensorRT version: 8.2.1
[02/17/2023-17:38:37] [I] [TRT] [MemUsageChange] Init CUDA: CPU +362, GPU +0, now: CPU 381, GPU 16291 (MiB)
[02/17/2023-17:38:37] [I] [TRT] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 381 MiB, GPU 16291 MiB
[02/17/2023-17:38:37] [I] [TRT] [MemUsageSnapshot] End constructing builder kernel library: CPU 486 MiB, GPU 16396 MiB
[02/17/2023-17:38:37] [I] Start parsing network model
[02/17/2023-17:38:37] [I] [TRT] ----------------------------------------------------------------
[02/17/2023-17:38:37] [I] [TRT] Input filename:   /media/edp-agx/SDC/Models/INT8/best_bs32_int8.onnx
[02/17/2023-17:38:37] [I] [TRT] ONNX IR version:  0.0.7
[02/17/2023-17:38:37] [I] [TRT] Opset version:    13
[02/17/2023-17:38:37] [I] [TRT] Producer name:    pytorch
[02/17/2023-17:38:37] [I] [TRT] Producer version: 1.10
[02/17/2023-17:38:37] [I] [TRT] Domain:           
[02/17/2023-17:38:37] [I] [TRT] Model version:    0
[02/17/2023-17:38:37] [I] [TRT] Doc string:       
[02/17/2023-17:38:37] [I] [TRT] ----------------------------------------------------------------
[02/17/2023-17:38:37] [W] [TRT] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[02/17/2023-17:38:40] [W] [TRT] onnx2trt_utils.cpp:392: One or more weights outside the range of INT32 was clamped
[02/17/2023-17:38:42] [W] [TRT] Output type must be INT32 for shape outputs
[02/17/2023-17:38:42] [I] Finish parsing network model
[02/17/2023-17:38:42] [I] FP32 and INT8 precisions have been specified - more performance might be enabled by additionally specifying --fp16 or --best
[02/17/2023-17:38:43] [W] [TRT] Calibrator won't be used in explicit precision mode. Use quantization aware training to generate network with Quantize/Dequantize nodes.
[02/17/2023-17:38:44] [I] [TRT] ---------- Layers Running on DLA ----------
[02/17/2023-17:38:44] [I] [TRT] ---------- Layers Running on GPU ----------
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_2_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.0.conv.weight + QuantizeLinear_7_quantize_scale_node + Conv_9
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1731
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1724
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1717
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1710
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1703
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1606
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1599
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1592
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1585
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1578
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1455
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1448
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1441
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1434
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1427
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1330
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1323
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1316
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1309
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1302
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1179
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1172
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1165
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1158
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1151
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1054
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1047
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1040
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1033
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Range_1026
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_10, Mul_11)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.1.conv.weight + QuantizeLinear_19_quantize_scale_node + Conv_21
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_22, Mul_23)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.2.cv2.conv.weight + QuantizeLinear_93_quantize_scale_node + Conv_95
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.2.cv1.conv.weight + QuantizeLinear_31_quantize_scale_node + Conv_33
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_34, Mul_35)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_38_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.2.m.0.cv1.conv.weight + QuantizeLinear_43_quantize_scale_node + Conv_45
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_46, Mul_47)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.2.m.0.cv2.conv.weight + QuantizeLinear_55_quantize_scale_node + Conv_57
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_58, Mul_59), Add_60)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_63_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.2.m.1.cv1.conv.weight + QuantizeLinear_68_quantize_scale_node + Conv_70
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_71, Mul_72)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.2.m.1.cv2.conv.weight + QuantizeLinear_80_quantize_scale_node + Conv_82
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_96, Mul_97)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_83, Mul_84), Add_85)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] 423 copy
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.2.cv3.conv.weight + QuantizeLinear_106_quantize_scale_node + Conv_108
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_109, Mul_110)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.3.conv.weight + QuantizeLinear_118_quantize_scale_node + Conv_120
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_121, Mul_122)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.cv2.conv.weight + QuantizeLinear_242_quantize_scale_node + Conv_244
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.cv1.conv.weight + QuantizeLinear_130_quantize_scale_node + Conv_132
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_133, Mul_134)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_137_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.m.0.cv1.conv.weight + QuantizeLinear_142_quantize_scale_node + Conv_144
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_145, Mul_146)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.m.0.cv2.conv.weight + QuantizeLinear_154_quantize_scale_node + Conv_156
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_157, Mul_158), Add_159)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_162_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.m.1.cv1.conv.weight + QuantizeLinear_167_quantize_scale_node + Conv_169
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_170, Mul_171)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.m.1.cv2.conv.weight + QuantizeLinear_179_quantize_scale_node + Conv_181
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_182, Mul_183), Add_184)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_187_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.m.2.cv1.conv.weight + QuantizeLinear_192_quantize_scale_node + Conv_194
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_195, Mul_196)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.m.2.cv2.conv.weight + QuantizeLinear_204_quantize_scale_node + Conv_206
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_207, Mul_208), Add_209)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_212_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.m.3.cv1.conv.weight + QuantizeLinear_217_quantize_scale_node + Conv_219
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_220, Mul_221)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.m.3.cv2.conv.weight + QuantizeLinear_229_quantize_scale_node + Conv_231
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_245, Mul_246)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_232, Mul_233), Add_234)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.4.cv3.conv.weight + QuantizeLinear_255_quantize_scale_node + Conv_257
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_258, Mul_259)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_262_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.5.conv.weight + QuantizeLinear_267_quantize_scale_node + Conv_269
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_270, Mul_271)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.cv2.conv.weight + QuantizeLinear_441_quantize_scale_node + Conv_443
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.cv1.conv.weight + QuantizeLinear_279_quantize_scale_node + Conv_281
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_282, Mul_283)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_286_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.0.cv1.conv.weight + QuantizeLinear_291_quantize_scale_node + Conv_293
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_294, Mul_295)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.0.cv2.conv.weight + QuantizeLinear_303_quantize_scale_node + Conv_305
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_306, Mul_307), Add_308)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_311_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.1.cv1.conv.weight + QuantizeLinear_316_quantize_scale_node + Conv_318
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_319, Mul_320)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.1.cv2.conv.weight + QuantizeLinear_328_quantize_scale_node + Conv_330
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_331, Mul_332), Add_333)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_336_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.2.cv1.conv.weight + QuantizeLinear_341_quantize_scale_node + Conv_343
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_344, Mul_345)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.2.cv2.conv.weight + QuantizeLinear_353_quantize_scale_node + Conv_355
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_356, Mul_357), Add_358)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_361_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.3.cv1.conv.weight + QuantizeLinear_366_quantize_scale_node + Conv_368
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_369, Mul_370)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.3.cv2.conv.weight + QuantizeLinear_378_quantize_scale_node + Conv_380
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_381, Mul_382), Add_383)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_386_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.4.cv1.conv.weight + QuantizeLinear_391_quantize_scale_node + Conv_393
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_394, Mul_395)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.4.cv2.conv.weight + QuantizeLinear_403_quantize_scale_node + Conv_405
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_406, Mul_407), Add_408)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_411_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.5.cv1.conv.weight + QuantizeLinear_416_quantize_scale_node + Conv_418
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_419, Mul_420)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.m.5.cv2.conv.weight + QuantizeLinear_428_quantize_scale_node + Conv_430
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_444, Mul_445)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_431, Mul_432), Add_433)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.6.cv3.conv.weight + QuantizeLinear_454_quantize_scale_node + Conv_456
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_457, Mul_458)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_461_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.7.conv.weight + QuantizeLinear_466_quantize_scale_node + Conv_468
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_469, Mul_470)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.8.cv2.conv.weight + QuantizeLinear_540_quantize_scale_node + Conv_542
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.8.cv1.conv.weight + QuantizeLinear_478_quantize_scale_node + Conv_480
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_481, Mul_482)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_485_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.8.m.0.cv1.conv.weight + QuantizeLinear_490_quantize_scale_node + Conv_492
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_493, Mul_494)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.8.m.0.cv2.conv.weight + QuantizeLinear_502_quantize_scale_node + Conv_504
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_505, Mul_506), Add_507)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_510_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.8.m.1.cv1.conv.weight + QuantizeLinear_515_quantize_scale_node + Conv_517
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_518, Mul_519)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.8.m.1.cv2.conv.weight + QuantizeLinear_527_quantize_scale_node + Conv_529
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_543, Mul_544)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(PWN(Sigmoid_530, Mul_531), Add_532)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.8.cv3.conv.weight + QuantizeLinear_553_quantize_scale_node + Conv_555
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_556, Mul_557)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.9.cv1.conv.weight + QuantizeLinear_565_quantize_scale_node + Conv_567
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_568, Mul_569)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] MaxPool_570
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] MaxPool_571
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] MaxPool_572
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_576_quantize_scale_node_clone_3
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_576_quantize_scale_node_clone_2
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_576_quantize_scale_node_clone_1
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_576_quantize_scale_node_clone_0
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.9.cv2.conv.weight + QuantizeLinear_581_quantize_scale_node + Conv_583
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_584, Mul_585)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.10.conv.weight + QuantizeLinear_593_quantize_scale_node + Conv_595
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_596, Mul_597)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Resize_598
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_602_quantize_scale_node_clone_1
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_602_quantize_scale_node_clone_0
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.13.cv2.conv.weight + QuantizeLinear_667_quantize_scale_node + Conv_669
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.13.cv1.conv.weight + QuantizeLinear_607_quantize_scale_node + Conv_609
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_610, Mul_611)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.13.m.0.cv1.conv.weight + QuantizeLinear_619_quantize_scale_node + Conv_621
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_622, Mul_623)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.13.m.0.cv2.conv.weight + QuantizeLinear_631_quantize_scale_node + Conv_633
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_634, Mul_635)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.13.m.1.cv1.conv.weight + QuantizeLinear_643_quantize_scale_node + Conv_645
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_646, Mul_647)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.13.m.1.cv2.conv.weight + QuantizeLinear_655_quantize_scale_node + Conv_657
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_670, Mul_671)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_658, Mul_659)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.13.cv3.conv.weight + QuantizeLinear_680_quantize_scale_node + Conv_682
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_683, Mul_684)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.14.conv.weight + QuantizeLinear_692_quantize_scale_node + Conv_694
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_695, Mul_696)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Resize_697
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_701_quantize_scale_node_clone_1
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_701_quantize_scale_node_clone_0
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.17.cv2.conv.weight + QuantizeLinear_766_quantize_scale_node + Conv_768
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.17.cv1.conv.weight + QuantizeLinear_706_quantize_scale_node + Conv_708
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_709, Mul_710)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.17.m.0.cv1.conv.weight + QuantizeLinear_718_quantize_scale_node + Conv_720
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_721, Mul_722)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.17.m.0.cv2.conv.weight + QuantizeLinear_730_quantize_scale_node + Conv_732
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_733, Mul_734)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.17.m.1.cv1.conv.weight + QuantizeLinear_742_quantize_scale_node + Conv_744
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_745, Mul_746)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.17.m.1.cv2.conv.weight + QuantizeLinear_754_quantize_scale_node + Conv_756
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_769, Mul_770)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_757, Mul_758)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.17.cv3.conv.weight + QuantizeLinear_779_quantize_scale_node + Conv_781
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_782, Mul_783)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_786_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.18.conv.weight + QuantizeLinear_791_quantize_scale_node + Conv_793
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Conv_980
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_794, Mul_795)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_799_quantize_scale_node_clone_1
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.20.cv2.conv.weight + QuantizeLinear_864_quantize_scale_node + Conv_866
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.20.cv1.conv.weight + QuantizeLinear_804_quantize_scale_node + Conv_806
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_807, Mul_808)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.20.m.0.cv1.conv.weight + QuantizeLinear_816_quantize_scale_node + Conv_818
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_819, Mul_820)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.20.m.0.cv2.conv.weight + QuantizeLinear_828_quantize_scale_node + Conv_830
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_831, Mul_832)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.20.m.1.cv1.conv.weight + QuantizeLinear_840_quantize_scale_node + Conv_842
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_843, Mul_844)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.20.m.1.cv2.conv.weight + QuantizeLinear_852_quantize_scale_node + Conv_854
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_867, Mul_868)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_855, Mul_856)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.20.cv3.conv.weight + QuantizeLinear_877_quantize_scale_node + Conv_879
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_880, Mul_881)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_884_quantize_scale_node
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.21.conv.weight + QuantizeLinear_889_quantize_scale_node + Conv_891
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Conv_1256
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_892, Mul_893)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] QuantizeLinear_897_quantize_scale_node_clone_1
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.23.cv2.conv.weight + QuantizeLinear_962_quantize_scale_node + Conv_964
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.23.cv1.conv.weight + QuantizeLinear_902_quantize_scale_node + Conv_904
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_905, Mul_906)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.23.m.0.cv1.conv.weight + QuantizeLinear_914_quantize_scale_node + Conv_916
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_917, Mul_918)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.23.m.0.cv2.conv.weight + QuantizeLinear_926_quantize_scale_node + Conv_928
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_929, Mul_930)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.23.m.1.cv1.conv.weight + QuantizeLinear_938_quantize_scale_node + Conv_940
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_941, Mul_942)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.23.m.1.cv2.conv.weight + QuantizeLinear_950_quantize_scale_node + Conv_952
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_965, Mul_966)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_953, Mul_954)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] model.model.23.cv3.conv.weight + QuantizeLinear_975_quantize_scale_node + Conv_977
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] PWN(Sigmoid_978, Mul_979)
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] Conv_1532
[02/17/2023-17:38:44] [I] [TRT] [GpuLayer] {ForeignNode[Reshape_997 + Transpose_998...Concat_1808]}
[02/17/2023-17:38:45] [I] [TRT] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +226, GPU +223, now: CPU 884, GPU 16799 (MiB)
[02/17/2023-17:38:46] [I] [TRT] [MemUsageChange] Init cuDNN: CPU +307, GPU +306, now: CPU 1191, GPU 17105 (MiB)
[02/17/2023-17:38:46] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.
[02/17/2023-17:45:14] [I] [TRT] Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[02/17/2023-17:45:21] [W] [TRT] Skipping tactic 0 due to insuficient memory on requested size of 100454400 detected for tactic 0.
[02/17/2023-17:45:21] [E] Error[10]: [optimizer.cpp::computeCosts::2011] Error Code 10: Internal Error (Could not find any implementation for node {ForeignNode[Reshape_997 + Transpose_998...Concat_1808]}.)
[02/17/2023-17:45:21] [E] Error[2]: [builder.cpp::buildSerializedNetwork::609] Error Code 2: Internal Error (Assertion enginePtr != nullptr failed. )
[02/17/2023-17:45:21] [E] Engine could not be created from network
[02/17/2023-17:45:21] [E] Building engine failed
[02/17/2023-17:45:21] [E] Failed to create engine from model.
[02/17/2023-17:45:21] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8201] # /usr/src/tensorrt/bin/trtexec --onnx=/media/edp-agx/SDC/Models/INT8/best_bs32_int8.onnx --int8
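
(Editor's note, a hedged sketch: just before the failure, the log reports "Some tactics do not have sufficient workspace memory to run" and a tactic skipped for insufficient memory, so on this TensorRT 8.2 setup one thing worth trying is a larger builder workspace. The path is the one from the log; the 4096 MiB value is illustrative.)

```shell
# Retry the INT8 build with a larger builder workspace, since the log above
# reports tactics skipped for insufficient memory. 4096 MiB is illustrative.
/usr/src/tensorrt/bin/trtexec \
    --onnx=/media/edp-agx/SDC/Models/INT8/best_bs32_int8.onnx \
    --int8 \
    --workspace=4096
# On TensorRT >= 8.4 the equivalent flag is:
#   --memPoolSize=workspace:4096M
```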

Hi,

Could you try the command on our latest TensorRT 8.5 release (JetPack 5.1)?
Thanks.

Hi,

I updated the Jetson Xavier Developer Kit to the newest available version, 5.1-b147. This JetPack version uses TensorRT 8.5.2.2.

I tried the commands above and they worked out just fine. Even so, I ran into another problem in the same area.

I used the following commands to create engines from the PyTorch ONNX model. The model is a YOLOv5s and was already trained in FP16. Now I want to create INT8 and FP16 engines.

/usr/src/tensorrt/bin/trtexec --onnx=/path/to/onnx --int8 --saveEngine=path
/usr/src/tensorrt/bin/trtexec --onnx=/path/to/onnx --fp16 --saveEngine=path

The created engines have the expected sizes, i.e. the INT8 engine is half the size of the FP16 engine. The problem now is that the inference time of the INT8 engine is only slightly better than that of the FP16 engine (FP16 engine: 1.2 s; INT8 engine: 1.1 s).

Do you have any idea where the problem could be?

Yours
Patrick

Hi,

This depends on the layers used in the model.
It’s recommended to enable verbose logging with --verbose to see the layer placement in detail.
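
For example, a verbose build combined with a per-layer profile makes the placement visible (these are standard trtexec flags; the paths are illustrative):

```shell
# Build with verbose logging and dump a per-layer profile so you can see
# which layers actually run in INT8 and where the time goes.
# --separateProfileRun keeps profiling overhead out of the end-to-end timings.
/usr/src/tensorrt/bin/trtexec --onnx=/path/to/onnx --int8 \
    --verbose --dumpProfile --separateProfileRun 2>&1 | tee int8_build.log
```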

Based on our doc here: https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#reduced-precision

There are three precision flags: FP16, INT8, and TF32, and they may be enabled independently. Note that TensorRT will still choose a higher-precision kernel if it results in overall lower runtime, or if no low-precision implementation exists.
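
Since TensorRT may silently keep layers in higher precision, a quick sanity check is to tally the precision keywords in a saved --verbose build log. This is an unofficial sketch: the exact wording of the log lines varies between TensorRT releases, so the regular expression below is an assumption, not a stable interface.

```python
import re
from collections import Counter

def count_precisions(log_text: str) -> Counter:
    """Tally precision keywords (Int8/Half/Float) across log lines.

    Assumption: trtexec --verbose mentions the chosen precision with one of
    these keywords on its tactic/layer lines; adjust the pattern for your
    TensorRT release if needed.
    """
    counts = Counter()
    for line in log_text.splitlines():
        match = re.search(r"\b(Int8|Half|Float)\b", line)
        if match:
            counts[match.group(1)] += 1
    return counts
```

If most lines report Half or Float rather than Int8, the INT8 build is not actually exercising the int8 Tensor Core paths, which would explain the small speedup.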

Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.

@Patrick_1234 You may also be interested in looking into the Deep Learning Accelerator (DLA).
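
(Editor's note: the build log above shows the "Layers Running on DLA" section empty, so everything ran on the GPU. As a hedged sketch with real trtexec flags and an illustrative path, you can request a DLA core explicitly and allow GPU fallback for unsupported layers:)

```shell
# Offload supported layers to DLA core 0, falling back to the GPU for
# anything the DLA cannot run.
/usr/src/tensorrt/bin/trtexec --onnx=/path/to/onnx --int8 \
    --useDLACore=0 --allowGPUFallback
```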