&&&& RUNNING TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec --onnx=model_resnetunet.onnx --shapes=input:1x3x576x960 --saveEngine=model_resnetunet.engine --exportProfile=model_resnetunet.json --int8 --useDLACore=0 --allowGPUFallback --useSpinWait --separateProfileRun
[11/03/2022-12:10:16] [I] === Model Options ===
[11/03/2022-12:10:16] [I] Format: ONNX
[11/03/2022-12:10:16] [I] Model: model_resnetunet.onnx
[11/03/2022-12:10:16] [I] Output:
[11/03/2022-12:10:16] [I] === Build Options ===
[11/03/2022-12:10:16] [I] Max batch: explicit batch
[11/03/2022-12:10:16] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default
[11/03/2022-12:10:16] [I] minTiming: 1
[11/03/2022-12:10:16] [I] avgTiming: 8
[11/03/2022-12:10:16] [I] Precision: FP32+INT8
[11/03/2022-12:10:16] [I] LayerPrecisions:
[11/03/2022-12:10:16] [I] Calibration: Dynamic
[11/03/2022-12:10:16] [I] Refit: Disabled
[11/03/2022-12:10:16] [I] Sparsity: Disabled
[11/03/2022-12:10:16] [I] Safe mode: Disabled
[11/03/2022-12:10:16] [I] DirectIO mode: Disabled
[11/03/2022-12:10:16] [I] Restricted mode: Disabled
[11/03/2022-12:10:16] [I] Build only: Disabled
[11/03/2022-12:10:16] [I] Save engine: model_resnetunet.engine
[11/03/2022-12:10:16] [I] Load engine:
[11/03/2022-12:10:16] [I] Profiling verbosity: 0
[11/03/2022-12:10:16] [I] Tactic sources: Using default tactic sources
[11/03/2022-12:10:16] [I] timingCacheMode: local
[11/03/2022-12:10:16] [I] timingCacheFile:
[11/03/2022-12:10:16] [I] Input(s)s format: fp32:CHW
[11/03/2022-12:10:16] [I] Output(s)s format: fp32:CHW
[11/03/2022-12:10:16] [I] Input build shape: input=1x3x576x960+1x3x576x960+1x3x576x960
[11/03/2022-12:10:16] [I] Input calibration shapes: model
[11/03/2022-12:10:16] [I] === System Options ===
[11/03/2022-12:10:16] [I] Device: 0
[11/03/2022-12:10:16] [I] DLACore: 0(With GPU fallback)
[11/03/2022-12:10:16] [I] Plugins:
[11/03/2022-12:10:16] [I] === Inference Options ===
[11/03/2022-12:10:16] [I] Batch: Explicit
[11/03/2022-12:10:16] [I] Input inference shape: input=1x3x576x960
[11/03/2022-12:10:16] [I] Iterations: 10
[11/03/2022-12:10:16] [I] Duration: 3s (+ 200ms warm up)
[11/03/2022-12:10:16] [I] Sleep time: 0ms
[11/03/2022-12:10:16] [I] Idle time: 0ms
[11/03/2022-12:10:16] [I] Streams: 1
[11/03/2022-12:10:16] [I] ExposeDMA: Disabled
[11/03/2022-12:10:16] [I] Data transfers: Enabled
[11/03/2022-12:10:16] [I] Spin-wait: Enabled
[11/03/2022-12:10:16] [I] Multithreading: Disabled
[11/03/2022-12:10:16] [I] CUDA Graph: Disabled
[11/03/2022-12:10:16] [I] Separate profiling: Enabled
[11/03/2022-12:10:16] [I] Time Deserialize: Disabled
[11/03/2022-12:10:16] [I] Time Refit: Disabled
[11/03/2022-12:10:16] [I] Inputs:
[11/03/2022-12:10:16] [I] === Reporting Options ===
[11/03/2022-12:10:16] [I] Verbose: Disabled
[11/03/2022-12:10:16] [I] Averages: 10 inferences
[11/03/2022-12:10:16] [I] Percentile: 99
[11/03/2022-12:10:16] [I] Dump refittable layers: Disabled
[11/03/2022-12:10:16] [I] Dump output: Disabled
[11/03/2022-12:10:16] [I] Profile: Disabled
[11/03/2022-12:10:16] [I] Export timing to JSON file:
[11/03/2022-12:10:16] [I] Export output to JSON file:
[11/03/2022-12:10:16] [I] Export profile to JSON file: model_resnetunet.json
[11/03/2022-12:10:16] [I]
[11/03/2022-12:10:16] [I] === Device Information ===
[11/03/2022-12:10:16] [I] Selected Device: Orin
[11/03/2022-12:10:16] [I] Compute Capability: 8.7
[11/03/2022-12:10:16] [I] SMs: 16
[11/03/2022-12:10:16] [I] Compute Clock Rate: 1.3 GHz
[11/03/2022-12:10:16] [I] Device Global Memory: 30535 MiB
[11/03/2022-12:10:16] [I] Shared Memory per SM: 164 KiB
[11/03/2022-12:10:16] [I] Memory Bus Width: 128 bits (ECC disabled)
[11/03/2022-12:10:16] [I] Memory Clock Rate: 1.3 GHz
[11/03/2022-12:10:16] [I]
[11/03/2022-12:10:16] [I] TensorRT version: 8.4.1
[11/03/2022-12:10:16] [I] [TRT] [MemUsageChange] Init CUDA: CPU +218, GPU +0, now: CPU 242, GPU 7178 (MiB)
[11/03/2022-12:10:19] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +351, GPU +331, now: CPU 612, GPU 7527 (MiB)
[11/03/2022-12:10:19] [I] Start parsing network model
[11/03/2022-12:10:19] [I] [TRT] ----------------------------------------------------------------
[11/03/2022-12:10:19] [I] [TRT] Input filename: model_resnetunet.onnx
[11/03/2022-12:10:19] [I] [TRT] ONNX IR version: 0.0.6
[11/03/2022-12:10:19] [I] [TRT] Opset version: 11
[11/03/2022-12:10:19] [I] [TRT] Producer name: pytorch
[11/03/2022-12:10:19] [I] [TRT] Producer version: 1.12.0
[11/03/2022-12:10:19] [I] [TRT] Domain:
[11/03/2022-12:10:19] [I] [TRT] Model version: 0
[11/03/2022-12:10:19] [I] [TRT] Doc string:
[11/03/2022-12:10:19] [I] [TRT] ----------------------------------------------------------------
[11/03/2022-12:10:19] [I] Finish parsing network model
&&&& FAILED TensorRT.trtexec [TensorRT v8401] # /usr/src/tensorrt/bin/trtexec --onnx=model_resnetunet.onnx --shapes=input:1x3x576x960 --saveEngine=model_resnetunet.engine --exportProfile=model_resnetunet.json --int8 --useDLACore=0 --allowGPUFallback --useSpinWait --separateProfileRun
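
For reference, the same build configuration can also be expressed through the TensorRT Python builder API. The sketch below is only an illustration of how the trtexec flags above (--int8, --useDLACore=0, --allowGPUFallback, explicit-batch ONNX parsing) map onto the API; it is not taken from the failing run, and the output file handling and the calibration comment are assumptions on my part.

import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)

# Explicit-batch network, as trtexec uses for ONNX models ("Max batch: explicit batch").
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open("model_resnetunet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)            # --int8
config.set_flag(trt.BuilderFlag.GPU_FALLBACK)    # --allowGPUFallback
config.default_device_type = trt.DeviceType.DLA  # run layers on the DLA by default
config.DLA_core = 0                              # --useDLACore=0

# Assumption: a real INT8 deployment needs an IInt8Calibrator or per-tensor dynamic
# ranges here; trtexec without a calibration cache only uses placeholder scales.
# If the ONNX input were dynamic, an optimization profile pinning "input" to
# 1x3x576x960 would also be needed (the --shapes argument).

engine_bytes = builder.build_serialized_network(network, config)
if engine_bytes is None:
    raise SystemExit("Engine build failed")
with open("model_resnetunet.engine", "wb") as f:
    f.write(engine_bytes)

Building through the API with the logger at INFO (or VERBOSE) level keeps the builder's DLA diagnostics visible in the same console, which can help narrow down where a build like the one above stops.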