Parse ONNX file failed: Parameter check failed. Condition: allDimsGtEq(windowSize, 1) && volume(windowSize) < MAX_KERNEL_DIMS_PRODUCT(nbSpatialDims)

Description

I converted the .pb model to ONNX successfully and verified that the ONNX file runs correctly.

When I try to load the ONNX file into TensorRT, it returns the following errors:

root@rt:/usr/src/tensorrt/bin# ./trtexec --onnx=../data/iharm/iharm_onnx.onnx 
&&&& RUNNING TensorRT.trtexec [TensorRT v8001] # ./trtexec --onnx=../data/iharm/iharm_onnx.onnx
[05/13/2022-18:01:21] [I] === Model Options ===
[05/13/2022-18:01:21] [I] Format: ONNX
[05/13/2022-18:01:21] [I] Model: ../data/iharm/iharm_onnx.onnx
[05/13/2022-18:01:21] [I] Output:
[05/13/2022-18:01:21] [I] === Build Options ===
[05/13/2022-18:01:21] [I] Max batch: explicit
[05/13/2022-18:01:21] [I] Workspace: 16 MiB
[05/13/2022-18:01:21] [I] minTiming: 1
[05/13/2022-18:01:21] [I] avgTiming: 8
[05/13/2022-18:01:21] [I] Precision: FP32
[05/13/2022-18:01:21] [I] Calibration: 
[05/13/2022-18:01:21] [I] Refit: Disabled
[05/13/2022-18:01:21] [I] Sparsity: Disabled
[05/13/2022-18:01:21] [I] Safe mode: Disabled
[05/13/2022-18:01:21] [I] Restricted mode: Disabled
[05/13/2022-18:01:21] [I] Save engine: 
[05/13/2022-18:01:21] [I] Load engine: 
[05/13/2022-18:01:21] [I] NVTX verbosity: 0
[05/13/2022-18:01:21] [I] Tactic sources: Using default tactic sources
[05/13/2022-18:01:21] [I] timingCacheMode: local
[05/13/2022-18:01:21] [I] timingCacheFile: 
[05/13/2022-18:01:21] [I] Input(s)s format: fp32:CHW
[05/13/2022-18:01:21] [I] Output(s)s format: fp32:CHW
[05/13/2022-18:01:21] [I] Input build shapes: model
[05/13/2022-18:01:21] [I] Input calibration shapes: model
[05/13/2022-18:01:21] [I] === System Options ===
[05/13/2022-18:01:21] [I] Device: 0
[05/13/2022-18:01:21] [I] DLACore: 
[05/13/2022-18:01:21] [I] Plugins:
[05/13/2022-18:01:21] [I] === Inference Options ===
[05/13/2022-18:01:21] [I] Batch: Explicit
[05/13/2022-18:01:21] [I] Input inference shapes: model
[05/13/2022-18:01:21] [I] Iterations: 10
[05/13/2022-18:01:21] [I] Duration: 3s (+ 200ms warm up)
[05/13/2022-18:01:21] [I] Sleep time: 0ms
[05/13/2022-18:01:21] [I] Streams: 1
[05/13/2022-18:01:21] [I] ExposeDMA: Disabled
[05/13/2022-18:01:21] [I] Data transfers: Enabled
[05/13/2022-18:01:21] [I] Spin-wait: Disabled
[05/13/2022-18:01:21] [I] Multithreading: Disabled
[05/13/2022-18:01:21] [I] CUDA Graph: Disabled
[05/13/2022-18:01:21] [I] Separate profiling: Disabled
[05/13/2022-18:01:21] [I] Time Deserialize: Disabled
[05/13/2022-18:01:21] [I] Time Refit: Disabled
[05/13/2022-18:01:21] [I] Skip inference: Disabled
[05/13/2022-18:01:21] [I] Inputs:
[05/13/2022-18:01:21] [I] === Reporting Options ===
[05/13/2022-18:01:21] [I] Verbose: Disabled
[05/13/2022-18:01:21] [I] Averages: 10 inferences
[05/13/2022-18:01:21] [I] Percentile: 99
[05/13/2022-18:01:21] [I] Dump refittable layers:Disabled
[05/13/2022-18:01:21] [I] Dump output: Disabled
[05/13/2022-18:01:21] [I] Profile: Disabled
[05/13/2022-18:01:21] [I] Export timing to JSON file: 
[05/13/2022-18:01:21] [I] Export output to JSON file: 
[05/13/2022-18:01:21] [I] Export profile to JSON file: 
[05/13/2022-18:01:21] [I] 
[05/13/2022-18:01:21] [I] === Device Information ===
[05/13/2022-18:01:21] [I] Selected Device: Xavier
[05/13/2022-18:01:21] [I] Compute Capability: 7.2
[05/13/2022-18:01:21] [I] SMs: 8
[05/13/2022-18:01:21] [I] Compute Clock Rate: 1.377 GHz
[05/13/2022-18:01:21] [I] Device Global Memory: 15816 MiB
[05/13/2022-18:01:21] [I] Shared Memory per SM: 96 KiB
[05/13/2022-18:01:21] [I] Memory Bus Width: 256 bits (ECC disabled)
[05/13/2022-18:01:21] [I] Memory Clock Rate: 1.377 GHz
[05/13/2022-18:01:21] [I] 
[05/13/2022-18:01:21] [I] TensorRT version: 8001
[05/13/2022-18:01:22] [I] [TRT] [MemUsageChange] Init CUDA: CPU +353, GPU +0, now: CPU 371, GPU 5392 (MiB)
[05/13/2022-18:01:22] [I] Start parsing network model
[05/13/2022-18:01:22] [I] [TRT] ----------------------------------------------------------------
[05/13/2022-18:01:22] [I] [TRT] Input filename:   ../data/iharm/iharm_onnx.onnx
[05/13/2022-18:01:22] [I] [TRT] ONNX IR version:  0.0.6
[05/13/2022-18:01:22] [I] [TRT] Opset version:    12
[05/13/2022-18:01:22] [I] [TRT] Producer name:    pytorch
[05/13/2022-18:01:22] [I] [TRT] Producer version: 1.7
[05/13/2022-18:01:22] [I] [TRT] Domain:           
[05/13/2022-18:01:22] [I] [TRT] Model version:    0
[05/13/2022-18:01:22] [I] [TRT] Doc string:       
[05/13/2022-18:01:22] [I] [TRT] ----------------------------------------------------------------
[05/13/2022-18:01:22] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[05/13/2022-18:01:22] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[05/13/2022-18:01:22] [E] Error[3]: [network.cpp::addPoolingNd::884] Error Code 3: Internal Error (Parameter check failed at: optimizer/api/network.cpp::addPoolingNd::884, condition: allDimsGtEq(windowSize, 1) && volume(windowSize) < MAX_KERNEL_DIMS_PRODUCT(nbSpatialDims)
)
Segmentation fault (core dumped)
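
The check that fails is inside addPoolingNd, so the window size of some pooling layer appears to violate the constraint. For reference, here is a minimal sketch (using the standard onnx Python API; the file path is taken from the trtexec command above and is otherwise an assumption) that prints the kernel shapes of the pooling nodes in the graph, to see which window triggers the check:

import onnx

# Path assumed from the trtexec command above; adjust as needed.
model = onnx.load("../data/iharm/iharm_onnx.onnx")
for node in model.graph.node:
    if node.op_type in ("MaxPool", "AveragePool"):
        # kernel_shape holds the pooling window dimensions that addPoolingNd validates.
        kernel = [list(a.ints) for a in node.attribute if a.name == "kernel_shape"]
        print(node.name, node.op_type, kernel)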

Environment

TensorRT Version: 8.0.1.6
GPU Type:
Nvidia Driver Version: AGX Xavier JetPack 4.6
CUDA Version: 10.2.89
CUDNN Version: 8.2.1
Operating System + Version: ubuntu 18.04
Python Version (if applicable): Python 3
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):

Relevant Files

iharm_onnx.onnx (8.6 MB)

Steps To Reproduce

root@rt:/usr/src/tensorrt/bin# ./trtexec --onnx=iharm_onnx.onnx

Hi,
Could you please share the ONNX model and the script, if not already shared, so that we can assist you better?
Meanwhile, you can try a few things:

  1. Validate your model with the snippet below (a usage example follows the list).

check_model.py

import sys
import onnx

filename = sys.argv[1]  # path to the ONNX model to validate
model = onnx.load(filename)
onnx.checker.check_model(model)

  2. Try running your model with the trtexec command.
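
For example, assuming the snippet is saved as check_model.py next to the model file, it can be run as:

python3 check_model.py iharm_onnx.onnx

If the checker raises no exception, the model passes ONNX's structural validation.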

If you are still facing the issue, please share the trtexec --verbose log for further debugging.
Thanks!

Thank you for your reply. The problem persists after following your steps.
I checked the ONNX model with onnx.checker and it passes.
As requested, here is the complete log collected with --verbose:

&&&& RUNNING TensorRT.trtexec [TensorRT v8001] # ./trtexec --onnx=../data/iharm/iharm_onnx.onnx --verbose
[05/16/2022-11:14:32] [I] === Model Options ===
[05/16/2022-11:14:32] [I] Format: ONNX
[05/16/2022-11:14:32] [I] Model: ../data/iharm/iharm_onnx.onnx
[05/16/2022-11:14:32] [I] Output:
[05/16/2022-11:14:32] [I] === Build Options ===
[05/16/2022-11:14:32] [I] Max batch: explicit
[05/16/2022-11:14:32] [I] Workspace: 16 MiB
[05/16/2022-11:14:32] [I] minTiming: 1
[05/16/2022-11:14:32] [I] avgTiming: 8
[05/16/2022-11:14:32] [I] Precision: FP32
[05/16/2022-11:14:32] [I] Calibration: 
[05/16/2022-11:14:32] [I] Refit: Disabled
[05/16/2022-11:14:32] [I] Sparsity: Disabled
[05/16/2022-11:14:32] [I] Safe mode: Disabled
[05/16/2022-11:14:32] [I] Restricted mode: Disabled
[05/16/2022-11:14:32] [I] Save engine: 
[05/16/2022-11:14:32] [I] Load engine: 
[05/16/2022-11:14:32] [I] NVTX verbosity: 0
[05/16/2022-11:14:32] [I] Tactic sources: Using default tactic sources
[05/16/2022-11:14:32] [I] timingCacheMode: local
[05/16/2022-11:14:32] [I] timingCacheFile: 
[05/16/2022-11:14:32] [I] Input(s)s format: fp32:CHW
[05/16/2022-11:14:32] [I] Output(s)s format: fp32:CHW
[05/16/2022-11:14:32] [I] Input build shapes: model
[05/16/2022-11:14:32] [I] Input calibration shapes: model
[05/16/2022-11:14:32] [I] === System Options ===
[05/16/2022-11:14:32] [I] Device: 0
[05/16/2022-11:14:32] [I] DLACore: 
[05/16/2022-11:14:32] [I] Plugins:
[05/16/2022-11:14:32] [I] === Inference Options ===
[05/16/2022-11:14:32] [I] Batch: Explicit
[05/16/2022-11:14:32] [I] Input inference shapes: model
[05/16/2022-11:14:32] [I] Iterations: 10
[05/16/2022-11:14:32] [I] Duration: 3s (+ 200ms warm up)
[05/16/2022-11:14:32] [I] Sleep time: 0ms
[05/16/2022-11:14:32] [I] Streams: 1
[05/16/2022-11:14:32] [I] ExposeDMA: Disabled
[05/16/2022-11:14:32] [I] Data transfers: Enabled
[05/16/2022-11:14:32] [I] Spin-wait: Disabled
[05/16/2022-11:14:32] [I] Multithreading: Disabled
[05/16/2022-11:14:32] [I] CUDA Graph: Disabled
[05/16/2022-11:14:32] [I] Separate profiling: Disabled
[05/16/2022-11:14:32] [I] Time Deserialize: Disabled
[05/16/2022-11:14:32] [I] Time Refit: Disabled
[05/16/2022-11:14:32] [I] Skip inference: Disabled
[05/16/2022-11:14:32] [I] Inputs:
[05/16/2022-11:14:32] [I] === Reporting Options ===
[05/16/2022-11:14:32] [I] Verbose: Enabled
[05/16/2022-11:14:32] [I] Averages: 10 inferences
[05/16/2022-11:14:32] [I] Percentile: 99
[05/16/2022-11:14:32] [I] Dump refittable layers:Disabled
[05/16/2022-11:14:32] [I] Dump output: Disabled
[05/16/2022-11:14:32] [I] Profile: Disabled
[05/16/2022-11:14:32] [I] Export timing to JSON file: 
[05/16/2022-11:14:32] [I] Export output to JSON file: 
[05/16/2022-11:14:32] [I] Export profile to JSON file: 
[05/16/2022-11:14:32] [I] 
[05/16/2022-11:14:32] [I] === Device Information ===
[05/16/2022-11:14:32] [I] Selected Device: Xavier
[05/16/2022-11:14:32] [I] Compute Capability: 7.2
[05/16/2022-11:14:32] [I] SMs: 8
[05/16/2022-11:14:32] [I] Compute Clock Rate: 1.377 GHz
[05/16/2022-11:14:32] [I] Device Global Memory: 15816 MiB
[05/16/2022-11:14:32] [I] Shared Memory per SM: 96 KiB
[05/16/2022-11:14:32] [I] Memory Bus Width: 256 bits (ECC disabled)
[05/16/2022-11:14:32] [I] Memory Clock Rate: 1.377 GHz
[05/16/2022-11:14:32] [I] 
[05/16/2022-11:14:32] [I] TensorRT version: 8001
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::GridAnchor_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::GridAnchorRect_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::NMS_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::Reorg_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::Region_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::Clip_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::LReLU_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::PriorBox_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::Normalize_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::ScatterND version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::RPROI_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::BatchedNMS_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::BatchedNMSDynamic_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::FlattenConcat_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::CropAndResize version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::DetectionLayer_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::EfficientNMS_ONNX_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::EfficientNMS_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::Proposal version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::ProposalLayer_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::PyramidROIAlign_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::ResizeNearest_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::Split version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::SpecialSlice_TRT version 1
[05/16/2022-11:14:32] [V] [TRT] Registered plugin creator - ::InstanceNormalization_TRT version 1
[05/16/2022-11:14:34] [I] [TRT] [MemUsageChange] Init CUDA: CPU +354, GPU +0, now: CPU 372, GPU 5115 (MiB)
[05/16/2022-11:14:34] [I] Start parsing network model
[05/16/2022-11:14:34] [I] [TRT] ----------------------------------------------------------------
[05/16/2022-11:14:34] [I] [TRT] Input filename:   ../data/iharm/iharm_onnx.onnx
[05/16/2022-11:14:34] [I] [TRT] ONNX IR version:  0.0.6
[05/16/2022-11:14:34] [I] [TRT] Opset version:    12
[05/16/2022-11:14:34] [I] [TRT] Producer name:    pytorch
[05/16/2022-11:14:34] [I] [TRT] Producer version: 1.7
[05/16/2022-11:14:34] [I] [TRT] Domain:           
[05/16/2022-11:14:34] [I] [TRT] Model version:    0
[05/16/2022-11:14:34] [I] [TRT] Doc string:       
[05/16/2022-11:14:34] [I] [TRT] ----------------------------------------------------------------
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::GridAnchor_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::GridAnchorRect_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::NMS_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::Reorg_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::Region_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::Clip_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::LReLU_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::PriorBox_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::Normalize_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::ScatterND version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::RPROI_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::BatchedNMS_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::BatchedNMSDynamic_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::FlattenConcat_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::CropAndResize version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::DetectionLayer_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::EfficientNMS_ONNX_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::EfficientNMS_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::Proposal version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::ProposalLayer_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::PyramidROIAlign_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::ResizeNearest_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::Split version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::SpecialSlice_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Plugin creator already registered - ::InstanceNormalization_TRT version 1
[05/16/2022-11:14:34] [V] [TRT] Adding network input: input_img with dtype: float32, dimensions: (1, 3, 1080, 1920)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: input_img for ONNX tensor: input_img
[05/16/2022-11:14:34] [V] [TRT] Adding network input: input_mask with dtype: float32, dimensions: (1, 1, 1080, 1920)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: input_mask for ONNX tensor: input_mask
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 388
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 389
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 391
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 392
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 394
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 395
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 397
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 398
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 400
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 401
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 403
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 404
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 406
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 407
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 409
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 410
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 412
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 413
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 415
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 416
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 418
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 419
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 421
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 422
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 424
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 425
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 427
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 428
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 429
[05/16/2022-11:14:34] [W] [TRT] onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 430
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 431
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 432
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 433
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 434
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 435
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 436
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: 437
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: dbp1.conv_attention.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: dbp1.conv_attention.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: dbp1.to_rgb.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: dbp1.to_rgb.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.conv_attention.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.conv_attention.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.to_rgb.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.to_rgb.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.0.upconv.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.0.upconv.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.1.upconv.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.1.upconv.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.background_gate.attention_transform.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.background_gate.attention_transform.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.background_gate.attention_transform.2.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.background_gate.attention_transform.2.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.foreground_gate.attention_transform.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.foreground_gate.attention_transform.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.foreground_gate.attention_transform.2.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.foreground_gate.attention_transform.2.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.mask_blurring.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.mix_gate.attention_transform.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.mix_gate.attention_transform.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.mix_gate.attention_transform.2.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.attention.mix_gate.attention_transform.2.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.upconv.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: decoder.up_blocks.2.upconv.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: encoder.block0.convs.block.0.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: encoder.block0.convs.block.0.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: encoder.block0.convs.block.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: encoder.block0.convs.block.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: encoder.block1.convs.block.0.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: encoder.block1.convs.block.0.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: encoder.block1.convs.block.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Importing initializer: encoder.block1.convs.block.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_0 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_0 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_0 [Constant] outputs: [141 -> (1, 3, 1, 1)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Sub_1 [Sub]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: input_img
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 141
[05/16/2022-11:14:34] [V] [TRT] Sub_1 [Sub] inputs: [input_img -> (1, 3, 1080, 1920)[FLOAT]], [141 -> (1, 3, 1, 1)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: 141 for ONNX node: 141
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Sub_1 for ONNX node: Sub_1
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 142 for ONNX tensor: 142
[05/16/2022-11:14:34] [V] [TRT] Sub_1 [Sub] outputs: [142 -> (1, 3, 1080, 1920)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_2 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_2 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_2 [Constant] outputs: [143 -> (1, 3, 1, 1)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Div_3 [Div]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 142
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 143
[05/16/2022-11:14:34] [V] [TRT] Div_3 [Div] inputs: [142 -> (1, 3, 1080, 1920)[FLOAT]], [143 -> (1, 3, 1, 1)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: 143 for ONNX node: 143
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Div_3 for ONNX node: Div_3
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 144 for ONNX tensor: 144
[05/16/2022-11:14:34] [V] [TRT] Div_3 [Div] outputs: [144 -> (1, 3, 1080, 1920)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Concat_4 [Concat]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 144
[05/16/2022-11:14:34] [V] [TRT] Searching for input: input_mask
[05/16/2022-11:14:34] [V] [TRT] Concat_4 [Concat] inputs: [144 -> (1, 3, 1080, 1920)[FLOAT]], [input_mask -> (1, 1, 1080, 1920)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Concat_4 for ONNX node: Concat_4
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 145 for ONNX tensor: 145
[05/16/2022-11:14:34] [V] [TRT] Concat_4 [Concat] outputs: [145 -> (1, 4, 1080, 1920)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_5 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_5 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_5 [Constant] outputs: [147 -> ()[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Shape_6 [Shape]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 145
[05/16/2022-11:14:34] [V] [TRT] Shape_6 [Shape] inputs: [145 -> (1, 4, 1080, 1920)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Shape_6 for ONNX node: Shape_6
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 148 for ONNX tensor: 148
[05/16/2022-11:14:34] [V] [TRT] Shape_6 [Shape] outputs: [148 -> (4)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_7 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_7 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_7 [Constant] outputs: [149 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_8 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_8 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_8 [Constant] outputs: [150 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_9 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_9 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_9 [Constant] outputs: [151 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Slice_10 [Slice]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 148
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 150
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 151
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 149
[05/16/2022-11:14:34] [V] [TRT] Slice_10 [Slice] inputs: [148 -> (4)[INT32]], [150 -> (1)[INT32]], [151 -> (1)[INT32]], [149 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Slice_10 for ONNX node: Slice_10
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 152 for ONNX tensor: 152
[05/16/2022-11:14:34] [V] [TRT] Slice_10 [Slice] outputs: [152 -> (2)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Concat_11 [Concat]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 152
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 429
[05/16/2022-11:14:34] [V] [TRT] Concat_11 [Concat] inputs: [152 -> (2)[INT32]], [429 -> (2)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: 429 for ONNX node: 429
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Concat_11 for ONNX node: Concat_11
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 154 for ONNX tensor: 154
[05/16/2022-11:14:34] [V] [TRT] Concat_11 [Concat] outputs: [154 -> (4)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_12 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_12 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_12 [Constant] outputs: [155 -> ()[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Resize_13 [Resize]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 145
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 147
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 155
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 154
[05/16/2022-11:14:34] [V] [TRT] Resize_13 [Resize] inputs: [145 -> (1, 4, 1080, 1920)[FLOAT]], [147 -> ()[FLOAT]], [155 -> ()[FLOAT]], [154 -> (4)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Resize_13 for ONNX node: Resize_13
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 156 for ONNX tensor: 156
[05/16/2022-11:14:34] [V] [TRT] Resize_13 [Resize] outputs: [156 -> (1, 4, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_14 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 156
[05/16/2022-11:14:34] [V] [TRT] Searching for input: encoder.block0.convs.block.0.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Searching for input: encoder.block0.convs.block.0.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Conv_14 [Conv] inputs: [156 -> (1, 4, 512, 512)[FLOAT]], [encoder.block0.convs.block.0.block.0.weight -> (32, 4, 3, 3)[FLOAT]], [encoder.block0.convs.block.0.block.0.bias -> (32)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 4, 512, 512)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_14 for ONNX node: Conv_14
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 32
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 32, 512, 512)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 157 for ONNX tensor: 157
[05/16/2022-11:14:34] [V] [TRT] Conv_14 [Conv] outputs: [157 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_15 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 157
[05/16/2022-11:14:34] [V] [TRT] Relu_15 [Relu] inputs: [157 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_15 for ONNX node: Relu_15
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 158 for ONNX tensor: 158
[05/16/2022-11:14:34] [V] [TRT] Relu_15 [Relu] outputs: [158 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_16 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 158
[05/16/2022-11:14:34] [V] [TRT] Searching for input: encoder.block0.convs.block.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Searching for input: encoder.block0.convs.block.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Conv_16 [Conv] inputs: [158 -> (1, 32, 512, 512)[FLOAT]], [encoder.block0.convs.block.1.block.0.weight -> (32, 32, 3, 3)[FLOAT]], [encoder.block0.convs.block.1.block.0.bias -> (32)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 32, 512, 512)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_16 for ONNX node: Conv_16
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 32
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 32, 512, 512)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 159 for ONNX tensor: 159
[05/16/2022-11:14:34] [V] [TRT] Conv_16 [Conv] outputs: [159 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_17 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 159
[05/16/2022-11:14:34] [V] [TRT] Relu_17 [Relu] inputs: [159 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_17 for ONNX node: Relu_17
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 160 for ONNX tensor: 160
[05/16/2022-11:14:34] [V] [TRT] Relu_17 [Relu] outputs: [160 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: MaxPool_18 [MaxPool]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 160
[05/16/2022-11:14:34] [V] [TRT] MaxPool_18 [MaxPool] inputs: [160 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: MaxPool_18 for ONNX node: MaxPool_18
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 161 for ONNX tensor: 161
[05/16/2022-11:14:34] [V] [TRT] MaxPool_18 [MaxPool] outputs: [161 -> (1, 32, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_19 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 161
[05/16/2022-11:14:34] [V] [TRT] Searching for input: encoder.block1.convs.block.0.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Searching for input: encoder.block1.convs.block.0.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Conv_19 [Conv] inputs: [161 -> (1, 32, 256, 256)[FLOAT]], [encoder.block1.convs.block.0.block.0.weight -> (64, 32, 3, 3)[FLOAT]], [encoder.block1.convs.block.0.block.0.bias -> (64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 32, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_19 for ONNX node: Conv_19
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 64, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 162 for ONNX tensor: 162
[05/16/2022-11:14:34] [V] [TRT] Conv_19 [Conv] outputs: [162 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_20 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 162
[05/16/2022-11:14:34] [V] [TRT] Relu_20 [Relu] inputs: [162 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_20 for ONNX node: Relu_20
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 163 for ONNX tensor: 163
[05/16/2022-11:14:34] [V] [TRT] Relu_20 [Relu] outputs: [163 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_21 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 163
[05/16/2022-11:14:34] [V] [TRT] Searching for input: encoder.block1.convs.block.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Searching for input: encoder.block1.convs.block.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Conv_21 [Conv] inputs: [163 -> (1, 64, 256, 256)[FLOAT]], [encoder.block1.convs.block.1.block.0.weight -> (64, 64, 3, 3)[FLOAT]], [encoder.block1.convs.block.1.block.0.bias -> (64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 64, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_21 for ONNX node: Conv_21
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 64, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 164 for ONNX tensor: 164
[05/16/2022-11:14:34] [V] [TRT] Conv_21 [Conv] outputs: [164 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_22 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 164
[05/16/2022-11:14:34] [V] [TRT] Relu_22 [Relu] inputs: [164 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_22 for ONNX node: Relu_22
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 165 for ONNX tensor: 165
[05/16/2022-11:14:34] [V] [TRT] Relu_22 [Relu] outputs: [165 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: MaxPool_23 [MaxPool]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 165
[05/16/2022-11:14:34] [V] [TRT] MaxPool_23 [MaxPool] inputs: [165 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: MaxPool_23 for ONNX node: MaxPool_23
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 166 for ONNX tensor: 166
[05/16/2022-11:14:34] [V] [TRT] MaxPool_23 [MaxPool] outputs: [166 -> (1, 64, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_24 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 166
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 388
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 389
[05/16/2022-11:14:34] [V] [TRT] Conv_24 [Conv] inputs: [166 -> (1, 64, 128, 128)[FLOAT]], [388 -> (128, 64, 3, 3)[FLOAT]], [389 -> (128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 64, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_24 for ONNX node: Conv_24
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 128
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 128, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 387 for ONNX tensor: 387
[05/16/2022-11:14:34] [V] [TRT] Conv_24 [Conv] outputs: [387 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_25 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 387
[05/16/2022-11:14:34] [V] [TRT] Relu_25 [Relu] inputs: [387 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_25 for ONNX node: Relu_25
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 169 for ONNX tensor: 169
[05/16/2022-11:14:34] [V] [TRT] Relu_25 [Relu] outputs: [169 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_26 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 169
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 391
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 392
[05/16/2022-11:14:34] [V] [TRT] Conv_26 [Conv] inputs: [169 -> (1, 128, 128, 128)[FLOAT]], [391 -> (128, 128, 3, 3)[FLOAT]], [392 -> (128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 128, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_26 for ONNX node: Conv_26
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 128
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 128, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 390 for ONNX tensor: 390
[05/16/2022-11:14:34] [V] [TRT] Conv_26 [Conv] outputs: [390 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_27 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 390
[05/16/2022-11:14:34] [V] [TRT] Relu_27 [Relu] inputs: [390 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_27 for ONNX node: Relu_27
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 172 for ONNX tensor: 172
[05/16/2022-11:14:34] [V] [TRT] Relu_27 [Relu] outputs: [172 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: MaxPool_28 [MaxPool]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 172
[05/16/2022-11:14:34] [V] [TRT] MaxPool_28 [MaxPool] inputs: [172 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: MaxPool_28 for ONNX node: MaxPool_28
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 173 for ONNX tensor: 173
[05/16/2022-11:14:34] [V] [TRT] MaxPool_28 [MaxPool] outputs: [173 -> (1, 128, 64, 64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_29 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 173
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 394
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 395
[05/16/2022-11:14:34] [V] [TRT] Conv_29 [Conv] inputs: [173 -> (1, 128, 64, 64)[FLOAT]], [394 -> (256, 128, 3, 3)[FLOAT]], [395 -> (256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 128, 64, 64)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_29 for ONNX node: Conv_29
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 256
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 256, 64, 64)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 393 for ONNX tensor: 393
[05/16/2022-11:14:34] [V] [TRT] Conv_29 [Conv] outputs: [393 -> (1, 256, 64, 64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_30 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 393
[05/16/2022-11:14:34] [V] [TRT] Relu_30 [Relu] inputs: [393 -> (1, 256, 64, 64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_30 for ONNX node: Relu_30
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 176 for ONNX tensor: 176
[05/16/2022-11:14:34] [V] [TRT] Relu_30 [Relu] outputs: [176 -> (1, 256, 64, 64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_31 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 176
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 397
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 398
[05/16/2022-11:14:34] [V] [TRT] Conv_31 [Conv] inputs: [176 -> (1, 256, 64, 64)[FLOAT]], [397 -> (256, 256, 3, 3)[FLOAT]], [398 -> (256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 256, 64, 64)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_31 for ONNX node: Conv_31
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 256
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 256, 64, 64)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 396 for ONNX tensor: 396
[05/16/2022-11:14:34] [V] [TRT] Conv_31 [Conv] outputs: [396 -> (1, 256, 64, 64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_32 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 396
[05/16/2022-11:14:34] [V] [TRT] Relu_32 [Relu] inputs: [396 -> (1, 256, 64, 64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_32 for ONNX node: Relu_32
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 179 for ONNX tensor: 179
[05/16/2022-11:14:34] [V] [TRT] Relu_32 [Relu] outputs: [179 -> (1, 256, 64, 64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_33 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_33 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_33 [Constant] outputs: [180 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_34 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_34 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_34 [Constant] outputs: [181 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_35 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_35 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_35 [Constant] outputs: [182 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_36 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_36 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_36 [Constant] outputs: [183 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Slice_37 [Slice]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 156
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 181
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 182
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 180
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 183
[05/16/2022-11:14:34] [V] [TRT] Slice_37 [Slice] inputs: [156 -> (1, 4, 512, 512)[FLOAT]], [181 -> (1)[INT32]], [182 -> (1)[INT32]], [180 -> (1)[INT32]], [183 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Slice_37 for ONNX node: Slice_37
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 184 for ONNX tensor: 184
[05/16/2022-11:14:34] [V] [TRT] Slice_37 [Slice] outputs: [184 -> (1, 3, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_38 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_38 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_38 [Constant] outputs: [185 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_39 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_39 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_39 [Constant] outputs: [186 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_40 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_40 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Weight at index 0: 9223372036854775807 is out of range. Clamping to: 2147483647
[05/16/2022-11:14:34] [W] [TRT] onnx2trt_utils.cpp:390: One or more weights outside the range of INT32 was clamped
[05/16/2022-11:14:34] [V] [TRT] Constant_40 [Constant] outputs: [187 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_41 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_41 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_41 [Constant] outputs: [188 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Slice_42 [Slice]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 156
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 186
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 187
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 185
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 188
[05/16/2022-11:14:34] [V] [TRT] Slice_42 [Slice] inputs: [156 -> (1, 4, 512, 512)[FLOAT]], [186 -> (1)[INT32]], [187 -> (1)[INT32]], [185 -> (1)[INT32]], [188 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Slice_42 for ONNX node: Slice_42
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 189 for ONNX tensor: 189
[05/16/2022-11:14:34] [V] [TRT] Slice_42 [Slice] outputs: [189 -> (1, 1, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_43 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_43 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_43 [Constant] outputs: [193 -> ()[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Resize_44 [Resize]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 179
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 193
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 430
[05/16/2022-11:14:34] [V] [TRT] Resize_44 [Resize] inputs: [179 -> (1, 256, 64, 64)[FLOAT]], [193 -> ()[FLOAT]], [430 -> (4)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Resize_44 for ONNX node: Resize_44
[05/16/2022-11:14:34] [V] [TRT] Running resize layer with: 
Transformation mode: align_corners
Resize mode: linear

[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 194 for ONNX tensor: 194
[05/16/2022-11:14:34] [V] [TRT] Resize_44 [Resize] outputs: [194 -> (1, 256, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_45 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 194
[05/16/2022-11:14:34] [V] [TRT] Searching for input: decoder.up_blocks.0.upconv.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Searching for input: decoder.up_blocks.0.upconv.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Conv_45 [Conv] inputs: [194 -> (1, 256, 128, 128)[FLOAT]], [decoder.up_blocks.0.upconv.1.block.0.weight -> (128, 256, 3, 3)[FLOAT]], [decoder.up_blocks.0.upconv.1.block.0.bias -> (128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 256, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_45 for ONNX node: Conv_45
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 128
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 128, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 195 for ONNX tensor: 195
[05/16/2022-11:14:34] [V] [TRT] Conv_45 [Conv] outputs: [195 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_46 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 195
[05/16/2022-11:14:34] [V] [TRT] Relu_46 [Relu] inputs: [195 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_46 for ONNX node: Relu_46
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 196 for ONNX tensor: 196
[05/16/2022-11:14:34] [V] [TRT] Relu_46 [Relu] outputs: [196 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Concat_47 [Concat]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 172
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 196
[05/16/2022-11:14:34] [V] [TRT] Concat_47 [Concat] inputs: [172 -> (1, 128, 128, 128)[FLOAT]], [196 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Concat_47 for ONNX node: Concat_47
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 197 for ONNX tensor: 197
[05/16/2022-11:14:34] [V] [TRT] Concat_47 [Concat] outputs: [197 -> (1, 256, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_48 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 197
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 400
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 401
[05/16/2022-11:14:34] [V] [TRT] Conv_48 [Conv] inputs: [197 -> (1, 256, 128, 128)[FLOAT]], [400 -> (128, 256, 3, 3)[FLOAT]], [401 -> (128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 256, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_48 for ONNX node: Conv_48
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 128
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 128, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 399 for ONNX tensor: 399
[05/16/2022-11:14:34] [V] [TRT] Conv_48 [Conv] outputs: [399 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_49 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 399
[05/16/2022-11:14:34] [V] [TRT] Relu_49 [Relu] inputs: [399 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_49 for ONNX node: Relu_49
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 200 for ONNX tensor: 200
[05/16/2022-11:14:34] [V] [TRT] Relu_49 [Relu] outputs: [200 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_50 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 200
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 403
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 404
[05/16/2022-11:14:34] [V] [TRT] Conv_50 [Conv] inputs: [200 -> (1, 128, 128, 128)[FLOAT]], [403 -> (128, 128, 3, 3)[FLOAT]], [404 -> (128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 128, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_50 for ONNX node: Conv_50
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 128
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 128, 128, 128)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 402 for ONNX tensor: 402
[05/16/2022-11:14:34] [V] [TRT] Conv_50 [Conv] outputs: [402 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_51 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 402
[05/16/2022-11:14:34] [V] [TRT] Relu_51 [Relu] inputs: [402 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_51 for ONNX node: Relu_51
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 203 for ONNX tensor: 203
[05/16/2022-11:14:34] [V] [TRT] Relu_51 [Relu] outputs: [203 -> (1, 128, 128, 128)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_52 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_52 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_52 [Constant] outputs: [207 -> ()[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Resize_53 [Resize]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 203
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 207
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 431
[05/16/2022-11:14:34] [V] [TRT] Resize_53 [Resize] inputs: [203 -> (1, 128, 128, 128)[FLOAT]], [207 -> ()[FLOAT]], [431 -> (4)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Resize_53 for ONNX node: Resize_53
[05/16/2022-11:14:34] [V] [TRT] Running resize layer with: 
Transformation mode: align_corners
Resize mode: linear

[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 208 for ONNX tensor: 208
[05/16/2022-11:14:34] [V] [TRT] Resize_53 [Resize] outputs: [208 -> (1, 128, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_54 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 208
[05/16/2022-11:14:34] [V] [TRT] Searching for input: decoder.up_blocks.1.upconv.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Searching for input: decoder.up_blocks.1.upconv.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Conv_54 [Conv] inputs: [208 -> (1, 128, 256, 256)[FLOAT]], [decoder.up_blocks.1.upconv.1.block.0.weight -> (64, 128, 3, 3)[FLOAT]], [decoder.up_blocks.1.upconv.1.block.0.bias -> (64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 128, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_54 for ONNX node: Conv_54
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 64, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 209 for ONNX tensor: 209
[05/16/2022-11:14:34] [V] [TRT] Conv_54 [Conv] outputs: [209 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_55 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 209
[05/16/2022-11:14:34] [V] [TRT] Relu_55 [Relu] inputs: [209 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_55 for ONNX node: Relu_55
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 210 for ONNX tensor: 210
[05/16/2022-11:14:34] [V] [TRT] Relu_55 [Relu] outputs: [210 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Concat_56 [Concat]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 165
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 210
[05/16/2022-11:14:34] [V] [TRT] Concat_56 [Concat] inputs: [165 -> (1, 64, 256, 256)[FLOAT]], [210 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Concat_56 for ONNX node: Concat_56
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 211 for ONNX tensor: 211
[05/16/2022-11:14:34] [V] [TRT] Concat_56 [Concat] outputs: [211 -> (1, 128, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_57 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 211
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 406
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 407
[05/16/2022-11:14:34] [V] [TRT] Conv_57 [Conv] inputs: [211 -> (1, 128, 256, 256)[FLOAT]], [406 -> (64, 128, 3, 3)[FLOAT]], [407 -> (64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 128, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_57 for ONNX node: Conv_57
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 64, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 405 for ONNX tensor: 405
[05/16/2022-11:14:34] [V] [TRT] Conv_57 [Conv] outputs: [405 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_58 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 405
[05/16/2022-11:14:34] [V] [TRT] Relu_58 [Relu] inputs: [405 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_58 for ONNX node: Relu_58
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 214 for ONNX tensor: 214
[05/16/2022-11:14:34] [V] [TRT] Relu_58 [Relu] outputs: [214 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_59 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 214
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 409
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 410
[05/16/2022-11:14:34] [V] [TRT] Conv_59 [Conv] inputs: [214 -> (1, 64, 256, 256)[FLOAT]], [409 -> (64, 64, 3, 3)[FLOAT]], [410 -> (64)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 64, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_59 for ONNX node: Conv_59
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 64
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 64, 256, 256)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 408 for ONNX tensor: 408
[05/16/2022-11:14:34] [V] [TRT] Conv_59 [Conv] outputs: [408 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_60 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 408
[05/16/2022-11:14:34] [V] [TRT] Relu_60 [Relu] inputs: [408 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_60 for ONNX node: Relu_60
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 217 for ONNX tensor: 217
[05/16/2022-11:14:34] [V] [TRT] Relu_60 [Relu] outputs: [217 -> (1, 64, 256, 256)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_61 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_61 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_61 [Constant] outputs: [221 -> ()[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Resize_62 [Resize]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 217
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 221
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 432
[05/16/2022-11:14:34] [V] [TRT] Resize_62 [Resize] inputs: [217 -> (1, 64, 256, 256)[FLOAT]], [221 -> ()[FLOAT]], [432 -> (4)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Resize_62 for ONNX node: Resize_62
[05/16/2022-11:14:34] [V] [TRT] Running resize layer with: 
Transformation mode: align_corners
Resize mode: linear

[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 222 for ONNX tensor: 222
[05/16/2022-11:14:34] [V] [TRT] Resize_62 [Resize] outputs: [222 -> (1, 64, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_63 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 222
[05/16/2022-11:14:34] [V] [TRT] Searching for input: decoder.up_blocks.2.upconv.1.block.0.weight
[05/16/2022-11:14:34] [V] [TRT] Searching for input: decoder.up_blocks.2.upconv.1.block.0.bias
[05/16/2022-11:14:34] [V] [TRT] Conv_63 [Conv] inputs: [222 -> (1, 64, 512, 512)[FLOAT]], [decoder.up_blocks.2.upconv.1.block.0.weight -> (32, 64, 3, 3)[FLOAT]], [decoder.up_blocks.2.upconv.1.block.0.bias -> (32)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 64, 512, 512)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_63 for ONNX node: Conv_63
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (3, 3), strides: (1, 1), prepadding: (1, 1), postpadding: (1, 1), dilations: (1, 1), numOutputs: 32
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 32, 512, 512)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 223 for ONNX tensor: 223
[05/16/2022-11:14:34] [V] [TRT] Conv_63 [Conv] outputs: [223 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Relu_64 [Relu]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 223
[05/16/2022-11:14:34] [V] [TRT] Relu_64 [Relu] inputs: [223 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Relu_64 for ONNX node: Relu_64
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 224 for ONNX tensor: 224
[05/16/2022-11:14:34] [V] [TRT] Relu_64 [Relu] outputs: [224 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Concat_65 [Concat]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 160
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 224
[05/16/2022-11:14:34] [V] [TRT] Concat_65 [Concat] inputs: [160 -> (1, 32, 512, 512)[FLOAT]], [224 -> (1, 32, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Concat_65 for ONNX node: Concat_65
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 225 for ONNX tensor: 225
[05/16/2022-11:14:34] [V] [TRT] Concat_65 [Concat] outputs: [225 -> (1, 64, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Shape_66 [Shape]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 225
[05/16/2022-11:14:34] [V] [TRT] Shape_66 [Shape] inputs: [225 -> (1, 64, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Shape_66 for ONNX node: Shape_66
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 226 for ONNX tensor: 226
[05/16/2022-11:14:34] [V] [TRT] Shape_66 [Shape] outputs: [226 -> (4)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_67 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_67 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_67 [Constant] outputs: [227 -> ()[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Gather_68 [Gather]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 226
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 227
[05/16/2022-11:14:34] [V] [TRT] Gather_68 [Gather] inputs: [226 -> (4)[INT32]], [227 -> ()[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: 227 for ONNX node: 227
[05/16/2022-11:14:34] [V] [TRT] Using Gather axis: 0
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Gather_68 for ONNX node: Gather_68
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 228 for ONNX tensor: 228
[05/16/2022-11:14:34] [V] [TRT] Gather_68 [Gather] outputs: [228 -> ()[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Shape_69 [Shape]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 225
[05/16/2022-11:14:34] [V] [TRT] Shape_69 [Shape] inputs: [225 -> (1, 64, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Shape_69 for ONNX node: Shape_69
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 229 for ONNX tensor: 229
[05/16/2022-11:14:34] [V] [TRT] Shape_69 [Shape] outputs: [229 -> (4)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_70 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_70 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_70 [Constant] outputs: [230 -> ()[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Gather_71 [Gather]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 229
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 230
[05/16/2022-11:14:34] [V] [TRT] Gather_71 [Gather] inputs: [229 -> (4)[INT32]], [230 -> ()[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: 230 for ONNX node: 230
[05/16/2022-11:14:34] [V] [TRT] Using Gather axis: 0
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Gather_71 for ONNX node: Gather_71
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 231 for ONNX tensor: 231
[05/16/2022-11:14:34] [V] [TRT] Gather_71 [Gather] outputs: [231 -> ()[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Unsqueeze_72 [Unsqueeze]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 228
[05/16/2022-11:14:34] [V] [TRT] Unsqueeze_72 [Unsqueeze] inputs: [228 -> ()[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Original shape: (), unsqueezing to: (1,)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Unsqueeze_72 for ONNX node: Unsqueeze_72
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 232 for ONNX tensor: 232
[05/16/2022-11:14:34] [V] [TRT] Unsqueeze_72 [Unsqueeze] outputs: [232 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Unsqueeze_73 [Unsqueeze]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 231
[05/16/2022-11:14:34] [V] [TRT] Unsqueeze_73 [Unsqueeze] inputs: [231 -> ()[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Original shape: (), unsqueezing to: (1,)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Unsqueeze_73 for ONNX node: Unsqueeze_73
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 233 for ONNX tensor: 233
[05/16/2022-11:14:34] [V] [TRT] Unsqueeze_73 [Unsqueeze] outputs: [233 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Concat_74 [Concat]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 232
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 233
[05/16/2022-11:14:34] [V] [TRT] Concat_74 [Concat] inputs: [232 -> (1)[INT32]], [233 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Concat_74 for ONNX node: Concat_74
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 234 for ONNX tensor: 234
[05/16/2022-11:14:34] [V] [TRT] Concat_74 [Concat] outputs: [234 -> (2)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_75 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_75 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_75 [Constant] outputs: [235 -> ()[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Shape_76 [Shape]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 189
[05/16/2022-11:14:34] [V] [TRT] Shape_76 [Shape] inputs: [189 -> (1, 1, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Shape_76 for ONNX node: Shape_76
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 236 for ONNX tensor: 236
[05/16/2022-11:14:34] [V] [TRT] Shape_76 [Shape] outputs: [236 -> (4)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_77 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_77 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_77 [Constant] outputs: [237 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_78 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_78 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_78 [Constant] outputs: [238 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_79 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_79 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_79 [Constant] outputs: [239 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Slice_80 [Slice]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 236
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 238
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 239
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 237
[05/16/2022-11:14:34] [V] [TRT] Slice_80 [Slice] inputs: [236 -> (4)[INT32]], [238 -> (1)[INT32]], [239 -> (1)[INT32]], [237 -> (1)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Slice_80 for ONNX node: Slice_80
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 240 for ONNX tensor: 240
[05/16/2022-11:14:34] [V] [TRT] Slice_80 [Slice] outputs: [240 -> (2)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Cast_81 [Cast]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 234
[05/16/2022-11:14:34] [V] [TRT] Cast_81 [Cast] inputs: [234 -> (2)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Casting to type: int32
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Cast_81 for ONNX node: Cast_81
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 241 for ONNX tensor: 241
[05/16/2022-11:14:34] [V] [TRT] Cast_81 [Cast] outputs: [241 -> (2)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Concat_82 [Concat]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 240
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 241
[05/16/2022-11:14:34] [V] [TRT] Concat_82 [Concat] inputs: [240 -> (2)[INT32]], [241 -> (2)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Concat_82 for ONNX node: Concat_82
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 242 for ONNX tensor: 242
[05/16/2022-11:14:34] [V] [TRT] Concat_82 [Concat] outputs: [242 -> (4)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Constant_83 [Constant]
[05/16/2022-11:14:34] [V] [TRT] Constant_83 [Constant] inputs: 
[05/16/2022-11:14:34] [V] [TRT] Constant_83 [Constant] outputs: [243 -> ()[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Resize_84 [Resize]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 189
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 235
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 243
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 242
[05/16/2022-11:14:34] [V] [TRT] Resize_84 [Resize] inputs: [189 -> (1, 1, 512, 512)[FLOAT]], [235 -> ()[FLOAT]], [243 -> ()[FLOAT]], [242 -> (4)[INT32]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Resize_84 for ONNX node: Resize_84
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 244 for ONNX tensor: 244
[05/16/2022-11:14:34] [V] [TRT] Resize_84 [Resize] outputs: [244 -> (1, 1, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: Conv_85 [Conv]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 244
[05/16/2022-11:14:34] [V] [TRT] Searching for input: decoder.up_blocks.2.attention.mask_blurring.weight
[05/16/2022-11:14:34] [V] [TRT] Conv_85 [Conv] inputs: [244 -> (1, 1, 512, 512)[FLOAT]], [decoder.up_blocks.2.attention.mask_blurring.weight -> (1, 1, 7, 7)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Convolution input dimensions: (1, 1, 512, 512)
[05/16/2022-11:14:34] [V] [TRT] Registering layer: Conv_85 for ONNX node: Conv_85
[05/16/2022-11:14:34] [V] [TRT] Using kernel: (7, 7), strides: (1, 1), prepadding: (3, 3), postpadding: (3, 3), dilations: (1, 1), numOutputs: 1
[05/16/2022-11:14:34] [V] [TRT] Convolution output dimensions: (1, 1, 512, 512)
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 245 for ONNX tensor: 245
[05/16/2022-11:14:34] [V] [TRT] Conv_85 [Conv] outputs: [245 -> (1, 1, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: GlobalAveragePool_86 [GlobalAveragePool]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 225
[05/16/2022-11:14:34] [V] [TRT] GlobalAveragePool_86 [GlobalAveragePool] inputs: [225 -> (1, 64, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Registering layer: GlobalAveragePool_86 for ONNX node: GlobalAveragePool_86
[05/16/2022-11:14:34] [V] [TRT] Registering tensor: 246 for ONNX tensor: 246
[05/16/2022-11:14:34] [V] [TRT] GlobalAveragePool_86 [GlobalAveragePool] outputs: [246 -> (1, 64, 1, 1)[FLOAT]], 
[05/16/2022-11:14:34] [V] [TRT] Parsing node: MaxPool_87 [MaxPool]
[05/16/2022-11:14:34] [V] [TRT] Searching for input: 225
[05/16/2022-11:14:34] [V] [TRT] MaxPool_87 [MaxPool] inputs: [225 -> (1, 64, 512, 512)[FLOAT]], 
[05/16/2022-11:14:34] [E] Error[3]: [network.cpp::addPoolingNd::884] Error Code 3: Internal Error (Parameter check failed at: optimizer/api/network.cpp::addPoolingNd::884, condition: allDimsGtEq(windowSize, 1) && volume(windowSize) < MAX_KERNEL_DIMS_PRODUCT(nbSpatialDims)
)
Segmentation fault (core dumped)

The ONNX model is attached below:
iharm_onnx.onnx (8.6 MB)

Looking forward to your reply; any ideas on how to solve this problem would be appreciated, thanks.

Hi,

allDimsGtEq(windowSize, 1) ← the window size must be at least 1 in each dimension.
volume(windowSize) < MAX_KERNEL_DIMS_PRODUCT(nbSpatialDims) ← the volume of windowSize must be < 100k for 2D and < 100M for 3D.

A pooling window of 512x512 (512 × 512 = 262,144) exceeds the 100k limit for 2D, which is more than TensorRT can support. Could you replace the max pooling with Reduce ops?
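For reference, below is a minimal sketch (not from the original thread) of one way to apply that suggestion directly to the exported ONNX file with onnx-graphsurgeon. It assumes the failing node is the MaxPool_87 seen in the verbose log, acting as a global max pool over the 512x512 feature map, and that the file names are placeholders; it rewrites the node into a ReduceMax over the spatial axes, which is not subject to the pooling kernel-size limit.

```python
# Sketch: replace an oversized (global) MaxPool with ReduceMax in the ONNX graph.
# Assumptions: the failing node is named "MaxPool_87" (as in the verbose log
# above) and pools over the full 512x512 spatial extent; file names are placeholders.
import onnx
import onnx_graphsurgeon as gs

graph = gs.import_onnx(onnx.load("iharm_onnx.onnx"))

for node in graph.nodes:
    if node.op == "MaxPool" and node.name == "MaxPool_87":
        # ReduceMax over H and W (axes 2 and 3) with keepdims=1 yields the same
        # (1, 64, 1, 1) output as a global max pool over a (1, 64, 512, 512) input.
        node.op = "ReduceMax"
        node.attrs = {"axes": [2, 3], "keepdims": 1}

graph.cleanup().toposort()
onnx.save(gs.export_onnx(graph), "iharm_onnx_reducemax.onnx")
```

Note that passing `axes` as an attribute assumes an ONNX opset below 18; from opset 18 onward, ReduceMax takes the axes as an input tensor instead. Alternatively, the same change can be made in the source model before export by replacing the global max pooling with a max reduction over the spatial dimensions.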