TLT unet evaluate failed

Hi,

I encountered the following error when using tlt unet evaluate to verify the TensorRT engine accuracy on the validation dataset.
----- log start
2021-07-08 17:29:14,295 [INFO] root: Registry: [‘nvcr.io’]
2021-07-08 17:29:14,400 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the “user”:“UID:GID” in the
DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your
users UID and GID by using the “id -u” and “id -g” commands on the
terminal.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:From /opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py:43: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

WARNING:tensorflow:From /opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py:43: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.

2021-07-08 17:29:21,293 [INFO] main: Loading experiment spec at /workspace/tlt-experiments/examples/unet/specs/unet_train_resnet_unet_Kvasir_SEG.txt.
2021-07-08 17:29:21,293 [INFO] iva.unet.spec_handler.spec_loader: Merging specification from /workspace/tlt-experiments/examples/unet/specs/unet_train_resnet_unet_Kvasir_SEG.txt
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:153: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

2021-07-08 17:29:21,294 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:153: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

2021-07-08 17:29:21,308 [INFO] iva.unet.model.utilities: Label Id 0: Train Id 0
2021-07-08 17:29:21,308 [INFO] iva.unet.model.utilities: Label Id 1: Train Id 1

Phase val: Total 200 files.
[TensorRT] ERROR: Parameter check failed at: engine.cpp::setBindingDimensions::1137, condition: profileMinDims.d[i] <= dimensions.d[i]
[TensorRT] ERROR: Parameter check failed at: engine.cpp::setBindingDimensions::1137, condition: profileMinDims.d[i] <= dimensions.d[i]
0%| | 0/50 [00:00<?, ?it/s]WARNING:tensorflow:From /opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/data_loader.py:403: The name tf.image.resize_image_with_pad is deprecated. Please use tf.compat.v1.image.resize_image_with_pad instead.

2021-07-08 17:29:36,374 [WARNING] tensorflow: From /opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/data_loader.py:403: The name tf.image.resize_image_with_pad is deprecated. Please use tf.compat.v1.image.resize_image_with_pad instead.

Traceback (most recent call last):
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 392, in
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 388, in main
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 296, in run_experiment
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 258, in evaluate_unet
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 212, in run_evaluate_trt
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/evaluate_trt.py”, line 108, in evaluate
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/evaluate_trt.py”, line 95, in _evaluate_folder
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/evaluate_trt.py”, line 49, in _predict_batch
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/trt_inferencer.py”, line 123, in infer_batch
File “<array_function internals>”, line 6, in copyto
ValueError: could not broadcast input array from shape (1228800) into shape (4915200)
2021-07-08 17:29:38,015 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
----- log end
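For context on the two [TensorRT] ERROR lines above: the reported condition (profileMinDims.d[i] <= dimensions.d[i]) fails when a requested binding dimension is smaller than the engine's optimization-profile minimum. A minimal sketch of that check, with hypothetical profile values (the real profile baked into the engine is not shown in the log):

```python
# Minimal sketch of the check behind the setBindingDimensions error
# (condition from the log: profileMinDims.d[i] <= dimensions.d[i]).
# The concrete shapes below are hypothetical, for illustration only.
def binding_dims_ok(profile_min, requested):
    """True if every requested dim meets the profile minimum."""
    return all(m <= d for m, d in zip(profile_min, requested))

profile_min = (4, 3, 320, 320)   # hypothetical: engine profile expects batch >= 4
requested = (1, 3, 320, 320)     # hypothetical: evaluate feeding batch 1
print(binding_dims_ok(profile_min, requested))  # False -> parameter check fails
```

If this reading is right, the engine's optimization profile and the batch size used by evaluate disagree.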

This error started happening after I upgraded to the latest nvidia-tlt (04/16/2021); everything works with the TLT 3.0 02/02/2021 version.
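One observation (my own arithmetic, not from the TLT source): the two shapes in the ValueError differ by exactly a factor of 4, and the smaller one matches 4 batched input images at the spec's 3x320x320 input size, which suggests the destination host buffer was allocated for a larger batch than evaluate is feeding:

```python
# Sanity check on the shapes in the ValueError, using the values
# from the spec below (model_input_* and training_config.batch_size).
channels, height, width = 3, 320, 320
batch_size = 4

src = batch_size * channels * height * width  # the array being copied
dst = 4 * src                                 # the destination buffer

print(src)  # 1228800 -> "input array from shape (1228800)"
print(dst)  # 4915200 -> "into shape (4915200)"
```

Whether the 4x factor comes from the engine's max batch size or something else is a guess on my part.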

Please provide the following information when requesting support.

• Hardware (V100)
• Network Type (unet)
• TLT Version (Please run "tlt info --verbose" and share "docker_tag" here)
Configuration of the TLT Instance
dockers: ['nvidia/tlt-streamanalytics', 'nvidia/tlt-pytorch']
format_version: 1.0
tlt_version: 3.0
published_date: 04/16/2021

• Training spec file (if you have one, please share it here)
random_seed: 42
model_config {
model_input_width: 320
model_input_height: 320
model_input_channels: 3
num_layers: 101
all_projections: true
arch: "resnet"
use_batch_norm: true
training_precision {
backend_floatx: FLOAT32
}
}

training_config {
batch_size: 4
epochs: 10
log_summary_steps: 10
checkpoint_interval: 1
loss: "cross_dice_sum"
learning_rate: 0.0001
regularizer {
type: L2
weight: 3.00000002618e-09
}
optimizer {
adam {
epsilon: 9.99999993923e-09
beta1: 0.899999976158
beta2: 0.999000012875
}
}
}

dataset_config {

dataset: "custom"
augment: True
input_image_type: "color"
train_images_path:"/workspace/tlt-experiments/Kvasir-SEG_TLT/images/train"
train_masks_path:"/workspace/tlt-experiments/Kvasir-SEG_TLT/masks/train"

val_images_path:"/workspace/tlt-experiments/Kvasir-SEG_TLT/images/val"
val_masks_path:"/workspace/tlt-experiments/Kvasir-SEG_TLT/masks/val"

test_images_path:"/workspace/tlt-experiments/Kvasir-SEG_TLT/images/test"

data_class_config {
target_classes {
name: "foreground"
mapping_class: "foreground"
label_id: 0
}
target_classes {
name: "background"
mapping_class: "background"
label_id: 1
}
}

}
• How to reproduce the issue? (This is for errors. Please share the command line and the detailed log here.)
tlt unet evaluate --gpu_index=0 \
  -e /workspace/tlt-experiments/examples/unet/specs/unet_train_resnet_unet_Kvasir_SEG.txt \
  -m /workspace/tlt-experiments/export/trtfp32.Kvasir_SEG.engine \
  -o /workspace/tlt-experiments/Kvasir_SEG_experiment_unpruned/ \
  -k nvidia_tlt

Please help.
Thanks,

Did you ever try to run unet evaluate against the .tlt model? Was it successful?

Hi Morganh,

Thanks for the reply. Yes, with nvidia-tlt 0.0.16 I trained/evaluated/exported this model successfully. After upgrading nvidia-tlt to 0.1.4, it failed.

Thanks,
Ted

In nvidia-tlt 0.1.4, can you generate the unet engine again?

Yes, I can generate the unet engine with nvidia-tlt 0.1.4, but tlt unet evaluate fails with it.

----- log start
2021-07-12 03:03:14,335 [INFO] root: Registry: [‘nvcr.io’]
2021-07-12 03:03:14,446 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the “user”:“UID:GID” in the
DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your
users UID and GID by using the “id -u” and “id -g” commands on the
terminal.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:From /opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py:43: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.

WARNING:tensorflow:From /opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py:43: The name tf.logging.INFO is deprecated. Please use tf.compat.v1.logging.INFO instead.

2021-07-12 03:03:22,224 [INFO] main: Loading experiment spec at /workspace/tlt-experiments/examples/unet/specs/unet_train_resnet_unet_Kvasir_SEG.txt.
2021-07-12 03:03:22,225 [INFO] iva.unet.spec_handler.spec_loader: Merging specification from /workspace/tlt-experiments/examples/unet/specs/unet_train_resnet_unet_Kvasir_SEG.txt
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:153: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

2021-07-12 03:03:22,226 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:153: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.

2021-07-12 03:03:22,239 [INFO] iva.unet.model.utilities: Label Id 0: Train Id 0
2021-07-12 03:03:22,239 [INFO] iva.unet.model.utilities: Label Id 1: Train Id 1

Phase val: Total 200 files.
[TensorRT] ERROR: Parameter check failed at: engine.cpp::setBindingDimensions::1137, condition: profileMinDims.d[i] <= dimensions.d[i]
[TensorRT] ERROR: Parameter check failed at: engine.cpp::setBindingDimensions::1137, condition: profileMinDims.d[i] <= dimensions.d[i]
0%| | 0/50 [00:00<?, ?it/s]WARNING:tensorflow:From /opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/data_loader.py:403: The name tf.image.resize_image_with_pad is deprecated. Please use tf.compat.v1.image.resize_image_with_pad instead.

2021-07-12 03:03:37,224 [WARNING] tensorflow: From /opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/data_loader.py:403: The name tf.image.resize_image_with_pad is deprecated. Please use tf.compat.v1.image.resize_image_with_pad instead.

Traceback (most recent call last):
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 392, in
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 388, in main
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 296, in run_experiment
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 258, in evaluate_unet
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/scripts/evaluate.py”, line 212, in run_evaluate_trt
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/evaluate_trt.py”, line 108, in evaluate
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/evaluate_trt.py”, line 95, in _evaluate_folder
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/unet/utils/evaluate_trt.py”, line 49, in _predict_batch
File “/opt/tlt/.cache/dazel/_dazel_tlt/2b81a5aac84a1d3b7a324f2a7a6f400b/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/common/inferencer/trt_inferencer.py”, line 123, in infer_batch
File “<array_function internals>”, line 6, in copyto
ValueError: could not broadcast input array from shape (1228800) into shape (4915200)
2021-07-12 03:03:38,855 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
----- log end

Thanks,
Ted

May I know how you generated the unet TensorRT engine? Can you share the command and full log?
Thanks.

Hi Morganh,

I am running it in a Jupyter notebook; the command used to generate the engine is:
!tlt unet export --gpu_index=$GPU_INDEX \
  -m $USER_EXPERIMENT_DIR/Kvasir_SEG_experiment_unpruned/weights/model_Kvasir_SEG.tlt \
  -k $KEY \
  -e $SPECS_DIR/unet_train_resnet_unet_Kvasir_SEG.txt \
  --data_type fp32 \
  --engine_file $USER_EXPERIMENT_DIR/export/trtfp32.Kvasir_SEG.engine

The log is as follows:
----- log start
2021-07-12 02:51:59,161 [INFO] root: Registry: [‘nvcr.io’]
2021-07-12 02:51:59,257 [WARNING] tlt.components.docker_handler.docker_handler:
Docker will run the commands as root. If you would like to retain your
local host permissions, please add the “user”:“UID:GID” in the
DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your
users UID and GID by using the “id -u” and “id -g” commands on the
terminal.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
/usr/local/lib/python3.6/dist-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the ‘NUMBAPRO’ prefix are deprecated and consequently ignored, found use of NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so.

For more information about alternatives visit: (‘Overview — Numba 0.50.1 documentation’, ‘#cudatoolkit-lookup’)
warnings.warn(errors.NumbaWarning(msg))
/usr/local/lib/python3.6/dist-packages/numba/cuda/envvars.py:17: NumbaWarning:
Environment variables with the ‘NUMBAPRO’ prefix are deprecated and consequently ignored, found use of NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice/.

For more information about alternatives visit: (‘Overview — Numba 0.50.1 documentation’, ‘#cudatoolkit-lookup’)
warnings.warn(errors.NumbaWarning(msg))
2021-07-12 02:52:05,156 [INFO] iva.unet.spec_handler.spec_loader: Merging specification from /workspace/tlt-experiments/examples/unet/specs/unet_train_resnet_unet_Kvasir_SEG.txt
2021-07-12 02:52:05,157 [INFO] iva.common.export.keras_exporter: Using input nodes: [‘input_1’]
2021-07-12 02:52:05,158 [INFO] iva.common.export.keras_exporter: Using output nodes: [‘softmax_1’]
2021-07-12 02:52:05,158 [INFO] iva.unet.model.utilities: Label Id 0: Train Id 0
2021-07-12 02:52:05,158 [INFO] iva.unet.model.utilities: Label Id 1: Train Id 1
2021-07-12 02:52:05,159 [INFO] iva.unet.model.model_io: Loading weights from /workspace/tlt-experiments/unet/Kvasir_SEG_experiment_unpruned/weights/model_Kvasir_SEG.tlt


Layer (type) Output Shape Param # Connected to

input_1 (InputLayer) (None, 3, 320, 320) 0


conv1 (Conv2D) (None, 64, 160, 160) 9472 input_1[0][0]


bn_conv1 (BatchNormalization) (None, 64, 160, 160) 256 conv1[0][0]


activation_1 (Activation) (None, 64, 160, 160) 0 bn_conv1[0][0]


block_1a_conv_1 (Conv2D) (None, 64, 80, 80) 4160 activation_1[0][0]


block_1a_bn_1 (BatchNormalizati (None, 64, 80, 80) 256 block_1a_conv_1[0][0]


block_1a_relu_1 (Activation) (None, 64, 80, 80) 0 block_1a_bn_1[0][0]


block_1a_conv_2 (Conv2D) (None, 64, 80, 80) 36928 block_1a_relu_1[0][0]


block_1a_bn_2 (BatchNormalizati (None, 64, 80, 80) 256 block_1a_conv_2[0][0]


block_1a_relu_2 (Activation) (None, 64, 80, 80) 0 block_1a_bn_2[0][0]


block_1a_conv_3 (Conv2D) (None, 256, 80, 80) 16640 block_1a_relu_2[0][0]


block_1a_conv_shortcut (Conv2D) (None, 256, 80, 80) 16640 activation_1[0][0]


block_1a_bn_3 (BatchNormalizati (None, 256, 80, 80) 1024 block_1a_conv_3[0][0]


block_1a_bn_shortcut (BatchNorm (None, 256, 80, 80) 1024 block_1a_conv_shortcut[0][0]


add_1 (Add) (None, 256, 80, 80) 0 block_1a_bn_3[0][0]
block_1a_bn_shortcut[0][0]


block_1a_relu (Activation) (None, 256, 80, 80) 0 add_1[0][0]


block_1b_conv_1 (Conv2D) (None, 64, 80, 80) 16448 block_1a_relu[0][0]


block_1b_bn_1 (BatchNormalizati (None, 64, 80, 80) 256 block_1b_conv_1[0][0]


block_1b_relu_1 (Activation) (None, 64, 80, 80) 0 block_1b_bn_1[0][0]


block_1b_conv_2 (Conv2D) (None, 64, 80, 80) 36928 block_1b_relu_1[0][0]


block_1b_bn_2 (BatchNormalizati (None, 64, 80, 80) 256 block_1b_conv_2[0][0]


block_1b_relu_2 (Activation) (None, 64, 80, 80) 0 block_1b_bn_2[0][0]


block_1b_conv_3 (Conv2D) (None, 256, 80, 80) 16640 block_1b_relu_2[0][0]


block_1b_conv_shortcut (Conv2D) (None, 256, 80, 80) 65792 block_1a_relu[0][0]


block_1b_bn_3 (BatchNormalizati (None, 256, 80, 80) 1024 block_1b_conv_3[0][0]


block_1b_bn_shortcut (BatchNorm (None, 256, 80, 80) 1024 block_1b_conv_shortcut[0][0]


add_2 (Add) (None, 256, 80, 80) 0 block_1b_bn_3[0][0]
block_1b_bn_shortcut[0][0]


block_1b_relu (Activation) (None, 256, 80, 80) 0 add_2[0][0]


block_1c_conv_1 (Conv2D) (None, 64, 80, 80) 16448 block_1b_relu[0][0]


block_1c_bn_1 (BatchNormalizati (None, 64, 80, 80) 256 block_1c_conv_1[0][0]


block_1c_relu_1 (Activation) (None, 64, 80, 80) 0 block_1c_bn_1[0][0]


block_1c_conv_2 (Conv2D) (None, 64, 80, 80) 36928 block_1c_relu_1[0][0]


block_1c_bn_2 (BatchNormalizati (None, 64, 80, 80) 256 block_1c_conv_2[0][0]


block_1c_relu_2 (Activation) (None, 64, 80, 80) 0 block_1c_bn_2[0][0]


block_1c_conv_3 (Conv2D) (None, 256, 80, 80) 16640 block_1c_relu_2[0][0]


block_1c_conv_shortcut (Conv2D) (None, 256, 80, 80) 65792 block_1b_relu[0][0]


block_1c_bn_3 (BatchNormalizati (None, 256, 80, 80) 1024 block_1c_conv_3[0][0]


block_1c_bn_shortcut (BatchNorm (None, 256, 80, 80) 1024 block_1c_conv_shortcut[0][0]


add_3 (Add) (None, 256, 80, 80) 0 block_1c_bn_3[0][0]
block_1c_bn_shortcut[0][0]


block_1c_relu (Activation) (None, 256, 80, 80) 0 add_3[0][0]


block_2a_conv_1 (Conv2D) (None, 128, 40, 40) 32896 block_1c_relu[0][0]


block_2a_bn_1 (BatchNormalizati (None, 128, 40, 40) 512 block_2a_conv_1[0][0]


block_2a_relu_1 (Activation) (None, 128, 40, 40) 0 block_2a_bn_1[0][0]


block_2a_conv_2 (Conv2D) (None, 128, 40, 40) 147584 block_2a_relu_1[0][0]


block_2a_bn_2 (BatchNormalizati (None, 128, 40, 40) 512 block_2a_conv_2[0][0]


block_2a_relu_2 (Activation) (None, 128, 40, 40) 0 block_2a_bn_2[0][0]


block_2a_conv_3 (Conv2D) (None, 512, 40, 40) 66048 block_2a_relu_2[0][0]


block_2a_conv_shortcut (Conv2D) (None, 512, 40, 40) 131584 block_1c_relu[0][0]


block_2a_bn_3 (BatchNormalizati (None, 512, 40, 40) 2048 block_2a_conv_3[0][0]


block_2a_bn_shortcut (BatchNorm (None, 512, 40, 40) 2048 block_2a_conv_shortcut[0][0]


add_4 (Add) (None, 512, 40, 40) 0 block_2a_bn_3[0][0]
block_2a_bn_shortcut[0][0]


block_2a_relu (Activation) (None, 512, 40, 40) 0 add_4[0][0]


block_2b_conv_1 (Conv2D) (None, 128, 40, 40) 65664 block_2a_relu[0][0]


block_2b_bn_1 (BatchNormalizati (None, 128, 40, 40) 512 block_2b_conv_1[0][0]


block_2b_relu_1 (Activation) (None, 128, 40, 40) 0 block_2b_bn_1[0][0]


block_2b_conv_2 (Conv2D) (None, 128, 40, 40) 147584 block_2b_relu_1[0][0]


block_2b_bn_2 (BatchNormalizati (None, 128, 40, 40) 512 block_2b_conv_2[0][0]


block_2b_relu_2 (Activation) (None, 128, 40, 40) 0 block_2b_bn_2[0][0]


block_2b_conv_3 (Conv2D) (None, 512, 40, 40) 66048 block_2b_relu_2[0][0]


block_2b_conv_shortcut (Conv2D) (None, 512, 40, 40) 262656 block_2a_relu[0][0]


block_2b_bn_3 (BatchNormalizati (None, 512, 40, 40) 2048 block_2b_conv_3[0][0]


block_2b_bn_shortcut (BatchNorm (None, 512, 40, 40) 2048 block_2b_conv_shortcut[0][0]


add_5 (Add) (None, 512, 40, 40) 0 block_2b_bn_3[0][0]
block_2b_bn_shortcut[0][0]


block_2b_relu (Activation) (None, 512, 40, 40) 0 add_5[0][0]


block_2c_conv_1 (Conv2D) (None, 128, 40, 40) 65664 block_2b_relu[0][0]


block_2c_bn_1 (BatchNormalizati (None, 128, 40, 40) 512 block_2c_conv_1[0][0]


block_2c_relu_1 (Activation) (None, 128, 40, 40) 0 block_2c_bn_1[0][0]


block_2c_conv_2 (Conv2D) (None, 128, 40, 40) 147584 block_2c_relu_1[0][0]


block_2c_bn_2 (BatchNormalizati (None, 128, 40, 40) 512 block_2c_conv_2[0][0]


block_2c_relu_2 (Activation) (None, 128, 40, 40) 0 block_2c_bn_2[0][0]


block_2c_conv_3 (Conv2D) (None, 512, 40, 40) 66048 block_2c_relu_2[0][0]


block_2c_conv_shortcut (Conv2D) (None, 512, 40, 40) 262656 block_2b_relu[0][0]


block_2c_bn_3 (BatchNormalizati (None, 512, 40, 40) 2048 block_2c_conv_3[0][0]


block_2c_bn_shortcut (BatchNorm (None, 512, 40, 40) 2048 block_2c_conv_shortcut[0][0]


add_6 (Add) (None, 512, 40, 40) 0 block_2c_bn_3[0][0]
block_2c_bn_shortcut[0][0]


block_2c_relu (Activation) (None, 512, 40, 40) 0 add_6[0][0]


block_2d_conv_1 (Conv2D) (None, 128, 40, 40) 65664 block_2c_relu[0][0]


block_2d_bn_1 (BatchNormalizati (None, 128, 40, 40) 512 block_2d_conv_1[0][0]


block_2d_relu_1 (Activation) (None, 128, 40, 40) 0 block_2d_bn_1[0][0]


block_2d_conv_2 (Conv2D) (None, 128, 40, 40) 147584 block_2d_relu_1[0][0]


block_2d_bn_2 (BatchNormalizati (None, 128, 40, 40) 512 block_2d_conv_2[0][0]


block_2d_relu_2 (Activation) (None, 128, 40, 40) 0 block_2d_bn_2[0][0]


block_2d_conv_3 (Conv2D) (None, 512, 40, 40) 66048 block_2d_relu_2[0][0]


block_2d_conv_shortcut (Conv2D) (None, 512, 40, 40) 262656 block_2c_relu[0][0]


block_2d_bn_3 (BatchNormalizati (None, 512, 40, 40) 2048 block_2d_conv_3[0][0]


block_2d_bn_shortcut (BatchNorm (None, 512, 40, 40) 2048 block_2d_conv_shortcut[0][0]


add_7 (Add) (None, 512, 40, 40) 0 block_2d_bn_3[0][0]
block_2d_bn_shortcut[0][0]


block_2d_relu (Activation) (None, 512, 40, 40) 0 add_7[0][0]


block_3a_conv_1 (Conv2D) (None, 256, 20, 20) 131328 block_2d_relu[0][0]


block_3a_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3a_conv_1[0][0]


block_3a_relu_1 (Activation) (None, 256, 20, 20) 0 block_3a_bn_1[0][0]


block_3a_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3a_relu_1[0][0]


block_3a_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3a_conv_2[0][0]


block_3a_relu_2 (Activation) (None, 256, 20, 20) 0 block_3a_bn_2[0][0]


block_3a_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3a_relu_2[0][0]


block_3a_conv_shortcut (Conv2D) (None, 1024, 20, 20) 525312 block_2d_relu[0][0]


block_3a_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3a_conv_3[0][0]


block_3a_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3a_conv_shortcut[0][0]


add_8 (Add) (None, 1024, 20, 20) 0 block_3a_bn_3[0][0]
block_3a_bn_shortcut[0][0]


block_3a_relu (Activation) (None, 1024, 20, 20) 0 add_8[0][0]


block_3b_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3a_relu[0][0]


block_3b_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3b_conv_1[0][0]


block_3b_relu_1 (Activation) (None, 256, 20, 20) 0 block_3b_bn_1[0][0]


block_3b_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3b_relu_1[0][0]


block_3b_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3b_conv_2[0][0]


block_3b_relu_2 (Activation) (None, 256, 20, 20) 0 block_3b_bn_2[0][0]


block_3b_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3b_relu_2[0][0]


block_3b_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3a_relu[0][0]


block_3b_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3b_conv_3[0][0]


block_3b_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3b_conv_shortcut[0][0]


add_9 (Add) (None, 1024, 20, 20) 0 block_3b_bn_3[0][0]
block_3b_bn_shortcut[0][0]


block_3b_relu (Activation) (None, 1024, 20, 20) 0 add_9[0][0]


block_3c_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3b_relu[0][0]


block_3c_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3c_conv_1[0][0]


block_3c_relu_1 (Activation) (None, 256, 20, 20) 0 block_3c_bn_1[0][0]


block_3c_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3c_relu_1[0][0]


block_3c_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3c_conv_2[0][0]


block_3c_relu_2 (Activation) (None, 256, 20, 20) 0 block_3c_bn_2[0][0]


block_3c_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3c_relu_2[0][0]


block_3c_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3b_relu[0][0]


block_3c_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3c_conv_3[0][0]


block_3c_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3c_conv_shortcut[0][0]


add_10 (Add) (None, 1024, 20, 20) 0 block_3c_bn_3[0][0]
block_3c_bn_shortcut[0][0]


block_3c_relu (Activation) (None, 1024, 20, 20) 0 add_10[0][0]


block_3d_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3c_relu[0][0]


block_3d_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3d_conv_1[0][0]


block_3d_relu_1 (Activation) (None, 256, 20, 20) 0 block_3d_bn_1[0][0]


block_3d_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3d_relu_1[0][0]


block_3d_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3d_conv_2[0][0]


block_3d_relu_2 (Activation) (None, 256, 20, 20) 0 block_3d_bn_2[0][0]


block_3d_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3d_relu_2[0][0]


block_3d_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3c_relu[0][0]


block_3d_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3d_conv_3[0][0]


block_3d_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3d_conv_shortcut[0][0]


add_11 (Add) (None, 1024, 20, 20) 0 block_3d_bn_3[0][0]
block_3d_bn_shortcut[0][0]


block_3d_relu (Activation) (None, 1024, 20, 20) 0 add_11[0][0]


block_3e_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3d_relu[0][0]


block_3e_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3e_conv_1[0][0]


block_3e_relu_1 (Activation) (None, 256, 20, 20) 0 block_3e_bn_1[0][0]


block_3e_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3e_relu_1[0][0]


block_3e_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3e_conv_2[0][0]


block_3e_relu_2 (Activation) (None, 256, 20, 20) 0 block_3e_bn_2[0][0]


block_3e_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3e_relu_2[0][0]


block_3e_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3d_relu[0][0]


block_3e_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3e_conv_3[0][0]


block_3e_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3e_conv_shortcut[0][0]


add_12 (Add) (None, 1024, 20, 20) 0 block_3e_bn_3[0][0]
block_3e_bn_shortcut[0][0]


block_3e_relu (Activation) (None, 1024, 20, 20) 0 add_12[0][0]


block_3f_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3e_relu[0][0]


block_3f_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3f_conv_1[0][0]


block_3f_relu_1 (Activation) (None, 256, 20, 20) 0 block_3f_bn_1[0][0]


block_3f_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3f_relu_1[0][0]


block_3f_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3f_conv_2[0][0]


block_3f_relu_2 (Activation) (None, 256, 20, 20) 0 block_3f_bn_2[0][0]


block_3f_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3f_relu_2[0][0]


block_3f_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3e_relu[0][0]


block_3f_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3f_conv_3[0][0]


block_3f_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3f_conv_shortcut[0][0]


add_13 (Add) (None, 1024, 20, 20) 0 block_3f_bn_3[0][0]
block_3f_bn_shortcut[0][0]


block_3f_relu (Activation) (None, 1024, 20, 20) 0 add_13[0][0]


block_3g_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3f_relu[0][0]


block_3g_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3g_conv_1[0][0]


block_3g_relu_1 (Activation) (None, 256, 20, 20) 0 block_3g_bn_1[0][0]


block_3g_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3g_relu_1[0][0]


block_3g_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3g_conv_2[0][0]


block_3g_relu_2 (Activation) (None, 256, 20, 20) 0 block_3g_bn_2[0][0]


block_3g_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3g_relu_2[0][0]


block_3g_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3f_relu[0][0]


block_3g_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3g_conv_3[0][0]


block_3g_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3g_conv_shortcut[0][0]


add_14 (Add) (None, 1024, 20, 20) 0 block_3g_bn_3[0][0]
block_3g_bn_shortcut[0][0]


block_3g_relu (Activation) (None, 1024, 20, 20) 0 add_14[0][0]


block_3h_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3g_relu[0][0]


block_3h_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3h_conv_1[0][0]


block_3h_relu_1 (Activation) (None, 256, 20, 20) 0 block_3h_bn_1[0][0]


block_3h_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3h_relu_1[0][0]


block_3h_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3h_conv_2[0][0]


block_3h_relu_2 (Activation) (None, 256, 20, 20) 0 block_3h_bn_2[0][0]


block_3h_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3h_relu_2[0][0]


block_3h_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3g_relu[0][0]


block_3h_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3h_conv_3[0][0]


block_3h_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3h_conv_shortcut[0][0]


add_15 (Add) (None, 1024, 20, 20) 0 block_3h_bn_3[0][0]
block_3h_bn_shortcut[0][0]


block_3h_relu (Activation) (None, 1024, 20, 20) 0 add_15[0][0]


block_3i_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3h_relu[0][0]


block_3i_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3i_conv_1[0][0]


block_3i_relu_1 (Activation) (None, 256, 20, 20) 0 block_3i_bn_1[0][0]


block_3i_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3i_relu_1[0][0]


block_3i_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3i_conv_2[0][0]


block_3i_relu_2 (Activation) (None, 256, 20, 20) 0 block_3i_bn_2[0][0]


block_3i_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3i_relu_2[0][0]


block_3i_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3h_relu[0][0]


block_3i_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3i_conv_3[0][0]


block_3i_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3i_conv_shortcut[0][0]


add_16 (Add) (None, 1024, 20, 20) 0 block_3i_bn_3[0][0]
block_3i_bn_shortcut[0][0]


block_3i_relu (Activation) (None, 1024, 20, 20) 0 add_16[0][0]


block_3j_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3i_relu[0][0]


block_3j_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3j_conv_1[0][0]


block_3j_relu_1 (Activation) (None, 256, 20, 20) 0 block_3j_bn_1[0][0]


block_3j_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3j_relu_1[0][0]


block_3j_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3j_conv_2[0][0]


block_3j_relu_2 (Activation) (None, 256, 20, 20) 0 block_3j_bn_2[0][0]


block_3j_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3j_relu_2[0][0]


block_3j_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3i_relu[0][0]


block_3j_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3j_conv_3[0][0]


block_3j_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3j_conv_shortcut[0][0]


add_17 (Add) (None, 1024, 20, 20) 0 block_3j_bn_3[0][0]
block_3j_bn_shortcut[0][0]


block_3j_relu (Activation) (None, 1024, 20, 20) 0 add_17[0][0]


block_3k_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3j_relu[0][0]


block_3k_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3k_conv_1[0][0]


block_3k_relu_1 (Activation) (None, 256, 20, 20) 0 block_3k_bn_1[0][0]


block_3k_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3k_relu_1[0][0]


block_3k_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3k_conv_2[0][0]


block_3k_relu_2 (Activation) (None, 256, 20, 20) 0 block_3k_bn_2[0][0]


block_3k_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3k_relu_2[0][0]


block_3k_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3j_relu[0][0]


block_3k_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3k_conv_3[0][0]


block_3k_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3k_conv_shortcut[0][0]


add_18 (Add) (None, 1024, 20, 20) 0 block_3k_bn_3[0][0]
block_3k_bn_shortcut[0][0]


block_3k_relu (Activation) (None, 1024, 20, 20) 0 add_18[0][0]


block_3l_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3k_relu[0][0]


block_3l_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3l_conv_1[0][0]


block_3l_relu_1 (Activation) (None, 256, 20, 20) 0 block_3l_bn_1[0][0]


block_3l_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3l_relu_1[0][0]


block_3l_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3l_conv_2[0][0]


block_3l_relu_2 (Activation) (None, 256, 20, 20) 0 block_3l_bn_2[0][0]


block_3l_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3l_relu_2[0][0]


block_3l_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3k_relu[0][0]


block_3l_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3l_conv_3[0][0]


block_3l_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3l_conv_shortcut[0][0]


add_19 (Add) (None, 1024, 20, 20) 0 block_3l_bn_3[0][0]
block_3l_bn_shortcut[0][0]


block_3l_relu (Activation) (None, 1024, 20, 20) 0 add_19[0][0]


block_3m_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3l_relu[0][0]


block_3m_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3m_conv_1[0][0]


block_3m_relu_1 (Activation) (None, 256, 20, 20) 0 block_3m_bn_1[0][0]


block_3m_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3m_relu_1[0][0]


block_3m_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3m_conv_2[0][0]


block_3m_relu_2 (Activation) (None, 256, 20, 20) 0 block_3m_bn_2[0][0]


block_3m_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3m_relu_2[0][0]


block_3m_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3l_relu[0][0]


block_3m_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3m_conv_3[0][0]


block_3m_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3m_conv_shortcut[0][0]


add_20 (Add) (None, 1024, 20, 20) 0 block_3m_bn_3[0][0]
block_3m_bn_shortcut[0][0]


block_3m_relu (Activation) (None, 1024, 20, 20) 0 add_20[0][0]


block_3n_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3m_relu[0][0]


block_3n_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3n_conv_1[0][0]


block_3n_relu_1 (Activation) (None, 256, 20, 20) 0 block_3n_bn_1[0][0]


block_3n_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3n_relu_1[0][0]


block_3n_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3n_conv_2[0][0]


block_3n_relu_2 (Activation) (None, 256, 20, 20) 0 block_3n_bn_2[0][0]


block_3n_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3n_relu_2[0][0]


block_3n_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3m_relu[0][0]


block_3n_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3n_conv_3[0][0]


block_3n_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3n_conv_shortcut[0][0]


add_21 (Add) (None, 1024, 20, 20) 0 block_3n_bn_3[0][0]
block_3n_bn_shortcut[0][0]


block_3n_relu (Activation) (None, 1024, 20, 20) 0 add_21[0][0]


block_3o_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3n_relu[0][0]


block_3o_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3o_conv_1[0][0]


block_3o_relu_1 (Activation) (None, 256, 20, 20) 0 block_3o_bn_1[0][0]


block_3o_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3o_relu_1[0][0]


block_3o_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3o_conv_2[0][0]


block_3o_relu_2 (Activation) (None, 256, 20, 20) 0 block_3o_bn_2[0][0]


block_3o_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3o_relu_2[0][0]


block_3o_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3n_relu[0][0]


block_3o_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3o_conv_3[0][0]


block_3o_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3o_conv_shortcut[0][0]


add_22 (Add) (None, 1024, 20, 20) 0 block_3o_bn_3[0][0]
block_3o_bn_shortcut[0][0]


block_3o_relu (Activation) (None, 1024, 20, 20) 0 add_22[0][0]


block_3p_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3o_relu[0][0]


block_3p_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3p_conv_1[0][0]


block_3p_relu_1 (Activation) (None, 256, 20, 20) 0 block_3p_bn_1[0][0]


block_3p_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3p_relu_1[0][0]


block_3p_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3p_conv_2[0][0]


block_3p_relu_2 (Activation) (None, 256, 20, 20) 0 block_3p_bn_2[0][0]


block_3p_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3p_relu_2[0][0]


block_3p_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3o_relu[0][0]


block_3p_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3p_conv_3[0][0]


block_3p_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3p_conv_shortcut[0][0]


add_23 (Add) (None, 1024, 20, 20) 0 block_3p_bn_3[0][0]
block_3p_bn_shortcut[0][0]


block_3p_relu (Activation) (None, 1024, 20, 20) 0 add_23[0][0]


block_3q_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3p_relu[0][0]


block_3q_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3q_conv_1[0][0]


block_3q_relu_1 (Activation) (None, 256, 20, 20) 0 block_3q_bn_1[0][0]


block_3q_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3q_relu_1[0][0]


block_3q_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3q_conv_2[0][0]


block_3q_relu_2 (Activation) (None, 256, 20, 20) 0 block_3q_bn_2[0][0]


block_3q_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3q_relu_2[0][0]


block_3q_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3p_relu[0][0]


block_3q_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3q_conv_3[0][0]


block_3q_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3q_conv_shortcut[0][0]


add_24 (Add) (None, 1024, 20, 20) 0 block_3q_bn_3[0][0]
block_3q_bn_shortcut[0][0]


block_3q_relu (Activation) (None, 1024, 20, 20) 0 add_24[0][0]


block_3r_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3q_relu[0][0]


block_3r_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3r_conv_1[0][0]


block_3r_relu_1 (Activation) (None, 256, 20, 20) 0 block_3r_bn_1[0][0]


block_3r_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3r_relu_1[0][0]


block_3r_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3r_conv_2[0][0]


block_3r_relu_2 (Activation) (None, 256, 20, 20) 0 block_3r_bn_2[0][0]


block_3r_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3r_relu_2[0][0]


block_3r_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3q_relu[0][0]


block_3r_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3r_conv_3[0][0]


block_3r_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3r_conv_shortcut[0][0]


add_25 (Add) (None, 1024, 20, 20) 0 block_3r_bn_3[0][0]
block_3r_bn_shortcut[0][0]


block_3r_relu (Activation) (None, 1024, 20, 20) 0 add_25[0][0]


block_3s_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3r_relu[0][0]


block_3s_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3s_conv_1[0][0]


block_3s_relu_1 (Activation) (None, 256, 20, 20) 0 block_3s_bn_1[0][0]


block_3s_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3s_relu_1[0][0]


block_3s_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3s_conv_2[0][0]


block_3s_relu_2 (Activation) (None, 256, 20, 20) 0 block_3s_bn_2[0][0]


block_3s_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3s_relu_2[0][0]


block_3s_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3r_relu[0][0]


block_3s_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3s_conv_3[0][0]


block_3s_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3s_conv_shortcut[0][0]


add_26 (Add) (None, 1024, 20, 20) 0 block_3s_bn_3[0][0]
block_3s_bn_shortcut[0][0]


block_3s_relu (Activation) (None, 1024, 20, 20) 0 add_26[0][0]


block_3t_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3s_relu[0][0]


block_3t_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3t_conv_1[0][0]


block_3t_relu_1 (Activation) (None, 256, 20, 20) 0 block_3t_bn_1[0][0]


block_3t_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3t_relu_1[0][0]


block_3t_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3t_conv_2[0][0]


block_3t_relu_2 (Activation) (None, 256, 20, 20) 0 block_3t_bn_2[0][0]


block_3t_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3t_relu_2[0][0]


block_3t_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3s_relu[0][0]


block_3t_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3t_conv_3[0][0]


block_3t_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3t_conv_shortcut[0][0]


add_27 (Add) (None, 1024, 20, 20) 0 block_3t_bn_3[0][0]
block_3t_bn_shortcut[0][0]


block_3t_relu (Activation) (None, 1024, 20, 20) 0 add_27[0][0]


block_3u_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3t_relu[0][0]


block_3u_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3u_conv_1[0][0]


block_3u_relu_1 (Activation) (None, 256, 20, 20) 0 block_3u_bn_1[0][0]


block_3u_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3u_relu_1[0][0]


block_3u_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3u_conv_2[0][0]


block_3u_relu_2 (Activation) (None, 256, 20, 20) 0 block_3u_bn_2[0][0]


block_3u_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3u_relu_2[0][0]


block_3u_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3t_relu[0][0]


block_3u_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3u_conv_3[0][0]


block_3u_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3u_conv_shortcut[0][0]


add_28 (Add) (None, 1024, 20, 20) 0 block_3u_bn_3[0][0]
block_3u_bn_shortcut[0][0]


block_3u_relu (Activation) (None, 1024, 20, 20) 0 add_28[0][0]


block_3v_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3u_relu[0][0]


block_3v_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3v_conv_1[0][0]


block_3v_relu_1 (Activation) (None, 256, 20, 20) 0 block_3v_bn_1[0][0]


block_3v_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3v_relu_1[0][0]


block_3v_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3v_conv_2[0][0]


block_3v_relu_2 (Activation) (None, 256, 20, 20) 0 block_3v_bn_2[0][0]


block_3v_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3v_relu_2[0][0]


block_3v_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3u_relu[0][0]


block_3v_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3v_conv_3[0][0]


block_3v_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3v_conv_shortcut[0][0]


add_29 (Add) (None, 1024, 20, 20) 0 block_3v_bn_3[0][0]
block_3v_bn_shortcut[0][0]


block_3v_relu (Activation) (None, 1024, 20, 20) 0 add_29[0][0]


block_3w_conv_1 (Conv2D) (None, 256, 20, 20) 262400 block_3v_relu[0][0]


block_3w_bn_1 (BatchNormalizati (None, 256, 20, 20) 1024 block_3w_conv_1[0][0]


block_3w_relu_1 (Activation) (None, 256, 20, 20) 0 block_3w_bn_1[0][0]


block_3w_conv_2 (Conv2D) (None, 256, 20, 20) 590080 block_3w_relu_1[0][0]


block_3w_bn_2 (BatchNormalizati (None, 256, 20, 20) 1024 block_3w_conv_2[0][0]


block_3w_relu_2 (Activation) (None, 256, 20, 20) 0 block_3w_bn_2[0][0]


block_3w_conv_3 (Conv2D) (None, 1024, 20, 20) 263168 block_3w_relu_2[0][0]


block_3w_conv_shortcut (Conv2D) (None, 1024, 20, 20) 1049600 block_3v_relu[0][0]


block_3w_bn_3 (BatchNormalizati (None, 1024, 20, 20) 4096 block_3w_conv_3[0][0]


block_3w_bn_shortcut (BatchNorm (None, 1024, 20, 20) 4096 block_3w_conv_shortcut[0][0]


add_30 (Add) (None, 1024, 20, 20) 0 block_3w_bn_3[0][0]
block_3w_bn_shortcut[0][0]


block_3w_relu (Activation) (None, 1024, 20, 20) 0 add_30[0][0]


block_4a_conv_1 (Conv2D) (None, 512, 20, 20) 524800 block_3w_relu[0][0]


block_4a_bn_1 (BatchNormalizati (None, 512, 20, 20) 2048 block_4a_conv_1[0][0]


block_4a_relu_1 (Activation) (None, 512, 20, 20) 0 block_4a_bn_1[0][0]


block_4a_conv_2 (Conv2D) (None, 512, 20, 20) 2359808 block_4a_relu_1[0][0]


block_4a_bn_2 (BatchNormalizati (None, 512, 20, 20) 2048 block_4a_conv_2[0][0]


block_4a_relu_2 (Activation) (None, 512, 20, 20) 0 block_4a_bn_2[0][0]


block_4a_conv_3 (Conv2D) (None, 2048, 20, 20) 1050624 block_4a_relu_2[0][0]


block_4a_conv_shortcut (Conv2D) (None, 2048, 20, 20) 2099200 block_3w_relu[0][0]


block_4a_bn_3 (BatchNormalizati (None, 2048, 20, 20) 8192 block_4a_conv_3[0][0]


block_4a_bn_shortcut (BatchNorm (None, 2048, 20, 20) 8192 block_4a_conv_shortcut[0][0]


add_31 (Add) (None, 2048, 20, 20) 0 block_4a_bn_3[0][0]
block_4a_bn_shortcut[0][0]


block_4a_relu (Activation) (None, 2048, 20, 20) 0 add_31[0][0]


block_4b_conv_1 (Conv2D) (None, 512, 20, 20) 1049088 block_4a_relu[0][0]


block_4b_bn_1 (BatchNormalizati (None, 512, 20, 20) 2048 block_4b_conv_1[0][0]


block_4b_relu_1 (Activation) (None, 512, 20, 20) 0 block_4b_bn_1[0][0]


block_4b_conv_2 (Conv2D) (None, 512, 20, 20) 2359808 block_4b_relu_1[0][0]


block_4b_bn_2 (BatchNormalizati (None, 512, 20, 20) 2048 block_4b_conv_2[0][0]


block_4b_relu_2 (Activation) (None, 512, 20, 20) 0 block_4b_bn_2[0][0]


block_4b_conv_3 (Conv2D) (None, 2048, 20, 20) 1050624 block_4b_relu_2[0][0]


block_4b_conv_shortcut (Conv2D) (None, 2048, 20, 20) 4196352 block_4a_relu[0][0]


block_4b_bn_3 (BatchNormalizati (None, 2048, 20, 20) 8192 block_4b_conv_3[0][0]


block_4b_bn_shortcut (BatchNorm (None, 2048, 20, 20) 8192 block_4b_conv_shortcut[0][0]


add_32 (Add) (None, 2048, 20, 20) 0 block_4b_bn_3[0][0]
block_4b_bn_shortcut[0][0]


block_4b_relu (Activation) (None, 2048, 20, 20) 0 add_32[0][0]


block_4c_conv_1 (Conv2D) (None, 512, 20, 20) 1049088 block_4b_relu[0][0]


block_4c_bn_1 (BatchNormalizati (None, 512, 20, 20) 2048 block_4c_conv_1[0][0]


block_4c_relu_1 (Activation) (None, 512, 20, 20) 0 block_4c_bn_1[0][0]


block_4c_conv_2 (Conv2D) (None, 512, 20, 20) 2359808 block_4c_relu_1[0][0]


block_4c_bn_2 (BatchNormalizati (None, 512, 20, 20) 2048 block_4c_conv_2[0][0]


block_4c_relu_2 (Activation) (None, 512, 20, 20) 0 block_4c_bn_2[0][0]


block_4c_conv_3 (Conv2D) (None, 2048, 20, 20) 1050624 block_4c_relu_2[0][0]


block_4c_conv_shortcut (Conv2D) (None, 2048, 20, 20) 4196352 block_4b_relu[0][0]


block_4c_bn_3 (BatchNormalizati (None, 2048, 20, 20) 8192 block_4c_conv_3[0][0]


block_4c_bn_shortcut (BatchNorm (None, 2048, 20, 20) 8192 block_4c_conv_shortcut[0][0]


add_33 (Add) (None, 2048, 20, 20) 0 block_4c_bn_3[0][0]
block_4c_bn_shortcut[0][0]


block_4c_relu (Activation) (None, 2048, 20, 20) 0 add_33[0][0]


conv2d_transpose_1 (Conv2DTrans (None, 256, 40, 40) 8388864 block_4c_relu[0][0]


concatenate_1 (Concatenate) (None, 768, 40, 40) 0 conv2d_transpose_1[0][0]
block_2b_relu[0][0]


batch_normalization_1 (BatchNor (None, 768, 40, 40) 3072 concatenate_1[0][0]


activation_2 (Activation) (None, 768, 40, 40) 0 batch_normalization_1[0][0]


conv2d_1 (Conv2D) (None, 256, 40, 40) 1769728 activation_2[0][0]


batch_normalization_2 (BatchNor (None, 256, 40, 40) 1024 conv2d_1[0][0]


activation_3 (Activation) (None, 256, 40, 40) 0 batch_normalization_2[0][0]


conv2d_transpose_2 (Conv2DTrans (None, 128, 80, 80) 524416 activation_3[0][0]


concatenate_2 (Concatenate) (None, 384, 80, 80) 0 conv2d_transpose_2[0][0]
block_1b_relu[0][0]


batch_normalization_3 (BatchNor (None, 384, 80, 80) 1536 concatenate_2[0][0]


activation_4 (Activation) (None, 384, 80, 80) 0 batch_normalization_3[0][0]


conv2d_2 (Conv2D) (None, 128, 80, 80) 442496 activation_4[0][0]


batch_normalization_4 (BatchNor (None, 128, 80, 80) 512 conv2d_2[0][0]


activation_5 (Activation) (None, 128, 80, 80) 0 batch_normalization_4[0][0]


conv2d_transpose_3 (Conv2DTrans (None, 64, 160, 160) 131136 activation_5[0][0]


concatenate_3 (Concatenate) (None, 128, 160, 160 0 conv2d_transpose_3[0][0]
activation_1[0][0]


batch_normalization_5 (BatchNor (None, 128, 160, 160 512 concatenate_3[0][0]


activation_6 (Activation) (None, 128, 160, 160 0 batch_normalization_5[0][0]


conv2d_3 (Conv2D) (None, 64, 160, 160) 73792 activation_6[0][0]


batch_normalization_6 (BatchNor (None, 64, 160, 160) 256 conv2d_3[0][0]


activation_7 (Activation) (None, 64, 160, 160) 0 batch_normalization_6[0][0]


conv2d_transpose_4 (Conv2DTrans (None, 64, 320, 320) 65600 activation_7[0][0]


batch_normalization_7 (BatchNor (None, 64, 320, 320) 256 conv2d_transpose_4[0][0]


activation_8 (Activation) (None, 64, 320, 320) 0 batch_normalization_7[0][0]


conv2d_4 (Conv2D) (None, 64, 320, 320) 36928 activation_8[0][0]


batch_normalization_8 (BatchNor (None, 64, 320, 320) 256 conv2d_4[0][0]


activation_9 (Activation) (None, 64, 320, 320) 0 batch_normalization_8[0][0]


conv2d_5 (Conv2D) (None, 2, 320, 320) 1154 activation_9[0][0]


permute_1 (Permute) (None, 320, 320, 2) 0 conv2d_5[0][0]


softmax_1 (Softmax) (None, 320, 320, 2) 0 permute_1[0][0]

Total params: 86,617,858
Trainable params: 86,451,458
Non-trainable params: 166,400


2021-07-12 02:52:25,540 [INFO] iva.unet.model.model_io: Loaded weights Successfully for Export
The ONNX operator number change on the optimization: 717 → 292
2021-07-12 02:53:36,068 [INFO] keras2onnx: The ONNX operator number change on the optimization: 717 → 292
2021-07-12 02:53:36,370 [WARNING] onnxmltools: The maximum opset needed by this model is only 11.
2021-07-12 02:53:40,531 [INFO] numba.cuda.cudadrv.driver: init
2021-07-12 02:53:40,718 [INFO] iva.unet.export.unet_exporter: Converted model was saved into /workspace/tlt-experiments/unet/Kvasir_SEG_experiment_unpruned/weights/model_Kvasir_SEG.etlt
2021-07-12 02:54:33,324 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
----- log end
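As a quick sanity check on the model summary above, the per-layer parameter counts of the repeated bottleneck blocks follow the usual Conv2D formula (kh * kw * C_in * C_out, plus C_out for the bias) and BatchNormalization's 4 parameters per channel. Shell arithmetic reproduces them (here `block_3x` stands for any of the identical blocks 3h through 3w):

```shell
# Conv2D params = kh*kw*C_in*C_out + C_out (bias); BatchNorm params = 4 per channel
echo $((1*1*1024*256  + 256))   # block_3x_conv_1        -> 262400
echo $((3*3*256*256   + 256))   # block_3x_conv_2        -> 590080
echo $((1*1*256*1024  + 1024))  # block_3x_conv_3        -> 263168
echo $((1*1*1024*1024 + 1024))  # block_3x_conv_shortcut -> 1049600
echo $((4*1024))                # block_3x_bn_3 / bn_shortcut -> 4096
```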

The next step was to use the following command to verify the TensorRT engine, and that is where the error occurred.
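The actual command was not included in the post. For reference only, a hypothetical sketch of the evaluate step is below, following the general TLT 3.0 CLI convention (`-e` spec file, `-m` model, `-o` output directory, `-k` encryption key); the paths are assembled from this thread's setup and the key is a placeholder, so none of it is verbatim from the original post:

```shell
# Hypothetical sketch only -- adjust paths and key to your own setup.
tlt unet evaluate \
  -e /workspace/tlt-experiments/examples/unet/specs/unet_train_resnet_unet_Kvasir_SEG.txt \
  -m /workspace/tlt-experiments/unet/Kvasir_SEG_experiment_unpruned/weights/model_Kvasir_SEG.etlt \
  -o /workspace/tlt-experiments/unet/Kvasir_SEG_experiment_unpruned/eval \
  -k $KEY
```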

Thanks,
Ted

May I know whether, after updating to the latest TLT version, you ever trained a new tlt model?
Can I assume that you generated the trt engine with the old tlt model?

Hi Morganh,

Yes, after upgrading to the latest TLT, I removed all outputs and ran the playbook from the start. It downloads the pretrained model from NGC, runs TLT training, and then generates the engine…

So you are training a new tlt model with the latest TLT 3.0 docker, right?
I suggest you run for only 1 or 2 epochs, then check whether there is still an issue.
