Full command:
!mask_rcnn inference -i /workspace/tlt/results/not_corrosion \
-o /workspace/tlt/results/tlt_mask_rcnn_corrosion1000_resnet50/annotated_images \
-e /workspace/tlt/results/tlt_mask_rcnn_corrosion1000_resnet50/pruned_model/model.step-90000.tlt/final_spec.txt \
-m /workspace/tlt/results/tlt_mask_rcnn_corrosion1000_resnet50/pruned_model/model.step-90000.tlt/model.tlt \
-t 0.6 \
-k $KEY \
--gpu_index 0 \
--include_mask
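
For reference, a quick way to sanity-check the paths passed via -i, -o, -e and -m from inside the container is a few lines of plain Python; this is only a minimal sketch using the paths copied from the command above (note that the log below reports loading weights from a ".../pruned_model2/..." directory rather than ".../pruned_model/...", so the copy of the model actually on disk may differ from the paths shown here):

import os

# Paths copied verbatim from the inference command above; adjust if the
# directory layout on disk differs (e.g. pruned_model vs pruned_model2).
paths = {
    "-i": "/workspace/tlt/results/not_corrosion",
    "-o": "/workspace/tlt/results/tlt_mask_rcnn_corrosion1000_resnet50/annotated_images",
    "-e": "/workspace/tlt/results/tlt_mask_rcnn_corrosion1000_resnet50/pruned_model/model.step-90000.tlt/final_spec.txt",
    "-m": "/workspace/tlt/results/tlt_mask_rcnn_corrosion1000_resnet50/pruned_model/model.step-90000.tlt/model.tlt",
}
for flag, path in paths.items():
    print(flag, path, "->", "exists" if os.path.exists(path) else "MISSING")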
Full log:
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:117: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:143: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
Label file does not exist. Skipping...
[MaskRCNN] INFO : Running inference...
[MaskRCNN] INFO : Loading weights from /workspace/tlt/results/tlt_mask_rcnn_corrosion1000_resnet50/pruned_model2/model.step-90000.tlt/model.tlt
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/autograph/converters/directives.py:119: The name tf.set_random_seed is deprecated. Please use tf.compat.v1.set_random_seed instead.
WARNING:tensorflow:Entity <function infer.<locals>.infer_input_fn.<locals>.process_path at 0x7f850de6fb70> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <function infer.<locals>.infer_input_fn.<locals>.process_path at 0x7f850de6fb70>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
[MaskRCNN] INFO : ***********************
[MaskRCNN] INFO : Loading model graph...
[MaskRCNN] INFO : ***********************
WARNING:tensorflow:Entity <bound method AnchorLayer.call of <iva.mask_rcnn.layers.anchor_layer.AnchorLayer object at 0x7f83df32a400>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method AnchorLayer.call of <iva.mask_rcnn.layers.anchor_layer.AnchorLayer object at 0x7f83df32a400>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Entity <bound method MultilevelProposal.call of <iva.mask_rcnn.layers.multilevel_proposal_layer.MultilevelProposal object at 0x7f83df32a5f8>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method MultilevelProposal.call of <iva.mask_rcnn.layers.multilevel_proposal_layer.MultilevelProposal object at 0x7f83df32a5f8>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
[MaskRCNN] INFO : [ROI OPs] Using Batched NMS... Scope: MLP/multilevel_propose_rois/level_2/
[MaskRCNN] INFO : [ROI OPs] Using Batched NMS... Scope: MLP/multilevel_propose_rois/level_3/
[MaskRCNN] INFO : [ROI OPs] Using Batched NMS... Scope: MLP/multilevel_propose_rois/level_4/
[MaskRCNN] INFO : [ROI OPs] Using Batched NMS... Scope: MLP/multilevel_propose_rois/level_5/
[MaskRCNN] INFO : [ROI OPs] Using Batched NMS... Scope: MLP/multilevel_propose_rois/level_6/
WARNING:tensorflow:Entity <bound method MultilevelCropResize.call of <iva.mask_rcnn.layers.multilevel_crop_resize_layer.MultilevelCropResize object at 0x7f83df32a7f0>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method MultilevelCropResize.call of <iva.mask_rcnn.layers.multilevel_crop_resize_layer.MultilevelCropResize object at 0x7f83df32a7f0>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Entity <bound method ReshapeLayer.call of <iva.mask_rcnn.layers.reshape_layer.ReshapeLayer object at 0x7f83df32ac88>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method ReshapeLayer.call of <iva.mask_rcnn.layers.reshape_layer.ReshapeLayer object at 0x7f83df32ac88>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Entity <bound method ReshapeLayer.call of <iva.mask_rcnn.layers.reshape_layer.ReshapeLayer object at 0x7f83df3339e8>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method ReshapeLayer.call of <iva.mask_rcnn.layers.reshape_layer.ReshapeLayer object at 0x7f83df3339e8>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Entity <bound method ReshapeLayer.call of <iva.mask_rcnn.layers.reshape_layer.ReshapeLayer object at 0x7f83df333b00>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method ReshapeLayer.call of <iva.mask_rcnn.layers.reshape_layer.ReshapeLayer object at 0x7f83df333b00>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Entity <bound method GPUDetections.call of <iva.mask_rcnn.layers.gpu_detection_layer.GPUDetections object at 0x7f83df333c18>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method GPUDetections.call of <iva.mask_rcnn.layers.gpu_detection_layer.GPUDetections object at 0x7f83df333c18>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Entity <bound method MultilevelCropResize.call of <iva.mask_rcnn.layers.multilevel_crop_resize_layer.MultilevelCropResize object at 0x7f83df333f28>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method MultilevelCropResize.call of <iva.mask_rcnn.layers.multilevel_crop_resize_layer.MultilevelCropResize object at 0x7f83df333f28>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Entity <bound method ReshapeLayer.call of <iva.mask_rcnn.layers.reshape_layer.ReshapeLayer object at 0x7f83df33d080>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method ReshapeLayer.call of <iva.mask_rcnn.layers.reshape_layer.ReshapeLayer object at 0x7f83df33d080>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Entity <bound method MaskPostprocess.call of <iva.mask_rcnn.layers.mask_postprocess_layer.MaskPostprocess object at 0x7f83df344d30>> could not be transformed and will be executed as-is. Please report this to the AutoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: Unable to locate the source code of <bound method MaskPostprocess.call of <iva.mask_rcnn.layers.mask_postprocess_layer.MaskPostprocess object at 0x7f83df344d30>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
image_input (ImageInput) [(2, 3, 640, 640)] 0
__________________________________________________________________________________________________
conv1 (Conv2D) (2, 64, 320, 320) 9408 image_input[0][0]
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization) (2, 64, 320, 320) 256 conv1[0][0]
__________________________________________________________________________________________________
activation (Activation) (2, 64, 320, 320) 0 bn_conv1[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (2, 64, 160, 160) 0 activation[0][0]
__________________________________________________________________________________________________
block_1a_conv_1 (Conv2D) (2, 64, 160, 160) 4096 max_pooling2d[0][0]
__________________________________________________________________________________________________
block_1a_bn_1 (BatchNormalizati (2, 64, 160, 160) 256 block_1a_conv_1[0][0]
__________________________________________________________________________________________________
block_1a_relu_1 (Activation) (2, 64, 160, 160) 0 block_1a_bn_1[0][0]
__________________________________________________________________________________________________
block_1a_conv_2 (Conv2D) (2, 64, 160, 160) 36864 block_1a_relu_1[0][0]
__________________________________________________________________________________________________
block_1a_bn_2 (BatchNormalizati (2, 64, 160, 160) 256 block_1a_conv_2[0][0]
__________________________________________________________________________________________________
block_1a_relu_2 (Activation) (2, 64, 160, 160) 0 block_1a_bn_2[0][0]
__________________________________________________________________________________________________
block_1a_conv_3 (Conv2D) (2, 256, 160, 160) 16384 block_1a_relu_2[0][0]
__________________________________________________________________________________________________
block_1a_conv_shortcut (Conv2D) (2, 256, 160, 160) 16384 max_pooling2d[0][0]
__________________________________________________________________________________________________
block_1a_bn_3 (BatchNormalizati (2, 256, 160, 160) 1024 block_1a_conv_3[0][0]
__________________________________________________________________________________________________
block_1a_bn_shortcut (BatchNorm (2, 256, 160, 160) 1024 block_1a_conv_shortcut[0][0]
__________________________________________________________________________________________________
add (Add) (2, 256, 160, 160) 0 block_1a_bn_3[0][0]
block_1a_bn_shortcut[0][0]
__________________________________________________________________________________________________
block_1a_relu (Activation) (2, 256, 160, 160) 0 add[0][0]
__________________________________________________________________________________________________
block_1b_conv_1 (Conv2D) (2, 64, 160, 160) 16384 block_1a_relu[0][0]
__________________________________________________________________________________________________
block_1b_bn_1 (BatchNormalizati (2, 64, 160, 160) 256 block_1b_conv_1[0][0]
__________________________________________________________________________________________________
block_1b_relu_1 (Activation) (2, 64, 160, 160) 0 block_1b_bn_1[0][0]
__________________________________________________________________________________________________
block_1b_conv_2 (Conv2D) (2, 64, 160, 160) 36864 block_1b_relu_1[0][0]
__________________________________________________________________________________________________
block_1b_bn_2 (BatchNormalizati (2, 64, 160, 160) 256 block_1b_conv_2[0][0]
__________________________________________________________________________________________________
block_1b_relu_2 (Activation) (2, 64, 160, 160) 0 block_1b_bn_2[0][0]
__________________________________________________________________________________________________
block_1b_conv_3 (Conv2D) (2, 256, 160, 160) 16384 block_1b_relu_2[0][0]
__________________________________________________________________________________________________
block_1b_bn_3 (BatchNormalizati (2, 256, 160, 160) 1024 block_1b_conv_3[0][0]
__________________________________________________________________________________________________
add_1 (Add) (2, 256, 160, 160) 0 block_1b_bn_3[0][0]
block_1a_relu[0][0]
__________________________________________________________________________________________________
block_1b_relu (Activation) (2, 256, 160, 160) 0 add_1[0][0]
__________________________________________________________________________________________________
block_1c_conv_1 (Conv2D) (2, 64, 160, 160) 16384 block_1b_relu[0][0]
__________________________________________________________________________________________________
block_1c_bn_1 (BatchNormalizati (2, 64, 160, 160) 256 block_1c_conv_1[0][0]
__________________________________________________________________________________________________
block_1c_relu_1 (Activation) (2, 64, 160, 160) 0 block_1c_bn_1[0][0]
__________________________________________________________________________________________________
block_1c_conv_2 (Conv2D) (2, 64, 160, 160) 36864 block_1c_relu_1[0][0]
__________________________________________________________________________________________________
block_1c_bn_2 (BatchNormalizati (2, 64, 160, 160) 256 block_1c_conv_2[0][0]
__________________________________________________________________________________________________
block_1c_relu_2 (Activation) (2, 64, 160, 160) 0 block_1c_bn_2[0][0]
__________________________________________________________________________________________________
block_1c_conv_3 (Conv2D) (2, 256, 160, 160) 16384 block_1c_relu_2[0][0]
__________________________________________________________________________________________________
block_1c_bn_3 (BatchNormalizati (2, 256, 160, 160) 1024 block_1c_conv_3[0][0]
__________________________________________________________________________________________________
add_2 (Add) (2, 256, 160, 160) 0 block_1c_bn_3[0][0]
block_1b_relu[0][0]
__________________________________________________________________________________________________
block_1c_relu (Activation) (2, 256, 160, 160) 0 add_2[0][0]
__________________________________________________________________________________________________
block_2a_conv_1 (Conv2D) (2, 128, 80, 80) 32768 block_1c_relu[0][0]
__________________________________________________________________________________________________
block_2a_bn_1 (BatchNormalizati (2, 128, 80, 80) 512 block_2a_conv_1[0][0]
__________________________________________________________________________________________________
block_2a_relu_1 (Activation) (2, 128, 80, 80) 0 block_2a_bn_1[0][0]
__________________________________________________________________________________________________
block_2a_conv_2 (Conv2D) (2, 128, 80, 80) 147456 block_2a_relu_1[0][0]
__________________________________________________________________________________________________
block_2a_bn_2 (BatchNormalizati (2, 128, 80, 80) 512 block_2a_conv_2[0][0]
__________________________________________________________________________________________________
block_2a_relu_2 (Activation) (2, 128, 80, 80) 0 block_2a_bn_2[0][0]
__________________________________________________________________________________________________
block_2a_conv_3 (Conv2D) (2, 512, 80, 80) 65536 block_2a_relu_2[0][0]
__________________________________________________________________________________________________
block_2a_conv_shortcut (Conv2D) (2, 512, 80, 80) 131072 block_1c_relu[0][0]
__________________________________________________________________________________________________
block_2a_bn_3 (BatchNormalizati (2, 512, 80, 80) 2048 block_2a_conv_3[0][0]
__________________________________________________________________________________________________
block_2a_bn_shortcut (BatchNorm (2, 512, 80, 80) 2048 block_2a_conv_shortcut[0][0]
__________________________________________________________________________________________________
add_3 (Add) (2, 512, 80, 80) 0 block_2a_bn_3[0][0]
block_2a_bn_shortcut[0][0]
__________________________________________________________________________________________________
block_2a_relu (Activation) (2, 512, 80, 80) 0 add_3[0][0]
__________________________________________________________________________________________________
block_2b_conv_1 (Conv2D) (2, 128, 80, 80) 65536 block_2a_relu[0][0]
__________________________________________________________________________________________________
block_2b_bn_1 (BatchNormalizati (2, 128, 80, 80) 512 block_2b_conv_1[0][0]
__________________________________________________________________________________________________
block_2b_relu_1 (Activation) (2, 128, 80, 80) 0 block_2b_bn_1[0][0]
__________________________________________________________________________________________________
block_2b_conv_2 (Conv2D) (2, 128, 80, 80) 147456 block_2b_relu_1[0][0]
__________________________________________________________________________________________________
block_2b_bn_2 (BatchNormalizati (2, 128, 80, 80) 512 block_2b_conv_2[0][0]
__________________________________________________________________________________________________
block_2b_relu_2 (Activation) (2, 128, 80, 80) 0 block_2b_bn_2[0][0]
__________________________________________________________________________________________________
block_2b_conv_3 (Conv2D) (2, 512, 80, 80) 65536 block_2b_relu_2[0][0]
__________________________________________________________________________________________________
block_2b_bn_3 (BatchNormalizati (2, 512, 80, 80) 2048 block_2b_conv_3[0][0]
__________________________________________________________________________________________________
add_4 (Add) (2, 512, 80, 80) 0 block_2b_bn_3[0][0]
block_2a_relu[0][0]
__________________________________________________________________________________________________
block_2b_relu (Activation) (2, 512, 80, 80) 0 add_4[0][0]
__________________________________________________________________________________________________
block_2c_conv_1 (Conv2D) (2, 128, 80, 80) 65536 block_2b_relu[0][0]
__________________________________________________________________________________________________
block_2c_bn_1 (BatchNormalizati (2, 128, 80, 80) 512 block_2c_conv_1[0][0]
__________________________________________________________________________________________________
block_2c_relu_1 (Activation) (2, 128, 80, 80) 0 block_2c_bn_1[0][0]
__________________________________________________________________________________________________
block_2c_conv_2 (Conv2D) (2, 128, 80, 80) 147456 block_2c_relu_1[0][0]
__________________________________________________________________________________________________
block_2c_bn_2 (BatchNormalizati (2, 128, 80, 80) 512 block_2c_conv_2[0][0]
__________________________________________________________________________________________________
block_2c_relu_2 (Activation) (2, 128, 80, 80) 0 block_2c_bn_2[0][0]
__________________________________________________________________________________________________
block_2c_conv_3 (Conv2D) (2, 512, 80, 80) 65536 block_2c_relu_2[0][0]
__________________________________________________________________________________________________
block_2c_bn_3 (BatchNormalizati (2, 512, 80, 80) 2048 block_2c_conv_3[0][0]
__________________________________________________________________________________________________
add_5 (Add) (2, 512, 80, 80) 0 block_2c_bn_3[0][0]
block_2b_relu[0][0]
__________________________________________________________________________________________________
block_2c_relu (Activation) (2, 512, 80, 80) 0 add_5[0][0]
__________________________________________________________________________________________________
block_2d_conv_1 (Conv2D) (2, 128, 80, 80) 65536 block_2c_relu[0][0]
__________________________________________________________________________________________________
block_2d_bn_1 (BatchNormalizati (2, 128, 80, 80) 512 block_2d_conv_1[0][0]
__________________________________________________________________________________________________
block_2d_relu_1 (Activation) (2, 128, 80, 80) 0 block_2d_bn_1[0][0]
__________________________________________________________________________________________________
block_2d_conv_2 (Conv2D) (2, 128, 80, 80) 147456 block_2d_relu_1[0][0]
__________________________________________________________________________________________________
block_2d_bn_2 (BatchNormalizati (2, 128, 80, 80) 512 block_2d_conv_2[0][0]
__________________________________________________________________________________________________
block_2d_relu_2 (Activation) (2, 128, 80, 80) 0 block_2d_bn_2[0][0]
__________________________________________________________________________________________________
block_2d_conv_3 (Conv2D) (2, 512, 80, 80) 65536 block_2d_relu_2[0][0]
__________________________________________________________________________________________________
block_2d_bn_3 (BatchNormalizati (2, 512, 80, 80) 2048 block_2d_conv_3[0][0]
__________________________________________________________________________________________________
add_6 (Add) (2, 512, 80, 80) 0 block_2d_bn_3[0][0]
block_2c_relu[0][0]
__________________________________________________________________________________________________
block_2d_relu (Activation) (2, 512, 80, 80) 0 add_6[0][0]
__________________________________________________________________________________________________
block_3a_conv_1 (Conv2D) (2, 256, 40, 40) 131072 block_2d_relu[0][0]
__________________________________________________________________________________________________
block_3a_bn_1 (BatchNormalizati (2, 256, 40, 40) 1024 block_3a_conv_1[0][0]
__________________________________________________________________________________________________
block_3a_relu_1 (Activation) (2, 256, 40, 40) 0 block_3a_bn_1[0][0]
__________________________________________________________________________________________________
block_3a_conv_2 (Conv2D) (2, 256, 40, 40) 589824 block_3a_relu_1[0][0]
__________________________________________________________________________________________________
block_3a_bn_2 (BatchNormalizati (2, 256, 40, 40) 1024 block_3a_conv_2[0][0]
__________________________________________________________________________________________________
block_3a_relu_2 (Activation) (2, 256, 40, 40) 0 block_3a_bn_2[0][0]
__________________________________________________________________________________________________
block_3a_conv_3 (Conv2D) (2, 1024, 40, 40) 262144 block_3a_relu_2[0][0]
__________________________________________________________________________________________________
block_3a_conv_shortcut (Conv2D) (2, 1024, 40, 40) 524288 block_2d_relu[0][0]
__________________________________________________________________________________________________
block_3a_bn_3 (BatchNormalizati (2, 1024, 40, 40) 4096 block_3a_conv_3[0][0]
__________________________________________________________________________________________________
block_3a_bn_shortcut (BatchNorm (2, 1024, 40, 40) 4096 block_3a_conv_shortcut[0][0]
__________________________________________________________________________________________________
add_7 (Add) (2, 1024, 40, 40) 0 block_3a_bn_3[0][0]
block_3a_bn_shortcut[0][0]
__________________________________________________________________________________________________
block_3a_relu (Activation) (2, 1024, 40, 40) 0 add_7[0][0]
__________________________________________________________________________________________________
block_3b_conv_1 (Conv2D) (2, 256, 40, 40) 262144 block_3a_relu[0][0]
__________________________________________________________________________________________________
block_3b_bn_1 (BatchNormalizati (2, 256, 40, 40) 1024 block_3b_conv_1[0][0]
__________________________________________________________________________________________________
block_3b_relu_1 (Activation) (2, 256, 40, 40) 0 block_3b_bn_1[0][0]
__________________________________________________________________________________________________
block_3b_conv_2 (Conv2D) (2, 256, 40, 40) 589824 block_3b_relu_1[0][0]
__________________________________________________________________________________________________
block_3b_bn_2 (BatchNormalizati (2, 256, 40, 40) 1024 block_3b_conv_2[0][0]
__________________________________________________________________________________________________
block_3b_relu_2 (Activation) (2, 256, 40, 40) 0 block_3b_bn_2[0][0]
__________________________________________________________________________________________________
block_3b_conv_3 (Conv2D) (2, 1024, 40, 40) 262144 block_3b_relu_2[0][0]
__________________________________________________________________________________________________
block_3b_bn_3 (BatchNormalizati (2, 1024, 40, 40) 4096 block_3b_conv_3[0][0]
__________________________________________________________________________________________________
add_8 (Add) (2, 1024, 40, 40) 0 block_3b_bn_3[0][0]
block_3a_relu[0][0]
__________________________________________________________________________________________________
block_3b_relu (Activation) (2, 1024, 40, 40) 0 add_8[0][0]
__________________________________________________________________________________________________
block_3c_conv_1 (Conv2D) (2, 256, 40, 40) 262144 block_3b_relu[0][0]
__________________________________________________________________________________________________
block_3c_bn_1 (BatchNormalizati (2, 256, 40, 40) 1024 block_3c_conv_1[0][0]
__________________________________________________________________________________________________
block_3c_relu_1 (Activation) (2, 256, 40, 40) 0 block_3c_bn_1[0][0]
__________________________________________________________________________________________________
block_3c_conv_2 (Conv2D) (2, 256, 40, 40) 589824 block_3c_relu_1[0][0]
__________________________________________________________________________________________________
block_3c_bn_2 (BatchNormalizati (2, 256, 40, 40) 1024 block_3c_conv_2[0][0]
__________________________________________________________________________________________________
block_3c_relu_2 (Activation) (2, 256, 40, 40) 0 block_3c_bn_2[0][0]
__________________________________________________________________________________________________
block_3c_conv_3 (Conv2D) (2, 1024, 40, 40) 262144 block_3c_relu_2[0][0]
__________________________________________________________________________________________________
block_3c_bn_3 (BatchNormalizati (2, 1024, 40, 40) 4096 block_3c_conv_3[0][0]
__________________________________________________________________________________________________
add_9 (Add) (2, 1024, 40, 40) 0 block_3c_bn_3[0][0]
block_3b_relu[0][0]
__________________________________________________________________________________________________
block_3c_relu (Activation) (2, 1024, 40, 40) 0 add_9[0][0]
__________________________________________________________________________________________________
block_3d_conv_1 (Conv2D) (2, 256, 40, 40) 262144 block_3c_relu[0][0]
__________________________________________________________________________________________________
block_3d_bn_1 (BatchNormalizati (2, 256, 40, 40) 1024 block_3d_conv_1[0][0]
__________________________________________________________________________________________________
block_3d_relu_1 (Activation) (2, 256, 40, 40) 0 block_3d_bn_1[0][0]
__________________________________________________________________________________________________
block_3d_conv_2 (Conv2D) (2, 256, 40, 40) 589824 block_3d_relu_1[0][0]
__________________________________________________________________________________________________
block_3d_bn_2 (BatchNormalizati (2, 256, 40, 40) 1024 block_3d_conv_2[0][0]
__________________________________________________________________________________________________
block_3d_relu_2 (Activation) (2, 256, 40, 40) 0 block_3d_bn_2[0][0]
__________________________________________________________________________________________________
block_3d_conv_3 (Conv2D) (2, 1024, 40, 40) 262144 block_3d_relu_2[0][0]
__________________________________________________________________________________________________
block_3d_bn_3 (BatchNormalizati (2, 1024, 40, 40) 4096 block_3d_conv_3[0][0]
__________________________________________________________________________________________________
add_10 (Add) (2, 1024, 40, 40) 0 block_3d_bn_3[0][0]
block_3c_relu[0][0]
__________________________________________________________________________________________________
block_3d_relu (Activation) (2, 1024, 40, 40) 0 add_10[0][0]
__________________________________________________________________________________________________
block_3e_conv_1 (Conv2D) (2, 256, 40, 40) 262144 block_3d_relu[0][0]
__________________________________________________________________________________________________
block_3e_bn_1 (BatchNormalizati (2, 256, 40, 40) 1024 block_3e_conv_1[0][0]
__________________________________________________________________________________________________
block_3e_relu_1 (Activation) (2, 256, 40, 40) 0 block_3e_bn_1[0][0]
__________________________________________________________________________________________________
block_3e_conv_2 (Conv2D) (2, 256, 40, 40) 589824 block_3e_relu_1[0][0]
__________________________________________________________________________________________________
block_3e_bn_2 (BatchNormalizati (2, 256, 40, 40) 1024 block_3e_conv_2[0][0]
__________________________________________________________________________________________________
block_3e_relu_2 (Activation) (2, 256, 40, 40) 0 block_3e_bn_2[0][0]
__________________________________________________________________________________________________
block_3e_conv_3 (Conv2D) (2, 1024, 40, 40) 262144 block_3e_relu_2[0][0]
__________________________________________________________________________________________________
block_3e_bn_3 (BatchNormalizati (2, 1024, 40, 40) 4096 block_3e_conv_3[0][0]
__________________________________________________________________________________________________
add_11 (Add) (2, 1024, 40, 40) 0 block_3e_bn_3[0][0]
block_3d_relu[0][0]
__________________________________________________________________________________________________
block_3e_relu (Activation) (2, 1024, 40, 40) 0 add_11[0][0]
__________________________________________________________________________________________________
block_3f_conv_1 (Conv2D) (2, 256, 40, 40) 262144 block_3e_relu[0][0]
__________________________________________________________________________________________________
block_3f_bn_1 (BatchNormalizati (2, 256, 40, 40) 1024 block_3f_conv_1[0][0]
__________________________________________________________________________________________________
block_3f_relu_1 (Activation) (2, 256, 40, 40) 0 block_3f_bn_1[0][0]
__________________________________________________________________________________________________
block_3f_conv_2 (Conv2D) (2, 256, 40, 40) 589824 block_3f_relu_1[0][0]
__________________________________________________________________________________________________
block_3f_bn_2 (BatchNormalizati (2, 256, 40, 40) 1024 block_3f_conv_2[0][0]
__________________________________________________________________________________________________
block_3f_relu_2 (Activation) (2, 256, 40, 40) 0 block_3f_bn_2[0][0]
__________________________________________________________________________________________________
block_3f_conv_3 (Conv2D) (2, 1024, 40, 40) 262144 block_3f_relu_2[0][0]
__________________________________________________________________________________________________
block_3f_bn_3 (BatchNormalizati (2, 1024, 40, 40) 4096 block_3f_conv_3[0][0]
__________________________________________________________________________________________________
add_12 (Add) (2, 1024, 40, 40) 0 block_3f_bn_3[0][0]
block_3e_relu[0][0]
__________________________________________________________________________________________________
block_3f_relu (Activation) (2, 1024, 40, 40) 0 add_12[0][0]
__________________________________________________________________________________________________
block_4a_conv_1 (Conv2D) (2, 512, 20, 20) 524288 block_3f_relu[0][0]
__________________________________________________________________________________________________
block_4a_bn_1 (BatchNormalizati (2, 512, 20, 20) 2048 block_4a_conv_1[0][0]
__________________________________________________________________________________________________
block_4a_relu_1 (Activation) (2, 512, 20, 20) 0 block_4a_bn_1[0][0]
__________________________________________________________________________________________________
block_4a_conv_2 (Conv2D) (2, 512, 20, 20) 2359296 block_4a_relu_1[0][0]
__________________________________________________________________________________________________
block_4a_bn_2 (BatchNormalizati (2, 512, 20, 20) 2048 block_4a_conv_2[0][0]
__________________________________________________________________________________________________
block_4a_relu_2 (Activation) (2, 512, 20, 20) 0 block_4a_bn_2[0][0]
__________________________________________________________________________________________________
block_4a_conv_3 (Conv2D) (2, 2048, 20, 20) 1048576 block_4a_relu_2[0][0]
__________________________________________________________________________________________________
block_4a_conv_shortcut (Conv2D) (2, 2048, 20, 20) 2097152 block_3f_relu[0][0]
__________________________________________________________________________________________________
block_4a_bn_3 (BatchNormalizati (2, 2048, 20, 20) 8192 block_4a_conv_3[0][0]
__________________________________________________________________________________________________
block_4a_bn_shortcut (BatchNorm (2, 2048, 20, 20) 8192 block_4a_conv_shortcut[0][0]
__________________________________________________________________________________________________
add_13 (Add) (2, 2048, 20, 20) 0 block_4a_bn_3[0][0]
block_4a_bn_shortcut[0][0]
__________________________________________________________________________________________________
block_4a_relu (Activation) (2, 2048, 20, 20) 0 add_13[0][0]
__________________________________________________________________________________________________
block_4b_conv_1 (Conv2D) (2, 512, 20, 20) 1048576 block_4a_relu[0][0]
__________________________________________________________________________________________________
block_4b_bn_1 (BatchNormalizati (2, 512, 20, 20) 2048 block_4b_conv_1[0][0]
__________________________________________________________________________________________________
block_4b_relu_1 (Activation) (2, 512, 20, 20) 0 block_4b_bn_1[0][0]
__________________________________________________________________________________________________
block_4b_conv_2 (Conv2D) (2, 512, 20, 20) 2359296 block_4b_relu_1[0][0]
__________________________________________________________________________________________________
block_4b_bn_2 (BatchNormalizati (2, 512, 20, 20) 2048 block_4b_conv_2[0][0]
__________________________________________________________________________________________________
block_4b_relu_2 (Activation) (2, 512, 20, 20) 0 block_4b_bn_2[0][0]
__________________________________________________________________________________________________
block_4b_conv_3 (Conv2D) (2, 2048, 20, 20) 1048576 block_4b_relu_2[0][0]
__________________________________________________________________________________________________
block_4b_bn_3 (BatchNormalizati (2, 2048, 20, 20) 8192 block_4b_conv_3[0][0]
__________________________________________________________________________________________________
add_14 (Add) (2, 2048, 20, 20) 0 block_4b_bn_3[0][0]
block_4a_relu[0][0]
__________________________________________________________________________________________________
block_4b_relu (Activation) (2, 2048, 20, 20) 0 add_14[0][0]
__________________________________________________________________________________________________
block_4c_conv_1 (Conv2D) (2, 512, 20, 20) 1048576 block_4b_relu[0][0]
__________________________________________________________________________________________________
block_4c_bn_1 (BatchNormalizati (2, 512, 20, 20) 2048 block_4c_conv_1[0][0]
__________________________________________________________________________________________________
block_4c_relu_1 (Activation) (2, 512, 20, 20) 0 block_4c_bn_1[0][0]
__________________________________________________________________________________________________
block_4c_conv_2 (Conv2D) (2, 512, 20, 20) 2359296 block_4c_relu_1[0][0]
__________________________________________________________________________________________________
block_4c_bn_2 (BatchNormalizati (2, 512, 20, 20) 2048 block_4c_conv_2[0][0]
__________________________________________________________________________________________________
block_4c_relu_2 (Activation) (2, 512, 20, 20) 0 block_4c_bn_2[0][0]
__________________________________________________________________________________________________
block_4c_conv_3 (Conv2D) (2, 2048, 20, 20) 1048576 block_4c_relu_2[0][0]
__________________________________________________________________________________________________
block_4c_bn_3 (BatchNormalizati (2, 2048, 20, 20) 8192 block_4c_conv_3[0][0]
__________________________________________________________________________________________________
add_15 (Add) (2, 2048, 20, 20) 0 block_4c_bn_3[0][0]
block_4b_relu[0][0]
__________________________________________________________________________________________________
block_4c_relu (Activation) (2, 2048, 20, 20) 0 add_15[0][0]
__________________________________________________________________________________________________
l5 (Conv2D) (2, 256, 20, 20) 524544 block_4c_relu[0][0]
__________________________________________________________________________________________________
l4 (Conv2D) (2, 256, 40, 40) 262400 block_3f_relu[0][0]
__________________________________________________________________________________________________
FPN_up_4 (UpSampling2D) (2, 256, 40, 40) 0 l5[0][0]
__________________________________________________________________________________________________
FPN_add_4 (Add) (2, 256, 40, 40) 0 l4[0][0]
FPN_up_4[0][0]
__________________________________________________________________________________________________
l3 (Conv2D) (2, 256, 80, 80) 131328 block_2d_relu[0][0]
__________________________________________________________________________________________________
FPN_up_3 (UpSampling2D) (2, 256, 80, 80) 0 FPN_add_4[0][0]
__________________________________________________________________________________________________
FPN_add_3 (Add) (2, 256, 80, 80) 0 l3[0][0]
FPN_up_3[0][0]
__________________________________________________________________________________________________
l2 (Conv2D) (2, 256, 160, 160) 65792 block_1c_relu[0][0]
__________________________________________________________________________________________________
FPN_up_2 (UpSampling2D) (2, 256, 160, 160) 0 FPN_add_3[0][0]
__________________________________________________________________________________________________
FPN_add_2 (Add) (2, 256, 160, 160) 0 l2[0][0]
FPN_up_2[0][0]
__________________________________________________________________________________________________
post_hoc_d5 (Conv2D) (2, 256, 20, 20) 590080 l5[0][0]
__________________________________________________________________________________________________
post_hoc_d2 (Conv2D) (2, 256, 160, 160) 590080 FPN_add_2[0][0]
__________________________________________________________________________________________________
post_hoc_d3 (Conv2D) (2, 256, 80, 80) 590080 FPN_add_3[0][0]
__________________________________________________________________________________________________
post_hoc_d4 (Conv2D) (2, 256, 40, 40) 590080 FPN_add_4[0][0]
__________________________________________________________________________________________________
p6 (MaxPooling2D) (2, 256, 10, 10) 0 post_hoc_d5[0][0]
__________________________________________________________________________________________________
rpn (Conv2D) multiple 590080 post_hoc_d2[0][0]
post_hoc_d3[0][0]
post_hoc_d4[0][0]
post_hoc_d5[0][0]
p6[0][0]
__________________________________________________________________________________________________
rpn-class (Conv2D) multiple 771 rpn[0][0]
rpn[1][0]
rpn[2][0]
rpn[3][0]
rpn[4][0]
__________________________________________________________________________________________________
rpn-box (Conv2D) multiple 3084 rpn[0][0]
rpn[1][0]
rpn[2][0]
rpn[3][0]
rpn[4][0]
__________________________________________________________________________________________________
permute (Permute) (2, 160, 160, 3) 0 rpn-class[0][0]
__________________________________________________________________________________________________
permute_2 (Permute) (2, 80, 80, 3) 0 rpn-class[1][0]
__________________________________________________________________________________________________
permute_4 (Permute) (2, 40, 40, 3) 0 rpn-class[2][0]
__________________________________________________________________________________________________
permute_6 (Permute) (2, 20, 20, 3) 0 rpn-class[3][0]
__________________________________________________________________________________________________
permute_8 (Permute) (2, 10, 10, 3) 0 rpn-class[4][0]
__________________________________________________________________________________________________
permute_1 (Permute) (2, 160, 160, 12) 0 rpn-box[0][0]
__________________________________________________________________________________________________
permute_3 (Permute) (2, 80, 80, 12) 0 rpn-box[1][0]
__________________________________________________________________________________________________
permute_5 (Permute) (2, 40, 40, 12) 0 rpn-box[2][0]
__________________________________________________________________________________________________
permute_7 (Permute) (2, 20, 20, 12) 0 rpn-box[3][0]
__________________________________________________________________________________________________
permute_9 (Permute) (2, 10, 10, 12) 0 rpn-box[4][0]
__________________________________________________________________________________________________
anchor_layer (AnchorLayer) OrderedDict([(2, (16 0 image_input[0][0]
__________________________________________________________________________________________________
info_input (InfoInput) [(2, 5)] 0
__________________________________________________________________________________________________
MLP (MultilevelProposal) ((2, 1000), (2, 1000 0 permute[0][0]
permute_2[0][0]
permute_4[0][0]
permute_6[0][0]
permute_8[0][0]
permute_1[0][0]
permute_3[0][0]
permute_5[0][0]
permute_7[0][0]
permute_9[0][0]
anchor_layer[0][0]
anchor_layer[0][1]
anchor_layer[0][2]
anchor_layer[0][3]
anchor_layer[0][4]
info_input[0][0]
__________________________________________________________________________________________________
multilevel_crop_resize (Multile (2, 1000, 256, 7, 7) 0 post_hoc_d2[0][0]
post_hoc_d3[0][0]
post_hoc_d4[0][0]
post_hoc_d5[0][0]
p6[0][0]
MLP[0][1]
__________________________________________________________________________________________________
box_head_reshape1 (ReshapeLayer (2000, 12544) 0 multilevel_crop_resize[0][0]
__________________________________________________________________________________________________
fc6 (Dense) (2000, 1024) 12846080 box_head_reshape1[0][0]
__________________________________________________________________________________________________
fc7 (Dense) (2000, 1024) 1049600 fc6[0][0]
__________________________________________________________________________________________________
class-predict (Dense) (2000, 2) 2050 fc7[0][0]
__________________________________________________________________________________________________
box-predict (Dense) (2000, 8) 8200 fc7[0][0]
__________________________________________________________________________________________________
box_head_reshape2 (ReshapeLayer (2, 1000, 2) 0 class-predict[0][0]
__________________________________________________________________________________________________
box_head_reshape3 (ReshapeLayer (2, 1000, 8) 0 box-predict[0][0]
__________________________________________________________________________________________________
gpu_detections (GPUDetections) ((2,), (2, 100, 4), 0 box_head_reshape2[0][0]
box_head_reshape3[0][0]
MLP[0][1]
info_input[0][0]
__________________________________________________________________________________________________
multilevel_crop_resize_1 (Multi (2, 100, 256, 14, 14 0 post_hoc_d2[0][0]
post_hoc_d3[0][0]
post_hoc_d4[0][0]
post_hoc_d5[0][0]
p6[0][0]
gpu_detections[0][1]
__________________________________________________________________________________________________
mask_head_reshape_1 (ReshapeLay (200, 256, 14, 14) 0 multilevel_crop_resize_1[0][0]
__________________________________________________________________________________________________
mask-conv-l0 (Conv2D) (200, 256, 14, 14) 590080 mask_head_reshape_1[0][0]
__________________________________________________________________________________________________
mask-conv-l1 (Conv2D) (200, 256, 14, 14) 590080 mask-conv-l0[0][0]
__________________________________________________________________________________________________
mask-conv-l2 (Conv2D) (200, 256, 14, 14) 590080 mask-conv-l1[0][0]
__________________________________________________________________________________________________
mask-conv-l3 (Conv2D) (200, 256, 14, 14) 590080 mask-conv-l2[0][0]
__________________________________________________________________________________________________
conv5-mask (Conv2DTranspose) (200, 256, 28, 28) 262400 mask-conv-l3[0][0]
__________________________________________________________________________________________________
mask_fcn_logits (Conv2D) (200, 2, 28, 28) 514 conv5-mask[0][0]
__________________________________________________________________________________________________
mask_postprocess (MaskPostproce (2, 100, 28, 28) 0 mask_fcn_logits[0][0]
gpu_detections[0][2]
__________________________________________________________________________________________________
mask_sigmoid (Activation) (2, 100, 28, 28) 0 mask_postprocess[0][0]
==================================================================================================
Total params: 44,028,635
Trainable params: 23,508,032
Non-trainable params: 20,520,603
__________________________________________________________________________________________________
Traceback (most recent call last):
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/mask_rcnn/scripts/inference.py", line 351, in <module>
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/mask_rcnn/scripts/inference.py", line 343, in main
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/mask_rcnn/scripts/inference.py", line 305, in infer
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/mask_rcnn/executer/distributed_executer.py", line 490, in infer
File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/mask_rcnn/utils/evaluation.py", line 242, in infer
File "/usr/local/lib/python3.6/dist-packages/tensorflow_estimator/python/estimator/estimator.py", line 638, in predict
hooks=all_hooks) as mon_sess:
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py", line 1014, in __init__
stop_grace_period_secs=stop_grace_period_secs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py", line 725, in __init__
self._sess = _RecoverableSession(self._coordinated_creator)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py", line 1207, in __init__
_WrappedSession.__init__(self, self._create_session())
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py", line 1212, in _create_session
return self._sess_creator.create_session()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py", line 878, in create_session
self.tf_sess = self._session_creator.create_session()
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/monitored_session.py", line 647, in create_session
init_fn=self._scaffold.init_fn)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/session_manager.py", line 290, in prepare_session
config=config)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/session_manager.py", line 204, in _restore_checkpoint
saver.restore(sess, checkpoint_filename_with_path)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/saver.py", line 1282, in restore
checkpoint_prefix)
ValueError: The passed save_path is not a valid checkpoint: /tmp/tmpc0iiwecg/model.ckpt-90000
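
For context, the failing call is an ordinary TensorFlow 1.x checkpoint restore (tensorflow_core/python/training/saver.py in the traceback), and the /tmp/tmpc0iiwecg/model.ckpt-90000 prefix looks like a temporary working directory created by the tool for the run, so it is normally gone afterwards. The sketch below is only an illustration, with stock TF 1.15 APIs, of how a "not a valid checkpoint" condition can be checked on a checkpoint prefix that is still on disk; the ckpt_prefix value is a hypothetical placeholder, not a path from this run:

# Minimal sketch, assuming TF 1.15 and a checkpoint prefix still on disk.
import tensorflow as tf

ckpt_prefix = "/path/to/model.ckpt-90000"  # hypothetical placeholder

# True only if the index/data files for this prefix actually exist.
print("checkpoint_exists:", tf.compat.v1.train.checkpoint_exists(ckpt_prefix))

try:
    # Listing variables raises if the prefix is not a readable checkpoint.
    for name, shape in tf.train.list_variables(ckpt_prefix):
        print(name, shape)
except Exception as err:
    print("not a valid checkpoint:", err)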