2022-08-15 12:55:53,943 [INFO] root: Registry: ['nvcr.io']
2022-08-15 12:55:54,114 [INFO] tlt.components.instance_handler.local_instance: Running command in container: nvcr.io/nvidia/tao/tao-toolkit-tf:v3.21.11-tf1.15.5-py3
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-sbxuhodd because the default path (/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
Using TensorFlow backend.
WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.
Using TensorFlow backend.
/usr/local/lib/python3.6/dist-packages/numba/cuda/envvars.py:17: NumbaWarning: Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/libnvvm.so. For more information about alternatives visit: ('http://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
  warnings.warn(errors.NumbaWarning(msg))
/usr/local/lib/python3.6/dist-packages/numba/cuda/envvars.py:17: NumbaWarning: Environment variables with the 'NUMBAPRO' prefix are deprecated and consequently ignored, found use of NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice/. For more information about alternatives visit: ('http://numba.pydata.org/numba-doc/latest/cuda/overview.html', '#cudatoolkit-lookup')
  warnings.warn(errors.NumbaWarning(msg))
2022-08-15 09:56:01,585 [INFO] iva.unet.spec_handler.spec_loader: Merging specification from /workspace/tao-experiments/specs/unet_retrain_vgg_6S1100.txt
2022-08-15 09:56:01,590 [INFO] iva.unet.model.utilities: Label Id 0: Train Id 0
2022-08-15 09:56:01,590 [INFO] iva.unet.model.utilities: Label Id 1: Train Id 1
2022-08-15 09:56:01,590 [INFO] iva.unet.model.utilities: Label Id 2: Train Id 2
2022-08-15 09:56:01,590 [INFO] iva.unet.model.utilities: Label Id 3: Train Id 3
2022-08-15 09:56:01,590 [INFO] iva.unet.model.utilities: Label Id 4: Train Id 4
2022-08-15 09:56:01,590 [INFO] iva.unet.model.utilities: Label Id 5: Train Id 5
2022-08-15 09:56:01,591 [INFO] iva.unet.model.model_io: Loading weights from /workspace/tao-experiments/retrain/weights/model_retrained.tlt
______________________________________________________________________________________________________
Layer (type)                          Output Shape            Param #    Connected to
======================================================================================================
input_1 (InputLayer)                  (None, 3, 704, 1280)    0
block_1a_conv_1 (Conv2D)              (None, 64, 704, 1280)   1792       input_1[0][0]
block_1a_relu (Activation)            (None, 64, 704, 1280)   0          block_1a_conv_1[0][0]
block_1b_conv_1 (Conv2D)              (None, 64, 704, 1280)   36928      block_1a_relu[0][0]
block_1b_relu (Activation)            (None, 64, 704, 1280)   0          block_1b_conv_1[0][0]
block1_pool (MaxPooling2D)            (None, 64, 352, 640)    0          block_1b_relu[0][0]
block_2a_conv_1 (Conv2D)              (None, 128, 352, 640)   73856      block1_pool[0][0]
block_2a_relu (Activation)            (None, 128, 352, 640)   0          block_2a_conv_1[0][0]
block_2b_conv_1 (Conv2D)              (None, 128, 352, 640)   147584     block_2a_relu[0][0]
block_2b_relu (Activation)            (None, 128, 352, 640)   0          block_2b_conv_1[0][0]
block2_pool (MaxPooling2D)            (None, 128, 176, 320)   0          block_2b_relu[0][0]
block_3a_conv_1 (Conv2D)              (None, 256, 176, 320)   295168     block2_pool[0][0]
block_3a_relu (Activation)            (None, 256, 176, 320)   0          block_3a_conv_1[0][0]
block_3b_conv_1 (Conv2D)              (None, 256, 176, 320)   590080     block_3a_relu[0][0]
block_3b_relu (Activation)            (None, 256, 176, 320)   0          block_3b_conv_1[0][0]
block_3c_conv_1 (Conv2D)              (None, 256, 176, 320)   590080     block_3b_relu[0][0]
block_3c_relu (Activation)            (None, 256, 176, 320)   0          block_3c_conv_1[0][0]
block3_pool (MaxPooling2D)            (None, 256, 88, 160)    0          block_3c_relu[0][0]
block_4a_conv_1 (Conv2D)              (None, 512, 88, 160)    1180160    block3_pool[0][0]
block_4a_relu (Activation)            (None, 512, 88, 160)    0          block_4a_conv_1[0][0]
block_4b_conv_1 (Conv2D)              (None, 512, 88, 160)    2359808    block_4a_relu[0][0]
block_4b_relu (Activation)            (None, 512, 88, 160)    0          block_4b_conv_1[0][0]
block_4c_conv_1 (Conv2D)              (None, 512, 88, 160)    2359808    block_4b_relu[0][0]
block_4c_relu (Activation)            (None, 512, 88, 160)    0          block_4c_conv_1[0][0]
block4_pool (MaxPooling2D)            (None, 512, 44, 80)     0          block_4c_relu[0][0]
block_5a_conv_1 (Conv2D)              (None, 512, 44, 80)     2359808    block4_pool[0][0]
block_5a_relu (Activation)            (None, 512, 44, 80)     0          block_5a_conv_1[0][0]
block_5b_conv_1 (Conv2D)              (None, 512, 44, 80)     2359808    block_5a_relu[0][0]
block_5b_relu (Activation)            (None, 512, 44, 80)     0          block_5b_conv_1[0][0]
block_5c_conv_1 (Conv2D)              (None, 512, 44, 80)     2359808    block_5b_relu[0][0]
block_5c_relu (Activation)            (None, 512, 44, 80)     0          block_5c_conv_1[0][0]
max_pooling2d_1 (MaxPooling2D)        (None, 512, 22, 40)     0          block_5c_relu[0][0]
conv2d_transpose_1 (Conv2DTranspose)  (None, 512, 44, 80)     4194816    max_pooling2d_1[0][0]
concatenate_1 (Concatenate)           (None, 1024, 44, 80)    0          conv2d_transpose_1[0][0], block4_pool[0][0]
conv2d_1 (Conv2D)                     (None, 512, 44, 80)     4719104    concatenate_1[0][0]
conv2d_2 (Conv2D)                     (None, 512, 44, 80)     2359808    conv2d_1[0][0]
conv2d_transpose_2 (Conv2DTranspose)  (None, 256, 88, 160)    2097408    conv2d_2[0][0]
concatenate_2 (Concatenate)           (None, 512, 88, 160)    0          conv2d_transpose_2[0][0], block3_pool[0][0]
conv2d_3 (Conv2D)                     (None, 256, 88, 160)    1179904    concatenate_2[0][0]
conv2d_4 (Conv2D)                     (None, 256, 88, 160)    590080     conv2d_3[0][0]
conv2d_transpose_3 (Conv2DTranspose)  (None, 128, 176, 320)   524416     conv2d_4[0][0]
concatenate_3 (Concatenate)           (None, 256, 176, 320)   0          conv2d_transpose_3[0][0], block2_pool[0][0]
conv2d_5 (Conv2D)                     (None, 128, 176, 320)   295040     concatenate_3[0][0]
conv2d_6 (Conv2D)                     (None, 128, 176, 320)   147584     conv2d_5[0][0]
conv2d_transpose_4 (Conv2DTranspose)  (None, 64, 352, 640)    131136     conv2d_6[0][0]
concatenate_4 (Concatenate)           (None, 128, 352, 640)   0          conv2d_transpose_4[0][0], block1_pool[0][0]
conv2d_7 (Conv2D)                     (None, 64, 352, 640)    73792      concatenate_4[0][0]
conv2d_8 (Conv2D)                     (None, 64, 352, 640)    36928      conv2d_7[0][0]
conv2d_transpose_5 (Conv2DTranspose)  (None, 64, 704, 1280)   65600      conv2d_8[0][0]
concatenate_5 (Concatenate)           (None, 128, 704, 1280)  0          conv2d_transpose_5[0][0], block_1a_relu[0][0]
conv2d_9 (Conv2D)                     (None, 64, 704, 1280)   73792      concatenate_5[0][0]
conv2d_10 (Conv2D)                    (None, 64, 704, 1280)   36928      conv2d_9[0][0]
conv2d_11 (Conv2D)                    (None, 6, 704, 1280)    390        conv2d_10[0][0]
permute_1 (Permute)                   (None, 704, 1280, 6)    0          conv2d_11[0][0]
softmax_1 (Softmax)                   (None, 704, 1280, 6)    0          permute_1[0][0]
======================================================================================================
Total params: 31,241,414
Trainable params: 31,241,414
Non-trainable params: 0
______________________________________________________________________________________________________
2022-08-15 09:56:06,050 [INFO] iva.unet.model.model_io: Loaded weights Successfully for Export
2022-08-15 09:56:06,050 [INFO] root: Using input nodes: ['input_1']
2022-08-15 09:56:06,050 [INFO] root: Using output nodes: ['softmax_1']
2022-08-15 09:56:06,050 [INFO] iva.common.export.keras_exporter: Using input nodes: ['input_1']
2022-08-15 09:56:06,050 [INFO] iva.common.export.keras_exporter: Using output nodes: ['softmax_1']
The ONNX operator number change on the optimization: 121 -> 54
2022-08-15 09:56:10,875 [INFO] keras2onnx: The ONNX operator number change on the optimization: 121 -> 54
2022-08-15 09:56:11,595 [WARNING] onnxmltools: The maximum opset needed by this model is only 11.
2022-08-15 09:56:17,948 [INFO] numba.cuda.cudadrv.driver: init
2022-08-15 09:56:18,148 [INFO] iva.unet.export.unet_exporter: Converted model was saved into /workspace/tao-experiments/export/tao.fp32_6s03.etlt
2022-08-15 09:56:42,613 [INFO] root: Export complete.
2022-08-15 09:56:42,614 [INFO] root: { "param_count": 31.241414, "size": 134.78579711914062 }
2022-08-15 12:56:44,663 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.
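
For reference, the softmax_1 output at the bottom of the summary is an NHWC tensor of shape (N, 704, 1280, 6): one probability per pixel for each of the six Train Ids mapped at the top of the log. A minimal NumPy sketch of how an output with that shape is typically reduced to a per-pixel label mask follows; the array below is random dummy data standing in for real model output, not anything produced by this export.

import numpy as np

# Dummy stand-in for the softmax_1 output reported in the summary:
# batch of 1, 704 x 1280 pixels, 6 class probabilities per pixel (NHWC).
probs = np.random.rand(1, 704, 1280, 6).astype(np.float32)
probs /= probs.sum(axis=-1, keepdims=True)  # normalize so each pixel sums to 1

# Per-pixel prediction: index of the most probable class, i.e. one of the
# Train Ids 0..5 listed by iva.unet.model.utilities in the log above.
label_mask = probs.argmax(axis=-1)  # shape (1, 704, 1280), integer class ids
print(label_mask.shape, label_mask.min(), label_mask.max())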