{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 0. Set up env variables and map drives \n",
"\n",
"When using the purpose-built pretrained models from NGC, please make sure to set the `$KEY` environment variable to the key mentioned in the model overview. Failing to do so can lead to errors when trying to load them as pretrained models.\n",
"\n",
"The following notebook requires the user to set an env variable called `$LOCAL_PROJECT_DIR` pointing to the user's workspace. Please note that the dataset for this notebook is expected to reside in `$LOCAL_PROJECT_DIR/data`, while the collaterals generated by the TLT experiments will be output to `$LOCAL_PROJECT_DIR/ssd`. More information on how to set up the dataset and on the supported steps in the TLT workflow is provided in the subsequent cells.\n",
"\n",
"*Note: Please make sure to remove any stray artifacts/files from the `$USER_EXPERIMENT_DIR` or `$DATA_DOWNLOAD_DIR` paths mentioned below that may have been generated by previous experiments. Leftover checkpoint files etc. may interfere with creating the training graph for a new experiment.*\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Please replace the variable with your key.\n",
"env: KEY=<your_ngc_api_key>\n",
"env: GPU_INDEX=0\n",
"env: USER_EXPERIMENT_DIR=/workspace/tlt-experiments/ssd\n",
"env: DATA_DOWNLOAD_DIR=/workspace/tlt-experiments/data\n",
"env: LOCAL_PROJECT_DIR=/home/dell/tlt-experiments\n",
"env: SPECS_DIR=/workspace/tlt-experiments/ssd/specs\n",
"total 40\r\n",
"-rw-r--r-- 1 dell dell 309 Şub 25 21:15 ssd_tfrecords_kitti_trainval.txt\r\n",
"-rw-r--r-- 1 dell dell 1659 Şub 25 21:15 ssd_retrain_resnet18_kitti.txt\r\n",
"-rw-r--r-- 1 dell dell 513 Mar 12 00:55 augment.txt\r\n",
"-rw-r--r-- 1 dell dell 1351 Mar 12 01:35 ssd_retrain_mobilenet_v2.txt\r\n",
"-rw-r--r-- 1 dell dell 1401 Mar 12 12:13 ssd_train_mobilenet_v2.txt\r\n",
"-rw-r--r-- 1 dell dell 1361 Mar 12 15:45 ssd_train_kitti.txt\r\n",
"-rw-r--r-- 1 dell dell 1412 Mar 12 15:55 ssd_train_resnet18_head.txt\r\n",
"-rw-r--r-- 1 dell dell 1395 Mar 12 17:00 ssd_retrain_resnet18_head.txt\r\n",
"-rw-r--r-- 1 dell dell 1672 Mar 12 18:24 ssd_train_resnet18_kitti.txt\r\n",
"-rw-r--r-- 1 dell dell 1682 Mar 12 18:38 ssd_train_mobilenet_kitti.txt\r\n"
]
}
],
"source": [
"# Setting up env variables for cleaner command line commands.\n",
"import os\n",
"\n",
"print(\"Please replace the variable with your key.\")\n",
"%env KEY=<your_ngc_api_key>\n",
"%env GPU_INDEX=0\n",
"%env USER_EXPERIMENT_DIR=/workspace/tlt-experiments/ssd\n",
"%env DATA_DOWNLOAD_DIR=/workspace/tlt-experiments/data\n",
"\n",
"# Please define this local project directory, which will be mapped into the TLT docker session.\n",
"# The dataset is expected to be present in $LOCAL_PROJECT_DIR/data, while the results of the steps\n",
"# in this notebook will be stored at $LOCAL_PROJECT_DIR/ssd.\n",
"%env LOCAL_PROJECT_DIR=/home/dell/tlt-experiments\n",
"os.environ[\"LOCAL_DATA_DIR\"] = os.path.join(os.getenv(\"LOCAL_PROJECT_DIR\", os.getcwd()), \"data\")\n",
"os.environ[\"LOCAL_EXPERIMENT_DIR\"] = os.path.join(os.getenv(\"LOCAL_PROJECT_DIR\", os.getcwd()), \"ssd\")\n",
"\n",
"# Set this path if you don't run the notebook from the samples directory.\n",
"# %env NOTEBOOK_ROOT=/data/tlt-experiments/ssd\n",
"# The sample spec files are present in the same path as the downloaded samples.\n",
"os.environ[\"LOCAL_SPECS_DIR\"] = os.path.join(\n",
" os.getenv(\"NOTEBOOK_ROOT\", os.getcwd()),\n",
" \"specs\"\n",
")\n",
"%env SPECS_DIR=/workspace/tlt-experiments/ssd/specs\n",
"\n",
"# Showing list of specification files.\n",
"!ls -rlt $LOCAL_SPECS_DIR"
]
},
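  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*Optional sanity check (not part of the original workflow): the next cell verifies that the local directories are set and exist before any TLT commands are launched.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional sanity check: confirm the mapped directories are set and exist.\n",
    "# Safe to re-run; os.makedirs with exist_ok=True is idempotent.\n",
    "import os\n",
    "\n",
    "for var in [\"LOCAL_PROJECT_DIR\", \"LOCAL_DATA_DIR\", \"LOCAL_EXPERIMENT_DIR\", \"LOCAL_SPECS_DIR\"]:\n",
    "    path = os.environ.get(var)\n",
    "    assert path, var + \" is not set -- run the environment cell above first.\"\n",
    "    os.makedirs(path, exist_ok=True)\n",
    "    print(var, \"->\", path)"
   ]
  },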
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Mapping the local directories into the TLT docker.\n",
"import json\n",
"mounts_file = os.path.expanduser(\"~/.tlt_mounts.json\")\n",
"\n",
"# Define the dictionary with the mapped drives\n",
"drive_map = {\n",
" \"Mounts\": [\n",
" # Mapping the data directory\n",
" {\n",
" \"source\": os.environ[\"LOCAL_PROJECT_DIR\"],\n",
" \"destination\": \"/workspace/tlt-experiments\"\n",
" },\n",
" # Mapping the specs directory.\n",
" {\n",
" \"source\": os.environ[\"LOCAL_SPECS_DIR\"],\n",
" \"destination\": os.environ[\"SPECS_DIR\"]\n",
" },\n",
" ]\n",
"}\n",
"\n",
"# Writing the mounts file.\n",
"with open(mounts_file, \"w\") as mfile:\n",
" json.dump(drive_map, mfile, indent=4)"
]
},
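  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*Optional: by default the TLT docker runs commands as root, so files written to the mounted directories end up root-owned. To retain your local permissions, a `DockerOptions` section with your UID/GID can be added to `~/.tlt_mounts.json`, as sketched below.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: run the docker as the current user instead of root so that\n",
    "# experiment outputs keep your local ownership.\n",
    "import json\n",
    "import os\n",
    "\n",
    "mounts_file = os.path.expanduser(\"~/.tlt_mounts.json\")\n",
    "with open(mounts_file, \"r\") as mfile:\n",
    "    mounts = json.load(mfile)\n",
    "mounts[\"DockerOptions\"] = {\"user\": \"{}:{}\".format(os.getuid(), os.getgid())}\n",
    "with open(mounts_file, \"w\") as mfile:\n",
    "    json.dump(mounts, mfile, indent=4)"
   ]
  },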
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\r\n",
" \"Mounts\": [\r\n",
" {\r\n",
" \"source\": \"/home/dell/tlt-experiments\",\r\n",
" \"destination\": \"/workspace/tlt-experiments\"\r\n",
" },\r\n",
" {\r\n",
" \"source\": \"/home/dell/tlt-experiments/ssd/specs\",\r\n",
" \"destination\": \"/workspace/tlt-experiments/ssd/specs\"\r\n",
" }\r\n",
" ]\r\n",
"}"
]
}
],
"source": [
"!cat ~/.tlt_mounts.json"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Configuration of the TLT Instance\r\n",
"dockers: ['nvcr.io/nvidia/tlt-streamanalytics', 'nvcr.io/nvidia/tlt-pytorch']\r\n",
"format_version: 1.0\r\n",
"tlt_version: 3.0\r\n",
"published_date: 02/02/2021\r\n"
]
}
],
"source": [
"# View the version of the TLT launcher.\n",
"!tlt info"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. Prepare dataset and pre-trained model "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" This tutorial was originally written around the KITTI object detection dataset; for more details, please visit\n",
" http://www.cvlibs.net/datasets/kitti/eval_object.php?obj_benchmark=2d. To reproduce it with KITTI, download the detection images (http://www.cvlibs.net/download.php?file=data_object_image_2.zip) and labels (http://www.cvlibs.net/download.php?file=data_object_label_2.zip) to $DATA_DOWNLOAD_DIR. This run instead uses a custom head-detection dataset in KITTI format located under $DATA_DOWNLOAD_DIR/HEAD."
]
},
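  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The labels use the KITTI format: one text file per image and one object per line, with 15 space-separated fields (class name, truncation, occlusion, observation angle, the 2D bounding box as `left top right bottom` in pixels, then 3D dimension/location/rotation fields that SSD training does not use and that may be left as zeros). For example, a single `head` annotation could look like:\n",
    "\n",
    "```\n",
    "head 0.00 0 0.00 387.63 181.54 423.81 203.12 0.00 0.00 0.00 0.00 0.00 0.00 0.00\n",
    "```"
   ]
  },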
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"total 8\r\n",
"drwxr-xr-x 4 dell dell 4096 Mar 12 00:31 training\r\n",
"drwxr-xr-x 4 dell dell 4096 Mar 12 00:29 val\r\n"
]
}
],
"source": [
"# Verify that the dataset is present.\n",
"!ls -l $LOCAL_DATA_DIR/HEAD"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"This script will not run as output image path already exists.\r\n"
]
}
],
"source": [
"# Generate a val dataset out of the training dataset.\n",
"!python3.6 generate_val_dataset.py --input_image_dir=$LOCAL_DATA_DIR/HEAD/training/images_300x300 \\\n",
"                                   --input_label_dir=$LOCAL_DATA_DIR/HEAD/training/labels_300x300 \\\n",
"                                   --output_dir=$LOCAL_DATA_DIR/HEAD/val"
]
},
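  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "*If `generate_val_dataset.py` is not available, a random hold-out split can be sketched as below. This is an illustrative stand-in, not the original script; it assumes each image has a label file with the same basename and a `.txt` extension.*"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative stand-in for generate_val_dataset.py: move a random 10% of\n",
    "# image/label pairs from the training set into a validation directory.\n",
    "import os\n",
    "import random\n",
    "import shutil\n",
    "\n",
    "def split_val(image_dir, label_dir, out_dir, fraction=0.1, seed=42):\n",
    "    os.makedirs(os.path.join(out_dir, \"image\"), exist_ok=True)\n",
    "    os.makedirs(os.path.join(out_dir, \"label\"), exist_ok=True)\n",
    "    images = sorted(os.listdir(image_dir))\n",
    "    random.Random(seed).shuffle(images)\n",
    "    for name in images[: int(len(images) * fraction)]:\n",
    "        stem = os.path.splitext(name)[0]\n",
    "        shutil.move(os.path.join(image_dir, name), os.path.join(out_dir, \"image\", name))\n",
    "        shutil.move(os.path.join(label_dir, stem + \".txt\"), os.path.join(out_dir, \"label\", stem + \".txt\"))"
   ]
  },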
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 2.1 Download pre-trained model "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will use the NGC CLI to get the pre-trained models. For more details, go to [ngc.nvidia.com](https://ngc.nvidia.com) and click SETUP on the navigation bar."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"env: CLI=ngccli_reg_linux.zip\n",
"--2021-03-12 11:35:56-- https://ngc.nvidia.com/downloads/ngccli_reg_linux.zip\n",
"Resolving ngc.nvidia.com (ngc.nvidia.com)... 54.192.233.50, 54.192.233.91, 54.192.233.124, ...\n",
"Connecting to ngc.nvidia.com (ngc.nvidia.com)|54.192.233.50|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 21648112 (21M) [application/zip]\n",
"Saving to: ‘/home/dell/tlt-experiments/ngccli/ngccli_reg_linux.zip’\n",
"\n",
"ngccli_reg_linux.zi 100%[===================>] 20,64M 2,29MB/s in 8,7s \n",
"\n",
"2021-03-12 11:36:05 (2,37 MB/s) - ‘/home/dell/tlt-experiments/ngccli/ngccli_reg_linux.zip’ saved [21648112/21648112]\n",
"\n",
"Archive: /home/dell/tlt-experiments/ngccli/ngccli_reg_linux.zip\n",
" inflating: /home/dell/tlt-experiments/ngccli/ngc \n",
" inflating: /home/dell/tlt-experiments/ngccli/ngc.md5 \n"
]
}
],
"source": [
"# Installing NGC CLI on the local machine.\n",
"## Download and install\n",
"%env CLI=ngccli_reg_linux.zip\n",
"!mkdir -p $LOCAL_PROJECT_DIR/ngccli\n",
"\n",
"# Remove any previously existing CLI installations\n",
"!rm -rf $LOCAL_PROJECT_DIR/ngccli/*\n",
"!wget \"https://ngc.nvidia.com/downloads/$CLI\" -P $LOCAL_PROJECT_DIR/ngccli\n",
"!unzip -u \"$LOCAL_PROJECT_DIR/ngccli/$CLI\" -d $LOCAL_PROJECT_DIR/ngccli/\n",
"!rm $LOCAL_PROJECT_DIR/ngccli/*.zip \n",
"os.environ[\"PATH\"]=\"{}/ngccli:{}\".format(os.getenv(\"LOCAL_PROJECT_DIR\", \"\"), os.getenv(\"PATH\", \"\"))"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"+-------+-------+-------+-------+-------+-------+-------+-------+-------+\r\n",
"| Versi | Accur | Epoch | Batch | GPU | Memor | File | Statu | Creat |\r\n",
"| on | acy | s | Size | Model | y Foo | Size | s | ed |\r\n",
"| | | | | | tprin | | | Date |\r\n",
"| | | | | | t | | | |\r\n",
"+-------+-------+-------+-------+-------+-------+-------+-------+-------+\r\n",
"| vgg19 | 77.56 | 80 | 1 | V100 | 153.7 | 153.7 | UPLOA | Apr |\r\n",
"| | | | | | | 2 MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| vgg16 | 77.17 | 80 | 1 | V100 | 515.1 | 515.0 | UPLOA | Apr |\r\n",
"| | | | | | | 9 MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| squee | 65.13 | 80 | 1 | V100 | 6.5 | 6.46 | UPLOA | Apr |\r\n",
"| zenet | | | | | | MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| resne | 77.91 | 80 | 1 | V100 | 294.2 | 294.2 | UPLOA | Apr |\r\n",
"| t50 | | | | | | MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| resne | 77.04 | 80 | 1 | V100 | 170.7 | 170.6 | UPLOA | Apr |\r\n",
"| t34 | | | | | | 5 MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| resne | 76.74 | 80 | 1 | V100 | 89.0 | 88.96 | UPLOA | Apr |\r\n",
"| t18 | | | | | | MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| resne | 77.78 | 80 | 1 | V100 | 328.4 | 328.4 | UPLOA | Apr |\r\n",
"| t101 | | | | | | 2 MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| resne | 74.38 | 80 | 1 | V100 | 38.3 | 38.31 | UPLOA | Apr |\r\n",
"| t10 | | | | | | MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| mobil | 72.75 | 80 | 1 | V100 | 5.0 | 5.01 | UPLOA | Apr |\r\n",
"| enet_ | | | | | | MB | D_COM | 29, |\r\n",
"| v2 | | | | | | | PLETE | 2020 |\r\n",
"| mobil | 79.5 | 80 | 1 | V100 | 26.2 | 26.22 | UPLOA | Apr |\r\n",
"| enet_ | | | | | | MB | D_COM | 29, |\r\n",
"| v1 | | | | | | | PLETE | 2020 |\r\n",
"| googl | 77.11 | 80 | 1 | V100 | 47.6 | 47.64 | UPLOA | Apr |\r\n",
"| enet | | | | | | MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| effic | 77.9 | 80 | 1 | V100 | 16.9 | 16.9 | UPLOA | Feb |\r\n",
"| ientn | | | | | | MB | D_COM | 09, |\r\n",
"| et_b0 | | | | | | | PLETE | 2021 |\r\n",
"| _swis | | | | | | | | |\r\n",
"| h | | | | | | | | |\r\n",
"| effic | 77.6 | 80 | 1 | V100 | 16.9 | 16.9 | UPLOA | Feb |\r\n",
"| ientn | | | | | | MB | D_COM | 09, |\r\n",
"| et_b0 | | | | | | | PLETE | 2021 |\r\n",
"| _relu | | | | | | | | |\r\n",
"| darkn | 76.44 | 80 | 1 | V100 | 311.7 | 311.6 | UPLOA | Apr |\r\n",
"| et53 | | | | | | 8 MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| darkn | 77.52 | 80 | 1 | V100 | 152.8 | 152.8 | UPLOA | Apr |\r\n",
"| et19 | | | | | | 2 MB | D_COM | 29, |\r\n",
"| | | | | | | | PLETE | 2020 |\r\n",
"| cspda | 76.44 | 80 | 1 | V100 | 103.0 | 102.9 | UPLOA | Feb |\r\n",
"| rknet | | | | | | 9 MB | D_COM | 02, |\r\n",
"| 53 | | | | | | | PLETE | 2021 |\r\n",
"| cspda | 77.52 | 80 | 1 | V100 | 62.9 | 62.86 | UPLOA | Feb |\r\n",
"| rknet | | | | | | MB | D_COM | 02, |\r\n",
"| 19 | | | | | | | PLETE | 2021 |\r\n",
"+-------+-------+-------+-------+-------+-------+-------+-------+-------+\r\n"
]
}
],
"source": [
"!ngc registry model list nvidia/tlt_pretrained_object_detection:*"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"!mkdir -p $LOCAL_EXPERIMENT_DIR/pretrained_mobilenet_v2/"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Downloaded 4.3 MB in 12s, Download speed: 366.19 KB/s \n",
"----------------------------------------------------\n",
"Transfer id: tlt_pretrained_object_detection_vmobilenet_v2 Download status: Completed.\n",
"Downloaded local path: /home/dell/tlt-experiments/ssd/pretrained_mobilenet_v2/tlt_pretrained_object_detection_vmobilenet_v2\n",
"Total files downloaded: 1 \n",
"Total downloaded size: 4.3 MB\n",
"Started at: 2021-03-12 11:37:21.732748\n",
"Completed at: 2021-03-12 11:37:33.753477\n",
"Duration taken: 12s\n",
"----------------------------------------------------\n"
]
}
],
"source": [
"# Pull pretrained model from NGC\n",
"!ngc registry model download-version nvidia/tlt_pretrained_object_detection:mobilenet_v2 --dest $LOCAL_EXPERIMENT_DIR/pretrained_mobilenet_v2"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Check that model is downloaded into dir.\n",
"total 5136\r\n",
"-rw------- 1 dell dell 5258048 Mar 12 11:37 mobilenet_v2.hdf5\r\n"
]
}
],
"source": [
"print(\"Check that model is downloaded into dir.\")\n",
"!ls -l $LOCAL_EXPERIMENT_DIR/pretrained_mobilenet_v2/tlt_pretrained_object_detection_vmobilenet_v2"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Provide training specification \n",
"* Datasets for training and validation\n",
"  * To use the newly generated dataset, update the `dataset_config` parameter in the spec file at `$LOCAL_SPECS_DIR/ssd_train_resnet18_kitti.txt`\n",
"* Augmentation parameters for on-the-fly data augmentation\n",
"* Other training (hyper-)parameters such as batch size, number of epochs, learning rate, etc.\n",
"* Whether to use quantization-aware training (QAT)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# To enable QAT training in the sample spec files, uncomment the following lines\n",
"# !sed -i \"s/enable_qat: false/enable_qat: true/g\" $LOCAL_SPECS_DIR/ssd_train_resnet18_kitti.txt\n",
"# !sed -i \"s/enable_qat: false/enable_qat: true/g\" $LOCAL_SPECS_DIR/ssd_retrain_resnet18_kitti.txt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# By default, the sample spec files disable QAT training. You can force non-QAT training by running the lines below\n",
"# !sed -i \"s/enable_qat: true/enable_qat: false/g\" $LOCAL_SPECS_DIR/ssd_train_resnet18_kitti.txt\n",
"# !sed -i \"s/enable_qat: true/enable_qat: false/g\" $LOCAL_SPECS_DIR/ssd_retrain_resnet18_kitti.txt"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"random_seed: 42\r\n",
"ssd_config {\r\n",
" aspect_ratios_global: \"[1.0, 2.0, 0.5, 3.0, 1.0/3.0]\"\r\n",
" scales: \"[0.05, 0.1, 0.25, 0.4, 0.55, 0.7, 0.85]\"\r\n",
" two_boxes_for_ar1: true\r\n",
" clip_boxes: false\r\n",
" variances: \"[0.1, 0.1, 0.2, 0.2]\"\r\n",
" arch: \"mobilenet_v2\"\r\n",
" freeze_bn: false\r\n",
" freeze_blocks: 0\r\n",
"}\r\n",
"training_config {\r\n",
" batch_size_per_gpu: 8\r\n",
" num_epochs: 80\r\n",
" enable_qat: false\r\n",
" learning_rate {\r\n",
" soft_start_annealing_schedule {\r\n",
" min_learning_rate: 5e-5\r\n",
" max_learning_rate: 2e-2\r\n",
" soft_start: 0.15\r\n",
" annealing: 0.8\r\n",
" }\r\n",
" }\r\n",
" regularizer {\r\n",
" type: L1\r\n",
" weight: 3e-5\r\n",
" }\r\n",
"}\r\n",
"eval_config {\r\n",
" validation_period_during_training: 5\r\n",
" average_precision_mode: SAMPLE\r\n",
" batch_size: 8\r\n",
" matching_iou_threshold: 0.5\r\n",
"}\r\n",
"nms_config {\r\n",
" confidence_threshold: 0.01\r\n",
" clustering_iou_threshold: 0.6\r\n",
" top_k: 200\r\n",
"}\r\n",
"augmentation_config {\r\n",
" output_width: 300\r\n",
" output_height: 300\r\n",
" output_channel: 3\r\n",
"}\r\n",
"dataset_config {\r\n",
" data_sources: {\r\n",
" label_directory_path: \"/workspace/tlt-experiments/data/HEAD/training/labels_300x300\"\r\n",
" image_directory_path: \"/workspace/tlt-experiments/data/HEAD/training/images_300x300\"\r\n",
" }\r\n",
" include_difficult_in_training: true\r\n",
"\r\n",
" target_class_mapping {\r\n",
" key: \"head\"\r\n",
" value: \"head\"\r\n",
" }\r\n",
" validation_data_sources: {\r\n",
" label_directory_path: \"/workspace/tlt-experiments/data/HEAD/val/label\"\r\n",
" image_directory_path: \"/workspace/tlt-experiments/data/HEAD/val/image\"\r\n",
" }\r\n",
"}\r\n"
]
}
],
"source": [
"!cat $LOCAL_SPECS_DIR/ssd_train_mobilenet_v2.txt"
]
},
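  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A few notes on the spec above: `soft_start: 0.15` ramps the learning rate from `min_learning_rate` up to `max_learning_rate` over roughly the first 15% of training, and `annealing: 0.8` begins decaying it back down after about 80% of the epochs. Following the usual SSD convention, the 7 entries in `scales` cover 6 feature maps plus the extra scale used by `two_boxes_for_ar1`, which adds a second unit-aspect-ratio box at each location (6 anchor shapes per location with the 5 `aspect_ratios_global` entries)."
   ]
  },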
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. Run TLT training \n",
"* Provide the sample spec file and the output directory location for models\n",
"* WARNING: training may take several hours, up to a day, to complete"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"!mkdir -p $LOCAL_EXPERIMENT_DIR/experiment_dir_unpruned_300x300_ssdmobilenet"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"To run with multigpu, please change --gpus based on the number of available GPUs in your machine.\n",
"2021-03-12 19:52:54,780 [WARNING] tlt.components.docker_handler.docker_handler: \n",
"Docker will run the commands as root. If you would like to retain your\n",
"local host permissions, please add the \"user\":\"UID:GID\" in the\n",
"DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your\n",
"users UID and GID by using the \"id -u\" and \"id -g\" commands on the\n",
"terminal.\n",
"Using TensorFlow backend.\n",
"WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.\n",
"Using TensorFlow backend.\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:117: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n",
"\n",
"2021-03-12 16:53:00,850 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:117: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:143: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n",
"\n",
"2021-03-12 16:53:00,850 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/horovod/tensorflow/__init__.py:143: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n",
"\n",
"WARNING:tensorflow:From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/scripts/train.py:63: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n",
"\n",
"2021-03-12 16:53:00,918 [WARNING] tensorflow: From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/scripts/train.py:63: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n",
"\n",
"WARNING:tensorflow:From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/scripts/train.py:66: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n",
"\n",
"2021-03-12 16:53:00,919 [WARNING] tensorflow: From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/scripts/train.py:66: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n",
"\n",
"2021-03-12 16:53:01,191 [INFO] /usr/local/lib/python3.6/dist-packages/iva/ssd/utils/spec_loader.pyc: Merging specification from /workspace/tlt-experiments/ssd/specs/ssd_train_mobilenet_v2.txt\n",
"2021-03-12 16:53:01,203 [INFO] __main__: Loading pretrained weights. This may take a while...\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n",
"2021-03-12 16:53:01,203 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n",
"\n",
"2021-03-12 16:53:01,206 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.\n",
"\n",
"2021-03-12 16:53:01,220 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4185: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.\n",
"\n",
"2021-03-12 16:53:02,035 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4185: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.\n",
"\n",
"WARNING:tensorflow:From /opt/nvidia/third_party/keras/tensorflow_backend.py:187: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.\n",
"\n",
"2021-03-12 16:53:04,327 [WARNING] tensorflow: From /opt/nvidia/third_party/keras/tensorflow_backend.py:187: The name tf.nn.avg_pool is deprecated. Please use tf.nn.avg_pool2d instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n",
"\n",
"2021-03-12 16:53:04,606 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n",
"\n",
"2021-03-12 16:53:04,607 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.\n",
"\n",
"2021-03-12 16:53:05,429 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n",
"\n",
"2021-03-12 16:53:06,297 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3295: The name tf.log is deprecated. Please use tf.math.log instead.\n",
"\n",
"2021-03-12 16:53:06,300 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:3295: The name tf.log is deprecated. Please use tf.math.log instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n",
"\n",
"2021-03-12 16:53:06,960 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n",
"\n",
"2021-03-12 16:53:07,198 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n",
"\n",
"Weights for those layers can not be loaded: ['re_lu_0']\n",
"STOP trainig now and check the pre-train model if this is not expected!\n",
"Initialize optimizer\n",
"WARNING:tensorflow:From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/utils/tensor_utils.py:121: The name tf.local_variables_initializer is deprecated. Please use tf.compat.v1.local_variables_initializer instead.\n",
"\n",
"2021-03-12 16:53:55,276 [WARNING] tensorflow: From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/utils/tensor_utils.py:121: The name tf.local_variables_initializer is deprecated. Please use tf.compat.v1.local_variables_initializer instead.\n",
"\n",
"WARNING:tensorflow:From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/utils/tensor_utils.py:122: The name tf.tables_initializer is deprecated. Please use tf.compat.v1.tables_initializer instead.\n",
"\n",
"2021-03-12 16:53:55,277 [WARNING] tensorflow: From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/utils/tensor_utils.py:122: The name tf.tables_initializer is deprecated. Please use tf.compat.v1.tables_initializer instead.\n",
"\n",
"WARNING:tensorflow:From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/utils/tensor_utils.py:123: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.\n",
"\n",
"2021-03-12 16:53:55,277 [WARNING] tensorflow: From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/utils/tensor_utils.py:123: The name tf.get_collection is deprecated. Please use tf.compat.v1.get_collection instead.\n",
"\n",
"__________________________________________________________________________________________________\r\n",
"Layer (type) Output Shape Param # Connected to \r\n",
"==================================================================================================\r\n",
"Input (InputLayer) (None, 3, 300, 300) 0 \r\n",
"__________________________________________________________________________________________________\r\n",
"conv1_pad (ZeroPadding2D) (None, 3, 302, 302) 0 Input[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"conv1 (Conv2D) (None, 32, 150, 150) 864 conv1_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"bn_conv1 (BatchNormalization) (None, 32, 150, 150) 128 conv1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_0 (ReLU) (None, 32, 150, 150) 0 bn_conv1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"expanded_conv_depthwise_pad (Ze (None, 32, 152, 152) 0 re_lu_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"expanded_conv_depthwise (Depthw (None, 32, 150, 150) 288 expanded_conv_depthwise_pad[0][0]\r\n",
"__________________________________________________________________________________________________\r\n",
"expanded_conv_depthwise_bn (Bat (None, 32, 150, 150) 128 expanded_conv_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"expanded_conv_relu (ReLU) (None, 32, 150, 150) 0 expanded_conv_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"expanded_conv_project (Conv2D) (None, 16, 150, 150) 512 expanded_conv_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"expanded_conv_project_bn (Batch (None, 16, 150, 150) 64 expanded_conv_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_1_expand (Conv2D) (None, 96, 150, 150) 1536 expanded_conv_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_1_expand_bn (BatchNormali (None, 96, 150, 150) 384 block_1_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_2 (ReLU) (None, 96, 150, 150) 0 block_1_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_1_depthwise_pad (ZeroPadd (None, 96, 152, 152) 0 re_lu_2[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_1_depthwise (DepthwiseCon (None, 96, 75, 75) 864 block_1_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_1_depthwise_bn (BatchNorm (None, 96, 75, 75) 384 block_1_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_1_relu (ReLU) (None, 96, 75, 75) 0 block_1_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_1_project (Conv2D) (None, 24, 75, 75) 2304 block_1_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_1_project_bn (BatchNormal (None, 24, 75, 75) 96 block_1_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_expand (Conv2D) (None, 144, 75, 75) 3456 block_1_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_expand_bn (BatchNormali (None, 144, 75, 75) 576 block_2_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_3 (ReLU) (None, 144, 75, 75) 0 block_2_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_depthwise_pad (ZeroPadd (None, 144, 77, 77) 0 re_lu_3[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_depthwise (DepthwiseCon (None, 144, 75, 75) 1296 block_2_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_depthwise_bn (BatchNorm (None, 144, 75, 75) 576 block_2_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_relu (ReLU) (None, 144, 75, 75) 0 block_2_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_project (Conv2D) (None, 24, 75, 75) 3456 block_2_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_projected_inputs (Conv2 (None, 24, 75, 75) 576 block_1_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_project_bn (BatchNormal (None, 24, 75, 75) 96 block_2_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_2_add (Add) (None, 24, 75, 75) 0 block_2_projected_inputs[0][0] \r\n",
" block_2_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_3_expand (Conv2D) (None, 144, 75, 75) 3456 block_2_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_3_expand_bn (BatchNormali (None, 144, 75, 75) 576 block_3_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_4 (ReLU) (None, 144, 75, 75) 0 block_3_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_3_depthwise_pad (ZeroPadd (None, 144, 77, 77) 0 re_lu_4[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_3_depthwise (DepthwiseCon (None, 144, 38, 38) 1296 block_3_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_3_depthwise_bn (BatchNorm (None, 144, 38, 38) 576 block_3_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_3_relu (ReLU) (None, 144, 38, 38) 0 block_3_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_3_project (Conv2D) (None, 32, 38, 38) 4608 block_3_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_3_project_bn (BatchNormal (None, 32, 38, 38) 128 block_3_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_expand (Conv2D) (None, 192, 38, 38) 6144 block_3_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_expand_bn (BatchNormali (None, 192, 38, 38) 768 block_4_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_5 (ReLU) (None, 192, 38, 38) 0 block_4_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_depthwise_pad (ZeroPadd (None, 192, 40, 40) 0 re_lu_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_depthwise (DepthwiseCon (None, 192, 38, 38) 1728 block_4_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_depthwise_bn (BatchNorm (None, 192, 38, 38) 768 block_4_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_relu (ReLU) (None, 192, 38, 38) 0 block_4_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_project (Conv2D) (None, 32, 38, 38) 6144 block_4_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_projected_inputs (Conv2 (None, 32, 38, 38) 1024 block_3_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_project_bn (BatchNormal (None, 32, 38, 38) 128 block_4_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_4_add (Add) (None, 32, 38, 38) 0 block_4_projected_inputs[0][0] \r\n",
" block_4_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_expand (Conv2D) (None, 192, 38, 38) 6144 block_4_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_expand_bn (BatchNormali (None, 192, 38, 38) 768 block_5_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_6 (ReLU) (None, 192, 38, 38) 0 block_5_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_depthwise_pad (ZeroPadd (None, 192, 40, 40) 0 re_lu_6[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_depthwise (DepthwiseCon (None, 192, 38, 38) 1728 block_5_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_depthwise_bn (BatchNorm (None, 192, 38, 38) 768 block_5_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_relu (ReLU) (None, 192, 38, 38) 0 block_5_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_project (Conv2D) (None, 32, 38, 38) 6144 block_5_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_projected_inputs (Conv2 (None, 32, 38, 38) 1024 block_4_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_project_bn (BatchNormal (None, 32, 38, 38) 128 block_5_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_5_add (Add) (None, 32, 38, 38) 0 block_5_projected_inputs[0][0] \r\n",
" block_5_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_6_expand (Conv2D) (None, 192, 38, 38) 6144 block_5_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_6_expand_bn (BatchNormali (None, 192, 38, 38) 768 block_6_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_7 (ReLU) (None, 192, 38, 38) 0 block_6_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_6_depthwise_pad (ZeroPadd (None, 192, 40, 40) 0 re_lu_7[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_6_depthwise (DepthwiseCon (None, 192, 19, 19) 1728 block_6_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_6_depthwise_bn (BatchNorm (None, 192, 19, 19) 768 block_6_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_6_relu (ReLU) (None, 192, 19, 19) 0 block_6_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_6_project (Conv2D) (None, 64, 19, 19) 12288 block_6_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_6_project_bn (BatchNormal (None, 64, 19, 19) 256 block_6_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_expand (Conv2D) (None, 384, 19, 19) 24576 block_6_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_expand_bn (BatchNormali (None, 384, 19, 19) 1536 block_7_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_8 (ReLU) (None, 384, 19, 19) 0 block_7_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_depthwise_pad (ZeroPadd (None, 384, 21, 21) 0 re_lu_8[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_depthwise (DepthwiseCon (None, 384, 19, 19) 3456 block_7_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_depthwise_bn (BatchNorm (None, 384, 19, 19) 1536 block_7_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_relu (ReLU) (None, 384, 19, 19) 0 block_7_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_project (Conv2D) (None, 64, 19, 19) 24576 block_7_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_projected_inputs (Conv2 (None, 64, 19, 19) 4096 block_6_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_project_bn (BatchNormal (None, 64, 19, 19) 256 block_7_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_7_add (Add) (None, 64, 19, 19) 0 block_7_projected_inputs[0][0] \r\n",
" block_7_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_expand (Conv2D) (None, 384, 19, 19) 24576 block_7_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_expand_bn (BatchNormali (None, 384, 19, 19) 1536 block_8_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_9 (ReLU) (None, 384, 19, 19) 0 block_8_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_depthwise_pad (ZeroPadd (None, 384, 21, 21) 0 re_lu_9[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_depthwise (DepthwiseCon (None, 384, 19, 19) 3456 block_8_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_depthwise_bn (BatchNorm (None, 384, 19, 19) 1536 block_8_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_relu (ReLU) (None, 384, 19, 19) 0 block_8_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_project (Conv2D) (None, 64, 19, 19) 24576 block_8_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_projected_inputs (Conv2 (None, 64, 19, 19) 4096 block_7_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_project_bn (BatchNormal (None, 64, 19, 19) 256 block_8_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_8_add (Add) (None, 64, 19, 19) 0 block_8_projected_inputs[0][0] \r\n",
" block_8_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_expand (Conv2D) (None, 384, 19, 19) 24576 block_8_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_expand_bn (BatchNormali (None, 384, 19, 19) 1536 block_9_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_10 (ReLU) (None, 384, 19, 19) 0 block_9_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_depthwise_pad (ZeroPadd (None, 384, 21, 21) 0 re_lu_10[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_depthwise (DepthwiseCon (None, 384, 19, 19) 3456 block_9_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_depthwise_bn (BatchNorm (None, 384, 19, 19) 1536 block_9_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_relu (ReLU) (None, 384, 19, 19) 0 block_9_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_project (Conv2D) (None, 64, 19, 19) 24576 block_9_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_projected_inputs (Conv2 (None, 64, 19, 19) 4096 block_8_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_project_bn (BatchNormal (None, 64, 19, 19) 256 block_9_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_9_add (Add) (None, 64, 19, 19) 0 block_9_projected_inputs[0][0] \r\n",
" block_9_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_10_expand (Conv2D) (None, 384, 19, 19) 24576 block_9_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_10_expand_bn (BatchNormal (None, 384, 19, 19) 1536 block_10_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_11 (ReLU) (None, 384, 19, 19) 0 block_10_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_10_depthwise_pad (ZeroPad (None, 384, 21, 21) 0 re_lu_11[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_10_depthwise (DepthwiseCo (None, 384, 19, 19) 3456 block_10_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_10_depthwise_bn (BatchNor (None, 384, 19, 19) 1536 block_10_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_10_relu (ReLU) (None, 384, 19, 19) 0 block_10_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_10_project (Conv2D) (None, 96, 19, 19) 36864 block_10_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_10_project_bn (BatchNorma (None, 96, 19, 19) 384 block_10_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_expand (Conv2D) (None, 576, 19, 19) 55296 block_10_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_expand_bn (BatchNormal (None, 576, 19, 19) 2304 block_11_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_12 (ReLU) (None, 576, 19, 19) 0 block_11_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_depthwise_pad (ZeroPad (None, 576, 21, 21) 0 re_lu_12[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_depthwise (DepthwiseCo (None, 576, 19, 19) 5184 block_11_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_depthwise_bn (BatchNor (None, 576, 19, 19) 2304 block_11_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_relu (ReLU) (None, 576, 19, 19) 0 block_11_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_project (Conv2D) (None, 96, 19, 19) 55296 block_11_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_projected_inputs (Conv (None, 96, 19, 19) 9216 block_10_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_project_bn (BatchNorma (None, 96, 19, 19) 384 block_11_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_11_add (Add) (None, 96, 19, 19) 0 block_11_projected_inputs[0][0] \r\n",
" block_11_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_expand (Conv2D) (None, 576, 19, 19) 55296 block_11_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_expand_bn (BatchNormal (None, 576, 19, 19) 2304 block_12_expand[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"re_lu_13 (ReLU) (None, 576, 19, 19) 0 block_12_expand_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_depthwise_pad (ZeroPad (None, 576, 21, 21) 0 re_lu_13[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_depthwise (DepthwiseCo (None, 576, 19, 19) 5184 block_12_depthwise_pad[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_depthwise_bn (BatchNor (None, 576, 19, 19) 2304 block_12_depthwise[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_relu (ReLU) (None, 576, 19, 19) 0 block_12_depthwise_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_project (Conv2D) (None, 96, 19, 19) 55296 block_12_relu[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_projected_inputs (Conv (None, 96, 19, 19) 9216 block_11_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_project_bn (BatchNorma (None, 96, 19, 19) 384 block_12_project[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"block_12_add (Add) (None, 96, 19, 19) 0 block_12_projected_inputs[0][0] \r\n",
" block_12_project_bn[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_1_conv_0 (Conv (None, 64, 19, 19) 6208 block_12_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_1_relu_0 (ReLU (None, 64, 19, 19) 0 ssd_expand_block_1_conv_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_1_conv_1 (Conv (None, 128, 10, 10) 73728 ssd_expand_block_1_relu_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_1_bn_1 (BatchN (None, 128, 10, 10) 512 ssd_expand_block_1_conv_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_1_relu_1 (ReLU (None, 128, 10, 10) 0 ssd_expand_block_1_bn_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_2_conv_0 (Conv (None, 64, 10, 10) 8256 ssd_expand_block_1_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_2_relu_0 (ReLU (None, 64, 10, 10) 0 ssd_expand_block_2_conv_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_2_conv_1 (Conv (None, 128, 5, 5) 73728 ssd_expand_block_2_relu_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_2_bn_1 (BatchN (None, 128, 5, 5) 512 ssd_expand_block_2_conv_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_2_relu_1 (ReLU (None, 128, 5, 5) 0 ssd_expand_block_2_bn_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_3_conv_0 (Conv (None, 64, 5, 5) 8256 ssd_expand_block_2_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_3_relu_0 (ReLU (None, 64, 5, 5) 0 ssd_expand_block_3_conv_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_3_conv_1 (Conv (None, 128, 3, 3) 73728 ssd_expand_block_3_relu_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_3_bn_1 (BatchN (None, 128, 3, 3) 512 ssd_expand_block_3_conv_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_3_relu_1 (ReLU (None, 128, 3, 3) 0 ssd_expand_block_3_bn_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_4_conv_0 (Conv (None, 64, 3, 3) 8256 ssd_expand_block_3_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_4_relu_0 (ReLU (None, 64, 3, 3) 0 ssd_expand_block_4_conv_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_4_conv_1 (Conv (None, 128, 2, 2) 73728 ssd_expand_block_4_relu_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_4_bn_1 (BatchN (None, 128, 2, 2) 512 ssd_expand_block_4_conv_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_expand_block_4_relu_1 (ReLU (None, 128, 2, 2) 0 ssd_expand_block_4_bn_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_conf_0 (Conv2D) (None, 12, 38, 38) 20748 re_lu_7[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_conf_1 (Conv2D) (None, 12, 19, 19) 10380 block_12_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_conf_2 (Conv2D) (None, 12, 10, 10) 13836 ssd_expand_block_1_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_conf_3 (Conv2D) (None, 12, 5, 5) 13836 ssd_expand_block_2_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_conf_4 (Conv2D) (None, 12, 3, 3) 13836 ssd_expand_block_3_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_conf_5 (Conv2D) (None, 12, 2, 2) 13836 ssd_expand_block_4_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_1 (Permute) (None, 38, 38, 12) 0 ssd_conf_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_2 (Permute) (None, 19, 19, 12) 0 ssd_conf_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_3 (Permute) (None, 10, 10, 12) 0 ssd_conf_2[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_4 (Permute) (None, 5, 5, 12) 0 ssd_conf_3[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_5 (Permute) (None, 3, 3, 12) 0 ssd_conf_4[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_6 (Permute) (None, 2, 2, 12) 0 ssd_conf_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"conf_reshape_0 (Reshape) (None, 8664, 1, 2) 0 permute_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"conf_reshape_1 (Reshape) (None, 2166, 1, 2) 0 permute_2[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"conf_reshape_2 (Reshape) (None, 600, 1, 2) 0 permute_3[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"conf_reshape_3 (Reshape) (None, 150, 1, 2) 0 permute_4[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"conf_reshape_4 (Reshape) (None, 54, 1, 2) 0 permute_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"conf_reshape_5 (Reshape) (None, 24, 1, 2) 0 permute_6[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"mbox_conf (Concatenate) (None, 11658, 1, 2) 0 conf_reshape_0[0][0] \r\n",
" conf_reshape_1[0][0] \r\n",
" conf_reshape_2[0][0] \r\n",
" conf_reshape_3[0][0] \r\n",
" conf_reshape_4[0][0] \r\n",
" conf_reshape_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_loc_0 (Conv2D) (None, 24, 38, 38) 41496 re_lu_7[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_loc_1 (Conv2D) (None, 24, 19, 19) 20760 block_12_add[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_loc_2 (Conv2D) (None, 24, 10, 10) 27672 ssd_expand_block_1_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_loc_3 (Conv2D) (None, 24, 5, 5) 27672 ssd_expand_block_2_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_loc_4 (Conv2D) (None, 24, 3, 3) 27672 ssd_expand_block_3_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_loc_5 (Conv2D) (None, 24, 2, 2) 27672 ssd_expand_block_4_relu_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"before_softmax_permute (Permute (None, 2, 1, 11658) 0 mbox_conf[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_7 (Permute) (None, 38, 38, 24) 0 ssd_loc_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_8 (Permute) (None, 19, 19, 24) 0 ssd_loc_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_9 (Permute) (None, 10, 10, 24) 0 ssd_loc_2[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_10 (Permute) (None, 5, 5, 24) 0 ssd_loc_3[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_11 (Permute) (None, 3, 3, 24) 0 ssd_loc_4[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"permute_12 (Permute) (None, 2, 2, 24) 0 ssd_loc_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_anchor_0 (AnchorBoxes) (None, 1444, 6, 8) 0 ssd_loc_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_anchor_1 (AnchorBoxes) (None, 361, 6, 8) 0 ssd_loc_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_anchor_2 (AnchorBoxes) (None, 100, 6, 8) 0 ssd_loc_2[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_anchor_3 (AnchorBoxes) (None, 25, 6, 8) 0 ssd_loc_3[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_anchor_4 (AnchorBoxes) (None, 9, 6, 8) 0 ssd_loc_4[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_anchor_5 (AnchorBoxes) (None, 4, 6, 8) 0 ssd_loc_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"mbox_conf_softmax_ (Softmax) (None, 2, 1, 11658) 0 before_softmax_permute[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"loc_reshape_0 (Reshape) (None, 8664, 1, 4) 0 permute_7[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"loc_reshape_1 (Reshape) (None, 2166, 1, 4) 0 permute_8[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"loc_reshape_2 (Reshape) (None, 600, 1, 4) 0 permute_9[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"loc_reshape_3 (Reshape) (None, 150, 1, 4) 0 permute_10[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"loc_reshape_4 (Reshape) (None, 54, 1, 4) 0 permute_11[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"loc_reshape_5 (Reshape) (None, 24, 1, 4) 0 permute_12[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"anchor_reshape_0 (Reshape) (None, 8664, 1, 8) 0 ssd_anchor_0[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"anchor_reshape_1 (Reshape) (None, 2166, 1, 8) 0 ssd_anchor_1[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"anchor_reshape_2 (Reshape) (None, 600, 1, 8) 0 ssd_anchor_2[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"anchor_reshape_3 (Reshape) (None, 150, 1, 8) 0 ssd_anchor_3[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"anchor_reshape_4 (Reshape) (None, 54, 1, 8) 0 ssd_anchor_4[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"anchor_reshape_5 (Reshape) (None, 24, 1, 8) 0 ssd_anchor_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"mbox_conf_softmax (Permute) (None, 11658, 1, 2) 0 mbox_conf_softmax_[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"mbox_loc (Concatenate) (None, 11658, 1, 4) 0 loc_reshape_0[0][0] \r\n",
" loc_reshape_1[0][0] \r\n",
" loc_reshape_2[0][0] \r\n",
" loc_reshape_3[0][0] \r\n",
" loc_reshape_4[0][0] \r\n",
" loc_reshape_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"mbox_priorbox (Concatenate) (None, 11658, 1, 8) 0 anchor_reshape_0[0][0] \r\n",
" anchor_reshape_1[0][0] \r\n",
" anchor_reshape_2[0][0] \r\n",
" anchor_reshape_3[0][0] \r\n",
" anchor_reshape_4[0][0] \r\n",
" anchor_reshape_5[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"concatenate_1 (Concatenate) (None, 11658, 1, 14) 0 mbox_conf_softmax[0][0] \r\n",
" mbox_loc[0][0] \r\n",
" mbox_priorbox[0][0] \r\n",
"__________________________________________________________________________________________________\r\n",
"ssd_predictions (Reshape) (None, 11658, 14) 0 concatenate_1[0][0] \r\n",
"==================================================================================================\r\n",
"Total params: 1,179,352\r\n",
"Trainable params: 1,161,336\r\n",
"Non-trainable params: 18,016\r\n",
"__________________________________________________________________________________________________\r\n",
"2021-03-12 16:53:55,579 [INFO] __main__: Number of images in the training dataset:\t 439\r\n",
"2021-03-12 16:53:55,579 [INFO] __main__: Number of images in the validation dataset:\t 48\r\n",
"Epoch 1/80\n",
" 2/55 [>.............................] - ETA: 4:40 - loss: 99.8937/usr/local/lib/python3.6/dist-packages/keras/callbacks.py:122: UserWarning: Method on_batch_end() is slow compared to the batch update (1.123164). Check your callbacks.\n",
" % delta_t_median)\n",
"55/55 [==============================] - 24s 429ms/step - loss: 70.1613\n",
"3898f6d223da:48:76 [0] NCCL INFO Bootstrap : Using [0]lo:127.0.0.1<0> [1]eth0:172.17.0.3<0>\n",
"3898f6d223da:48:76 [0] NCCL INFO NET/Plugin : No plugin found (libnccl-net.so), using internal implementation\n",
"3898f6d223da:48:76 [0] NCCL INFO NET/IB : No device found.\n",
"3898f6d223da:48:76 [0] NCCL INFO NET/Socket : Using [0]lo:127.0.0.1<0> [1]eth0:172.17.0.3<0>\n",
"3898f6d223da:48:76 [0] NCCL INFO Using network Socket\n",
"NCCL version 2.7.8+cuda11.1\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 00/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 01/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 02/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 03/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 04/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 05/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 06/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 07/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 08/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 09/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 10/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 11/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 12/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 13/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 14/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 15/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 16/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 17/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 18/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 19/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 20/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 21/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 22/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 23/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 24/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 25/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 26/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 27/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 28/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 29/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 30/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Channel 31/32 : 0\n",
"3898f6d223da:48:76 [0] NCCL INFO Trees [0] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [1] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [2] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [3] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [4] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [5] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [6] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [7] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [8] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [9] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [10] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [11] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [12] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [13] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [14] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [15] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [16] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [17] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [18] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [19] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [20] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [21] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [22] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [23] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [24] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [25] -1/-1/-1->0->-1|-1->0->-1/-1/-1 [26] -1/-1/-1->0->-1|-1->0->-1/-\n",
"3898f6d223da:48:76 [0] NCCL INFO 32 coll channels, 32 p2p channels, 32 p2p channels per peer\n",
"3898f6d223da:48:76 [0] NCCL INFO comm 0x7f352432c1d0 rank 0 nranks 1 cudaDev 0 busId 1000 - Init COMPLETE\n",
"\n",
"Epoch 00001: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_001.tlt\n",
"Epoch 2/80\n",
"55/55 [==============================] - 9s 173ms/step - loss: 37.4464\n",
"\n",
"Epoch 00002: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_002.tlt\n",
"Epoch 3/80\n",
"55/55 [==============================] - 10s 175ms/step - loss: 24.2742\n",
"\n",
"Epoch 00003: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_003.tlt\n",
"Epoch 4/80\n",
"55/55 [==============================] - 9s 171ms/step - loss: 16.4649\n",
"\n",
"Epoch 00004: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_004.tlt\n",
"Epoch 5/80\n",
"55/55 [==============================] - 10s 190ms/step - loss: 12.7545\n",
"\n",
"Epoch 00005: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_005.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:01<00:00, 3.35it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.00834\n",
" mAP 0.00834\n",
"*******************************\n",
"Validation loss: 244.01341247558594\n",
"Epoch 6/80\n",
"55/55 [==============================] - 12s 225ms/step - loss: 7.9696\n",
"\n",
"Epoch 00006: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_006.tlt\n",
"Epoch 7/80\n",
"55/55 [==============================] - 12s 225ms/step - loss: 7.1477\n",
"\n",
"Epoch 00007: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_007.tlt\n",
"Epoch 8/80\n",
"55/55 [==============================] - 12s 225ms/step - loss: 14.1549\n",
"\n",
"Epoch 00008: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_008.tlt\n",
"Epoch 9/80\n",
"55/55 [==============================] - 13s 229ms/step - loss: 43.8171\n",
"\n",
"Epoch 00009: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_009.tlt\n",
"Epoch 10/80\n",
"55/55 [==============================] - 13s 228ms/step - loss: 193.0987\n",
"\n",
"Epoch 00010: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_010.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.61it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 5e-05\n",
" mAP 5e-05\n",
"*******************************\n",
"Validation loss: 7.323096782890598e+16\n",
"Epoch 11/80\n",
"55/55 [==============================] - 13s 236ms/step - loss: 152.6107\n",
"\n",
"Epoch 00011: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_011.tlt\n",
"Epoch 12/80\n",
"55/55 [==============================] - 12s 224ms/step - loss: 117.8140\n",
"\n",
"Epoch 00012: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_012.tlt\n",
"Epoch 13/80\n",
"55/55 [==============================] - 13s 235ms/step - loss: 112.6649\n",
"\n",
"Epoch 00013: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_013.tlt\n",
"Epoch 14/80\n",
"55/55 [==============================] - 13s 237ms/step - loss: 109.1478\n",
"\n",
"Epoch 00014: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_014.tlt\n",
"Epoch 15/80\n",
"55/55 [==============================] - 14s 250ms/step - loss: 108.6506\n",
"\n",
"Epoch 00015: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_015.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.87it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 3e-05\n",
" mAP 3e-05\n",
"*******************************\n",
"Validation loss: 25503938.333333332\n",
"Epoch 16/80\n",
"55/55 [==============================] - 14s 258ms/step - loss: 108.4098\n",
"\n",
"Epoch 00016: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_016.tlt\n",
"Epoch 17/80\n",
"55/55 [==============================] - 13s 233ms/step - loss: 108.3715\n",
"\n",
"Epoch 00017: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_017.tlt\n",
"Epoch 18/80\n",
"55/55 [==============================] - 12s 227ms/step - loss: 108.4152\n",
"\n",
"Epoch 00018: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_018.tlt\n",
"Epoch 19/80\n",
"55/55 [==============================] - 13s 229ms/step - loss: 108.0759\n",
"\n",
"Epoch 00019: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_019.tlt\n",
"Epoch 20/80\n",
"55/55 [==============================] - 12s 227ms/step - loss: 108.1557\n",
"\n",
"Epoch 00020: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_020.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 9.00it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 5e-05\n",
" mAP 5e-05\n",
"*******************************\n",
"Validation loss: 57522.952473958336\n",
"Epoch 21/80\n",
"55/55 [==============================] - 12s 226ms/step - loss: 108.1116\n",
"\n",
"Epoch 00021: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_021.tlt\n",
"Epoch 22/80\n",
"55/55 [==============================] - 13s 230ms/step - loss: 107.9887\n",
"\n",
"Epoch 00022: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_022.tlt\n",
"Epoch 23/80\n",
"55/55 [==============================] - 13s 228ms/step - loss: 109.1517\n",
"\n",
"Epoch 00023: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_023.tlt\n",
"Epoch 24/80\n",
"55/55 [==============================] - 13s 230ms/step - loss: 108.2904\n",
"\n",
"Epoch 00024: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_024.tlt\n",
"Epoch 25/80\n",
"55/55 [==============================] - 13s 232ms/step - loss: 108.2921\n",
"\n",
"Epoch 00025: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_025.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.63it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 4e-05\n",
" mAP 4e-05\n",
"*******************************\n",
"Validation loss: 1027.188252766927\n",
"Epoch 26/80\n",
"55/55 [==============================] - 12s 225ms/step - loss: 107.6494\n",
"\n",
"Epoch 00026: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_026.tlt\n",
"Epoch 27/80\n",
"55/55 [==============================] - 13s 227ms/step - loss: 108.3787\n",
"\n",
"Epoch 00027: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_027.tlt\n",
"Epoch 28/80\n",
"55/55 [==============================] - 13s 238ms/step - loss: 107.7841\n",
"\n",
"Epoch 00028: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_028.tlt\n",
"Epoch 29/80\n",
"55/55 [==============================] - 13s 228ms/step - loss: 107.9141\n",
"\n",
"Epoch 00029: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_029.tlt\n",
"Epoch 30/80\n",
"55/55 [==============================] - 12s 226ms/step - loss: 107.6277\n",
"\n",
"Epoch 00030: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_030.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.83it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.00023\n",
" mAP 0.00023\n",
"*******************************\n",
"Validation loss: 959.6404724121094\n",
"Epoch 31/80\n",
"55/55 [==============================] - 12s 222ms/step - loss: 107.5459\n",
"\n",
"Epoch 00031: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_031.tlt\n",
"Epoch 32/80\n",
"55/55 [==============================] - 12s 227ms/step - loss: 107.8589\n",
"\n",
"Epoch 00032: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_032.tlt\n",
"Epoch 33/80\n",
"55/55 [==============================] - 13s 231ms/step - loss: 107.6295\n",
"\n",
"Epoch 00033: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_033.tlt\n",
"Epoch 34/80\n",
"55/55 [==============================] - 13s 231ms/step - loss: 107.3930\n",
"\n",
"Epoch 00034: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_034.tlt\n",
"Epoch 35/80\n",
"55/55 [==============================] - 13s 228ms/step - loss: 107.5679\n",
"\n",
"Epoch 00035: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_035.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.79it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.00088\n",
" mAP 0.00088\n",
"*******************************\n",
"Validation loss: 843.8634847005209\n",
"Epoch 36/80\n",
"55/55 [==============================] - 12s 227ms/step - loss: 107.4376\n",
"\n",
"Epoch 00036: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_036.tlt\n",
"Epoch 37/80\n",
"55/55 [==============================] - 13s 231ms/step - loss: 107.4342\n",
"\n",
"Epoch 00037: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_037.tlt\n",
"Epoch 38/80\n",
"55/55 [==============================] - 13s 229ms/step - loss: 107.1970\n",
"\n",
"Epoch 00038: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_038.tlt\n",
"Epoch 39/80\n",
"55/55 [==============================] - 13s 229ms/step - loss: 107.3438\n",
"\n",
"Epoch 00039: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_039.tlt\n",
"Epoch 40/80\n",
"55/55 [==============================] - 13s 229ms/step - loss: 107.1737\n",
"\n",
"Epoch 00040: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_040.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.17it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.00044\n",
" mAP 0.00044\n",
"*******************************\n",
"Validation loss: 841.319101969401\n",
"Epoch 41/80\n",
"55/55 [==============================] - 13s 240ms/step - loss: 107.1899\n",
"\n",
"Epoch 00041: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_041.tlt\n",
"Epoch 42/80\n",
"55/55 [==============================] - 13s 230ms/step - loss: 107.2388\n",
"\n",
"Epoch 00042: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_042.tlt\n",
"Epoch 43/80\n",
"55/55 [==============================] - 13s 241ms/step - loss: 107.2273\n",
"\n",
"Epoch 00043: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_043.tlt\n",
"Epoch 44/80\n",
"55/55 [==============================] - 14s 253ms/step - loss: 107.1758\n",
"\n",
"Epoch 00044: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_044.tlt\n",
"Epoch 45/80\n",
"55/55 [==============================] - 13s 244ms/step - loss: 107.2456\n",
"\n",
"Epoch 00045: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_045.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 6.85it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.00018\n",
" mAP 0.00018\n",
"*******************************\n",
"Validation loss: 841.3893330891927\n",
"Epoch 46/80\n",
"55/55 [==============================] - 13s 240ms/step - loss: 107.2445\n",
"\n",
"Epoch 00046: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_046.tlt\n",
"Epoch 47/80\n",
"55/55 [==============================] - 13s 231ms/step - loss: 107.1235\n",
"\n",
"Epoch 00047: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_047.tlt\n",
"Epoch 48/80\n",
"55/55 [==============================] - 14s 254ms/step - loss: 107.2016\n",
"\n",
"Epoch 00048: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_048.tlt\n",
"Epoch 49/80\n",
"55/55 [==============================] - 13s 245ms/step - loss: 107.0736\n",
"\n",
"Epoch 00049: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_049.tlt\n",
"Epoch 50/80\n",
"55/55 [==============================] - 13s 233ms/step - loss: 107.0893\n",
"\n",
"Epoch 00050: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_050.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.89it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.0\n",
" mAP 0.0\n",
"*******************************\n",
"Validation loss: 840.0692647298177\n",
"Epoch 51/80\n",
"55/55 [==============================] - 12s 225ms/step - loss: 107.0672\n",
"\n",
"Epoch 00051: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_051.tlt\n",
"Epoch 52/80\n",
"55/55 [==============================] - 12s 226ms/step - loss: 106.9927\n",
"\n",
"Epoch 00052: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_052.tlt\n",
"Epoch 53/80\n",
"55/55 [==============================] - 13s 239ms/step - loss: 107.0595\n",
"\n",
"Epoch 00053: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_053.tlt\n",
"Epoch 54/80\n",
"55/55 [==============================] - 13s 245ms/step - loss: 107.0017\n",
"\n",
"Epoch 00054: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_054.tlt\n",
"Epoch 55/80\n",
"55/55 [==============================] - 14s 250ms/step - loss: 107.0012\n",
"\n",
"Epoch 00055: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_055.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.09it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 4e-05\n",
" mAP 4e-05\n",
"*******************************\n",
"Validation loss: 841.4677734375\n",
"Epoch 56/80\n",
"55/55 [==============================] - 13s 243ms/step - loss: 107.0215\n",
"\n",
"Epoch 00056: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_056.tlt\n",
"Epoch 57/80\n",
"55/55 [==============================] - 12s 226ms/step - loss: 106.9455\n",
"\n",
"Epoch 00057: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_057.tlt\n",
"Epoch 58/80\n",
"55/55 [==============================] - 13s 236ms/step - loss: 106.9921\n",
"\n",
"Epoch 00058: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_058.tlt\n",
"Epoch 59/80\n",
"55/55 [==============================] - 13s 233ms/step - loss: 106.8944\n",
"\n",
"Epoch 00059: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_059.tlt\n",
"Epoch 60/80\n",
"55/55 [==============================] - 13s 235ms/step - loss: 106.8480\n",
"\n",
"Epoch 00060: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_060.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.81it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.0\n",
" mAP 0.0\n",
"*******************************\n",
"Validation loss: 841.7695821126302\n",
"Epoch 61/80\n",
"55/55 [==============================] - 13s 230ms/step - loss: 106.7744\n",
"\n",
"Epoch 00061: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_061.tlt\n",
"Epoch 62/80\n",
"55/55 [==============================] - 12s 224ms/step - loss: 106.7841\n",
"\n",
"Epoch 00062: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_062.tlt\n",
"Epoch 63/80\n",
"55/55 [==============================] - 13s 227ms/step - loss: 106.7692\n",
"\n",
"Epoch 00063: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_063.tlt\n",
"Epoch 64/80\n",
"55/55 [==============================] - 13s 232ms/step - loss: 106.8326\n",
"\n",
"Epoch 00064: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_064.tlt\n",
"Epoch 65/80\n",
"55/55 [==============================] - 13s 228ms/step - loss: 106.8735\n",
"\n",
"Epoch 00065: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_065.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.74it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.0\n",
" mAP 0.0\n",
"*******************************\n",
"Validation loss: 840.977549235026\n",
"Epoch 66/80\n",
"55/55 [==============================] - 13s 234ms/step - loss: 106.7171\n",
"\n",
"Epoch 00066: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_066.tlt\n",
"Epoch 67/80\n",
"55/55 [==============================] - 13s 232ms/step - loss: 106.8040\n",
"\n",
"Epoch 00067: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_067.tlt\n",
"Epoch 68/80\n",
"55/55 [==============================] - 13s 228ms/step - loss: 106.6579\n",
"\n",
"Epoch 00068: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_068.tlt\n",
"Epoch 69/80\n",
"55/55 [==============================] - 13s 229ms/step - loss: 106.6542\n",
"\n",
"Epoch 00069: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_069.tlt\n",
"Epoch 70/80\n",
"55/55 [==============================] - 13s 239ms/step - loss: 106.6326\n",
"\n",
"Epoch 00070: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_070.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.80it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.0\n",
" mAP 0.0\n",
"*******************************\n",
"Validation loss: 839.5242411295573\n",
"Epoch 71/80\n",
"55/55 [==============================] - 13s 233ms/step - loss: 106.7141\n",
"\n",
"Epoch 00071: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_071.tlt\n",
"Epoch 72/80\n",
"55/55 [==============================] - 13s 235ms/step - loss: 106.6531\n",
"\n",
"Epoch 00072: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_072.tlt\n",
"Epoch 73/80\n",
"55/55 [==============================] - 13s 245ms/step - loss: 106.7782\n",
"\n",
"Epoch 00073: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_073.tlt\n",
"Epoch 74/80\n",
"55/55 [==============================] - 13s 239ms/step - loss: 106.6432\n",
"\n",
"Epoch 00074: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_074.tlt\n",
"Epoch 75/80\n",
"55/55 [==============================] - 12s 227ms/step - loss: 106.5717\n",
"\n",
"Epoch 00075: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_075.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.75it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.0\n",
" mAP 0.0\n",
"*******************************\n",
"Validation loss: 838.9827575683594\n",
"Epoch 76/80\n",
"55/55 [==============================] - 12s 227ms/step - loss: 106.6857\n",
"\n",
"Epoch 00076: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_076.tlt\n",
"Epoch 77/80\n",
"55/55 [==============================] - 13s 231ms/step - loss: 106.5406\n",
"\n",
"Epoch 00077: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_077.tlt\n",
"Epoch 78/80\n",
"55/55 [==============================] - 13s 238ms/step - loss: 106.6570\n",
"\n",
"Epoch 00078: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_078.tlt\n",
"Epoch 79/80\n",
"55/55 [==============================] - 13s 228ms/step - loss: 106.5847\n",
"\n",
"Epoch 00079: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_079.tlt\n",
"Epoch 80/80\n",
"55/55 [==============================] - 13s 233ms/step - loss: 106.6276\n",
"\n",
"Epoch 00080: saving model to /workspace/tlt-experiments/ssd/experiment_dir_unpruned_300x300_ssdmobilenet/weights/ssd_mobilenet_v2_epoch_080.tlt\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:00<00:00, 8.08it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.0\n",
" mAP 0.0\n",
"*******************************\n",
"Validation loss: 838.4864807128906\n",
"2021-03-12 20:16:36,931 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.\n"
]
}
],
"source": [
"print(\"To run with multiple GPUs, please change --gpus based on the number of available GPUs in your machine.\")\n",
"!tlt ssd train --gpus 1 --gpu_index=$GPU_INDEX \\\n",
" -e $SPECS_DIR/ssd_train_mobilenet_v2.txt \\\n",
" -r $USER_EXPERIMENT_DIR/experiment_dir_unpruned_300x300_ssdmobilenet \\\n",
" -k $KEY \\\n",
" -m $USER_EXPERIMENT_DIR/pretrained_mobilenet_v2/tlt_pretrained_object_detection_vmobilenet_v2/mobilenet_v2.hdf5"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print(\"To resume from a checkpoint, please uncomment and run this cell instead. Change the last two arguments accordingly.\")\n",
"# !tlt ssd train --gpus 1 --gpu_index=$GPU_INDEX \\\n",
"# -e $SPECS_DIR/ssd_train_resnet18_kitti.txt \\\n",
"# -r $USER_EXPERIMENT_DIR/experiment_dir_unpruned \\\n",
"# -k $KEY \\\n",
"# -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned/weights/ssd_resnet18_epoch_001.tlt \\\n",
"# --initial_epoch 2"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Model for each epoch:\n",
"---------------------\n",
"total 762M\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:15 ssd_mobilenet_v2_epoch_001.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:15 ssd_mobilenet_v2_epoch_002.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:15 ssd_mobilenet_v2_epoch_003.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:15 ssd_mobilenet_v2_epoch_004.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:16 ssd_mobilenet_v2_epoch_005.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:16 ssd_mobilenet_v2_epoch_006.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:16 ssd_mobilenet_v2_epoch_007.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:17 ssd_mobilenet_v2_epoch_008.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:17 ssd_mobilenet_v2_epoch_009.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:17 ssd_mobilenet_v2_epoch_010.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:17 ssd_mobilenet_v2_epoch_011.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:18 ssd_mobilenet_v2_epoch_012.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:18 ssd_mobilenet_v2_epoch_013.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:18 ssd_mobilenet_v2_epoch_014.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:18 ssd_mobilenet_v2_epoch_015.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:19 ssd_mobilenet_v2_epoch_016.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:19 ssd_mobilenet_v2_epoch_017.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:19 ssd_mobilenet_v2_epoch_018.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:20 ssd_mobilenet_v2_epoch_019.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:20 ssd_mobilenet_v2_epoch_020.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:20 ssd_mobilenet_v2_epoch_021.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:20 ssd_mobilenet_v2_epoch_022.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:21 ssd_mobilenet_v2_epoch_023.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:21 ssd_mobilenet_v2_epoch_024.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:21 ssd_mobilenet_v2_epoch_025.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:21 ssd_mobilenet_v2_epoch_026.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:22 ssd_mobilenet_v2_epoch_027.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:22 ssd_mobilenet_v2_epoch_028.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:22 ssd_mobilenet_v2_epoch_029.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:23 ssd_mobilenet_v2_epoch_030.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:23 ssd_mobilenet_v2_epoch_031.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:23 ssd_mobilenet_v2_epoch_032.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:23 ssd_mobilenet_v2_epoch_033.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:24 ssd_mobilenet_v2_epoch_034.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:24 ssd_mobilenet_v2_epoch_035.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:24 ssd_mobilenet_v2_epoch_036.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:25 ssd_mobilenet_v2_epoch_037.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:25 ssd_mobilenet_v2_epoch_038.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:25 ssd_mobilenet_v2_epoch_039.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:25 ssd_mobilenet_v2_epoch_040.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:26 ssd_mobilenet_v2_epoch_041.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:26 ssd_mobilenet_v2_epoch_042.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:26 ssd_mobilenet_v2_epoch_043.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:27 ssd_mobilenet_v2_epoch_044.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:27 ssd_mobilenet_v2_epoch_045.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:27 ssd_mobilenet_v2_epoch_046.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:27 ssd_mobilenet_v2_epoch_047.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:28 ssd_mobilenet_v2_epoch_048.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:28 ssd_mobilenet_v2_epoch_049.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:28 ssd_mobilenet_v2_epoch_050.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:29 ssd_mobilenet_v2_epoch_051.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:29 ssd_mobilenet_v2_epoch_052.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:29 ssd_mobilenet_v2_epoch_053.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:29 ssd_mobilenet_v2_epoch_054.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:30 ssd_mobilenet_v2_epoch_055.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:30 ssd_mobilenet_v2_epoch_056.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:30 ssd_mobilenet_v2_epoch_057.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:31 ssd_mobilenet_v2_epoch_058.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:31 ssd_mobilenet_v2_epoch_059.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:31 ssd_mobilenet_v2_epoch_060.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:31 ssd_mobilenet_v2_epoch_061.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:32 ssd_mobilenet_v2_epoch_062.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:32 ssd_mobilenet_v2_epoch_063.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:32 ssd_mobilenet_v2_epoch_064.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:32 ssd_mobilenet_v2_epoch_065.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:33 ssd_mobilenet_v2_epoch_066.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:33 ssd_mobilenet_v2_epoch_067.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:33 ssd_mobilenet_v2_epoch_068.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:34 ssd_mobilenet_v2_epoch_069.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:34 ssd_mobilenet_v2_epoch_070.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:34 ssd_mobilenet_v2_epoch_071.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:34 ssd_mobilenet_v2_epoch_072.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:35 ssd_mobilenet_v2_epoch_073.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:35 ssd_mobilenet_v2_epoch_074.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:35 ssd_mobilenet_v2_epoch_075.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:36 ssd_mobilenet_v2_epoch_076.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:36 ssd_mobilenet_v2_epoch_077.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:36 ssd_mobilenet_v2_epoch_078.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:37 ssd_mobilenet_v2_epoch_079.tlt\r\n",
"-rw-r--r-- 1 root root 9,6M Mar 12 12:37 ssd_mobilenet_v2_epoch_080.tlt\r\n"
]
}
],
"source": [
"print('Model for each epoch:')\n",
"print('---------------------')\n",
"!ls -ltrh $LOCAL_EXPERIMENT_DIR/experiment_dir_unpruned_300x300/weights"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch,AP_head,loss,lr,mAP,validation_loss\n",
"1,nan,71.66014382399297,8.2377446e-05,nan,nan\n",
"2,nan,37.671551102657794,0.00013572087,nan,nan\n",
"3,nan,23.70247550217057,0.00022360678,nan,nan\n",
"4,nan,16.38045983108138,0.00036840313,nan,nan\n",
"5,0.006535391776368941,11.930178915992684,0.00060696213,0.006535391776368941,212.00954945882162\n",
"6,nan,8.686380714381746,0.0009999998,nan,nan\n",
"7,nan,7.464418506839553,0.0016475488,nan,nan\n",
"8,nan,7.476165621590234,0.0027144172,nan,nan\n",
"9,nan,10.249315744108925,0.004472135,nan,nan\n",
"10,5.3225463061528635e-05,94.06979975776412,0.007368061,5.3225463061528635e-05,16427126784.0\n",
"11,nan,112.44676658612993,0.012139242,nan,nan\n",
"12,nan,109.57277521294179,0.019999994,nan,nan\n",
"13,nan,109.23674909726363,0.02,nan,nan\n",
"14,nan,108.80930218881247,0.02,nan,nan\n",
"15,7.986157327299349e-05,108.54428827029426,0.02,7.986157327299349e-05,132370.72265625\n",
"16,nan,108.35668102429506,0.02,nan,nan\n",
"17,nan,108.39642492133555,0.02,nan,nan\n",
"18,nan,111.07851222218576,0.02,nan,nan\n",
"19,nan,118.00245214649105,0.02,nan,nan\n",
"20,5.328786102525844e-05,118.19362967106639,0.02,5.328786102525844e-05,9461375189.333334\n",
"21,nan,111.86381499479464,0.02,nan,nan\n",
"22,nan,115.83711006320962,0.02,nan,nan\n",
"23,nan,116.18034432309092,0.02,nan,nan\n",
"24,nan,111.93502275818844,0.02,nan,nan\n",
"25,9.964824170677509e-06,110.42875963354436,0.02,9.964824170677509e-06,77181059.33333333\n",
"26,nan,109.39318649535299,0.02,nan,nan\n",
"27,nan,108.53018336198325,0.02,nan,nan\n",
"28,nan,108.36117009747272,0.02,nan,nan\n",
"29,nan,108.06503061281522,0.02,nan,nan\n",
"30,7.736943907156673e-05,107.91077153720725,0.02,7.736943907156673e-05,25512.849853515625\n",
"31,nan,107.82136252366328,0.02,nan,nan\n",
"32,nan,107.90821071963647,0.02,nan,nan\n",
"33,nan,108.05723846366031,0.02,nan,nan\n",
"34,nan,107.83417081724266,0.02,nan,nan\n",
"35,4.887585532746823e-05,107.69056008236826,0.02,4.887585532746823e-05,9840.915323893229\n",
"36,nan,107.64639475133805,0.02,nan,nan\n",
"37,nan,107.67100045241094,0.02,nan,nan\n",
"38,nan,107.75566551191118,0.02,nan,nan\n",
"39,nan,107.80332938087828,0.02,nan,nan\n",
"40,0.00022558087074216106,107.51877133449824,0.02,0.00022558087074216106,1255.6031392415364\n",
"41,nan,107.53948421956194,0.02,nan,nan\n",
"42,nan,107.74425084281349,0.02,nan,nan\n",
"43,nan,107.97039049361453,0.02,nan,nan\n",
"44,nan,110.63389079934643,0.02,nan,nan\n",
"45,0.00023450324396154148,107.90707074210964,0.02,0.00023450324396154148,856.354237874349\n",
"46,nan,107.90763265831323,0.02,nan,nan\n",
"47,nan,109.32956076863144,0.02,nan,nan\n",
"48,nan,109.11968771689031,0.02,nan,nan\n",
"49,nan,107.54848753075393,0.02,nan,nan\n",
"50,0.0,107.38513651089679,0.02,0.0,849.3625691731771\n",
"51,nan,107.60713093264499,0.02,nan,nan\n",
"52,nan,107.92790541942136,0.02,nan,nan\n",
"53,nan,107.54135362977047,0.02,nan,nan\n",
"54,nan,107.65736331809354,0.02,nan,nan\n",
"55,8.03793907242183e-05,107.25603306103405,0.02,8.03793907242183e-05,868.2169698079427\n",
"56,nan,108.05322715741899,0.02,nan,nan\n",
"57,nan,112.16575931905342,0.02,nan,nan\n",
"58,nan,108.15147410329762,0.02,nan,nan\n",
"59,nan,107.3894328853807,0.02,nan,nan\n",
"60,5.355469272995046e-05,107.51097232062615,0.02,5.355469272995046e-05,853.2933146158854\n",
"61,nan,107.23626158069098,0.02,nan,nan\n",
"62,nan,107.10328064312425,0.02,nan,nan\n",
"63,nan,108.16153624932153,0.02,nan,nan\n",
"64,nan,107.97945850693827,0.02,nan,nan\n",
"65,5.349166867260424e-05,107.42761013230864,0.0137531245,5.349166867260424e-05,846.4722900390625\n",
"66,nan,107.00941919139957,0.009457419,nan,nan\n",
"67,nan,107.06227367288159,0.0065034507,nan,nan\n",
"68,nan,106.96230780182232,0.004472137,nan,nan\n",
"69,nan,106.89057185492375,0.0030752919,nan,nan\n",
"70,5.333475559348249e-05,106.90933071127785,0.002114743,5.333475559348249e-05,841.4671020507812\n",
"71,nan,106.97416448918996,0.0014542157,nan,nan\n",
"72,nan,106.96787331479014,0.0010000002,nan,nan\n",
"73,nan,106.87869038505815,0.0006876561,nan,nan\n",
"74,nan,106.82620941370658,0.00047287086,nan,nan\n",
"75,5.2548607461902265e-05,106.87001538113743,0.00032517247,5.2548607461902265e-05,840.3218180338541\n",
"76,nan,106.82834599175594,0.00022360681,nan,nan\n",
"77,nan,106.99571686833758,0.00015376457,nan,nan\n",
"78,nan,106.90084430459963,0.00010573713,nan,nan\n",
"79,nan,106.82583970958389,7.271077e-05,nan,nan\n",
"80,5.0844010575554196e-05,106.89218551531468,5e-05,5.0844010575554196e-05,839.6044311523438\n",
"env: EPOCH=080\n"
]
}
],
"source": [
"# Now check the evaluation stats in the CSV file and pick the model with the highest eval accuracy.\n",
"!cat $LOCAL_EXPERIMENT_DIR/experiment_dir_unpruned_300x300/ssd_training_log_mobilenet_v2.csv\n",
"%set_env EPOCH=080"
]
},
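{
"cell_type": "markdown",
"metadata": {},
"source": [
"Rather than always taking the final epoch, you can pick the checkpoint with the best validation mAP programmatically. Below is a minimal sketch, assuming the training-log CSV layout shown above (an `epoch` column and a `mAP` column that is `nan` on epochs where no validation ran):\n",
"\n",
"```python\n",
"import csv\n",
"\n",
"def best_epoch(log_path):\n",
"    # Return (epoch, mAP) for the row with the highest finite mAP.\n",
"    best = (None, float('-inf'))\n",
"    with open(log_path) as f:\n",
"        for row in csv.DictReader(f):\n",
"            try:\n",
"                m = float(row['mAP'])\n",
"            except ValueError:\n",
"                continue\n",
"            if m == m and m > best[1]:  # m == m is False for nan\n",
"                best = (int(row['epoch']), m)\n",
"    return best\n",
"```\n",
"\n",
"You can then zero-pad the returned epoch to three digits before setting `EPOCH`."
]
},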
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. Evaluate trained models "
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2021-03-12 12:44:00,272 [WARNING] tlt.components.docker_handler.docker_handler: \n",
"Docker will run the commands as root. If you would like to retain your\n",
"local host permissions, please add the \"user\":\"UID:GID\" in the\n",
"DockerOptions portion of the ~/.tlt_mounts.json file. You can obtain your\n",
"users UID and GID by using the \"id -u\" and \"id -g\" commands on the\n",
"terminal.\n",
"Using TensorFlow backend.\n",
"Using TensorFlow backend.\n",
"WARNING:tensorflow:Deprecation warnings have been disabled. Set TF_ENABLE_DEPRECATION_WARNINGS=1 to re-enable them.\n",
"2021-03-12 09:44:06,414 [INFO] /usr/local/lib/python3.6/dist-packages/iva/ssd/utils/spec_loader.pyc: Merging specification from /workspace/tlt-experiments/ssd/specs/ssd_train_mobilenet_v2.txt\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:95: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.\n",
"\n",
"2021-03-12 09:44:06,416 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:95: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:98: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n",
"\n",
"2021-03-12 09:44:06,416 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:98: The name tf.placeholder_with_default is deprecated. Please use tf.compat.v1.placeholder_with_default instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:102: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n",
"\n",
"2021-03-12 09:44:06,419 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:102: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n",
"2021-03-12 09:44:06,553 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:517: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n",
"\n",
"2021-03-12 09:44:06,580 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4138: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.\n",
"\n",
"2021-03-12 09:44:06,595 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:1834: The name tf.nn.fused_batch_norm is deprecated. Please use tf.compat.v1.nn.fused_batch_norm instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4185: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.\n",
"\n",
"2021-03-12 09:44:07,625 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4185: The name tf.truncated_normal is deprecated. Please use tf.random.truncated_normal instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n",
"\n",
"2021-03-12 09:44:08,237 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:174: The name tf.get_default_session is deprecated. Please use tf.compat.v1.get_default_session instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n",
"\n",
"2021-03-12 09:44:08,237 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:181: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:186: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n",
"\n",
"2021-03-12 09:44:08,238 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:186: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n",
"\n",
"2021-03-12 09:44:08,558 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:190: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n",
"\n",
"2021-03-12 09:44:08,559 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:199: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.\n",
"\n",
"2021-03-12 09:44:08,863 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:206: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n",
"\n",
"2021-03-12 09:44:09,309 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/optimizers.py:790: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead.\n",
"\n",
"WARNING:tensorflow:From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/architecture/ssd_loss.py:87: The name tf.log is deprecated. Please use tf.math.log instead.\n",
"\n",
"2021-03-12 09:44:09,318 [WARNING] tensorflow: From /home/vpraveen/.cache/dazel/_dazel_vpraveen/216c8b41e526c3295d3b802489ac2034/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/iva/build_wheel.runfiles/ai_infra/iva/ssd/architecture/ssd_loss.py:87: The name tf.log is deprecated. Please use tf.math.log instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n",
"\n",
"2021-03-12 09:44:10,849 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:986: The name tf.assign_add is deprecated. Please use tf.compat.v1.assign_add instead.\n",
"\n",
"WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n",
"\n",
"2021-03-12 09:44:11,166 [WARNING] tensorflow: From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:973: The name tf.assign is deprecated. Please use tf.compat.v1.assign instead.\n",
"\n",
"Using TLT model for inference, setting batch size to the one in eval_config: 8\n",
"Producing predictions: 100%|██████████████████████| 6/6 [00:03<00:00, 1.69it/s]\n",
"Start to calculate AP for each class\n",
"*******************************\n",
"head AP 0.0\n",
" mAP 0.0\n",
"*******************************\n",
"2021-03-12 12:44:18,727 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.\n"
]
}
],
"source": [
"!tlt ssd evaluate --gpu_index=$GPU_INDEX \\\n",
" -e $SPECS_DIR/ssd_train_mobilenet_v2.txt \\\n",
" -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned_300x300/weights/ssd_mobilenet_v2_epoch_$EPOCH.tlt \\\n",
" -k $KEY"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. Prune trained models \n",
"* Specify the pre-trained model\n",
"* Equalization criterion (`applicable only to ResNets, which have element-wise operations, and to MobileNets`)\n",
"* Threshold for pruning\n",
"* A key to save and load the model\n",
"* Output directory to store the model\n",
"\n",
"Usually, you only need to adjust `-pth` (threshold) to trade off accuracy against model size. A higher `pth` gives a smaller model (and thus faster inference) but lower accuracy. The optimal threshold depends on the dataset and the model. `0.1` in the block below is just a starting point: if the retrained accuracy is good, you can increase this value to get a smaller model; otherwise, lower it to get better accuracy."
]
},
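{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity check after pruning is to compare the file sizes of the pruned and unpruned `.tlt` models. File size is only a rough proxy for parameter count, but it gives a first impression of how aggressive a given `-pth` was. A minimal sketch (the paths in this notebook's layout would be the unpruned epoch checkpoint and the pruned output; adjust to yours):\n",
"\n",
"```python\n",
"import os\n",
"\n",
"def size_mb(path):\n",
"    # File size in megabytes.\n",
"    return os.path.getsize(path) / (1024 * 1024)\n",
"\n",
"def pruning_ratio(unpruned_path, pruned_path):\n",
"    # Fraction of the original file size kept after pruning.\n",
"    return size_mb(pruned_path) / size_mb(unpruned_path)\n",
"```\n"
]
},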
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!mkdir -p $LOCAL_EXPERIMENT_DIR/experiment_dir_pruned_300x300"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"!tlt ssd prune --gpu_index=$GPU_INDEX \\\n",
" -m $USER_EXPERIMENT_DIR/experiment_dir_unpruned_300x300/weights/ssd_mobilenet_v2_epoch_$EPOCH.tlt \\\n",
" -o $USER_EXPERIMENT_DIR/experiment_dir_pruned_300x300/ssd_mobilenet_v2_pruned.tlt \\\n",
" -eq intersection \\\n",
" -pth 0.1 \\\n",
" -k $KEY"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!ls -rlt $LOCAL_EXPERIMENT_DIR/experiment_dir_pruned_300x300/"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. Retrain pruned models \n",
"* The model needs to be retrained to recover accuracy after pruning\n",
"* Specify the retraining specification\n",
"* WARNING: training may take several hours up to a day to complete"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Printing the retrain spec file.\n",
"# Here we have updated the spec file to use the newly pruned model as pretrained weights.\n",
"!cat $LOCAL_SPECS_DIR/ssd_retrain_mobilenet_v2.txt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!mkdir -p $LOCAL_EXPERIMENT_DIR/experiment_dir_retrain"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Retraining using the pruned model as pretrained weights \n",
"!tlt ssd train --gpus 1 --gpu_index=$GPU_INDEX \\\n",
"                -e $SPECS_DIR/ssd_retrain_mobilenet_v2.txt \\\n",
"                -r $USER_EXPERIMENT_DIR/experiment_dir_retrain \\\n",
"                -m $USER_EXPERIMENT_DIR/experiment_dir_pruned_300x300/ssd_mobilenet_v2_pruned.tlt \\\n",
"                -k $KEY"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Listing the newly retrained model.\n",
"!ls -rlt $LOCAL_EXPERIMENT_DIR/experiment_dir_retrain/weights"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Now check the evaluation stats in the CSV file and pick the model with the highest eval accuracy.\n",
"!cat $LOCAL_EXPERIMENT_DIR/experiment_dir_retrain/ssd_training_log_mobilenet_v2.csv\n",
"%set_env EPOCH=080"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8. Evaluate retrained model "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"!tlt ssd evaluate --gpu_index=$GPU_INDEX \\\n",
"                   -e $SPECS_DIR/ssd_retrain_mobilenet_v2.txt \\\n",
"                   -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/ssd_mobilenet_v2_epoch_$EPOCH.tlt \\\n",
"                   -k $KEY"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 9. Visualize inferences \n",
"In this section, we run the `tlt ssd inference` tool to generate inferences with the trained models and visualize the results."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Copy some test images\n",
"!mkdir -p $LOCAL_DATA_DIR/test_samples\n",
"!cp $LOCAL_DATA_DIR/testing/image_2/00000* $LOCAL_DATA_DIR/test_samples"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Run inference for detection on the sample images\n",
"!tlt ssd inference --gpu_index=$GPU_INDEX -i $DATA_DOWNLOAD_DIR/test_samples \\\n",
"                    -o $USER_EXPERIMENT_DIR/ssd_infer_images \\\n",
"                    -e $SPECS_DIR/ssd_retrain_mobilenet_v2.txt \\\n",
"                    -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/ssd_mobilenet_v2_epoch_$EPOCH.tlt \\\n",
"                    -l $USER_EXPERIMENT_DIR/ssd_infer_labels \\\n",
"                    -k $KEY"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `tlt ssd inference` tool produces two outputs: \n",
"1. Overlaid images in `$USER_EXPERIMENT_DIR/ssd_infer_images`\n",
"2. Frame-by-frame bounding-box labels in KITTI format in `$USER_EXPERIMENT_DIR/ssd_infer_labels`"
]
},
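{
"cell_type": "markdown",
"metadata": {},
"source": [
"The label files can be post-processed directly. Below is a minimal sketch of a parser, assuming the standard KITTI detection layout (class name in column 1, pixel bbox `xmin ymin xmax ymax` in columns 5-8, and an optional confidence score as a 16th column); check a generated label file to confirm this matches the tool's actual output:\n",
"\n",
"```python\n",
"def parse_kitti_labels(text):\n",
"    # Parse KITTI-format detection lines into (class, bbox, score) tuples.\n",
"    dets = []\n",
"    for line in text.splitlines():\n",
"        parts = line.split()\n",
"        if len(parts) < 15:\n",
"            continue\n",
"        cls = parts[0]\n",
"        bbox = tuple(float(v) for v in parts[4:8])  # xmin, ymin, xmax, ymax\n",
"        score = float(parts[15]) if len(parts) > 15 else None\n",
"        dets.append((cls, bbox, score))\n",
"    return dets\n",
"```\n"
]
},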
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Simple grid visualizer\n",
"!pip3 install matplotlib==3.3.3\n",
"import matplotlib.pyplot as plt\n",
"import os\n",
"from math import ceil\n",
"valid_image_ext = ['.jpg', '.png', '.jpeg', '.ppm']\n",
"\n",
"def visualize_images(image_dir, num_cols=4, num_images=10):\n",
"    output_path = os.path.join(os.environ['LOCAL_EXPERIMENT_DIR'], image_dir)\n",
"    num_rows = int(ceil(float(num_images) / float(num_cols)))\n",
"    # squeeze=False keeps axarr 2-D even when the grid has a single row.\n",
"    f, axarr = plt.subplots(num_rows, num_cols, figsize=[80, 30], squeeze=False)\n",
"    f.tight_layout()\n",
"    a = [os.path.join(output_path, image) for image in os.listdir(output_path)\n",
"         if os.path.splitext(image)[1].lower() in valid_image_ext]\n",
"    for idx, img_path in enumerate(a[:num_images]):\n",
"        col_id = idx % num_cols\n",
"        row_id = idx // num_cols\n",
"        img = plt.imread(img_path)\n",
"        axarr[row_id, col_id].imshow(img)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Visualizing the sample images.\n",
"OUTPUT_PATH = 'ssd_infer_images' # relative path from $LOCAL_EXPERIMENT_DIR.\n",
"COLS = 3 # number of columns in the visualizer grid.\n",
"IMAGES = 9 # number of images to visualize.\n",
"\n",
"visualize_images(OUTPUT_PATH, num_cols=COLS, num_images=IMAGES)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 10. Model Export "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you trained a non-QAT model, you may export it in FP32, FP16, or INT8 mode using the code block below. For INT8, you need to provide a calibration image directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# tlt ssd export will fail if the .etlt file already exists, so we clear the export folder before exporting\n",
"!rm -rf $LOCAL_EXPERIMENT_DIR/export\n",
"!mkdir -p $LOCAL_EXPERIMENT_DIR/export\n",
"# Export in FP32 mode. Change --data_type to fp16 for FP16 mode\n",
"!tlt ssd export --gpu_index=$GPU_INDEX \\\n",
"                -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/ssd_mobilenet_v2_epoch_$EPOCH.tlt \\\n",
"                -k $KEY \\\n",
"                -o $USER_EXPERIMENT_DIR/export/ssd_mobilenet_v2_epoch_$EPOCH.etlt \\\n",
"                -e $SPECS_DIR/ssd_retrain_mobilenet_v2.txt \\\n",
"                --batch_size 16 \\\n",
"                --data_type fp32\n",
"\n",
"# Uncomment to export in INT8 mode (generates a calibration cache file).\n",
"# !tlt ssd export --gpu_index=$GPU_INDEX \\\n",
"#                 -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/ssd_mobilenet_v2_epoch_$EPOCH.tlt \\\n",
"#                 -o $USER_EXPERIMENT_DIR/export/ssd_mobilenet_v2_epoch_$EPOCH.etlt \\\n",
"#                 -e $SPECS_DIR/ssd_retrain_mobilenet_v2.txt \\\n",
"#                 -k $KEY \\\n",
"#                 --cal_image_dir $USER_EXPERIMENT_DIR/data/testing/image_2 \\\n",
"#                 --data_type int8 \\\n",
"#                 --batch_size 16 \\\n",
"#                 --batches 10 \\\n",
"#                 --cal_cache_file $USER_EXPERIMENT_DIR/export/cal.bin \\\n",
"#                 --cal_data_file $USER_EXPERIMENT_DIR/export/cal.tensorfile"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`Note:` In this example, for ease of execution, we restrict the number of calibration batches to 10. TLT recommends using at least 10% of the training dataset for INT8 calibration.\n",
"\n",
"If you trained a QAT model, you may only export in INT8 mode using the following code block. This generates an `.etlt` file and the corresponding calibration cache. You can discard the calibration cache and use just the `.etlt` file in `tlt-converter` or DeepStream for FP32 or FP16 mode, but note that this gives sub-optimal results. If you want to deploy in FP32 or FP16, you should disable QAT during training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Uncomment to export a QAT model in INT8 mode (generates a calibration cache file).\n",
"# !rm -rf $LOCAL_EXPERIMENT_DIR/export\n",
"# !mkdir -p $LOCAL_EXPERIMENT_DIR/export\n",
"# !tlt ssd export --gpu_index=$GPU_INDEX \\\n",
"#                 -m $USER_EXPERIMENT_DIR/experiment_dir_retrain/weights/ssd_mobilenet_v2_epoch_$EPOCH.tlt \\\n",
"#                 -o $USER_EXPERIMENT_DIR/export/ssd_mobilenet_v2_epoch_$EPOCH.etlt \\\n",
"#                 -e $SPECS_DIR/ssd_retrain_mobilenet_v2.txt \\\n",
"#                 -k $KEY \\\n",
"#                 --data_type int8 \\\n",
"#                 --cal_cache_file $USER_EXPERIMENT_DIR/export/cal.bin"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print('Exported model:')\n",
"print('------------')\n",
"!ls -lh $LOCAL_EXPERIMENT_DIR/export"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Verify engine generation using the `tlt-converter` utility included with the docker.\n",
"\n",
"The `tlt-converter` produces optimized tensorrt engines for the platform that it resides on. Therefore, to get maximum performance, please instantiate this docker and execute the `tlt-converter` command, with the exported `.etlt` file and calibration cache (for int8 mode) on your target device. The converter utility included in this docker only works for x86 devices, with discrete NVIDIA GPU's. \n",
"\n",
"For the jetson devices, please download the converter for jetson from the dev zone link [here](https://developer.nvidia.com/tlt-converter). \n",
"\n",
"If you choose to integrate your model into deepstream directly, you may do so by simply copying the exported `.etlt` file along with the calibration cache to the target device and updating the spec file that configures the `gst-nvinfer` element to point to this newly exported model. Usually this file is called `config_infer_primary.txt` for detection models and `config_infer_secondary_*.txt` for classification models."
]
},
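{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative sketch (not a complete config), the relevant `gst-nvinfer` keys in `config_infer_primary.txt` typically look like the following; the file names and key value here are placeholders for your own deployment:\n",
"\n",
"```\n",
"tlt-encoded-model=ssd_resnet18_epoch_<EPOCH>.etlt\n",
"tlt-model-key=<your NGC key>\n",
"int8-calib-file=cal.bin\n",
"# network-mode: 0=FP32, 1=INT8, 2=FP16\n",
"network-mode=1\n",
"```"
]
},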
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Convert to TensorRT engine (FP32)\n",
"!tlt tlt-converter -k $KEY \\\n",
" -d 3,300,300 \\\n",
" -o NMS \\\n",
" -e $USER_EXPERIMENT_DIR/export/trt.engine \\\n",
" -m 16 \\\n",
" -t fp32 \\\n",
" -i nchw \\\n",
" $USER_EXPERIMENT_DIR/export/ssd_resnet18_epoch_$EPOCH.etlt\n",
"\n",
"# Convert to TensorRT engine (FP16)\n",
"# !tlt tlt-converter -k $KEY \\\n",
"# -d 3,300,300 \\\n",
"# -o NMS \\\n",
"# -e $USER_EXPERIMENT_DIR/export/trt.engine \\\n",
"# -m 16 \\\n",
"# -t fp16 \\\n",
"# -i nchw \\\n",
"# $USER_EXPERIMENT_DIR/export/ssd_resnet18_epoch_$EPOCH.etlt\n",
"\n",
"# Convert to TensorRT engine (INT8).\n",
"# !tlt tlt-converter -k $KEY \\\n",
"# -d 3,300,300 \\\n",
"# -o NMS \\\n",
"# -c $USER_EXPERIMENT_DIR/export/cal.bin \\\n",
"# -e $USER_EXPERIMENT_DIR/export/trt.engine \\\n",
"# -b 8 \\\n",
"# -m 16 \\\n",
"# -t int8 \\\n",
"# -i nchw \\\n",
"# $USER_EXPERIMENT_DIR/export/ssd_resnet18_epoch_$EPOCH.etlt"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"print('Exported engine:')\n",
"print('------------')\n",
"!ls -lh $LOCAL_EXPERIMENT_DIR/export/trt.engine"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 11. Verify the deployed model \n",
"Verify the converted engine by visualizing TensorRT inferences."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Infer using TensorRT engine\n",
"\n",
"# The engine batch size once created, cannot be alterred. So if you wish to run with a different batch-size,\n",
"# please re-run tlt-convert.\n",
"\n",
"!tlt ssd inference --gpu_index=$GPU_INDEX \\\n",
" -m $USER_EXPERIMENT_DIR/export/trt.engine \\\n",
" -e $SPECS_DIR/ssd_retrain_resnet18_kitti.txt \\\n",
" -i $DATA_DOWNLOAD_DIR/test_samples \\\n",
" -o $USER_EXPERIMENT_DIR/ssd_infer_images \\\n",
" -t 0.4"
]
},
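{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check before visualizing, the cell below counts the annotated images that inference wrote out. It assumes the mapping from section 0, i.e. that `$USER_EXPERIMENT_DIR` in the container corresponds to `$LOCAL_PROJECT_DIR/ssd` on the host."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Count the annotated images produced by TensorRT inference.\n",
"# Assumes $LOCAL_PROJECT_DIR/ssd on the host maps to $USER_EXPERIMENT_DIR in the container.\n",
"import os\n",
"\n",
"def count_images(dirpath):\n",
"    \"\"\"Return the number of image files directly inside dirpath (0 if missing).\"\"\"\n",
"    if not os.path.isdir(dirpath):\n",
"        return 0\n",
"    return sum(1 for f in os.listdir(dirpath)\n",
"               if f.lower().endswith(('.png', '.jpg', '.jpeg')))\n",
"\n",
"infer_dir = os.path.join(os.environ.get('LOCAL_PROJECT_DIR', '.'), 'ssd', 'ssd_infer_images')\n",
"print(f'{count_images(infer_dir)} annotated images in {infer_dir}')"
]
},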
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Visualizing the sample images.\n",
"OUTPUT_PATH = 'ssd_infer_images' # relative path from $USER_EXPERIMENT_DIR.\n",
"COLS = 3 # number of columns in the visualizer grid.\n",
"IMAGES = 9 # number of images to visualize.\n",
"\n",
"visualize_images(OUTPUT_PATH, num_cols=COLS, num_images=IMAGES)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.9"
}
},
"nbformat": 4,
"nbformat_minor": 2
}