I0724 12:55:31.296195 38738 upgrade_proto.cpp:1044] Attempting to upgrade input file specified using deprecated 'solver_type' field (enum)': /home/mikulis/digits/digits/jobs/20190724-125529-6c6c/solver.prototxt
I0724 12:55:31.296464 38738 upgrade_proto.cpp:1051] Successfully upgraded file specified using deprecated 'solver_type' field (enum) to 'type' field (string).
W0724 12:55:31.296473 38738 upgrade_proto.cpp:1053] Note that future Caffe releases will only support 'type' field (string) for a solver's type.
I0724 12:55:31.421175 38738 caffe.cpp:197] Using GPUs 0
I0724 12:55:31.421761 38738 caffe.cpp:202] GPU 0: GeForce GTX 980
I0724 12:55:31.687989 38738 solver.cpp:48] Initializing solver from parameters:
test_iter: 1
test_interval: 1
base_lr: 0.01
display: 1
max_iter: 30
lr_policy: "step"
gamma: 0.1
momentum: 0.9
weight_decay: 0.0001
stepsize: 10
snapshot: 1
snapshot_prefix: "snapshot"
solver_mode: GPU
device_id: 0
net: "train_val.prototxt"
type: "SGD"
I0724 12:55:31.688235 38738 solver.cpp:91] Creating training net from net file: train_val.prototxt
I0724 12:55:31.688493 38738 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer val-data
I0724 12:55:31.688504 38738 net.cpp:323] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0724 12:55:31.688593 38738 net.cpp:52] Initializing net from parameters:
state {
  phase: TRAIN
}
layer {
  name: "train-data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "/home/mikulis/digits/digits/jobs/20190724-125442-f606/mean.binaryproto"
  }
  data_param {
    source: "/home/mikulis/digits/digits/jobs/20190724-125442-f606/train_db"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "scale"
  type: "Power"
  bottom: "data"
  top: "scaled"
  power_param {
    scale: 0.0125
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "scaled"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0724 12:55:31.688654 38738 layer_factory.hpp:77] Creating layer train-data
I0724 12:55:31.692129 38738 net.cpp:94] Creating Layer train-data
I0724 12:55:31.692169 38738 net.cpp:409] train-data -> data
I0724 12:55:31.692221 38738 net.cpp:409] train-data -> label
I0724 12:55:31.692250 38738 data_transformer.cpp:25] Loading mean file from: /home/mikulis/digits/digits/jobs/20190724-125442-f606/mean.binaryproto
I0724 12:55:31.693441 38743 db_lmdb.cpp:35] Opened lmdb /home/mikulis/digits/digits/jobs/20190724-125442-f606/train_db
I0724 12:55:31.694453 38738 data_layer.cpp:78] ReshapePrefetch 64, 1, 76, 109
I0724 12:55:31.694540 38738 data_layer.cpp:83] output data size: 64,1,76,109
I0724 12:55:31.701970 38738 net.cpp:144] Setting up train-data
I0724 12:55:31.702000 38738 net.cpp:151] Top shape: 64 1 76 109 (530176)
I0724 12:55:31.702054 38738 net.cpp:151] Top shape: 64 (64)
I0724 12:55:31.702061 38738 net.cpp:159] Memory required for data: 2120960
I0724 12:55:31.702076 38738 layer_factory.hpp:77] Creating layer scale
I0724 12:55:31.702098 38738 net.cpp:94] Creating Layer scale
I0724 12:55:31.702109 38738 net.cpp:435] scale <- data
I0724 12:55:31.702129 38738 net.cpp:409] scale -> scaled
I0724 12:55:31.702402 38738 net.cpp:144] Setting up scale
I0724 12:55:31.702445 38738 net.cpp:151] Top shape: 64 1 76 109 (530176)
I0724 12:55:31.702452 38738 net.cpp:159] Memory required for data: 4241664
I0724 12:55:31.702463 38738 layer_factory.hpp:77] Creating layer conv1
I0724 12:55:31.702513 38738 net.cpp:94] Creating Layer conv1
I0724 12:55:31.702522 38738 net.cpp:435] conv1 <- scaled
I0724 12:55:31.702543 38738 net.cpp:409] conv1 -> conv1
I0724 12:55:31.703068 38738 net.cpp:144] Setting up conv1
I0724 12:55:31.703083 38738 net.cpp:151] Top shape: 64 20 72 105 (9676800)
I0724 12:55:31.703090 38738 net.cpp:159] Memory required for data: 42948864
I0724 12:55:31.703125 38738 layer_factory.hpp:77] Creating layer pool1
I0724 12:55:31.703140 38738 net.cpp:94] Creating Layer pool1
I0724 12:55:31.703147 38738 net.cpp:435] pool1 <- conv1
I0724 12:55:31.703158 38738 net.cpp:409] pool1 -> pool1
I0724 12:55:31.703256 38738 net.cpp:144] Setting up pool1
I0724 12:55:31.703269 38738 net.cpp:151] Top shape: 64 20 36 53 (2442240)
I0724 12:55:31.703275 38738 net.cpp:159] Memory required for data: 52717824
I0724 12:55:31.703281 38738 layer_factory.hpp:77] Creating layer conv2
I0724 12:55:31.703305 38738 net.cpp:94] Creating Layer conv2
I0724 12:55:31.703311 38738 net.cpp:435] conv2 <- pool1
I0724 12:55:31.703325 38738 net.cpp:409] conv2 -> conv2
I0724 12:55:31.704109 38738 net.cpp:144] Setting up conv2
I0724 12:55:31.704125 38738 net.cpp:151] Top shape: 64 50 32 49 (5017600)
I0724 12:55:31.704131 38738 net.cpp:159] Memory required for data: 72788224
I0724 12:55:31.704149 38738 layer_factory.hpp:77] Creating layer pool2
I0724 12:55:31.704159 38738 net.cpp:94] Creating Layer pool2
I0724 12:55:31.704166 38738 net.cpp:435] pool2 <- conv2
I0724 12:55:31.704182 38738 net.cpp:409] pool2 -> pool2
I0724 12:55:31.704254 38738 net.cpp:144] Setting up pool2
I0724 12:55:31.704269 38738 net.cpp:151] Top shape: 64 50 16 25 (1280000)
I0724 12:55:31.704275 38738 net.cpp:159] Memory required for data: 77908224
I0724 12:55:31.704282 38738 layer_factory.hpp:77] Creating layer ip1
I0724 12:55:31.704295 38738 net.cpp:94] Creating Layer ip1
I0724 12:55:31.704303 38738 net.cpp:435] ip1 <- pool2
I0724 12:55:31.704334 38738 net.cpp:409] ip1 -> ip1
I0724 12:55:31.818130 38738 net.cpp:144] Setting up ip1
I0724 12:55:31.818166 38738 net.cpp:151] Top shape: 64 500 (32000)
I0724 12:55:31.818169 38738 net.cpp:159] Memory required for data: 78036224
I0724 12:55:31.818192 38738 layer_factory.hpp:77] Creating layer relu1
I0724 12:55:31.818207 38738 net.cpp:94] Creating Layer relu1
I0724 12:55:31.818213 38738 net.cpp:435] relu1 <- ip1
I0724 12:55:31.818222 38738 net.cpp:396] relu1 -> ip1 (in-place)
I0724 12:55:31.818236 38738 net.cpp:144] Setting up relu1
I0724 12:55:31.818240 38738 net.cpp:151] Top shape: 64 500 (32000)
I0724 12:55:31.818244 38738 net.cpp:159] Memory required for data: 78164224
I0724 12:55:31.818248 38738 layer_factory.hpp:77] Creating layer ip2
I0724 12:55:31.818261 38738 net.cpp:94] Creating Layer ip2
I0724 12:55:31.818265 38738 net.cpp:435] ip2 <- ip1
I0724 12:55:31.818272 38738 net.cpp:409] ip2 -> ip2
I0724 12:55:31.818406 38738 net.cpp:144] Setting up ip2
I0724 12:55:31.818414 38738 net.cpp:151] Top shape: 64 2 (128)
I0724 12:55:31.818418 38738 net.cpp:159] Memory required for data: 78164736
I0724 12:55:31.818424 38738 layer_factory.hpp:77] Creating layer loss
I0724 12:55:31.818439 38738 net.cpp:94] Creating Layer loss
I0724 12:55:31.818442 38738 net.cpp:435] loss <- ip2
I0724 12:55:31.818447 38738 net.cpp:435] loss <- label
I0724 12:55:31.818459 38738 net.cpp:409] loss -> loss
I0724 12:55:31.818471 38738 layer_factory.hpp:77] Creating layer loss
I0724 12:55:31.818614 38738 net.cpp:144] Setting up loss
I0724 12:55:31.818620 38738 net.cpp:151] Top shape: (1)
I0724 12:55:31.818624 38738 net.cpp:154]     with loss weight 1
I0724 12:55:31.818653 38738 net.cpp:159] Memory required for data: 78164740
I0724 12:55:31.818657 38738 net.cpp:220] loss needs backward computation.
I0724 12:55:31.818667 38738 net.cpp:220] ip2 needs backward computation.
I0724 12:55:31.818670 38738 net.cpp:220] relu1 needs backward computation.
I0724 12:55:31.818675 38738 net.cpp:220] ip1 needs backward computation.
I0724 12:55:31.818678 38738 net.cpp:220] pool2 needs backward computation.
I0724 12:55:31.818681 38738 net.cpp:220] conv2 needs backward computation.
I0724 12:55:31.818686 38738 net.cpp:220] pool1 needs backward computation.
I0724 12:55:31.818689 38738 net.cpp:220] conv1 needs backward computation.
I0724 12:55:31.818694 38738 net.cpp:222] scale does not need backward computation.
I0724 12:55:31.818698 38738 net.cpp:222] train-data does not need backward computation.
I0724 12:55:31.818701 38738 net.cpp:264] This network produces output loss
I0724 12:55:31.818718 38738 net.cpp:284] Network initialization done.
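For reference, the "Top shape" progression above follows Caffe's output-size arithmetic (convolution rounds down, pooling rounds up). A minimal sketch, with helper names of my own, that reproduces the logged spatial dimensions for the 1x76x109 input:

```python
import math

# Caffe convolution output: floor((in + 2*pad - kernel) / stride) + 1
def conv_out(n, k, s=1, p=0):
    return (n + 2 * p - k) // s + 1

# Caffe pooling output rounds up: ceil((in + 2*pad - kernel) / stride) + 1
def pool_out(n, k, s, p=0):
    return math.ceil((n + 2 * p - k) / s) + 1

h, w = 76, 109                                # data:  1 x 76 x 109
h, w = conv_out(h, 5), conv_out(w, 5)         # conv1: 20 x 72 x 105
h, w = pool_out(h, 2, 2), pool_out(w, 2, 2)   # pool1: 20 x 36 x 53
h, w = conv_out(h, 5), conv_out(w, 5)         # conv2: 50 x 32 x 49
h, w = pool_out(h, 2, 2), pool_out(w, 2, 2)   # pool2: 50 x 16 x 25
print(h, w)
```

The odd input width (109) is why pooling's ceil mode matters: pool1 maps 105 columns to 53, not 52, exactly as the log reports.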
I0724 12:55:31.818996 38738 solver.cpp:181] Creating test net (#0) specified by net file: train_val.prototxt
I0724 12:55:31.819032 38738 net.cpp:323] The NetState phase (1) differed from the phase (0) specified by a rule in layer train-data
I0724 12:55:31.819151 38738 net.cpp:52] Initializing net from parameters:
state {
  phase: TEST
}
layer {
  name: "val-data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mean_file: "/home/mikulis/digits/digits/jobs/20190724-125442-f606/mean.binaryproto"
  }
  data_param {
    source: "/home/mikulis/digits/digits/jobs/20190724-125442-f606/val_db"
    batch_size: 32
    backend: LMDB
  }
}
layer {
  name: "scale"
  type: "Power"
  bottom: "data"
  top: "scaled"
  power_param {
    scale: 0.0125
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "scaled"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 2
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0724 12:55:31.819255 38738 layer_factory.hpp:77] Creating layer val-data
I0724 12:55:31.823047 38738 net.cpp:94] Creating Layer val-data
I0724 12:55:31.823083 38738 net.cpp:409] val-data -> data
I0724 12:55:31.823109 38738 net.cpp:409] val-data -> label
I0724 12:55:31.823128 38738 data_transformer.cpp:25] Loading mean file from: /home/mikulis/digits/digits/jobs/20190724-125442-f606/mean.binaryproto
I0724 12:55:31.823361 38777 db_lmdb.cpp:35] Opened lmdb /home/mikulis/digits/digits/jobs/20190724-125442-f606/val_db
I0724 12:55:31.824228 38738 data_layer.cpp:78] ReshapePrefetch 32, 1, 76, 109
I0724 12:55:31.824312 38738 data_layer.cpp:83] output data size: 32,1,76,109
I0724 12:55:31.829952 38738 net.cpp:144] Setting up val-data
I0724 12:55:31.829985 38738 net.cpp:151] Top shape: 32 1 76 109 (265088)
I0724 12:55:31.829995 38738 net.cpp:151] Top shape: 32 (32)
I0724 12:55:31.830001 38738 net.cpp:159] Memory required for data: 1060480
I0724 12:55:31.830014 38738 layer_factory.hpp:77] Creating layer label_val-data_1_split
I0724 12:55:31.830034 38738 net.cpp:94] Creating Layer label_val-data_1_split
I0724 12:55:31.830042 38738 net.cpp:435] label_val-data_1_split <- label
I0724 12:55:31.830055 38738 net.cpp:409] label_val-data_1_split -> label_val-data_1_split_0
I0724 12:55:31.830073 38738 net.cpp:409] label_val-data_1_split -> label_val-data_1_split_1
I0724 12:55:31.830164 38738 net.cpp:144] Setting up label_val-data_1_split
I0724 12:55:31.830176 38738 net.cpp:151] Top shape: 32 (32)
I0724 12:55:31.830185 38738 net.cpp:151] Top shape: 32 (32)
I0724 12:55:31.830190 38738 net.cpp:159] Memory required for data: 1060736
I0724 12:55:31.830197 38738 layer_factory.hpp:77] Creating layer scale
I0724 12:55:31.830212 38738 net.cpp:94] Creating Layer scale
I0724 12:55:31.830221 38738 net.cpp:435] scale <- data
I0724 12:55:31.830233 38738 net.cpp:409] scale -> scaled
I0724 12:55:31.830376 38738 net.cpp:144] Setting up scale
I0724 12:55:31.830397 38738 net.cpp:151] Top shape: 32 1 76 109 (265088)
I0724 12:55:31.830402 38738 net.cpp:159] Memory required for data: 2121088
I0724 12:55:31.830407 38738 layer_factory.hpp:77] Creating layer conv1
I0724 12:55:31.830425 38738 net.cpp:94] Creating Layer conv1
I0724 12:55:31.830430 38738 net.cpp:435] conv1 <- scaled
I0724 12:55:31.830444 38738 net.cpp:409] conv1 -> conv1
I0724 12:55:31.830785 38738 net.cpp:144] Setting up conv1
I0724 12:55:31.830796 38738 net.cpp:151] Top shape: 32 20 72 105 (4838400)
I0724 12:55:31.830799 38738 net.cpp:159] Memory required for data: 21474688
I0724 12:55:31.830814 38738 layer_factory.hpp:77] Creating layer pool1
I0724 12:55:31.830824 38738 net.cpp:94] Creating Layer pool1
I0724 12:55:31.830829 38738 net.cpp:435] pool1 <- conv1
I0724 12:55:31.830837 38738 net.cpp:409] pool1 -> pool1
I0724 12:55:31.830893 38738 net.cpp:144] Setting up pool1
I0724 12:55:31.830901 38738 net.cpp:151] Top shape: 32 20 36 53 (1221120)
I0724 12:55:31.830905 38738 net.cpp:159] Memory required for data: 26359168
I0724 12:55:31.830909 38738 layer_factory.hpp:77] Creating layer conv2
I0724 12:55:31.830920 38738 net.cpp:94] Creating Layer conv2
I0724 12:55:31.830924 38738 net.cpp:435] conv2 <- pool1
I0724 12:55:31.830935 38738 net.cpp:409] conv2 -> conv2
I0724 12:55:31.831390 38738 net.cpp:144] Setting up conv2
I0724 12:55:31.831399 38738 net.cpp:151] Top shape: 32 50 32 49 (2508800)
I0724 12:55:31.831403 38738 net.cpp:159] Memory required for data: 36394368
I0724 12:55:31.831413 38738 layer_factory.hpp:77] Creating layer pool2
I0724 12:55:31.831419 38738 net.cpp:94] Creating Layer pool2
I0724 12:55:31.831423 38738 net.cpp:435] pool2 <- conv2
I0724 12:55:31.831434 38738 net.cpp:409] pool2 -> pool2
I0724 12:55:31.831483 38738 net.cpp:144] Setting up pool2
I0724 12:55:31.831490 38738 net.cpp:151] Top shape: 32 50 16 25 (640000)
I0724 12:55:31.831493 38738 net.cpp:159] Memory required for data: 38954368
I0724 12:55:31.831497 38738 layer_factory.hpp:77] Creating layer ip1
I0724 12:55:31.831508 38738 net.cpp:94] Creating Layer ip1
I0724 12:55:31.831512 38738 net.cpp:435] ip1 <- pool2
I0724 12:55:31.831521 38738 net.cpp:409] ip1 -> ip1
I0724 12:55:31.917477 38738 net.cpp:144] Setting up ip1
I0724 12:55:31.917510 38738 net.cpp:151] Top shape: 32 500 (16000)
I0724 12:55:31.917515 38738 net.cpp:159] Memory required for data: 39018368
I0724 12:55:31.917531 38738 layer_factory.hpp:77] Creating layer relu1
I0724 12:55:31.917547 38738 net.cpp:94] Creating Layer relu1
I0724 12:55:31.917593 38738 net.cpp:435] relu1 <- ip1
I0724 12:55:31.917606 38738 net.cpp:396] relu1 -> ip1 (in-place)
I0724 12:55:31.917618 38738 net.cpp:144] Setting up relu1
I0724 12:55:31.917623 38738 net.cpp:151] Top shape: 32 500 (16000)
I0724 12:55:31.917625 38738 net.cpp:159] Memory required for data: 39082368
I0724 12:55:31.917629 38738 layer_factory.hpp:77] Creating layer ip2
I0724 12:55:31.917637 38738 net.cpp:94] Creating Layer ip2
I0724 12:55:31.917639 38738 net.cpp:435] ip2 <- ip1
I0724 12:55:31.917650 38738 net.cpp:409] ip2 -> ip2
I0724 12:55:31.917776 38738 net.cpp:144] Setting up ip2
I0724 12:55:31.917783 38738 net.cpp:151] Top shape: 32 2 (64)
I0724 12:55:31.917785 38738 net.cpp:159] Memory required for data: 39082624
I0724 12:55:31.917791 38738 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0724 12:55:31.917798 38738 net.cpp:94] Creating Layer ip2_ip2_0_split
I0724 12:55:31.917802 38738 net.cpp:435] ip2_ip2_0_split <- ip2
I0724 12:55:31.917809 38738 net.cpp:409] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0724 12:55:31.917815 38738 net.cpp:409] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0724 12:55:31.917850 38738 net.cpp:144] Setting up ip2_ip2_0_split
I0724 12:55:31.917855 38738 net.cpp:151] Top shape: 32 2 (64)
I0724 12:55:31.917858 38738 net.cpp:151] Top shape: 32 2 (64)
I0724 12:55:31.917861 38738 net.cpp:159] Memory required for data: 39083136
I0724 12:55:31.917863 38738 layer_factory.hpp:77] Creating layer accuracy
I0724 12:55:31.917874 38738 net.cpp:94] Creating Layer accuracy
I0724 12:55:31.917877 38738 net.cpp:435] accuracy <- ip2_ip2_0_split_0
I0724 12:55:31.917882 38738 net.cpp:435] accuracy <- label_val-data_1_split_0
I0724 12:55:31.917887 38738 net.cpp:409] accuracy -> accuracy
I0724 12:55:31.917896 38738 net.cpp:144] Setting up accuracy
I0724 12:55:31.917899 38738 net.cpp:151] Top shape: (1)
I0724 12:55:31.917902 38738 net.cpp:159] Memory required for data: 39083140
I0724 12:55:31.917906 38738 layer_factory.hpp:77] Creating layer loss
I0724 12:55:31.917913 38738 net.cpp:94] Creating Layer loss
I0724 12:55:31.917917 38738 net.cpp:435] loss <- ip2_ip2_0_split_1
I0724 12:55:31.917920 38738 net.cpp:435] loss <- label_val-data_1_split_1
I0724 12:55:31.917925 38738 net.cpp:409] loss -> loss
I0724 12:55:31.917937 38738 layer_factory.hpp:77] Creating layer loss
I0724 12:55:31.918015 38738 net.cpp:144] Setting up loss
I0724 12:55:31.918020 38738 net.cpp:151] Top shape: (1)
I0724 12:55:31.918022 38738 net.cpp:154]     with loss weight 1
I0724 12:55:31.918035 38738 net.cpp:159] Memory required for data: 39083144
I0724 12:55:31.918040 38738 net.cpp:220] loss needs backward computation.
I0724 12:55:31.918045 38738 net.cpp:222] accuracy does not need backward computation.
I0724 12:55:31.918048 38738 net.cpp:220] ip2_ip2_0_split needs backward computation.
I0724 12:55:31.918051 38738 net.cpp:220] ip2 needs backward computation.
I0724 12:55:31.918054 38738 net.cpp:220] relu1 needs backward computation.
I0724 12:55:31.918057 38738 net.cpp:220] ip1 needs backward computation.
I0724 12:55:31.918061 38738 net.cpp:220] pool2 needs backward computation.
I0724 12:55:31.918063 38738 net.cpp:220] conv2 needs backward computation.
I0724 12:55:31.918066 38738 net.cpp:220] pool1 needs backward computation.
I0724 12:55:31.918071 38738 net.cpp:220] conv1 needs backward computation.
I0724 12:55:31.918073 38738 net.cpp:222] scale does not need backward computation.
I0724 12:55:31.918077 38738 net.cpp:222] label_val-data_1_split does not need backward computation.
I0724 12:55:31.918081 38738 net.cpp:222] val-data does not need backward computation.
I0724 12:55:31.918083 38738 net.cpp:264] This network produces output accuracy
I0724 12:55:31.918087 38738 net.cpp:264] This network produces output loss
I0724 12:55:31.918105 38738 net.cpp:284] Network initialization done.
I0724 12:55:31.918193 38738 solver.cpp:60] Solver scaffolding done.
I0724 12:55:31.918435 38738 caffe.cpp:231] Starting Optimization
I0724 12:55:31.918440 38738 solver.cpp:304] Solving
I0724 12:55:31.918443 38738 solver.cpp:305] Learning Rate Policy: step
I0724 12:55:31.919015 38738 solver.cpp:362] Iteration 0, Testing net (#0)
I0724 12:55:31.919025 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:31.939061 38738 solver.cpp:429]     Test net output #0: accuracy = 0.59375
I0724 12:55:31.939083 38738 solver.cpp:429]     Test net output #1: loss = 0.695693 (* 1 = 0.695693 loss)
I0724 12:55:31.972784 38738 solver.cpp:242] Iteration 0 (0 iter/s, 0.05429s/1 iter), loss = 0.704084
I0724 12:55:31.972807 38738 solver.cpp:261]     Train net output #0: loss = 0.704084 (* 1 = 0.704084 loss)
I0724 12:55:31.972831 38738 sgd_solver.cpp:106] Iteration 0, lr = 0.01
I0724 12:55:31.973253 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_1.caffemodel
I0724 12:55:32.314153 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_1.solverstate
I0724 12:55:32.472164 38738 solver.cpp:362] Iteration 1, Testing net (#0)
I0724 12:55:32.472188 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:32.476723 38738 solver.cpp:429]     Test net output #0: accuracy = 0.8125
I0724 12:55:32.476745 38738 solver.cpp:429]     Test net output #1: loss = 0.645157 (* 1 = 0.645157 loss)
I0724 12:55:32.509014 38738 solver.cpp:242] Iteration 1 (1.86504 iter/s, 0.536182s/1 iter), loss = 0.65786
I0724 12:55:32.509075 38738 solver.cpp:261]     Train net output #0: loss = 0.65786 (* 1 = 0.65786 loss)
I0724 12:55:32.509094 38738 sgd_solver.cpp:106] Iteration 1, lr = 0.01
I0724 12:55:32.509284 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_2.caffemodel
I0724 12:55:32.901664 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_2.solverstate
I0724 12:55:33.062335 38738 solver.cpp:362] Iteration 2, Testing net (#0)
I0724 12:55:33.062372 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:33.068068 38738 solver.cpp:429]     Test net output #0: accuracy = 0.6875
I0724 12:55:33.068127 38738 solver.cpp:429]     Test net output #1: loss = 0.643376 (* 1 = 0.643376 loss)
I0724 12:55:33.101214 38738 solver.cpp:242] Iteration 2 (1.68882 iter/s, 0.592128s/1 iter), loss = 0.639075
I0724 12:55:33.101263 38738 solver.cpp:261]     Train net output #0: loss = 0.639075 (* 1 = 0.639075 loss)
I0724 12:55:33.101294 38738 sgd_solver.cpp:106] Iteration 2, lr = 0.01
I0724 12:55:33.101596 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_3.caffemodel
I0724 12:55:33.474802 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_3.solverstate
I0724 12:55:33.635082 38738 solver.cpp:362] Iteration 3, Testing net (#0)
I0724 12:55:33.635105 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:33.639667 38738 solver.cpp:429]     Test net output #0: accuracy = 0.71875
I0724 12:55:33.639690 38738 solver.cpp:429]     Test net output #1: loss = 0.59553 (* 1 = 0.59553 loss)
I0724 12:55:33.673014 38738 solver.cpp:242] Iteration 3 (1.74907 iter/s, 0.571732s/1 iter), loss = 0.572367
I0724 12:55:33.673074 38738 solver.cpp:261]     Train net output #0: loss = 0.572367 (* 1 = 0.572367 loss)
I0724 12:55:33.673094 38738 sgd_solver.cpp:106] Iteration 3, lr = 0.01
I0724 12:55:33.673281 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_4.caffemodel
I0724 12:55:34.047498 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_4.solverstate
I0724 12:55:34.207492 38738 solver.cpp:362] Iteration 4, Testing net (#0)
I0724 12:55:34.207516 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:34.212440 38738 solver.cpp:429]     Test net output #0: accuracy = 0.6875
I0724 12:55:34.212477 38738 solver.cpp:429]     Test net output #1: loss = 0.627055 (* 1 = 0.627055 loss)
I0724 12:55:34.245708 38738 solver.cpp:242] Iteration 4 (1.74635 iter/s, 0.572622s/1 iter), loss = 0.541532
I0724 12:55:34.245743 38738 solver.cpp:261]     Train net output #0: loss = 0.541532 (* 1 = 0.541532 loss)
I0724 12:55:34.245754 38738 sgd_solver.cpp:106] Iteration 4, lr = 0.01
I0724 12:55:34.245857 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_5.caffemodel
I0724 12:55:34.575742 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_5.solverstate
I0724 12:55:34.729727 38738 solver.cpp:362] Iteration 5, Testing net (#0)
I0724 12:55:34.729748 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:34.734719 38738 solver.cpp:429]     Test net output #0: accuracy = 0.65625
I0724 12:55:34.734772 38738 solver.cpp:429]     Test net output #1: loss = 0.621303 (* 1 = 0.621303 loss)
I0724 12:55:34.767796 38738 solver.cpp:242] Iteration 5 (1.91554 iter/s, 0.522045s/1 iter), loss = 0.502653
I0724 12:55:34.767827 38738 solver.cpp:261]     Train net output #0: loss = 0.502653 (* 1 = 0.502653 loss)
I0724 12:55:34.767982 38738 sgd_solver.cpp:106] Iteration 5, lr = 0.01
I0724 12:55:34.768074 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_6.caffemodel
I0724 12:55:35.063515 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_6.solverstate
I0724 12:55:35.209367 38738 solver.cpp:362] Iteration 6, Testing net (#0)
I0724 12:55:35.209388 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:35.214243 38738 solver.cpp:429]     Test net output #0: accuracy = 0.6875
I0724 12:55:35.214264 38738 solver.cpp:429]     Test net output #1: loss = 0.55486 (* 1 = 0.55486 loss)
I0724 12:55:35.248059 38738 solver.cpp:242] Iteration 6 (2.08316 iter/s, 0.48004s/1 iter), loss = 0.516138
I0724 12:55:35.248134 38738 solver.cpp:261]     Train net output #0: loss = 0.516138 (* 1 = 0.516138 loss)
I0724 12:55:35.248157 38738 sgd_solver.cpp:106] Iteration 6, lr = 0.01
I0724 12:55:35.248385 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_7.caffemodel
I0724 12:55:35.625658 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_7.solverstate
I0724 12:55:35.854193 38738 solver.cpp:362] Iteration 7, Testing net (#0)
I0724 12:55:35.854230 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:35.859210 38738 solver.cpp:429]     Test net output #0: accuracy = 0.65625
I0724 12:55:35.859270 38738 solver.cpp:429]     Test net output #1: loss = 0.711835 (* 1 = 0.711835 loss)
I0724 12:55:35.892699 38738 solver.cpp:242] Iteration 7 (1.55145 iter/s, 0.64456s/1 iter), loss = 0.471349
I0724 12:55:35.892733 38738 solver.cpp:261]     Train net output #0: loss = 0.471349 (* 1 = 0.471349 loss)
I0724 12:55:35.892755 38738 sgd_solver.cpp:106] Iteration 7, lr = 0.01
I0724 12:55:35.892869 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_8.caffemodel
I0724 12:55:36.219732 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_8.solverstate
I0724 12:55:36.378167 38738 solver.cpp:362] Iteration 8, Testing net (#0)
I0724 12:55:36.378190 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:36.382774 38738 solver.cpp:429]     Test net output #0: accuracy = 0.75
I0724 12:55:36.382802 38738 solver.cpp:429]     Test net output #1: loss = 0.551362 (* 1 = 0.551362 loss)
I0724 12:55:36.415930 38738 solver.cpp:242] Iteration 8 (1.91137 iter/s, 0.523185s/1 iter), loss = 0.442102
I0724 12:55:36.415951 38738 solver.cpp:261]     Train net output #0: loss = 0.442102 (* 1 = 0.442102 loss)
I0724 12:55:36.415961 38738 sgd_solver.cpp:106] Iteration 8, lr = 0.01
I0724 12:55:36.416060 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_9.caffemodel
I0724 12:55:36.735297 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_9.solverstate
I0724 12:55:36.882761 38738 solver.cpp:362] Iteration 9, Testing net (#0)
I0724 12:55:36.882781 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:36.887480 38738 solver.cpp:429]     Test net output #0: accuracy = 0.71875
I0724 12:55:36.887516 38738 solver.cpp:429]     Test net output #1: loss = 0.575082 (* 1 = 0.575082 loss)
I0724 12:55:36.920635 38738 solver.cpp:242] Iteration 9 (1.98148 iter/s, 0.504674s/1 iter), loss = 0.404667
I0724 12:55:36.920661 38738 solver.cpp:261]     Train net output #0: loss = 0.404667 (* 1 = 0.404667 loss)
I0724 12:55:36.920692 38738 sgd_solver.cpp:106] Iteration 9, lr = 0.01
I0724 12:55:36.920786 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_10.caffemodel
I0724 12:55:37.205490 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_10.solverstate
I0724 12:55:37.347187 38738 solver.cpp:362] Iteration 10, Testing net (#0)
I0724 12:55:37.347208 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:37.353044 38738 solver.cpp:429]     Test net output #0: accuracy = 0.625
I0724 12:55:37.353096 38738 solver.cpp:429]     Test net output #1: loss = 0.550047 (* 1 = 0.550047 loss)
I0724 12:55:37.386515 38738 solver.cpp:242] Iteration 10 (2.14666 iter/s, 0.465839s/1 iter), loss = 0.357014
I0724 12:55:37.386565 38738 solver.cpp:261]     Train net output #0: loss = 0.357014 (* 1 = 0.357014 loss)
I0724 12:55:37.386582 38738 sgd_solver.cpp:106] Iteration 10, lr = 0.001
I0724 12:55:37.386755 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_11.caffemodel
I0724 12:55:37.732890 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_11.solverstate
I0724 12:55:37.888260 38738 solver.cpp:362] Iteration 11, Testing net (#0)
I0724 12:55:37.888283 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:37.892990 38738 solver.cpp:429]     Test net output #0: accuracy = 0.625
I0724 12:55:37.893015 38738 solver.cpp:429]     Test net output #1: loss = 0.561204 (* 1 = 0.561204 loss)
I0724 12:55:37.926229 38738 solver.cpp:242] Iteration 11 (1.85303 iter/s, 0.539655s/1 iter), loss = 0.368091
I0724 12:55:37.926249 38738 solver.cpp:261]     Train net output #0: loss = 0.368091 (* 1 = 0.368091 loss)
I0724 12:55:37.926259 38738 sgd_solver.cpp:106] Iteration 11, lr = 0.001
I0724 12:55:37.926368 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_12.caffemodel
I0724 12:55:38.246973 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_12.solverstate
I0724 12:55:38.402796 38738 solver.cpp:362] Iteration 12, Testing net (#0)
I0724 12:55:38.402820 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:38.407649 38738 solver.cpp:429]     Test net output #0: accuracy = 0.65625
I0724 12:55:38.407701 38738 solver.cpp:429]     Test net output #1: loss = 0.525227 (* 1 = 0.525227 loss)
I0724 12:55:38.440966 38738 solver.cpp:242] Iteration 12 (1.94289 iter/s, 0.514697s/1 iter), loss = 0.321389
I0724 12:55:38.441017 38738 solver.cpp:261]     Train net output #0: loss = 0.321389 (* 1 = 0.321389 loss)
I0724 12:55:38.441035 38738 sgd_solver.cpp:106] Iteration 12, lr = 0.001
I0724 12:55:38.441212 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_13.caffemodel
I0724 12:55:38.810564 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_13.solverstate
I0724 12:55:38.968833 38738 solver.cpp:362] Iteration 13, Testing net (#0)
I0724 12:55:38.968858 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:38.973964 38738 solver.cpp:429]     Test net output #0: accuracy = 0.6875
I0724 12:55:38.974020 38738 solver.cpp:429]     Test net output #1: loss = 0.478003 (* 1 = 0.478003 loss)
I0724 12:55:39.007601 38738 solver.cpp:242] Iteration 13 (1.76501 iter/s, 0.566569s/1 iter), loss = 0.335002
I0724 12:55:39.007653 38738 solver.cpp:261]     Train net output #0: loss = 0.335002 (* 1 = 0.335002 loss)
I0724 12:55:39.007675 38738 sgd_solver.cpp:106] Iteration 13, lr = 0.001
I0724 12:55:39.007886 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_14.caffemodel
I0724 12:55:39.376236 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_14.solverstate
I0724 12:55:39.534348 38738 solver.cpp:362] Iteration 14, Testing net (#0)
I0724 12:55:39.534369 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:39.539333 38738 solver.cpp:429]     Test net output #0: accuracy = 0.5
I0724 12:55:39.539360 38738 solver.cpp:429]     Test net output #1: loss = 0.571909 (* 1 = 0.571909 loss)
I0724 12:55:39.572268 38738 solver.cpp:242] Iteration 14 (1.77114 iter/s, 0.564607s/1 iter), loss = 0.311821
I0724 12:55:39.572288 38738 solver.cpp:261]     Train net output #0: loss = 0.311821 (* 1 = 0.311821 loss)
I0724 12:55:39.572463 38738 sgd_solver.cpp:106] Iteration 14, lr = 0.001
I0724 12:55:39.572561 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_15.caffemodel
I0724 12:55:39.886550 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_15.solverstate
I0724 12:55:40.042279 38738 solver.cpp:362] Iteration 15, Testing net (#0)
I0724 12:55:40.042300 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:40.047166 38738 solver.cpp:429]     Test net output #0: accuracy = 0.65625
I0724 12:55:40.047216 38738 solver.cpp:429]     Test net output #1: loss = 0.492603 (* 1 = 0.492603 loss)
I0724 12:55:40.080438 38738 solver.cpp:242] Iteration 15 (1.96866 iter/s, 0.507959s/1 iter), loss = 0.297363
I0724 12:55:40.080490 38738 solver.cpp:261]     Train net output #0: loss = 0.297363 (* 1 = 0.297363 loss)
I0724 12:55:40.080533 38738 sgd_solver.cpp:106] Iteration 15, lr = 0.001
I0724 12:55:40.080708 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_16.caffemodel
I0724 12:55:40.441946 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_16.solverstate
I0724 12:55:40.601061 38738 solver.cpp:362] Iteration 16, Testing net (#0)
I0724 12:55:40.601085 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:40.605943 38738 solver.cpp:429]     Test net output #0: accuracy = 0.65625
I0724 12:55:40.606000 38738 solver.cpp:429]     Test net output #1: loss = 0.491105 (* 1 = 0.491105 loss)
I0724 12:55:40.639206 38738 solver.cpp:242] Iteration 16 (1.79002 iter/s, 0.558653s/1 iter), loss = 0.307056
I0724 12:55:40.639266 38738 solver.cpp:261]     Train net output #0: loss = 0.307056 (* 1 = 0.307056 loss)
I0724 12:55:40.639291 38738 sgd_solver.cpp:106] Iteration 16, lr = 0.001
I0724 12:55:40.639475 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_17.caffemodel
I0724 12:55:40.997949 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_17.solverstate
I0724 12:55:41.154646 38738 solver.cpp:362] Iteration 17, Testing net (#0)
I0724 12:55:41.154667 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:41.159484 38738 solver.cpp:429]     Test net output #0: accuracy = 0.6875
I0724 12:55:41.159536 38738 solver.cpp:429]     Test net output #1: loss = 0.51134 (* 1 = 0.51134 loss)
I0724 12:55:41.192752 38738 solver.cpp:242] Iteration 17 (1.80677 iter/s, 0.553473s/1 iter), loss = 0.288619
I0724 12:55:41.192803 38738 solver.cpp:261]     Train net output #0: loss = 0.288619 (* 1 = 0.288619 loss)
I0724 12:55:41.192821 38738 sgd_solver.cpp:106] Iteration 17, lr = 0.001
I0724 12:55:41.193001 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_18.caffemodel
I0724 12:55:41.545480 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_18.solverstate
I0724 12:55:41.700374 38738 solver.cpp:362] Iteration 18, Testing net (#0)
I0724 12:55:41.700395 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:41.705555 38738 solver.cpp:429]     Test net output #0: accuracy = 0.6875
I0724 12:55:41.705598 38738 solver.cpp:429]     Test net output #1: loss = 0.517663 (* 1 = 0.517663 loss)
I0724 12:55:41.738940 38738 solver.cpp:242] Iteration 18 (1.83109 iter/s, 0.546123s/1 iter), loss = 0.268345
I0724 12:55:41.738976 38738 solver.cpp:261]     Train net output #0: loss = 0.268345 (* 1 = 0.268345 loss)
I0724 12:55:41.738993 38738 sgd_solver.cpp:106] Iteration 18, lr = 0.001
I0724 12:55:41.739167 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_19.caffemodel
I0724 12:55:42.092267 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_19.solverstate
I0724 12:55:42.249219 38738 solver.cpp:362] Iteration 19, Testing net (#0)
I0724 12:55:42.249240 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:42.254576 38738 solver.cpp:429]     Test net output #0: accuracy = 0.75
I0724 12:55:42.254629 38738 solver.cpp:429]     Test net output #1: loss = 0.490299 (* 1 = 0.490299 loss)
I0724 12:55:42.287560 38738 solver.cpp:242] Iteration 19 (1.82293 iter/s, 0.548567s/1 iter), loss = 0.272401
I0724 12:55:42.287611 38738 solver.cpp:261]     Train net output #0: loss = 0.272401 (* 1 = 0.272401 loss)
I0724 12:55:42.287629 38738 sgd_solver.cpp:106] Iteration 19, lr = 0.001
I0724 12:55:42.287808 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_20.caffemodel
I0724 12:55:42.648197 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_20.solverstate
I0724 12:55:42.805315 38738 solver.cpp:362] Iteration 20, Testing net (#0)
I0724 12:55:42.805337 38738 net.cpp:723] Ignoring source layer train-data
I0724 12:55:42.810204 38738 solver.cpp:429]     Test net output #0: accuracy = 0.75
I0724 12:55:42.810256 38738 solver.cpp:429]     Test net output #1: loss = 0.470441 (* 1 = 0.470441 loss)
I0724 12:55:42.843608 38738 solver.cpp:242] Iteration 20 (1.79862 iter/s, 0.555982s/1 iter), loss = 0.248954 I0724 12:55:42.843657 38738 solver.cpp:261] Train net output #0: loss = 0.248954 (* 1 = 0.248954 loss) I0724 12:55:42.843677 38738 sgd_solver.cpp:106] Iteration 20, lr = 0.0001 I0724 12:55:42.843863 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_21.caffemodel I0724 12:55:43.210093 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_21.solverstate I0724 12:55:43.370326 38738 solver.cpp:362] Iteration 21, Testing net (#0) I0724 12:55:43.370352 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:43.374891 38738 solver.cpp:429] Test net output #0: accuracy = 0.625 I0724 12:55:43.374920 38738 solver.cpp:429] Test net output #1: loss = 0.577193 (* 1 = 0.577193 loss) I0724 12:55:43.407719 38738 solver.cpp:242] Iteration 21 (1.77292 iter/s, 0.564041s/1 iter), loss = 0.251627 I0724 12:55:43.407779 38738 solver.cpp:261] Train net output #0: loss = 0.251627 (* 1 = 0.251627 loss) I0724 12:55:43.407799 38738 sgd_solver.cpp:106] Iteration 21, lr = 0.0001 I0724 12:55:43.407989 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_22.caffemodel I0724 12:55:43.775966 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_22.solverstate I0724 12:55:43.932374 38738 solver.cpp:362] Iteration 22, Testing net (#0) I0724 12:55:43.932397 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:43.937147 38738 solver.cpp:429] Test net output #0: accuracy = 0.75 I0724 12:55:43.937172 38738 solver.cpp:429] Test net output #1: loss = 0.491892 (* 1 = 0.491892 loss) I0724 12:55:43.970141 38738 solver.cpp:242] Iteration 22 (1.77823 iter/s, 0.562357s/1 iter), loss = 0.257193 I0724 12:55:43.970163 38738 solver.cpp:261] Train net output #0: loss = 0.257193 (* 1 = 0.257193 loss) I0724 12:55:43.970173 38738 sgd_solver.cpp:106] Iteration 22, lr = 0.0001 I0724 
12:55:43.970273 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_23.caffemodel I0724 12:55:44.288911 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_23.solverstate I0724 12:55:44.442808 38738 solver.cpp:362] Iteration 23, Testing net (#0) I0724 12:55:44.442828 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:44.448009 38738 solver.cpp:429] Test net output #0: accuracy = 0.8125 I0724 12:55:44.448063 38738 solver.cpp:429] Test net output #1: loss = 0.503198 (* 1 = 0.503198 loss) I0724 12:55:44.481248 38738 solver.cpp:242] Iteration 23 (1.95667 iter/s, 0.511073s/1 iter), loss = 0.245278 I0724 12:55:44.481276 38738 solver.cpp:261] Train net output #0: loss = 0.245278 (* 1 = 0.245278 loss) I0724 12:55:44.481285 38738 sgd_solver.cpp:106] Iteration 23, lr = 0.0001 I0724 12:55:44.481377 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_24.caffemodel I0724 12:55:44.766582 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_24.solverstate I0724 12:55:44.908340 38738 solver.cpp:362] Iteration 24, Testing net (#0) I0724 12:55:44.908362 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:44.913404 38738 solver.cpp:429] Test net output #0: accuracy = 0.78125 I0724 12:55:44.913457 38738 solver.cpp:429] Test net output #1: loss = 0.517577 (* 1 = 0.517577 loss) I0724 12:55:44.946213 38738 solver.cpp:242] Iteration 24 (2.15092 iter/s, 0.464918s/1 iter), loss = 0.241017 I0724 12:55:44.946264 38738 solver.cpp:261] Train net output #0: loss = 0.241017 (* 1 = 0.241017 loss) I0724 12:55:44.946302 38738 sgd_solver.cpp:106] Iteration 24, lr = 0.0001 I0724 12:55:44.946478 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_25.caffemodel I0724 12:55:45.308558 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_25.solverstate I0724 12:55:45.464437 38738 solver.cpp:362] Iteration 25, Testing net (#0) 
I0724 12:55:45.464459 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:45.469271 38738 solver.cpp:429] Test net output #0: accuracy = 0.75 I0724 12:55:45.469296 38738 solver.cpp:429] Test net output #1: loss = 0.58045 (* 1 = 0.58045 loss) I0724 12:55:45.501907 38738 solver.cpp:242] Iteration 25 (1.79986 iter/s, 0.555599s/1 iter), loss = 0.240672 I0724 12:55:45.501927 38738 solver.cpp:261] Train net output #0: loss = 0.240672 (* 1 = 0.240672 loss) I0724 12:55:45.501936 38738 sgd_solver.cpp:106] Iteration 25, lr = 0.0001 I0724 12:55:45.502032 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_26.caffemodel I0724 12:55:45.805462 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_26.solverstate I0724 12:55:45.957098 38738 solver.cpp:362] Iteration 26, Testing net (#0) I0724 12:55:45.957121 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:45.961614 38738 solver.cpp:429] Test net output #0: accuracy = 0.75 I0724 12:55:45.961638 38738 solver.cpp:429] Test net output #1: loss = 0.506691 (* 1 = 0.506691 loss) I0724 12:55:45.994837 38738 solver.cpp:242] Iteration 26 (2.02881 iter/s, 0.4929s/1 iter), loss = 0.230404 I0724 12:55:45.994855 38738 solver.cpp:261] Train net output #0: loss = 0.230404 (* 1 = 0.230404 loss) I0724 12:55:45.994864 38738 sgd_solver.cpp:106] Iteration 26, lr = 0.0001 I0724 12:55:45.994954 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_27.caffemodel I0724 12:55:46.279364 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_27.solverstate I0724 12:55:46.426966 38738 solver.cpp:362] Iteration 27, Testing net (#0) I0724 12:55:46.426996 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:46.432111 38738 solver.cpp:429] Test net output #0: accuracy = 0.75 I0724 12:55:46.432162 38738 solver.cpp:429] Test net output #1: loss = 0.500975 (* 1 = 0.500975 loss) I0724 12:55:46.465745 38738 solver.cpp:242] Iteration 
27 (2.12372 iter/s, 0.470873s/1 iter), loss = 0.242132 I0724 12:55:46.465787 38738 solver.cpp:261] Train net output #0: loss = 0.242132 (* 1 = 0.242132 loss) I0724 12:55:46.465816 38738 sgd_solver.cpp:106] Iteration 27, lr = 0.0001 I0724 12:55:46.466117 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_28.caffemodel I0724 12:55:46.823005 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_28.solverstate I0724 12:55:46.979176 38738 solver.cpp:362] Iteration 28, Testing net (#0) I0724 12:55:46.979199 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:46.984138 38738 solver.cpp:429] Test net output #0: accuracy = 0.65625 I0724 12:55:46.984163 38738 solver.cpp:429] Test net output #1: loss = 0.624734 (* 1 = 0.624734 loss) I0724 12:55:47.017271 38738 solver.cpp:242] Iteration 28 (1.81331 iter/s, 0.551477s/1 iter), loss = 0.224726 I0724 12:55:47.017293 38738 solver.cpp:261] Train net output #0: loss = 0.224726 (* 1 = 0.224726 loss) I0724 12:55:47.017303 38738 sgd_solver.cpp:106] Iteration 28, lr = 0.0001 I0724 12:55:47.017429 38738 solver.cpp:479] Snapshotting to binary proto file snapshot_iter_29.caffemodel I0724 12:55:47.333952 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_29.solverstate I0724 12:55:47.487700 38738 solver.cpp:362] Iteration 29, Testing net (#0) I0724 12:55:47.487721 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:47.492224 38738 solver.cpp:429] Test net output #0: accuracy = 0.71875 I0724 12:55:47.492247 38738 solver.cpp:429] Test net output #1: loss = 0.514066 (* 1 = 0.514066 loss) I0724 12:55:47.525374 38738 solver.cpp:242] Iteration 29 (1.96824 iter/s, 0.508069s/1 iter), loss = 0.233065 I0724 12:55:47.525398 38738 solver.cpp:261] Train net output #0: loss = 0.233065 (* 1 = 0.233065 loss) I0724 12:55:47.525406 38738 sgd_solver.cpp:106] Iteration 29, lr = 0.0001 I0724 12:55:47.525498 38738 solver.cpp:479] Snapshotting to binary 
proto file snapshot_iter_30.caffemodel I0724 12:55:47.808516 38738 sgd_solver.cpp:273] Snapshotting solver state to binary proto file snapshot_iter_30.solverstate I0724 12:55:47.958603 38738 solver.cpp:342] Iteration 30, loss = 0.240922 I0724 12:55:47.958636 38738 solver.cpp:362] Iteration 30, Testing net (#0) I0724 12:55:47.958640 38738 net.cpp:723] Ignoring source layer train-data I0724 12:55:47.963238 38738 solver.cpp:429] Test net output #0: accuracy = 0.75 I0724 12:55:47.963287 38738 solver.cpp:429] Test net output #1: loss = 0.523966 (* 1 = 0.523966 loss) I0724 12:55:47.963299 38738 solver.cpp:347] Optimization Done. I0724 12:55:47.963305 38738 caffe.cpp:234] Optimization Done.
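The `lr = 0.001` lines for iterations 10-19 and the drop to `lr = 0.0001` at iteration 20 follow from the solver parameters shown at the start of this log (`lr_policy: "step"`, `base_lr: 0.01`, `gamma: 0.1`, `stepsize: 10`). As a minimal sketch (the `step_lr` helper below is illustrative, not part of Caffe's API), the "step" policy can be reproduced as:

```python
def step_lr(iteration, base_lr=0.01, gamma=0.1, stepsize=10):
    """Caffe 'step' lr policy: lr = base_lr * gamma^floor(iteration / stepsize).

    Defaults match the solver.prototxt values shown in this log.
    """
    return base_lr * gamma ** (iteration // stepsize)

# Matches the log: lr is 0.001 during iterations 10-19,
# then drops by another factor of gamma at iteration 20.
print(step_lr(11))   # ~0.001
print(step_lr(20))   # ~0.0001
```

This is why the loss plateau after iteration 20 coincides with the smaller learning rate: each `stepsize` iterations, the rate shrinks by a factor of `gamma`.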