Error trying to create engine from DIGITS-trained model files

Hi,

I trained a FaceNet Caffe model in DIGITS, and classification works correctly there. However, deploying the model to DeepStream fails.

Here’s the log output:

Creating LL OSD context new
0:00:02.507183963 25464   0x7f40002390 INFO                 nvinfer gstnvinfer.cpp:519:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:initialize(): Trying to create engine from model files
Weights for layer block35_1/Scale doesn't exist
0:00:03.656227753 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block35_2/Scale doesn't exist
0:00:03.658541197 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block35_3/Scale doesn't exist
0:00:03.661110013 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block35_4/Scale doesn't exist
0:00:03.663601380 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block35_5/Scale doesn't exist
0:00:03.666072538 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_1/Scale doesn't exist
0:00:03.719697779 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_2/Scale doesn't exist
0:00:03.735173960 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_3/Scale doesn't exist
0:00:03.751555061 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_4/Scale doesn't exist
0:00:03.768300650 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_5/Scale doesn't exist
0:00:03.783780894 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_6/Scale doesn't exist
0:00:03.800753885 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_7/Scale doesn't exist
0:00:03.816613931 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_8/Scale doesn't exist
0:00:03.833887971 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_9/Scale doesn't exist
0:00:03.850282770 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block17_10/Scale doesn't exist
0:00:03.867608426 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block8_1/Scale doesn't exist
0:00:04.036783473 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block8_2/Scale doesn't exist
0:00:04.072508493 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block8_3/Scale doesn't exist
0:00:04.108916970 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block8_4/Scale doesn't exist
0:00:04.145886869 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer block8_5/Scale doesn't exist
0:00:04.183114067 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights
Weights for layer Block8/Scale doesn't exist
0:00:04.220014641 25464   0x7f40002390 ERROR                nvinfer gstnvinfer.cpp:511:gst_nvinfer_logger:<primary_gie_classifier> NvDsInferContext[UID 1]:log(): CaffeParser: ERROR: Attempting to access NULL weights

Here’s my deploy.prototxt:

input: "data"
input_shape {
  dim: 1
  dim: 3
  dim: 160
  dim: 160
}
layer {
  name: "Conv2d_1a_3x3"
  type: "Convolution"
  bottom: "data"
  top: "Conv2d_1a_3x3"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Conv2d_1a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_1a_3x3"
  top: "Conv2d_1a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_1a_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_1a_3x3"
  top: "Conv2d_1a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_1a_3x3/Relu"
  type: "ReLU"
  bottom: "Conv2d_1a_3x3"
  top: "Conv2d_1a_3x3"
}
layer {
  name: "Conv2d_2a_3x3"
  type: "Convolution"
  bottom: "Conv2d_1a_3x3"
  top: "Conv2d_2a_3x3"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "Conv2d_2a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_2a_3x3"
  top: "Conv2d_2a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_2a_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_2a_3x3"
  top: "Conv2d_2a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_2a_3x3/Relu"
  type: "ReLU"
  bottom: "Conv2d_2a_3x3"
  top: "Conv2d_2a_3x3"
}
layer {
  name: "Conv2d_2b_3x3"
  type: "Convolution"
  bottom: "Conv2d_2a_3x3"
  top: "Conv2d_2b_3x3"
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "Conv2d_2b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_2b_3x3"
  top: "Conv2d_2b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_2b_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_2b_3x3"
  top: "Conv2d_2b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_2b_3x3/ReLU"
  type: "ReLU"
  bottom: "Conv2d_2b_3x3"
  top: "Conv2d_2b_3x3"
}
layer {
  name: "MaxPool_3a_3x3"
  type: "Pooling"
  bottom: "Conv2d_2b_3x3"
  top: "MaxPool_3a_3x3"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Conv2d_3b_1x1"
  type: "Convolution"
  bottom: "MaxPool_3a_3x3"
  top: "Conv2d_3b_1x1"
  convolution_param {
    num_output: 80
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Conv2d_3b_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_3b_1x1"
  top: "Conv2d_3b_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_3b_1x1/Scale"
  type: "Scale"
  bottom: "Conv2d_3b_1x1"
  top: "Conv2d_3b_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_3b_1x1/Relu"
  type: "ReLU"
  bottom: "Conv2d_3b_1x1"
  top: "Conv2d_3b_1x1"
}
layer {
  name: "Conv2d_4a_3x3"
  type: "Convolution"
  bottom: "Conv2d_3b_1x1"
  top: "Conv2d_4a_3x3"
  convolution_param {
    num_output: 192
    pad: 0
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "Conv2d_4a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_4a_3x3"
  top: "Conv2d_4a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_4a_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_4a_3x3"
  top: "Conv2d_4a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_4a_3x3/Relu"
  type: "ReLU"
  bottom: "Conv2d_4a_3x3"
  top: "Conv2d_4a_3x3"
}
layer {
  name: "Conv2d_4b_3x3"
  type: "Convolution"
  bottom: "Conv2d_4a_3x3"
  top: "Conv2d_4b_3x3"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Conv2d_4b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_4b_3x3"
  top: "Conv2d_4b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_4b_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_4b_3x3"
  top: "Conv2d_4b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_4b_3x3/Relu"
  type: "ReLU"
  bottom: "Conv2d_4b_3x3"
  top: "Conv2d_4b_3x3"
}
layer {
  name: "block35_1/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "Conv2d_4b_3x3"
  top: "block35_1/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_0/Conv2d_1x1"
  top: "block35_1/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_0/Conv2d_1x1"
  top: "block35_1/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_1/Branch_0/Conv2d_1x1"
  top: "block35_1/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_1/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "Conv2d_4b_3x3"
  top: "block35_1/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_1/Conv2d_0a_1x1"
  top: "block35_1/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_1/Conv2d_0a_1x1"
  top: "block35_1/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_1/Branch_1/Conv2d_0a_1x1"
  top: "block35_1/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_1/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_1/Branch_1/Conv2d_0a_1x1"
  top: "block35_1/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_1/Conv2d_0b_3x3"
  top: "block35_1/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_1/Conv2d_0b_3x3"
  top: "block35_1/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_1/Branch_1/Conv2d_0b_3x3"
  top: "block35_1/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_1/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "Conv2d_4b_3x3"
  top: "block35_1/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_2/Conv2d_0a_1x1"
  top: "block35_1/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_2/Conv2d_0a_1x1"
  top: "block35_1/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_1/Branch_2/Conv2d_0a_1x1"
  top: "block35_1/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_1/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_1/Branch_2/Conv2d_0a_1x1"
  top: "block35_1/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_2/Conv2d_0b_3x3"
  top: "block35_1/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_2/Conv2d_0b_3x3"
  top: "block35_1/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_1/Branch_2/Conv2d_0b_3x3"
  top: "block35_1/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_1/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_1/Branch_2/Conv2d_0b_3x3"
  top: "block35_1/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_2/Conv2d_0c_3x3"
  top: "block35_1/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_2/Conv2d_0c_3x3"
  top: "block35_1/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_1/Branch_2/Conv2d_0c_3x3"
  top: "block35_1/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_1/Concat"
  type: "Concat"
  bottom: "block35_1/Branch_0/Conv2d_1x1"
  bottom: "block35_1/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_1/Branch_2/Conv2d_0c_3x3"
  top: "block35_1/Concat"
}
layer {
  name: "block35_1/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_1/Concat"
  top: "block35_1/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_1/Scale"
  type: "Scale"
  bottom: "block35_1/Conv2d_1x1"
  top: "block35_1/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_1/Sum"
  type: "Eltwise"
  bottom: "block35_1/Scale"
  bottom: "Conv2d_4b_3x3"
  top: "block35_1/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_1/Sum/ReLU"
  type: "ReLU"
  bottom: "block35_1/Sum"
  top: "block35_1/Sum"
}
layer {
  name: "block35_2/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_1/Sum"
  top: "block35_2/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_0/Conv2d_1x1"
  top: "block35_2/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_0/Conv2d_1x1"
  top: "block35_2/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_2/Branch_0/Conv2d_1x1"
  top: "block35_2/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_2/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_1/Sum"
  top: "block35_2/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_1/Conv2d_0a_1x1"
  top: "block35_2/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_1/Conv2d_0a_1x1"
  top: "block35_2/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_2/Branch_1/Conv2d_0a_1x1"
  top: "block35_2/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_2/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_2/Branch_1/Conv2d_0a_1x1"
  top: "block35_2/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_1/Conv2d_0b_3x3"
  top: "block35_2/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_1/Conv2d_0b_3x3"
  top: "block35_2/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_2/Branch_1/Conv2d_0b_3x3"
  top: "block35_2/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_2/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_1/Sum"
  top: "block35_2/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_2/Conv2d_0a_1x1"
  top: "block35_2/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_2/Conv2d_0a_1x1"
  top: "block35_2/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_2/Branch_2/Conv2d_0a_1x1"
  top: "block35_2/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_2/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_2/Branch_2/Conv2d_0a_1x1"
  top: "block35_2/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_2/Conv2d_0b_3x3"
  top: "block35_2/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_2/Conv2d_0b_3x3"
  top: "block35_2/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_2/Branch_2/Conv2d_0b_3x3"
  top: "block35_2/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_2/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_2/Branch_2/Conv2d_0b_3x3"
  top: "block35_2/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_2/Conv2d_0c_3x3"
  top: "block35_2/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_2/Conv2d_0c_3x3"
  top: "block35_2/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_2/Branch_2/Conv2d_0c_3x3"
  top: "block35_2/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_2/Concat"
  type: "Concat"
  bottom: "block35_2/Branch_0/Conv2d_1x1"
  bottom: "block35_2/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_2/Branch_2/Conv2d_0c_3x3"
  top: "block35_2/Concat"
}
layer {
  name: "block35_2/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_2/Concat"
  top: "block35_2/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_2/Scale"
  type: "Scale"
  bottom: "block35_2/Conv2d_1x1"
  top: "block35_2/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_2/Sum"
  type: "Eltwise"
  bottom: "block35_2/Scale"
  bottom: "block35_1/Sum"
  top: "block35_2/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_2/Sum/Relu"
  type: "ReLU"
  bottom: "block35_2/Sum"
  top: "block35_2/Sum"
}
layer {
  name: "block35_3/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_2/Sum"
  top: "block35_3/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_0/Conv2d_1x1"
  top: "block35_3/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_0/Conv2d_1x1"
  top: "block35_3/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_3/Branch_0/Conv2d_1x1"
  top: "block35_3/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_3/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_2/Sum"
  top: "block35_3/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_1/Conv2d_0a_1x1"
  top: "block35_3/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_1/Conv2d_0a_1x1"
  top: "block35_3/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_3/Branch_1/Conv2d_0a_1x1"
  top: "block35_3/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_3/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_3/Branch_1/Conv2d_0a_1x1"
  top: "block35_3/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_1/Conv2d_0b_3x3"
  top: "block35_3/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_1/Conv2d_0b_3x3"
  top: "block35_3/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_3/Branch_1/Conv2d_0b_3x3"
  top: "block35_3/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_3/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_2/Sum"
  top: "block35_3/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_2/Conv2d_0a_1x1"
  top: "block35_3/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_2/Conv2d_0a_1x1"
  top: "block35_3/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_3/Branch_2/Conv2d_0a_1x1"
  top: "block35_3/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_3/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_3/Branch_2/Conv2d_0a_1x1"
  top: "block35_3/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_2/Conv2d_0b_3x3"
  top: "block35_3/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_2/Conv2d_0b_3x3"
  top: "block35_3/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_3/Branch_2/Conv2d_0b_3x3"
  top: "block35_3/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_3/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_3/Branch_2/Conv2d_0b_3x3"
  top: "block35_3/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_2/Conv2d_0c_3x3"
  top: "block35_3/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_2/Conv2d_0c_3x3"
  top: "block35_3/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_3/Branch_2/Conv2d_0c_3x3"
  top: "block35_3/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_3/Concat"
  type: "Concat"
  bottom: "block35_3/Branch_0/Conv2d_1x1"
  bottom: "block35_3/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_3/Branch_2/Conv2d_0c_3x3"
  top: "block35_3/Concat"
}
layer {
  name: "block35_3/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_3/Concat"
  top: "block35_3/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_3/Scale"
  type: "Scale"
  bottom: "block35_3/Conv2d_1x1"
  top: "block35_3/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_3/Sum"
  type: "Eltwise"
  bottom: "block35_3/Scale"
  bottom: "block35_2/Sum"
  top: "block35_3/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_3/Sum/ReLU"
  type: "ReLU"
  bottom: "block35_3/Sum"
  top: "block35_3/Sum"
}
layer {
  name: "block35_4/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_3/Sum"
  top: "block35_4/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_0/Conv2d_1x1"
  top: "block35_4/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_0/Conv2d_1x1"
  top: "block35_4/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_4/Branch_0/Conv2d_1x1"
  top: "block35_4/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_4/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_3/Sum"
  top: "block35_4/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_1/Conv2d_0a_1x1"
  top: "block35_4/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_1/Conv2d_0a_1x1"
  top: "block35_4/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_4/Branch_1/Conv2d_0a_1x1"
  top: "block35_4/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_4/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_4/Branch_1/Conv2d_0a_1x1"
  top: "block35_4/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_1/Conv2d_0b_3x3"
  top: "block35_4/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_1/Conv2d_0b_3x3"
  top: "block35_4/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_4/Branch_1/Conv2d_0b_3x3"
  top: "block35_4/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_4/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_3/Sum"
  top: "block35_4/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_2/Conv2d_0a_1x1"
  top: "block35_4/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_2/Conv2d_0a_1x1"
  top: "block35_4/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_4/Branch_2/Conv2d_0a_1x1"
  top: "block35_4/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_4/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_4/Branch_2/Conv2d_0a_1x1"
  top: "block35_4/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_2/Conv2d_0b_3x3"
  top: "block35_4/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_2/Conv2d_0b_3x3"
  top: "block35_4/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_4/Branch_2/Conv2d_0b_3x3"
  top: "block35_4/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_4/Branch_2/Conv2d_0b_3x3"
  top: "block35_4/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_2/Conv2d_0c_3x3"
  top: "block35_4/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_2/Conv2d_0c_3x3"
  top: "block35_4/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_4/Branch_2/Conv2d_0c_3x3"
  top: "block35_4/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_4/Concat"
  type: "Concat"
  bottom: "block35_4/Branch_0/Conv2d_1x1"
  bottom: "block35_4/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_4/Branch_2/Conv2d_0c_3x3"
  top: "block35_4/Concat"
}
layer {
  name: "block35_4/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_4/Concat"
  top: "block35_4/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_4/Scale"
  type: "Scale"
  bottom: "block35_4/Conv2d_1x1"
  top: "block35_4/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_4/Sum"
  type: "Eltwise"
  bottom: "block35_4/Scale"
  bottom: "block35_3/Sum"
  top: "block35_4/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_4/Sum/ReLU"
  type: "ReLU"
  bottom: "block35_4/Sum"
  top: "block35_4/Sum"
}
layer {
  name: "block35_5/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_4/Sum"
  top: "block35_5/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_0/Conv2d_1x1"
  top: "block35_5/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_0/Conv2d_1x1"
  top: "block35_5/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_5/Branch_0/Conv2d_1x1"
  top: "block35_5/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_5/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_4/Sum"
  top: "block35_5/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_1/Conv2d_0a_1x1"
  top: "block35_5/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_1/Conv2d_0a_1x1"
  top: "block35_5/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_5/Branch_1/Conv2d_0a_1x1"
  top: "block35_5/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_5/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_5/Branch_1/Conv2d_0a_1x1"
  top: "block35_5/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_1/Conv2d_0b_3x3"
  top: "block35_5/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_1/Conv2d_0b_3x3"
  top: "block35_5/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_5/Branch_1/Conv2d_0b_3x3"
  top: "block35_5/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_5/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_4/Sum"
  top: "block35_5/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_2/Conv2d_0a_1x1"
  top: "block35_5/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_2/Conv2d_0a_1x1"
  top: "block35_5/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_5/Branch_2/Conv2d_0a_1x1"
  top: "block35_5/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_5/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_5/Branch_2/Conv2d_0a_1x1"
  top: "block35_5/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_2/Conv2d_0b_3x3"
  top: "block35_5/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_2/Conv2d_0b_3x3"
  top: "block35_5/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_5/Branch_2/Conv2d_0b_3x3"
  top: "block35_5/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_5/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_5/Branch_2/Conv2d_0b_3x3"
  top: "block35_5/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_2/Conv2d_0c_3x3"
  top: "block35_5/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_2/Conv2d_0c_3x3"
  top: "block35_5/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_5/Branch_2/Conv2d_0c_3x3"
  top: "block35_5/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_5/Concat"
  type: "Concat"
  bottom: "block35_5/Branch_0/Conv2d_1x1"
  bottom: "block35_5/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_5/Branch_2/Conv2d_0c_3x3"
  top: "block35_5/Concat"
}
layer {
  name: "block35_5/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_5/Concat"
  top: "block35_5/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_5/Scale"
  type: "Scale"
  bottom: "block35_5/Conv2d_1x1"
  top: "block35_5/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_5/Sum"
  type: "Eltwise"
  bottom: "block35_5/Scale"
  bottom: "block35_4/Sum"
  top: "block35_5/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_5/Sum/ReLU"
  type: "ReLU"
  bottom: "block35_5/Sum"
  top: "block35_5/Sum"
}
layer {
  name: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  type: "Convolution"
  bottom: "block35_5/Sum"
  top: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  convolution_param {
    num_output: 384
    pad: 0
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Mixed_6a/Branch_0/Conv2d_1a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Mixed_6a/Branch_0/Conv2d_1a_3x3/Scale"
  type: "Scale"
  bottom: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Mixed_6a/Branch_0/Conv2d_1a_3x3/ReLU"
  type: "ReLU"
  bottom: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_5/Sum"
  top: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 192
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  top: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  top: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  top: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  top: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 192
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  type: "Convolution"
  bottom: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_1a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_1a_3x3/Scale"
  type: "Scale"
  bottom: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_1a_3x3/ReLU"
  type: "ReLU"
  bottom: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
}
layer {
  name: "MaxPool_1a_3x3"
  type: "Pooling"
  bottom: "block35_5/Sum"
  top: "MaxPool_1a_3x3"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
    pad: 0
  }
}
layer {
  name: "Mixed_6a/Concat"
  type: "Concat"
  bottom: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  bottom: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  bottom: "MaxPool_1a_3x3"
  top: "Mixed_6a/Concat"
}
layer {
  name: "block17_1/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "Mixed_6a/Concat"
  top: "block17_1/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_1/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_1/Branch_0/Conv2d_1x1"
  top: "block17_1/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_1/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block17_1/Branch_0/Conv2d_1x1"
  top: "block17_1/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_1/Branch_0/Conv2d_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_1/Branch_0/Conv2d_1x1"
  top: "block17_1/Branch_0/Conv2d_1x1"
}
layer {
  name: "block17_1/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "Mixed_6a/Concat"
  top: "block17_1/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_1/Branch_1/Conv2d_0a_1x1"
  top: "block17_1/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block17_1/Branch_1/Conv2d_0a_1x1"
  top: "block17_1/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_1/Branch_1/Conv2d_0a_1x1"
  top: "block17_1/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block17_1/Branch_1/Conv2d_0b_1x7"
  type: "Convolution"
  bottom: "block17_1/Branch_1/Conv2d_0a_1x1"
  top: "block17_1/Branch_1/Conv2d_0b_1x7"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 0
    pad_w: 3
    kernel_h: 1
    kernel_w: 7
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0b_1x7/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_1/Branch_1/Conv2d_0b_1x7"
  top: "block17_1/Branch_1/Conv2d_0b_1x7"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0b_1x7/Scale"
  type: "Scale"
  bottom: "block17_1/Branch_1/Conv2d_0b_1x7"
  top: "block17_1/Branch_1/Conv2d_0b_1x7"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0b_1x7/ReLU"
  type: "ReLU"
  bottom: "block17_1/Branch_1/Conv2d_0b_1x7"
  top: "block17_1/Branch_1/Conv2d_0b_1x7"
}
layer {
  name: "block17_1/Branch_1/Conv2d_0c_7x1"
  type: "Convolution"
  bottom: "block17_1/Branch_1/Conv2d_0b_1x7"
  top: "block17_1/Branch_1/Conv2d_0c_7x1"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 3
    pad_w: 0
    kernel_h: 7
    kernel_w: 1
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0c_7x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_1/Branch_1/Conv2d_0c_7x1"
  top: "block17_1/Branch_1/Conv2d_0c_7x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0c_7x1/Scale"
  type: "Scale"
  bottom: "block17_1/Branch_1/Conv2d_0c_7x1"
  top: "block17_1/Branch_1/Conv2d_0c_7x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0c_7x1/ReLU"
  type: "ReLU"
  bottom: "block17_1/Branch_1/Conv2d_0c_7x1"
  top: "block17_1/Branch_1/Conv2d_0c_7x1"
}
layer {
  name: "block17_1/Concat"
  type: "Concat"
  bottom: "block17_1/Branch_0/Conv2d_1x1"
  bottom: "block17_1/Branch_1/Conv2d_0c_7x1"
  top: "block17_1/Concat"
}
layer {
  name: "block17_1/Conv2d_1x1"
  type: "Convolution"
  bottom: "block17_1/Concat"
  top: "block17_1/Conv2d_1x1"
  convolution_param {
    num_output: 896
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_1/Scale"
  type: "Scale"
  bottom: "block17_1/Conv2d_1x1"
  top: "block17_1/Scale"
  scale_param {
    filler {
      value: 0.10000000149
    }
  }
}
layer {
  name: "block17_1/Sum"
  type: "Eltwise"
  bottom: "block17_1/Scale"
  bottom: "Mixed_6a/Concat"
  top: "block17_1/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block17_1/Sum/ReLU"
  type: "ReLU"
  bottom: "block17_1/Sum"
  top: "block17_1/Sum"
}
layer {
  name: "block17_2/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block17_1/Sum"
  top: "block17_2/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_2/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_2/Branch_0/Conv2d_1x1"
  top: "block17_2/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_2/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block17_2/Branch_0/Conv2d_1x1"
  top: "block17_2/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_2/Branch_0/Conv2d_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_2/Branch_0/Conv2d_1x1"
  top: "block17_2/Branch_0/Conv2d_1x1"
}
layer {
  name: "block17_2/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block17_1/Sum"
  top: "block17_2/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_2/Branch_1/Conv2d_0a_1x1"
  top: "block17_2/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block17_2/Branch_1/Conv2d_0a_1x1"
  top: "block17_2/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_2/Branch_1/Conv2d_0a_1x1"
  top: "block17_2/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block17_2/Branch_1/Conv2d_0b_1x7"
  type: "Convolution"
  bottom: "block17_2/Branch_1/Conv2d_0a_1x1"
  top: "block17_2/Branch_1/Conv2d_0b_1x7"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 0
    pad_w: 3
    kernel_h: 1
    kernel_w: 7
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0b_1x7/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_2/Branch_1/Conv2d_0b_1x7"
  top: "block17_2/Branch_1/Conv2d_0b_1x7"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0b_1x7/Scale"
  type: "Scale"
  bottom: "block17_2/Branch_1/Conv2d_0b_1x7"
  top: "block17_2/Branch_1/Conv2d_0b_1x7"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0b_1x7/ReLU"
  type: "ReLU"
  bottom: "block17_2/Branch_1/Conv2d_0b_1x7"
  top: "block17_2/Branch_1/Conv2d_0b_1x7"
}
layer {
  name: "block17_2/Branch_1/Conv2d_0c_7x1"
  type: "Convolution"
  bottom: "block17_2/Branch_1/Conv2d_0b_1x7"
  top: "block17_2/Branch_1/Conv2d_0c_7x1"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 3
    pad_w: 0
    kernel_h: 7
    kernel_w: 1
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0c_7x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_2/Branch_1/Conv2d_0c_7x1"
  top: "block17_2/Branch_1/Conv2d_0c_7x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0c_7x1/Scale"
  type: "Scale"
  bottom: "block17_2/Branch_1/Conv2d_0c_7x1"
  top: "block17_2/Branch_1/Conv2d_0c_7x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0c_7x1/ReLU"
  type: "ReLU"
  bottom: "block17_2/Branch_1/Conv2d_0c_7x1"
  top: "block17_2/Branch_1/Conv2d_0c_7x1"
}
layer {
  name: "block17_2/Concat"
  type: "Concat"
  bottom: "block17_2/Branch_0/Conv2d_1x1"
  bottom: "block17_2/Branch_1/Conv2d_0c_7x1"
  top: "block17_2/Concat"
}
layer {
  name: "block17_2/Conv2d_1x1"
  type: "Convolution"
  bottom: "block17_2/Concat"
  top: "block17_2/Conv2d_1x1"
  convolution_param {
    num_output: 896
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_2/Scale"
  type: "Scale"
  bottom: "block17_2/Conv2d_1x1"
  top: "block17_2/Scale"
  scale_param {
    filler {
      value: 0.10000000149
    }
  }
}
layer {
  name: "block17_2/Sum"
  type: "Eltwise"
  bottom: "block17_2/Scale"
  bottom: "block17_1/Sum"
  top: "block17_2/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block17_2/Sum/ReLU"
  type: "ReLU"
  bottom: "block17_2/Sum"
  top: "block17_2/Sum"
}
...
layer {
  name: "Block8/Conv2d_1x1"
  type: "Convolution"
  bottom: "Block8/Concat"
  top: "Block8/Conv2d_1x1"
  convolution_param {
    num_output: 1792
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Block8/Scale"
  type: "Scale"
  bottom: "Block8/Conv2d_1x1"
  top: "Block8/Scale"
  scale_param {
    filler {
      value: 1.0
    }
  }
}
layer {
  name: "Block8/Sum"
  type: "Eltwise"
  bottom: "Block8/Scale"
  bottom: "block8_5/Sum"
  top: "Block8/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "AvgPool_1a_8x8"
  type: "Pooling"
  bottom: "Block8/Sum"
  top: "AvgPool_1a_8x8"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Conv1x1_512"
  type: "Convolution"
  bottom: "AvgPool_1a_8x8"
  top: "Conv1x1_512"
  convolution_param {
    num_output: 512
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Bottleneck/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv1x1_512"
  top: "Bottleneck"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Bottleneck/Scale"
  type: "Scale"
  bottom: "Bottleneck"
  top: "Bottleneck"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "flatten"
  type: "Flatten"
  bottom: "Bottleneck"
  top: "flatten"
}
layer {
  name: "softmax"
  type: "Softmax"
  bottom: "flatten"
  top: "softmax"
}
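Note the pattern in the failing layers: every layer named in the "Attempting to access NULL weights" errors (block35_1/Scale, block17_1/Scale, …) is a Scale layer declared with a constant `filler` instead of `bias_term: true`, unlike the BatchNorm-companion Scale layers that parse fine. As a quick sanity check, here is a small stdlib-only sketch (my own diagnostic, not part of DIGITS or DeepStream) that scans a prototxt and lists exactly those filler-style Scale layers, so they can be compared against the error log:

```python
import re

def scale_layers_with_filler(prototxt_text):
    """List names of Scale layers that use a constant `filler`
    instead of `bias_term` -- the layers TensorRT's CaffeParser
    rejected with "Attempting to access NULL weights" above.
    Assumes brace-balanced `layer { ... }` blocks."""
    flagged = []
    for m in re.finditer(r'layer\s*\{', prototxt_text):
        # walk forward counting braces to find the end of this layer block
        depth, i = 1, m.end()
        while depth and i < len(prototxt_text):
            if prototxt_text[i] == '{':
                depth += 1
            elif prototxt_text[i] == '}':
                depth -= 1
            i += 1
        block = prototxt_text[m.start():i]
        name = re.search(r'name:\s*"([^"]+)"', block)
        is_scale = re.search(r'type:\s*"Scale"', block)
        if name and is_scale and 'filler' in block:
            flagged.append(name.group(1))
    return flagged

sample = '''
layer {
  name: "block35_4/Scale"
  type: "Scale"
  bottom: "block35_4/Conv2d_1x1"
  top: "block35_4/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "x"
  top: "x"
  scale_param {
    bias_term: true
  }
}
'''
print(scale_layers_with_filler(sample))  # -> ['block35_4/Scale']
```

Running it over the full deploy.prototxt should reproduce the list of layers from the error log; if so, the likely issue is that the caffemodel stores no blobs under those layer names, so the parser finds nothing to bind.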

Would it be too much trouble to post your train_val.prototxt produced by DIGITS? I would like to compare it to this one:
https://github.com/valdivj/ds_resnet10

layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mean_file: "/workspace/jobs/20191209-200210-86e9/mean.binaryproto"
  }
  data_param {
    source: "/workspace/jobs/20191209-200210-86e9/train_db"
    batch_size: 16
    backend: LMDB
  }
}
layer {
  name: "Conv2d_1a_3x3"
  type: "Convolution"
  bottom: "data"
  top: "Conv2d_1a_3x3"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Conv2d_1a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_1a_3x3"
  top: "Conv2d_1a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_1a_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_1a_3x3"
  top: "Conv2d_1a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_1a_3x3/Relu"
  type: "ReLU"
  bottom: "Conv2d_1a_3x3"
  top: "Conv2d_1a_3x3"
}
layer {
  name: "Conv2d_2a_3x3"
  type: "Convolution"
  bottom: "Conv2d_1a_3x3"
  top: "Conv2d_2a_3x3"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "Conv2d_2a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_2a_3x3"
  top: "Conv2d_2a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_2a_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_2a_3x3"
  top: "Conv2d_2a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_2a_3x3/Relu"
  type: "ReLU"
  bottom: "Conv2d_2a_3x3"
  top: "Conv2d_2a_3x3"
}
layer {
  name: "Conv2d_2b_3x3"
  type: "Convolution"
  bottom: "Conv2d_2a_3x3"
  top: "Conv2d_2b_3x3"
  convolution_param {
    num_output: 64
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "Conv2d_2b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_2b_3x3"
  top: "Conv2d_2b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_2b_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_2b_3x3"
  top: "Conv2d_2b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_2b_3x3/ReLU"
  type: "ReLU"
  bottom: "Conv2d_2b_3x3"
  top: "Conv2d_2b_3x3"
}
layer {
  name: "MaxPool_3a_3x3"
  type: "Pooling"
  bottom: "Conv2d_2b_3x3"
  top: "MaxPool_3a_3x3"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Conv2d_3b_1x1"
  type: "Convolution"
  bottom: "MaxPool_3a_3x3"
  top: "Conv2d_3b_1x1"
  convolution_param {
    num_output: 80
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Conv2d_3b_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_3b_1x1"
  top: "Conv2d_3b_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_3b_1x1/Scale"
  type: "Scale"
  bottom: "Conv2d_3b_1x1"
  top: "Conv2d_3b_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_3b_1x1/Relu"
  type: "ReLU"
  bottom: "Conv2d_3b_1x1"
  top: "Conv2d_3b_1x1"
}
layer {
  name: "Conv2d_4a_3x3"
  type: "Convolution"
  bottom: "Conv2d_3b_1x1"
  top: "Conv2d_4a_3x3"
  convolution_param {
    num_output: 192
    pad: 0
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "Conv2d_4a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_4a_3x3"
  top: "Conv2d_4a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_4a_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_4a_3x3"
  top: "Conv2d_4a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_4a_3x3/Relu"
  type: "ReLU"
  bottom: "Conv2d_4a_3x3"
  top: "Conv2d_4a_3x3"
}
layer {
  name: "Conv2d_4b_3x3"
  type: "Convolution"
  bottom: "Conv2d_4a_3x3"
  top: "Conv2d_4b_3x3"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Conv2d_4b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv2d_4b_3x3"
  top: "Conv2d_4b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Conv2d_4b_3x3/Scale"
  type: "Scale"
  bottom: "Conv2d_4b_3x3"
  top: "Conv2d_4b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Conv2d_4b_3x3/Relu"
  type: "ReLU"
  bottom: "Conv2d_4b_3x3"
  top: "Conv2d_4b_3x3"
}
layer {
  name: "block35_1/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "Conv2d_4b_3x3"
  top: "block35_1/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_0/Conv2d_1x1"
  top: "block35_1/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_0/Conv2d_1x1"
  top: "block35_1/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_1/Branch_0/Conv2d_1x1"
  top: "block35_1/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_1/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "Conv2d_4b_3x3"
  top: "block35_1/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_1/Conv2d_0a_1x1"
  top: "block35_1/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_1/Conv2d_0a_1x1"
  top: "block35_1/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_1/Branch_1/Conv2d_0a_1x1"
  top: "block35_1/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_1/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_1/Branch_1/Conv2d_0a_1x1"
  top: "block35_1/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_1/Conv2d_0b_3x3"
  top: "block35_1/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_1/Conv2d_0b_3x3"
  top: "block35_1/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_1/Branch_1/Conv2d_0b_3x3"
  top: "block35_1/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_1/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "Conv2d_4b_3x3"
  top: "block35_1/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_2/Conv2d_0a_1x1"
  top: "block35_1/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_2/Conv2d_0a_1x1"
  top: "block35_1/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_1/Branch_2/Conv2d_0a_1x1"
  top: "block35_1/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_1/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_1/Branch_2/Conv2d_0a_1x1"
  top: "block35_1/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_2/Conv2d_0b_3x3"
  top: "block35_1/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_2/Conv2d_0b_3x3"
  top: "block35_1/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_1/Branch_2/Conv2d_0b_3x3"
  top: "block35_1/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_1/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_1/Branch_2/Conv2d_0b_3x3"
  top: "block35_1/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_1/Branch_2/Conv2d_0c_3x3"
  top: "block35_1/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_1/Branch_2/Conv2d_0c_3x3"
  top: "block35_1/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_1/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_1/Branch_2/Conv2d_0c_3x3"
  top: "block35_1/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_1/Concat"
  type: "Concat"
  bottom: "block35_1/Branch_0/Conv2d_1x1"
  bottom: "block35_1/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_1/Branch_2/Conv2d_0c_3x3"
  top: "block35_1/Concat"
}
layer {
  name: "block35_1/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_1/Concat"
  top: "block35_1/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_1/Scale"
  type: "Scale"
  bottom: "block35_1/Conv2d_1x1"
  top: "block35_1/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_1/Sum"
  type: "Eltwise"
  bottom: "block35_1/Scale"
  bottom: "Conv2d_4b_3x3"
  top: "block35_1/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_1/Sum/ReLU"
  type: "ReLU"
  bottom: "block35_1/Sum"
  top: "block35_1/Sum"
}
layer {
  name: "block35_2/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_1/Sum"
  top: "block35_2/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_0/Conv2d_1x1"
  top: "block35_2/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_0/Conv2d_1x1"
  top: "block35_2/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_2/Branch_0/Conv2d_1x1"
  top: "block35_2/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_2/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_1/Sum"
  top: "block35_2/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_1/Conv2d_0a_1x1"
  top: "block35_2/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_1/Conv2d_0a_1x1"
  top: "block35_2/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_2/Branch_1/Conv2d_0a_1x1"
  top: "block35_2/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_2/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_2/Branch_1/Conv2d_0a_1x1"
  top: "block35_2/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_1/Conv2d_0b_3x3"
  top: "block35_2/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_1/Conv2d_0b_3x3"
  top: "block35_2/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_2/Branch_1/Conv2d_0b_3x3"
  top: "block35_2/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_2/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_1/Sum"
  top: "block35_2/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_2/Conv2d_0a_1x1"
  top: "block35_2/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_2/Conv2d_0a_1x1"
  top: "block35_2/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_2/Branch_2/Conv2d_0a_1x1"
  top: "block35_2/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_2/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_2/Branch_2/Conv2d_0a_1x1"
  top: "block35_2/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_2/Conv2d_0b_3x3"
  top: "block35_2/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_2/Conv2d_0b_3x3"
  top: "block35_2/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_2/Branch_2/Conv2d_0b_3x3"
  top: "block35_2/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_2/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_2/Branch_2/Conv2d_0b_3x3"
  top: "block35_2/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_2/Branch_2/Conv2d_0c_3x3"
  top: "block35_2/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_2/Branch_2/Conv2d_0c_3x3"
  top: "block35_2/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_2/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_2/Branch_2/Conv2d_0c_3x3"
  top: "block35_2/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_2/Concat"
  type: "Concat"
  bottom: "block35_2/Branch_0/Conv2d_1x1"
  bottom: "block35_2/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_2/Branch_2/Conv2d_0c_3x3"
  top: "block35_2/Concat"
}
layer {
  name: "block35_2/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_2/Concat"
  top: "block35_2/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_2/Scale"
  type: "Scale"
  bottom: "block35_2/Conv2d_1x1"
  top: "block35_2/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
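# NOTE (editor suggestion, not part of the trained model): TensorRT's CaffeParser
# fails here with "Weights for layer block35_2/Scale doesn't exist ... Attempting
# to access NULL weights" because this Scale layer is a fixed residual scaling
# (constant 0.17) whose blob is not present in the .caffemodel exported by DIGITS.
# Assuming your TensorRT version's Caffe parser supports the Power layer, one
# common workaround is to express the constant multiply as a weight-free Power
# layer instead, e.g.:
#
# layer {
#   name: "block35_2/Scale"
#   type: "Power"
#   bottom: "block35_2/Conv2d_1x1"
#   top: "block35_2/Scale"
#   power_param {
#     power: 1.0
#     scale: 0.170000001788   # same constant as the filler above
#     shift: 0.0
#   }
# }
#
# The same substitution would apply to every blockXX_N/Scale layer that uses a
# filler instead of bias_term weights.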
layer {
  name: "block35_2/Sum"
  type: "Eltwise"
  bottom: "block35_2/Scale"
  bottom: "block35_1/Sum"
  top: "block35_2/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_2/Sum/Relu"
  type: "ReLU"
  bottom: "block35_2/Sum"
  top: "block35_2/Sum"
}
layer {
  name: "block35_3/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_2/Sum"
  top: "block35_3/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_0/Conv2d_1x1"
  top: "block35_3/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_0/Conv2d_1x1"
  top: "block35_3/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_3/Branch_0/Conv2d_1x1"
  top: "block35_3/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_3/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_2/Sum"
  top: "block35_3/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_1/Conv2d_0a_1x1"
  top: "block35_3/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_1/Conv2d_0a_1x1"
  top: "block35_3/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_3/Branch_1/Conv2d_0a_1x1"
  top: "block35_3/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_3/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_3/Branch_1/Conv2d_0a_1x1"
  top: "block35_3/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_1/Conv2d_0b_3x3"
  top: "block35_3/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_1/Conv2d_0b_3x3"
  top: "block35_3/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_3/Branch_1/Conv2d_0b_3x3"
  top: "block35_3/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_3/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_2/Sum"
  top: "block35_3/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_2/Conv2d_0a_1x1"
  top: "block35_3/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_2/Conv2d_0a_1x1"
  top: "block35_3/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_3/Branch_2/Conv2d_0a_1x1"
  top: "block35_3/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_3/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_3/Branch_2/Conv2d_0a_1x1"
  top: "block35_3/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_2/Conv2d_0b_3x3"
  top: "block35_3/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_2/Conv2d_0b_3x3"
  top: "block35_3/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_3/Branch_2/Conv2d_0b_3x3"
  top: "block35_3/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_3/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_3/Branch_2/Conv2d_0b_3x3"
  top: "block35_3/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_3/Branch_2/Conv2d_0c_3x3"
  top: "block35_3/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_3/Branch_2/Conv2d_0c_3x3"
  top: "block35_3/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_3/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_3/Branch_2/Conv2d_0c_3x3"
  top: "block35_3/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_3/Concat"
  type: "Concat"
  bottom: "block35_3/Branch_0/Conv2d_1x1"
  bottom: "block35_3/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_3/Branch_2/Conv2d_0c_3x3"
  top: "block35_3/Concat"
}
layer {
  name: "block35_3/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_3/Concat"
  top: "block35_3/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_3/Scale"
  type: "Scale"
  bottom: "block35_3/Conv2d_1x1"
  top: "block35_3/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_3/Sum"
  type: "Eltwise"
  bottom: "block35_3/Scale"
  bottom: "block35_2/Sum"
  top: "block35_3/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_3/Sum/ReLU"
  type: "ReLU"
  bottom: "block35_3/Sum"
  top: "block35_3/Sum"
}
layer {
  name: "block35_4/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_3/Sum"
  top: "block35_4/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_0/Conv2d_1x1"
  top: "block35_4/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_0/Conv2d_1x1"
  top: "block35_4/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_4/Branch_0/Conv2d_1x1"
  top: "block35_4/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_4/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_3/Sum"
  top: "block35_4/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_1/Conv2d_0a_1x1"
  top: "block35_4/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_1/Conv2d_0a_1x1"
  top: "block35_4/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_4/Branch_1/Conv2d_0a_1x1"
  top: "block35_4/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_4/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_4/Branch_1/Conv2d_0a_1x1"
  top: "block35_4/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_1/Conv2d_0b_3x3"
  top: "block35_4/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_1/Conv2d_0b_3x3"
  top: "block35_4/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_4/Branch_1/Conv2d_0b_3x3"
  top: "block35_4/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_4/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_3/Sum"
  top: "block35_4/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_2/Conv2d_0a_1x1"
  top: "block35_4/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_2/Conv2d_0a_1x1"
  top: "block35_4/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_4/Branch_2/Conv2d_0a_1x1"
  top: "block35_4/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_4/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_4/Branch_2/Conv2d_0a_1x1"
  top: "block35_4/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_2/Conv2d_0b_3x3"
  top: "block35_4/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_2/Conv2d_0b_3x3"
  top: "block35_4/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_4/Branch_2/Conv2d_0b_3x3"
  top: "block35_4/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_4/Branch_2/Conv2d_0b_3x3"
  top: "block35_4/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_4/Branch_2/Conv2d_0c_3x3"
  top: "block35_4/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_4/Branch_2/Conv2d_0c_3x3"
  top: "block35_4/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_4/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_4/Branch_2/Conv2d_0c_3x3"
  top: "block35_4/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_4/Concat"
  type: "Concat"
  bottom: "block35_4/Branch_0/Conv2d_1x1"
  bottom: "block35_4/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_4/Branch_2/Conv2d_0c_3x3"
  top: "block35_4/Concat"
}
layer {
  name: "block35_4/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_4/Concat"
  top: "block35_4/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_4/Scale"
  type: "Scale"
  bottom: "block35_4/Conv2d_1x1"
  top: "block35_4/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_4/Sum"
  type: "Eltwise"
  bottom: "block35_4/Scale"
  bottom: "block35_3/Sum"
  top: "block35_4/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_4/Sum/ReLU"
  type: "ReLU"
  bottom: "block35_4/Sum"
  top: "block35_4/Sum"
}
layer {
  name: "block35_5/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_4/Sum"
  top: "block35_5/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_0/Conv2d_1x1"
  top: "block35_5/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_0/Conv2d_1x1"
  top: "block35_5/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_0/Conv2d_1x1/Relu"
  type: "ReLU"
  bottom: "block35_5/Branch_0/Conv2d_1x1"
  top: "block35_5/Branch_0/Conv2d_1x1"
}
layer {
  name: "block35_5/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_4/Sum"
  top: "block35_5/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_1/Conv2d_0a_1x1"
  top: "block35_5/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_1/Conv2d_0a_1x1"
  top: "block35_5/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block35_5/Branch_1/Conv2d_0a_1x1"
  top: "block35_5/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block35_5/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_5/Branch_1/Conv2d_0a_1x1"
  top: "block35_5/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_1/Conv2d_0b_3x3"
  top: "block35_5/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_1/Conv2d_0b_3x3"
  top: "block35_5/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_5/Branch_1/Conv2d_0b_3x3"
  top: "block35_5/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "block35_5/Branch_2/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_4/Sum"
  top: "block35_5/Branch_2/Conv2d_0a_1x1"
  convolution_param {
    num_output: 32
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_2/Conv2d_0a_1x1"
  top: "block35_5/Branch_2/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_2/Conv2d_0a_1x1"
  top: "block35_5/Branch_2/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0a_1x1/Relu"
  type: "ReLU"
  bottom: "block35_5/Branch_2/Conv2d_0a_1x1"
  top: "block35_5/Branch_2/Conv2d_0a_1x1"
}
layer {
  name: "block35_5/Branch_2/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "block35_5/Branch_2/Conv2d_0a_1x1"
  top: "block35_5/Branch_2/Conv2d_0b_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_2/Conv2d_0b_3x3"
  top: "block35_5/Branch_2/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_2/Conv2d_0b_3x3"
  top: "block35_5/Branch_2/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0b_3x3/Relu"
  type: "ReLU"
  bottom: "block35_5/Branch_2/Conv2d_0b_3x3"
  top: "block35_5/Branch_2/Conv2d_0b_3x3"
}
layer {
  name: "block35_5/Branch_2/Conv2d_0c_3x3"
  type: "Convolution"
  bottom: "block35_5/Branch_2/Conv2d_0b_3x3"
  top: "block35_5/Branch_2/Conv2d_0c_3x3"
  convolution_param {
    num_output: 32
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0c_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "block35_5/Branch_2/Conv2d_0c_3x3"
  top: "block35_5/Branch_2/Conv2d_0c_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0c_3x3/Scale"
  type: "Scale"
  bottom: "block35_5/Branch_2/Conv2d_0c_3x3"
  top: "block35_5/Branch_2/Conv2d_0c_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block35_5/Branch_2/Conv2d_0c_3x3/ReLU"
  type: "ReLU"
  bottom: "block35_5/Branch_2/Conv2d_0c_3x3"
  top: "block35_5/Branch_2/Conv2d_0c_3x3"
}
layer {
  name: "block35_5/Concat"
  type: "Concat"
  bottom: "block35_5/Branch_0/Conv2d_1x1"
  bottom: "block35_5/Branch_1/Conv2d_0b_3x3"
  bottom: "block35_5/Branch_2/Conv2d_0c_3x3"
  top: "block35_5/Concat"
}
layer {
  name: "block35_5/Conv2d_1x1"
  type: "Convolution"
  bottom: "block35_5/Concat"
  top: "block35_5/Conv2d_1x1"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block35_5/Scale"
  type: "Scale"
  bottom: "block35_5/Conv2d_1x1"
  top: "block35_5/Scale"
  scale_param {
    filler {
      value: 0.170000001788
    }
  }
}
layer {
  name: "block35_5/Sum"
  type: "Eltwise"
  bottom: "block35_5/Scale"
  bottom: "block35_4/Sum"
  top: "block35_5/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block35_5/Sum/ReLU"
  type: "ReLU"
  bottom: "block35_5/Sum"
  top: "block35_5/Sum"
}
layer {
  name: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  type: "Convolution"
  bottom: "block35_5/Sum"
  top: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  convolution_param {
    num_output: 384
    pad: 0
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Mixed_6a/Branch_0/Conv2d_1a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Mixed_6a/Branch_0/Conv2d_1a_3x3/Scale"
  type: "Scale"
  bottom: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Mixed_6a/Branch_0/Conv2d_1a_3x3/ReLU"
  type: "ReLU"
  bottom: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block35_5/Sum"
  top: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 192
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  top: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  top: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  top: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  type: "Convolution"
  bottom: "Mixed_6a/Branch_1/Conv2d_0a_1x1"
  top: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  convolution_param {
    num_output: 192
    pad: 1
    kernel_size: 3
    stride: 1
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0b_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0b_3x3/Scale"
  type: "Scale"
  bottom: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_0b_3x3/ReLU"
  type: "ReLU"
  bottom: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  type: "Convolution"
  bottom: "Mixed_6a/Branch_1/Conv2d_0b_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  convolution_param {
    num_output: 256
    pad: 0
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_1a_3x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_1a_3x3/Scale"
  type: "Scale"
  bottom: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Mixed_6a/Branch_1/Conv2d_1a_3x3/ReLU"
  type: "ReLU"
  bottom: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  top: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
}
layer {
  name: "MaxPool_1a_3x3"
  type: "Pooling"
  bottom: "block35_5/Sum"
  top: "MaxPool_1a_3x3"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
    pad: 0
  }
}
layer {
  name: "Mixed_6a/Concat"
  type: "Concat"
  bottom: "Mixed_6a/Branch_0/Conv2d_1a_3x3"
  bottom: "Mixed_6a/Branch_1/Conv2d_1a_3x3"
  bottom: "MaxPool_1a_3x3"
  top: "Mixed_6a/Concat"
}
layer {
  name: "block17_1/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "Mixed_6a/Concat"
  top: "block17_1/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_1/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_1/Branch_0/Conv2d_1x1"
  top: "block17_1/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_1/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block17_1/Branch_0/Conv2d_1x1"
  top: "block17_1/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_1/Branch_0/Conv2d_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_1/Branch_0/Conv2d_1x1"
  top: "block17_1/Branch_0/Conv2d_1x1"
}
layer {
  name: "block17_1/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "Mixed_6a/Concat"
  top: "block17_1/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_1/Branch_1/Conv2d_0a_1x1"
  top: "block17_1/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block17_1/Branch_1/Conv2d_0a_1x1"
  top: "block17_1/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_1/Branch_1/Conv2d_0a_1x1"
  top: "block17_1/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block17_1/Branch_1/Conv2d_0b_1x7"
  type: "Convolution"
  bottom: "block17_1/Branch_1/Conv2d_0a_1x1"
  top: "block17_1/Branch_1/Conv2d_0b_1x7"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 0
    pad_w: 3
    kernel_h: 1
    kernel_w: 7
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0b_1x7/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_1/Branch_1/Conv2d_0b_1x7"
  top: "block17_1/Branch_1/Conv2d_0b_1x7"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0b_1x7/Scale"
  type: "Scale"
  bottom: "block17_1/Branch_1/Conv2d_0b_1x7"
  top: "block17_1/Branch_1/Conv2d_0b_1x7"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0b_1x7/ReLU"
  type: "ReLU"
  bottom: "block17_1/Branch_1/Conv2d_0b_1x7"
  top: "block17_1/Branch_1/Conv2d_0b_1x7"
}
layer {
  name: "block17_1/Branch_1/Conv2d_0c_7x1"
  type: "Convolution"
  bottom: "block17_1/Branch_1/Conv2d_0b_1x7"
  top: "block17_1/Branch_1/Conv2d_0c_7x1"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 3
    pad_w: 0
    kernel_h: 7
    kernel_w: 1
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0c_7x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_1/Branch_1/Conv2d_0c_7x1"
  top: "block17_1/Branch_1/Conv2d_0c_7x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0c_7x1/Scale"
  type: "Scale"
  bottom: "block17_1/Branch_1/Conv2d_0c_7x1"
  top: "block17_1/Branch_1/Conv2d_0c_7x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_1/Branch_1/Conv2d_0c_7x1/ReLU"
  type: "ReLU"
  bottom: "block17_1/Branch_1/Conv2d_0c_7x1"
  top: "block17_1/Branch_1/Conv2d_0c_7x1"
}
layer {
  name: "block17_1/Concat"
  type: "Concat"
  bottom: "block17_1/Branch_0/Conv2d_1x1"
  bottom: "block17_1/Branch_1/Conv2d_0c_7x1"
  top: "block17_1/Concat"
}
layer {
  name: "block17_1/Conv2d_1x1"
  type: "Convolution"
  bottom: "block17_1/Concat"
  top: "block17_1/Conv2d_1x1"
  convolution_param {
    num_output: 896
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_1/Scale"
  type: "Scale"
  bottom: "block17_1/Conv2d_1x1"
  top: "block17_1/Scale"
  scale_param {
    filler {
      value: 0.10000000149
    }
  }
}
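# NOTE (editor suggestion, not part of the trained model): this layer has the
# same NULL-weights problem reported in the log for "block17_1/Scale" — a Scale
# layer initialized only by a filler (constant 0.1) has no blob saved in the
# DIGITS-exported .caffemodel for the parser to load. Assuming the Caffe parser
# in your TensorRT build supports the Power layer, a weight-free equivalent
# would be:
#
# layer {
#   name: "block17_1/Scale"
#   type: "Power"
#   bottom: "block17_1/Conv2d_1x1"
#   top: "block17_1/Scale"
#   power_param {
#     power: 1.0
#     scale: 0.10000000149   # same constant as the filler above
#     shift: 0.0
#   }
# }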
layer {
  name: "block17_1/Sum"
  type: "Eltwise"
  bottom: "block17_1/Scale"
  bottom: "Mixed_6a/Concat"
  top: "block17_1/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block17_1/Sum/ReLU"
  type: "ReLU"
  bottom: "block17_1/Sum"
  top: "block17_1/Sum"
}
layer {
  name: "block17_2/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block17_1/Sum"
  top: "block17_2/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_2/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_2/Branch_0/Conv2d_1x1"
  top: "block17_2/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_2/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block17_2/Branch_0/Conv2d_1x1"
  top: "block17_2/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_2/Branch_0/Conv2d_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_2/Branch_0/Conv2d_1x1"
  top: "block17_2/Branch_0/Conv2d_1x1"
}
layer {
  name: "block17_2/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block17_1/Sum"
  top: "block17_2/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_2/Branch_1/Conv2d_0a_1x1"
  top: "block17_2/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block17_2/Branch_1/Conv2d_0a_1x1"
  top: "block17_2/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_2/Branch_1/Conv2d_0a_1x1"
  top: "block17_2/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block17_2/Branch_1/Conv2d_0b_1x7"
  type: "Convolution"
  bottom: "block17_2/Branch_1/Conv2d_0a_1x1"
  top: "block17_2/Branch_1/Conv2d_0b_1x7"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 0
    pad_w: 3
    kernel_h: 1
    kernel_w: 7
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0b_1x7/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_2/Branch_1/Conv2d_0b_1x7"
  top: "block17_2/Branch_1/Conv2d_0b_1x7"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0b_1x7/Scale"
  type: "Scale"
  bottom: "block17_2/Branch_1/Conv2d_0b_1x7"
  top: "block17_2/Branch_1/Conv2d_0b_1x7"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0b_1x7/ReLU"
  type: "ReLU"
  bottom: "block17_2/Branch_1/Conv2d_0b_1x7"
  top: "block17_2/Branch_1/Conv2d_0b_1x7"
}
layer {
  name: "block17_2/Branch_1/Conv2d_0c_7x1"
  type: "Convolution"
  bottom: "block17_2/Branch_1/Conv2d_0b_1x7"
  top: "block17_2/Branch_1/Conv2d_0c_7x1"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 3
    pad_w: 0
    kernel_h: 7
    kernel_w: 1
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0c_7x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_2/Branch_1/Conv2d_0c_7x1"
  top: "block17_2/Branch_1/Conv2d_0c_7x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0c_7x1/Scale"
  type: "Scale"
  bottom: "block17_2/Branch_1/Conv2d_0c_7x1"
  top: "block17_2/Branch_1/Conv2d_0c_7x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_2/Branch_1/Conv2d_0c_7x1/ReLU"
  type: "ReLU"
  bottom: "block17_2/Branch_1/Conv2d_0c_7x1"
  top: "block17_2/Branch_1/Conv2d_0c_7x1"
}
layer {
  name: "block17_2/Concat"
  type: "Concat"
  bottom: "block17_2/Branch_0/Conv2d_1x1"
  bottom: "block17_2/Branch_1/Conv2d_0c_7x1"
  top: "block17_2/Concat"
}
layer {
  name: "block17_2/Conv2d_1x1"
  type: "Convolution"
  bottom: "block17_2/Concat"
  top: "block17_2/Conv2d_1x1"
  convolution_param {
    num_output: 896
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_2/Scale"
  type: "Scale"
  bottom: "block17_2/Conv2d_1x1"
  top: "block17_2/Scale"
  scale_param {
    filler {
      value: 0.10000000149
    }
  }
}
layer {
  name: "block17_2/Sum"
  type: "Eltwise"
  bottom: "block17_2/Scale"
  bottom: "block17_1/Sum"
  top: "block17_2/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "block17_2/Sum/ReLU"
  type: "ReLU"
  bottom: "block17_2/Sum"
  top: "block17_2/Sum"
}
layer {
  name: "block17_3/Branch_0/Conv2d_1x1"
  type: "Convolution"
  bottom: "block17_2/Sum"
  top: "block17_3/Branch_0/Conv2d_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_3/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_3/Branch_0/Conv2d_1x1"
  top: "block17_3/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_3/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "block17_3/Branch_0/Conv2d_1x1"
  top: "block17_3/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_3/Branch_0/Conv2d_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_3/Branch_0/Conv2d_1x1"
  top: "block17_3/Branch_0/Conv2d_1x1"
}
layer {
  name: "block17_3/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block17_2/Sum"
  top: "block17_3/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 128
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "block17_3/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_3/Branch_1/Conv2d_0a_1x1"
  top: "block17_3/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "block17_3/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "block17_3/Branch_1/Conv2d_0a_1x1"
  top: "block17_3/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "block17_3/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "block17_3/Branch_1/Conv2d_0a_1x1"
  top: "block17_3/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "block17_3/Branch_1/Conv2d_0b_1x7"
  type: "Convolution"
  bottom: "block17_3/Branch_1/Conv2d_0a_1x1"
  top: "block17_3/Branch_1/Conv2d_0b_1x7"
  convolution_param {
    num_output: 128
    stride: 1
    pad_h: 0
    pad_w: 3
    kernel_h: 1
    kernel_w: 7
  }
}
layer {
  name: "block17_3/Branch_1/Conv2d_0b_1x7/BatchNorm"
  type: "BatchNorm"
  bottom: "block17_3/Branch_1/Conv2d_0b_1x7"
  top: "block17_3/Branch_1/Conv2d_0b_1x7"
  batch_norm_param {
    use_global_stats: true
  }
}
.
.
.
.
.
.
layer {
  name: "Block8/Branch_0/Conv2d_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "Block8/Branch_0/Conv2d_1x1"
  top: "Block8/Branch_0/Conv2d_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Block8/Branch_0/Conv2d_1x1/Scale"
  type: "Scale"
  bottom: "Block8/Branch_0/Conv2d_1x1"
  top: "Block8/Branch_0/Conv2d_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Block8/Branch_0/Conv2d_1x1/ReLU"
  type: "ReLU"
  bottom: "Block8/Branch_0/Conv2d_1x1"
  top: "Block8/Branch_0/Conv2d_1x1"
}
layer {
  name: "Block8/Branch_1/Conv2d_0a_1x1"
  type: "Convolution"
  bottom: "block8_5/Sum"
  top: "Block8/Branch_1/Conv2d_0a_1x1"
  convolution_param {
    num_output: 192
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0a_1x1/BatchNorm"
  type: "BatchNorm"
  bottom: "Block8/Branch_1/Conv2d_0a_1x1"
  top: "Block8/Branch_1/Conv2d_0a_1x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0a_1x1/Scale"
  type: "Scale"
  bottom: "Block8/Branch_1/Conv2d_0a_1x1"
  top: "Block8/Branch_1/Conv2d_0a_1x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0a_1x1/ReLU"
  type: "ReLU"
  bottom: "Block8/Branch_1/Conv2d_0a_1x1"
  top: "Block8/Branch_1/Conv2d_0a_1x1"
}
layer {
  name: "Block8/Branch_1/Conv2d_0b_1x3"
  type: "Convolution"
  bottom: "Block8/Branch_1/Conv2d_0a_1x1"
  top: "Block8/Branch_1/Conv2d_0b_1x3"
  convolution_param {
    num_output: 192
    stride: 1
    pad_h: 0
    pad_w: 1
    kernel_h: 1
    kernel_w: 3
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0b_1x3/BatchNorm"
  type: "BatchNorm"
  bottom: "Block8/Branch_1/Conv2d_0b_1x3"
  top: "Block8/Branch_1/Conv2d_0b_1x3"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0b_1x3/Scale"
  type: "Scale"
  bottom: "Block8/Branch_1/Conv2d_0b_1x3"
  top: "Block8/Branch_1/Conv2d_0b_1x3"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0b_1x3/ReLU"
  type: "ReLU"
  bottom: "Block8/Branch_1/Conv2d_0b_1x3"
  top: "Block8/Branch_1/Conv2d_0b_1x3"
}
layer {
  name: "Block8/Branch_1/Conv2d_0c_3x1"
  type: "Convolution"
  bottom: "Block8/Branch_1/Conv2d_0b_1x3"
  top: "Block8/Branch_1/Conv2d_0c_3x1"
  convolution_param {
    num_output: 192
    stride: 1
    pad_h: 1
    pad_w: 0
    kernel_h: 3
    kernel_w: 1
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0c_3x1/BatchNorm"
  type: "BatchNorm"
  bottom: "Block8/Branch_1/Conv2d_0c_3x1"
  top: "Block8/Branch_1/Conv2d_0c_3x1"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0c_3x1/Scale"
  type: "Scale"
  bottom: "Block8/Branch_1/Conv2d_0c_3x1"
  top: "Block8/Branch_1/Conv2d_0c_3x1"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "Block8/Branch_1/Conv2d_0c_3x1/ReLU"
  type: "ReLU"
  bottom: "Block8/Branch_1/Conv2d_0c_3x1"
  top: "Block8/Branch_1/Conv2d_0c_3x1"
}
layer {
  name: "Block8/Concat"
  type: "Concat"
  bottom: "Block8/Branch_0/Conv2d_1x1"
  bottom: "Block8/Branch_1/Conv2d_0c_3x1"
  top: "Block8/Concat"
}
layer {
  name: "Block8/Conv2d_1x1"
  type: "Convolution"
  bottom: "Block8/Concat"
  top: "Block8/Conv2d_1x1"
  convolution_param {
    num_output: 1792
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Block8/Scale"
  type: "Scale"
  bottom: "Block8/Conv2d_1x1"
  top: "Block8/Scale"
  scale_param {
    filler {
      value: 1.0
    }
  }
}
layer {
  name: "Block8/Sum"
  type: "Eltwise"
  bottom: "Block8/Scale"
  bottom: "block8_5/Sum"
  top: "Block8/Sum"
  eltwise_param {
    operation: SUM
  }
}
layer {
  name: "AvgPool_1a_8x8"
  type: "Pooling"
  bottom: "Block8/Sum"
  top: "AvgPool_1a_8x8"
  pooling_param {
    pool: AVE
    kernel_size: 3
    stride: 2
  }
}
layer {
  name: "Conv1x1_512"
  type: "Convolution"
  bottom: "AvgPool_1a_8x8"
  top: "Conv1x1_512"
  convolution_param {
    num_output: 512
    pad: 0
    kernel_size: 1
    stride: 1
  }
}
layer {
  name: "Bottleneck/BatchNorm"
  type: "BatchNorm"
  bottom: "Conv1x1_512"
  top: "Bottleneck"
  batch_norm_param {
    use_global_stats: true
  }
}
layer {
  name: "Bottleneck/Scale"
  type: "Scale"
  bottom: "Bottleneck"
  top: "Bottleneck"
  scale_param {
    bias_term: true
  }
}
layer {
  name: "flatten"
  type: "Flatten"
  bottom: "Bottleneck"
  top: "flatten"
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "flatten"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "flatten"
  bottom: "label"
  top: "loss"
}

You can try deploying it with TensorRT first. Do you mind providing your caffe model?

Here’s the model downloaded from DIGITS:
https://www.dropbox.com/sh/5m3muojw9n8hlyw/AACwDPE1iEnlnI59fjGl7Lk2a?dl=0

FYI, I tested the model in the nvcaffe 19.10-py2 Docker container and it worked.

Thanks

The deploy prototxt has a problem. You can use trtexec to check it.

$ ./trtexec --deploy=/home/cding/tmp/1068181_caffe_prototxt.txt --output=softmax

[I] deploy: /home/cding/tmp/1068181_caffe_prototxt.txt
[I] output: softmax
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 2102:1: Expected identifier, got: .
[E] [TRT] CaffeParser: Could not parse deploy file
[E] Engine could not be created
[E] Engine could not be created
&&&& FAILED TensorRT.trtexec # ./trtexec_debug --deploy=/home/cding/tmp/1068181_caffe_prototxt.txt --output=softmax

./trtexec --deploy=./TensorRT-5.1.5.0/data/mnist/deploy.prototxt --output=prob

[I] deploy: /home/cding/tensorrt/TensorRT-5.1.5.0/data/mnist/deploy.prototxt
[I] output: prob
[I] Input "data": 1x28x28
[I] Output "prob": 10x1x1
[W] [TRT] TensorRT was compiled against cuBLAS 10.2.0 but is linked against cuBLAS 9.0.10. This mismatch may potentially cause undefined behavior.
[W] [TRT] TensorRT was compiled against cuBLAS 10.2.0 but is linked against cuBLAS 9.0.10. This mismatch may potentially cause undefined behavior.
[W] [TRT] TensorRT was compiled against cuBLAS 10.2.0 but is linked against cuBLAS 9.0.10. This mismatch may potentially cause undefined behavior.
[I] Average over 10 runs is 0.227462 ms (host walltime is 0.308038 ms, 99% percentile time is 0.265152).
[I] Average over 10 runs is 0.233622 ms (host walltime is 0.317284 ms, 99% percentile time is 0.29696).
[I] Average over 10 runs is 0.22392 ms (host walltime is 0.303886 ms, 99% percentile time is 0.23632).
[I] Average over 10 runs is 0.224538 ms (host walltime is 0.30449 ms, 99% percentile time is 0.239616).
[I] Average over 10 runs is 0.222973 ms (host walltime is 0.303562 ms, 99% percentile time is 0.241664).
[I] Average over 10 runs is 0.229222 ms (host walltime is 0.311122 ms, 99% percentile time is 0.240416).
[I] Average over 10 runs is 0.225843 ms (host walltime is 0.308948 ms, 99% percentile time is 0.236544).
[I] Average over 10 runs is 0.22616 ms (host walltime is 0.306785 ms, 99% percentile time is 0.24064).
[I] Average over 10 runs is 0.231062 ms (host walltime is 0.311651 ms, 99% percentile time is 0.237568).
[I] Average over 10 runs is 0.221075 ms (host walltime is 0.299595 ms, 99% percentile time is 0.242656).
&&&& PASSED TensorRT.trtexec # ./trtexec_debug --deploy=/home/cding/tensorrt/TensorRT-5.1.5.0/data/mnist/deploy.prototxt --output=prob

Lines 2207 ~ 2212 contain some invalid “.” characters.
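If it helps, here is a quick way to locate such dot-only lines before handing a prototxt to the parser (a Python sketch of my own; `find_stray_dot_lines` and the sample string are illustrations, not part of trtexec):

```python
def find_stray_dot_lines(text):
    # Return 1-based line numbers whose content consists only of "."
    # characters, which are invalid in Caffe's prototxt text format.
    return [i for i, line in enumerate(text.splitlines(), start=1)
            if line.strip() and set(line.strip()) <= {"."}]

sample = 'layer {\n  name: "x"\n}\n.\n.\n.\nlayer {\n}\n'
print(find_stray_dot_lines(sample))  # -> [4, 5, 6]
```

Running this over the posted deploy file would flag the elided “.” lines that make protobuf's text parser fail.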

Hi Chris,

I assume you used the deploy.prototxt from my comment. Because the text area has a character limit, I couldn’t post the complete file, so I used “.” lines to elide part of it. Please refer to the Dropbox link I attached instead.

In your link https://www.dropbox.com/sh/5m3muojw9n8hlyw/AACwDPE1iEnlnI59fjGl7Lk2a?dl=0
the prototxt and caffemodel are OK.

./trtexec --deploy=/home/cding/tmp/deploy.prototxt --output=softmax --model=/home/cding/tmp/snapshot_iter_600.caffemodel

Yes, I was also able to test the model with nvcaffe, but when I deploy it to DeepStream 4.0.1 I get the errors (call stack in the very first comment). I’m clueless.

Can you provide your nvinfer config and more DeepStream deployment details?

Hi Chris,

I don’t have the details with me. Basically I just overrode the caffemodel, deploy.prototxt and labels.txt in the primary infer config file from the resnet-10 example. The errors only affect the block_xxx/Scale layers, which makes me suspect a DeepStream SDK problem. Could you try deploying it on DeepStream?

That’s not enough:
“model-engine-file” should be commented out,
“output-blob-names” should be changed,
and “network-mode” should be 0 if you do not do INT8 calibration.
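In the nvinfer config those three changes would look roughly like this (a sketch based on the resnet-10 sample config; the engine-file path is a placeholder):

```ini
# Comment out so nvinfer rebuilds the engine from the caffemodel
# instead of loading a stale serialized plan:
#model-engine-file=...
# Must match this network's actual output blob:
output-blob-names=softmax
## 0=FP32, 1=INT8, 2=FP16; keep 0 unless you have an INT8 calibration file
network-mode=0
```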

Yes, I actually did all of these steps; I set output-blob-names to softmax.
I will double-check my config file when I get a chance.

Hi ben,
Could you run it successfully with official Caffe?
I got the following errors with official Caffe when I ran a test with the model and weights you provided on Dropbox.

./caffe test -model deploy.prototxt -weights snapshot_iter_600.caffemodel

Here is the error log.

I1212 00:38:03.078389 53292 net.cpp:202] Conv2d_1a_3x3 does not need backward computation.
I1212 00:38:03.078395 53292 net.cpp:202] input does not need backward computation.
I1212 00:38:03.078402 53292 net.cpp:244] This network produces output softmax
I1212 00:38:03.079109 53292 net.cpp:257] Network initialization done.
I1212 00:38:03.232255 53292 upgrade_proto.cpp:79] Attempting to upgrade batch norm layers using deprecated params: /home/scratch.hhao_sw/repos/trt/build-lts-1-5.1-e7bb632fc8-V2/x86_64-linux/snapshot_iter_600.caffemodel
I1212 00:38:03.232364 53292 upgrade_proto.cpp:82] Successfully upgraded batch norm layers using deprecated params.
I1212 00:38:03.232376 53292 net.cpp:746] Ignoring source layer data
F1212 00:38:03.232399 53292 blob.cpp:496] Check failed: count_ == proto.data_size() (864 vs. 0) 
*** Check failure stack trace: ***
    @     0x7f7d3159457d  google::LogMessage::Fail()
    @     0x7f7d31596b33  google::LogMessage::SendToLog()
    @     0x7f7d3159410b  google::LogMessage::Flush()
    @     0x7f7d31595a7e  google::LogMessageFatal::~LogMessageFatal()
    @     0x7f7d31bbf5e8  caffe::Blob<>::FromProto()
    @     0x7f7d31cde24e  caffe::Net<>::CopyTrainedLayersFrom()
    @     0x7f7d31ce30d5  caffe::Net<>::CopyTrainedLayersFromBinaryProto()
    @           0x409a5f  test()
    @           0x407a4e  main
    @     0x7f7d305b6830  __libc_start_main
    @           0x4083c9  _start

Hi,

Yes, I tried with official Caffe and got the same error.

My model was trained on DIGITS, which uses nvcaffe-19.10. I tested directly with nvcaffe-19.10 and it worked, and Chris also confirmed the model is okay. I believe nvcaffe interprets the model slightly differently than official Caffe.

Hi ben,

I tried to run your model with NVIDIA caffe:19.11-py3 and it also worked, so I conclude that this is a Caffe parser bug in TensorRT.

DeepStream application config file:

# Copyright (c) 2019 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
#type=4
#uri=file://../../streams/sample_1080p_h264.mp4
#uri=rtsp://10.234.8.61:50010/live?camera=15&user=admin&pass=A1crUF4%3D
#num-sources=1
#drop-frame-interval=2
#gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
#cudadec-memtype=0

type=5
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
#camera-v4l2-dev-node=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=5
sync=1
source-id=0
gpu-id=0
qos=0
nvbuf-memory-type=0
overlay-id=1

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=4
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_face.txt
[tracker]
enable=0
tracker-width=480
tracker-height=272
#ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_iou.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-4.0/lib/libnvds_mot_klt.so
#ll-config-file required for IOU only
#ll-config-file=iou_config.txt
gpu-id=0

[tests]
file-loop=0

primary inference config file:

[property]
gpu-id=0
net-scale-factor=1
model-file=/home/ben/Downloads/FaceNet/snapshot_iter_600.caffemodel
proto-file=/home/ben/Downloads/FaceNet/deploy.prototxt
labelfile-path=/home/ben/Downloads/FaceNet/labels.txt
batch-size=1
model-color-format=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0
is-classifier=1
output-blob-names=softmax

I also suspect it is a Caffe model parser issue.

Hi,

I have a local quick fix for the TensorRT Caffe parser issue.

This requires building TensorRT from the TensorRT Open Source Software: https://github.com/NVIDIA/TensorRT

Apply the following modification:

https://github.com/NVIDIA/TensorRT/blob/master/parsers/caffe/caffeParser/opParsers/parseScale.cpp#L34

-    Weights shift = !p.has_bias_term() || p.bias_term() ? (weightFactory.isInitialized() ? weightFactory(msg.name(), WeightType::kBIAS) : weightFactory.allocateWeights(C)) : weightFactory.getNullWeights();
+    // Caffe will learn bias only if bias_term is true.
+    // Otherwise bias is 0.
+    Weights shift = (p.has_bias_term() && p.bias_term())
+        ? (weightFactory.isInitialized() ? weightFactory(msg.name(), WeightType::kBIAS)
+                                         : weightFactory.allocateWeights(C))
+        : weightFactory.getNullWeights();
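For context on why the fix works: Caffe’s Scale layer learns a bias blob only when bias_term is true; when it is absent or false, no bias weights exist in the caffemodel, so the parser must not try to read them and the effective bias is 0. A minimal Python sketch of that semantics (per-channel values; my own illustration, not the TensorRT code):

```python
def scale_layer(x, scale, bias=None, bias_term=False):
    # Sketch of Caffe Scale layer semantics, one value per channel:
    # y = scale * x, plus a learned bias only when bias_term is true.
    if bias_term:
        return [s * v + b for v, s, b in zip(x, scale, bias)]
    # bias_term absent/false: no bias blob exists in the caffemodel,
    # so the bias contribution is simply 0.
    return [s * v for v, s in zip(x, scale)]

print(scale_layer([1.0, 2.0], [2.0, 3.0]))                    # [2.0, 6.0]
print(scale_layer([1.0, 2.0], [2.0, 3.0], [1.0, 1.0], True))  # [3.0, 7.0]
```

The original parser line requested bias weights whenever bias_term was merely unset, which is why layers like block17_2/Scale (scale_param with only a filler, no bias_term) triggered “Attempting to access NULL weights”.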

I will give it a shot. Thank you.