Error calling the createSSDPriorBoxPlugin interface when parsing my Caffe model

Using Jetson Xavier and JetPack 4.1
Hi,
When I call the createSSDPriorBoxPlugin interface to parse my network model, a segmentation fault often occurs. The error log is:

=========parse caffe model start==========
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
Warning: Flatten layer ignored. TensorRT implicitly flattens input to FullyConnected layers, but in other circumstances this will result in undefined behavior.
==========createPlugin=========layerName:Inception3/conv/priorbox1
==========createPlugin succ=========
Thread 1 “sample_face_det” received signal SIGSEGV, Segmentation fault.
0x0000007fafe25a30 in nvinfer1::plugin::GridAnchorGeneratorLegacy::getOutputDimensions(int, nvinfer1::Dims const*, int) () from /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5

(gdb) where
#0 0x0000007fafe25a30 in nvinfer1::plugin::GridAnchorGeneratorLegacy::getOutputDimensions(int, nvinfer1::Dims const*, int) () at /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.5
#1 0x0000007fb057e954 in nvinfer1::Network::addPluginExt(nvinfer1::ITensor* const*, int, nvinfer1::IPluginExt&) ()
at /usr/lib/aarch64-linux-gnu/libnvinfer.so.5
#2 0x0000007fb00b9924 in () at /usr/lib/aarch64-linux-gnu/libnvparsers.so.5
#3 0x0000005555562e74 in caffeToGIEModel(std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::__cxx11::basic_string<char, std::char_traits, std::allocator > const&, std::vector<std::__cxx11::basic_string<char, std::char_traits, std::allocator >, std::allocator<std::__cxx11::basic_string<char, std::char_traits, std::allocator > > > const&, unsigned int, nvcaffeparser1::IPluginFactoryExt*, nvinfer1::IHostMemory*&) ()
#4 0x0000005555567f38 in main ()

The relevant part of my code is as follows:

nvinfer1::IPlugin* PluginFactory::createPlugin(const char* layerName, const nvinfer1::Weights* weights, int nbWeights)
{
	assert(PluginFactory::isPluginExt(layerName));
	std::cout << "==========createPlugin=========layerName:" << layerName << std::endl;
	if(!strcmp(layerName,"Inception3/conv/priorbox1"))
	{
		assert(Inception3_conv_priorbox1_layer.get() == nullptr);
		plugin::PriorBoxParameters params;
		float minSize[1] = {32};
		float aspectRatios[1] = {1};
		params.minSize = minSize;
		params.maxSize = nullptr;
		params.aspectRatios = aspectRatios;
		params.numMinSize = 1;
		params.numMaxSize = 0;
		params.numAspectRatios = 1;
		params.flip = true;
		params.clip = true;
		params.variance[0] = 0.1;
		params.variance[1] = 0.1;
		params.variance[2] = 0.2;
		params.variance[3] = 0.2;
		//params.imgH = 0;
		//params.imgW = 0;
		//params.stepH = 0;
		//params.stepW = 0;
		params.offset = 0.5;
		Inception3_conv_priorbox1_layer = std::unique_ptr<nvinfer1::plugin::INvPlugin, decltype(pluginDeleter)>(plugin::createSSDPriorBoxPlugin(params),pluginDeleter);
		std::cout << "==========createPlugin succ=========" << std::endl;
		
		return Inception3_conv_priorbox1_layer.get();			
	}
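For context, a quick sketch of why the commented-out imgH/imgW/stepH/stepW assignments above matter: PriorBoxParameters is a plain aggregate, so any field that is never assigned holds whatever garbage was on the stack, and the plugin may read it later (for instance in getOutputDimensions). The struct below is a made-up stand-in, not the real TensorRT type; it only illustrates that value-initializing an aggregate with `{}` zeroes every field.

```cpp
#include <cassert>

// Made-up stand-in for a plugin parameter struct such as
// PriorBoxParameters: a plain aggregate with no constructor.
struct Params {
    float offset;
    int imgH, imgW, stepH, stepW;
};

// Value-initialization ("Params p{};") zeroes every member of a plain
// aggregate; default-initialization ("Params p;") leaves them
// indeterminate.
Params makeZeroed() {
    Params p{};        // all members become 0
    p.offset = 0.5f;   // then set only the fields we care about
    return p;
}
```

With the real struct, writing `plugin::PriorBoxParameters params{};` (or uncommenting the four assignments) would rule this class of problem out.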

How should I deal with this segmentation fault?

Hello,

To help us debug, can you please share a small repro that contains the code and model demonstrating the segfault you are seeing?

regards,
NVIDIA Enterprise Support

Hello, my code is similar to the sampleFasterRCNN sample (https://docs.nvidia.com/deeplearning/sdk/tensorrt-developer-guide/index.html#fasterrcnn_sample). Because we use the createSSDPriorBoxPlugin API, we only created the IPluginFactoryExt class as follows:

class PluginFactory : public nvinfer1::IPluginFactory,public nvcaffeparser1::IPluginFactoryExt
{
public:
	virtual nvinfer1::IPlugin* createPlugin(const char* layerName, const nvinfer1::Weights* weights, int nbWeights) override;
	
	nvinfer1::IPlugin* createPlugin(const char* layerName,const void* seriaData,size_t seriaLength)override;
    // caffe parser plugin implementation
	bool isPlugin(const char* name) override { return isPluginExt(name); }
	
    bool isPluginExt(const char* name) override ;
    
    void destroyPlugin();

    void (*pluginDeleter)( nvinfer1::plugin::INvPlugin*) {[]( nvinfer1::plugin::INvPlugin* ptr) {ptr->destroy();}};
	//priorbox layer
    std::unique_ptr< nvinfer1::plugin::INvPlugin, decltype(pluginDeleter)> Inception3_conv_priorbox1_layer{nullptr, pluginDeleter};
 	std::unique_ptr< nvinfer1::plugin::INvPlugin, decltype(pluginDeleter)> Inception3_conv_priorbox2_layer{nullptr, pluginDeleter};
 	std::unique_ptr< nvinfer1::plugin::INvPlugin, decltype(pluginDeleter)> Inception3_conv_priorbox3_layer{nullptr, pluginDeleter};
	std::unique_ptr< nvinfer1::plugin::INvPlugin, decltype(pluginDeleter)> conv6_priorbox_layer{nullptr, pluginDeleter};
	std::unique_ptr< nvinfer1::plugin::INvPlugin, decltype(pluginDeleter)> conv7_priorbox_layer{nullptr, pluginDeleter};
	//detection output layer
	std::unique_ptr< nvinfer1::plugin::INvPlugin, decltype(pluginDeleter)> mDetection_out{nullptr, pluginDeleter};
};
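As an aside, the ownership idiom used in PluginFactory (a std::unique_ptr whose deleter calls the object's destroy() method instead of operator delete) can be shown in isolation. `Resource` below is a made-up stand-in for nvinfer1::plugin::INvPlugin; only the deleter pattern itself is the point.

```cpp
#include <cassert>
#include <memory>

// Made-up stand-in for a library object that must be released via its
// own destroy() method, not operator delete (as with INvPlugin, whose
// memory is managed inside the plugin library).
struct Resource {
    static int liveCount;                          // tracks live objects
    void destroy() { --liveCount; delete this; }   // library-style release
};
int Resource::liveCount = 0;

// Function-pointer deleter initialized from a captureless lambda,
// mirroring the pluginDeleter member in the class above.
void (*resourceDeleter)(Resource*) = [](Resource* r) { r->destroy(); };

// Stand-in factory, analogous to createSSDPriorBoxPlugin.
Resource* createResource() {
    ++Resource::liveCount;
    return new Resource{};
}
```

The unique_ptr then guarantees destroy() runs exactly once, even on early returns, which is why the class keeps each plugin behind `std::unique_ptr<INvPlugin, decltype(pluginDeleter)>`.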

void caffeToGIEModel(const std::string& deployFile,				// name for caffe prototxt
					 const std::string& modelFile,				// name for model 
					 const std::vector<std::string>& outputs,   // network outputs
					 unsigned int maxBatchSize,		// batch size - NB must be at least as large as the batch we want to run with
					 nvcaffeparser1::IPluginFactoryExt* pluginFactory,  //factory for plugin layers
					 IHostMemory *&gieModelStream)    // output buffer for the GIE model
{
	// create the builder
	IBuilder* builder = createInferBuilder(gLogger);

	// parse the caffe model to populate the network, then set the outputs
	INetworkDefinition* network = builder->createNetwork();
	ICaffeParser* parser = createCaffeParser();
	parser->setPluginFactoryExt(pluginFactory);
	std::cout << "=========parse caffe model start==========" << std::endl;
	// if the platform supports fast fp16, build with half-precision weights
	bool useFp16 = builder->platformHasFastFp16();
	DataType modelDataType = useFp16?DataType::kHALF:DataType::kFLOAT;

	const IBlobNameToTensor* blobNameToTensor = parser->parse(locateFile(deployFile, directories).c_str(),locateFile(modelFile, directories).c_str(), *network, modelDataType);
	std::cout << "=========parse caffe model done==========useFp16:" << useFp16 << std::endl;
	// specify which tensors are outputs
	for (auto& s : outputs)
	{
		network->markOutput(*blobNameToTensor->find(s.c_str()));
	}

	// Build the engine
	builder->setMaxBatchSize(maxBatchSize);
	builder->setMaxWorkspaceSize(2 << 20);
	builder->setFp16Mode(useFp16);

	ICudaEngine* engine = builder->buildCudaEngine(*network);
	assert(engine);
	std::cout << "=========build cuda engine done==========" << std::endl;
	// we don't need the network any more, and we can destroy the parser
	network->destroy();
	parser->destroy();

	// serialize the engine, then close everything down
	gieModelStream = engine->serialize();
	engine->destroy();
	builder->destroy();
	shutdownProtobufLibrary();
}
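One hardening step worth adding to caffeToGIEModel: find() on the blob-name table returns nullptr when an output name is missing from the parsed network, and dereferencing that in markOutput would also segfault. The sketch below shows the guard with a made-up `LookupTable` stand-in (not the real IBlobNameToTensor API), collecting misses instead of crashing on them.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Made-up stand-in for IBlobNameToTensor: find() returns nullptr when
// the requested blob name is absent.
struct LookupTable {
    std::map<std::string, int> tensors;
    const int* find(const std::string& name) const {
        auto it = tensors.find(name);
        return it == tensors.end() ? nullptr : &it->second;
    }
};

// Mark only the outputs that actually exist; report any misses so a
// typo in an output name surfaces as a message, not a segfault.
std::vector<int> markOutputs(const LookupTable& table,
                             const std::vector<std::string>& outputs,
                             std::vector<std::string>& missing) {
    std::vector<int> marked;
    for (const auto& s : outputs) {
        if (const int* tensor = table.find(s))
            marked.push_back(*tensor);   // real code: network->markOutput(*tensor)
        else
            missing.push_back(s);        // would have been a null dereference
    }
    return marked;
}
```

In the real function, the same shape of check on `blobNameToTensor->find(s.c_str())` (and on `blobNameToTensor` itself after parse) costs a few lines and removes one whole class of crashes.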

My Caffe model is as follows:

name: "FaceBoxes"
input: "data"
input_shape {
  dim: 1
  dim: 3
  dim: 1024
  dim: 1024
}

#conv1
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 24
    pad: 0
    kernel_size: 7
    stride: 4
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "conv1/bn"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv1/neg"
  type: "Power"
  bottom: "conv1"
  top: "conv1/neg"
  power_param {
    power: 1
    scale: -1.0
    shift: 0
  }
}

layer {
  name: "conv1/concat"
  type: "Concat"
  bottom: "conv1"
  bottom: "conv1/neg"
  top: "conv1_CR"
}

layer {
  name: "conv1/scale"
  type: "Scale"
  bottom: "conv1_CR"
  top: "conv1_CR"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv1/relu"
  type: "ReLU"
  bottom: "conv1_CR"
  top: "conv1_CR"
}

layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1_CR"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
#conv2
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  convolution_param {
    num_output: 64
    pad: 0
    kernel_size: 5
    stride: 2
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "conv2/bn"
  type: "BatchNorm"
  bottom: "conv2"
  top: "conv2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv2/neg"
  type: "Power"
  bottom: "conv2"
  top: "conv2/neg"
  power_param {
    power: 1
    scale: -1.0
    shift: 0
  }
}

layer {
  name: "conv2/concat"
  type: "Concat"
  bottom: "conv2"
  bottom: "conv2/neg"
  top: "conv2_CR"
}

layer {
  name: "conv2/scale"
  type: "Scale"
  bottom: "conv2_CR"
  top: "conv2_CR"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 2.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv2/relu"
  type: "ReLU"
  bottom: "conv2_CR"
  top: "conv2_CR"
}

layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2_CR"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
  }
}
#Inception1
layer {
  name: "conv3/incep0/conv"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3/incep0/conv"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv3/incep0/bn"
  type: "BatchNorm"
  bottom: "conv3/incep0/conv"
  top: "conv3/incep0/conv"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv3/incep0/bn_scale"
  type: "Scale"
  bottom: "conv3/incep0/conv"
  top: "conv3/incep0/conv"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv3/incep0/relu"
  type: "ReLU"
  bottom: "conv3/incep0/conv"
  top: "conv3/incep0/conv"
}

layer {
  name: "conv3/incep1/conv1"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3/incep1/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 24
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv3/incep1/bn1"
  type: "BatchNorm"
  bottom: "conv3/incep1/conv1"
  top: "conv3/incep1/conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv3/incep1/bn_scale1"
  type: "Scale"
  bottom: "conv3/incep1/conv1"
  top: "conv3/incep1/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv3/incep1/relu1"
  type: "ReLU"
  bottom: "conv3/incep1/conv1"
  top: "conv3/incep1/conv1"
}

layer {
  name: "conv3/incep1/conv2"
  type: "Convolution"
  bottom: "conv3/incep1/conv1"
  top: "conv3/incep1/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv3/incep1/bn2"
  type: "BatchNorm"
  bottom: "conv3/incep1/conv2"
  top: "conv3/incep1/conv2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv3/incep1/bn_scale2"
  type: "Scale"
  bottom: "conv3/incep1/conv2"
  top: "conv3/incep1/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv3/incep1/relu2"
  type: "ReLU"
  bottom: "conv3/incep1/conv2"
  top: "conv3/incep1/conv2"
}

layer {
  name: "conv3/incep2/conv1"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3/incep2/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 24
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv3/incep2/bn1"
  type: "BatchNorm"
  bottom: "conv3/incep2/conv1"
  top: "conv3/incep2/conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv3/incep2/bn_scale1"
  type: "Scale"
  bottom: "conv3/incep2/conv1"
  top: "conv3/incep2/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv3/incep2/relu1"
  type: "ReLU"
  bottom: "conv3/incep2/conv1"
  top: "conv3/incep2/conv1"
}

layer {
  name: "conv3/incep2/conv2"
  type: "Convolution"
  bottom: "conv3/incep2/conv1"
  top: "conv3/incep2/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv3/incep2/bn2"
  type: "BatchNorm"
  bottom: "conv3/incep2/conv2"
  top: "conv3/incep2/conv2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv3/incep2/bn_scale2"
  type: "Scale"
  bottom: "conv3/incep2/conv2"
  top: "conv3/incep2/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv3/incep2/relu2"
  type: "ReLU"
  bottom: "conv3/incep2/conv2"
  top: "conv3/incep2/conv2"
}

layer {
  name: "conv3/incep2/conv3"
  type: "Convolution"
  bottom: "conv3/incep2/conv2"
  top: "conv3/incep2/conv3"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv3/incep2/bn3"
  type: "BatchNorm"
  bottom: "conv3/incep2/conv3"
  top: "conv3/incep2/conv3"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv3/incep2/bn_scale3"
  type: "Scale"
  bottom: "conv3/incep2/conv3"
  top: "conv3/incep2/conv3"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv3/incep2/relu3"
  type: "ReLU"
  bottom: "conv3/incep2/conv3"
  top: "conv3/incep2/conv3"
}

layer {
  name: "conv3/incep3/pool"
  type: "Pooling"
  bottom: "pool2"
  top: "conv3/incep3/pool"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 1
    pad: 1
  }
}

layer {
  name: "conv3/incep3/conv"
  type: "Convolution"
  bottom: "conv3/incep3/pool"
  top: "conv3/incep3/conv"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv3/incep3/bn"
  type: "BatchNorm"
  bottom: "conv3/incep3/conv"
  top: "conv3/incep3/conv"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv3/incep3/bn_scale"
  type: "Scale"
  bottom: "conv3/incep3/conv"
  top: "conv3/incep3/conv"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv3/incep3/relu"
  type: "ReLU"
  bottom: "conv3/incep3/conv"
  top: "conv3/incep3/conv"
}

layer {
  name: "conv3/incep"
  type: "Concat"
  bottom: "conv3/incep0/conv"
  bottom: "conv3/incep1/conv2"
  bottom: "conv3/incep2/conv3"
  bottom: "conv3/incep3/conv"
  top: "conv3/incep"
}
#Inception2
layer {
  name: "conv4/incep0/conv"
  type: "Convolution"
  bottom: "conv3/incep"
  top: "conv4/incep0/conv"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv4/incep0/bn"
  type: "BatchNorm"
  bottom: "conv4/incep0/conv"
  top: "conv4/incep0/conv"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv4/incep0/bn_scale"
  type: "Scale"
  bottom: "conv4/incep0/conv"
  top: "conv4/incep0/conv"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv4/incep0/relu"
  type: "ReLU"
  bottom: "conv4/incep0/conv"
  top: "conv4/incep0/conv"
}

layer {
  name: "conv4/incep1/conv1"
  type: "Convolution"
  bottom: "conv3/incep"
  top: "conv4/incep1/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 24
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv4/incep1/bn1"
  type: "BatchNorm"
  bottom: "conv4/incep1/conv1"
  top: "conv4/incep1/conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv4/incep1/bn_scale1"
  type: "Scale"
  bottom: "conv4/incep1/conv1"
  top: "conv4/incep1/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv4/incep1/relu1"
  type: "ReLU"
  bottom: "conv4/incep1/conv1"
  top: "conv4/incep1/conv1"
}

layer {
  name: "conv4/incep1/conv2"
  type: "Convolution"
  bottom: "conv4/incep1/conv1"
  top: "conv4/incep1/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv4/incep1/bn2"
  type: "BatchNorm"
  bottom: "conv4/incep1/conv2"
  top: "conv4/incep1/conv2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv4/incep1/bn_scale2"
  type: "Scale"
  bottom: "conv4/incep1/conv2"
  top: "conv4/incep1/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv4/incep1/relu2"
  type: "ReLU"
  bottom: "conv4/incep1/conv2"
  top: "conv4/incep1/conv2"
}

layer {
  name: "conv4/incep2/conv1"
  type: "Convolution"
  bottom: "conv3/incep"
  top: "conv4/incep2/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 24
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv4/incep2/bn1"
  type: "BatchNorm"
  bottom: "conv4/incep2/conv1"
  top: "conv4/incep2/conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv4/incep2/bn_scale1"
  type: "Scale"
  bottom: "conv4/incep2/conv1"
  top: "conv4/incep2/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv4/incep2/relu1"
  type: "ReLU"
  bottom: "conv4/incep2/conv1"
  top: "conv4/incep2/conv1"
}

layer {
  name: "conv4/incep2/conv2"
  type: "Convolution"
  bottom: "conv4/incep2/conv1"
  top: "conv4/incep2/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv4/incep2/bn2"
  type: "BatchNorm"
  bottom: "conv4/incep2/conv2"
  top: "conv4/incep2/conv2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv4/incep2/bn_scale2"
  type: "Scale"
  bottom: "conv4/incep2/conv2"
  top: "conv4/incep2/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv4/incep2/relu2"
  type: "ReLU"
  bottom: "conv4/incep2/conv2"
  top: "conv4/incep2/conv2"
}

layer {
  name: "conv4/incep2/conv3"
  type: "Convolution"
  bottom: "conv4/incep2/conv2"
  top: "conv4/incep2/conv3"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv4/incep2/bn3"
  type: "BatchNorm"
  bottom: "conv4/incep2/conv3"
  top: "conv4/incep2/conv3"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv4/incep2/bn_scale3"
  type: "Scale"
  bottom: "conv4/incep2/conv3"
  top: "conv4/incep2/conv3"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv4/incep2/relu3"
  type: "ReLU"
  bottom: "conv4/incep2/conv3"
  top: "conv4/incep2/conv3"
}

layer {
  name: "conv4/incep3/pool"
  type: "Pooling"
  bottom: "conv3/incep"
  top: "conv4/incep3/pool"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 1
    pad: 1
  }
}

layer {
  name: "conv4/incep3/conv"
  type: "Convolution"
  bottom: "conv4/incep3/pool"
  top: "conv4/incep3/conv"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv4/incep3/bn"
  type: "BatchNorm"
  bottom: "conv4/incep3/conv"
  top: "conv4/incep3/conv"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv4/incep3/bn_scale"
  type: "Scale"
  bottom: "conv4/incep3/conv"
  top: "conv4/incep3/conv"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv4/incep3/relu"
  type: "ReLU"
  bottom: "conv4/incep3/conv"
  top: "conv4/incep3/conv"
}

layer {
  name: "conv4/incep"
  type: "Concat"
  bottom: "conv4/incep0/conv"
  bottom: "conv4/incep1/conv2"
  bottom: "conv4/incep2/conv3"
  bottom: "conv4/incep3/conv"
  top: "conv4/incep"
}

layer {
  name: "conv5/incep0/conv"
  type: "Convolution"
  bottom: "conv4/incep"
  top: "conv5/incep0/conv"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv5/incep0/bn"
  type: "BatchNorm"
  bottom: "conv5/incep0/conv"
  top: "conv5/incep0/conv"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv5/incep0/bn_scale"
  type: "Scale"
  bottom: "conv5/incep0/conv"
  top: "conv5/incep0/conv"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv5/incep0/relu"
  type: "ReLU"
  bottom: "conv5/incep0/conv"
  top: "conv5/incep0/conv"
}

layer {
  name: "conv5/incep1/conv1"
  type: "Convolution"
  bottom: "conv4/incep"
  top: "conv5/incep1/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 24
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv5/incep1/bn1"
  type: "BatchNorm"
  bottom: "conv5/incep1/conv1"
  top: "conv5/incep1/conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv5/incep1/bn_scale1"
  type: "Scale"
  bottom: "conv5/incep1/conv1"
  top: "conv5/incep1/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv5/incep1/relu1"
  type: "ReLU"
  bottom: "conv5/incep1/conv1"
  top: "conv5/incep1/conv1"
}

layer {
  name: "conv5/incep1/conv2"
  type: "Convolution"
  bottom: "conv5/incep1/conv1"
  top: "conv5/incep1/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv5/incep1/bn2"
  type: "BatchNorm"
  bottom: "conv5/incep1/conv2"
  top: "conv5/incep1/conv2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv5/incep1/bn_scale2"
  type: "Scale"
  bottom: "conv5/incep1/conv2"
  top: "conv5/incep1/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv5/incep1/relu2"
  type: "ReLU"
  bottom: "conv5/incep1/conv2"
  top: "conv5/incep1/conv2"
}

layer {
  name: "conv5/incep2/conv1"
  type: "Convolution"
  bottom: "conv4/incep"
  top: "conv5/incep2/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 24
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv5/incep2/bn1"
  type: "BatchNorm"
  bottom: "conv5/incep2/conv1"
  top: "conv5/incep2/conv1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv5/incep2/bn_scale1"
  type: "Scale"
  bottom: "conv5/incep2/conv1"
  top: "conv5/incep2/conv1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv5/incep2/relu1"
  type: "ReLU"
  bottom: "conv5/incep2/conv1"
  top: "conv5/incep2/conv1"
}

layer {
  name: "conv5/incep2/conv2"
  type: "Convolution"
  bottom: "conv5/incep2/conv1"
  top: "conv5/incep2/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv5/incep2/bn2"
  type: "BatchNorm"
  bottom: "conv5/incep2/conv2"
  top: "conv5/incep2/conv2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv5/incep2/bn_scale2"
  type: "Scale"
  bottom: "conv5/incep2/conv2"
  top: "conv5/incep2/conv2"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv5/incep2/relu2"
  type: "ReLU"
  bottom: "conv5/incep2/conv2"
  top: "conv5/incep2/conv2"
}

layer {
  name: "conv5/incep2/conv3"
  type: "Convolution"
  bottom: "conv5/incep2/conv2"
  top: "conv5/incep2/conv3"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 1
  }
}

layer {
  name: "conv5/incep2/bn3"
  type: "BatchNorm"
  bottom: "conv5/incep2/conv3"
  top: "conv5/incep2/conv3"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv5/incep2/bn_scale3"
  type: "Scale"
  bottom: "conv5/incep2/conv3"
  top: "conv5/incep2/conv3"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv5/incep2/relu3"
  type: "ReLU"
  bottom: "conv5/incep2/conv3"
  top: "conv5/incep2/conv3"
}

layer {
  name: "conv5/incep3/pool"
  type: "Pooling"
  bottom: "conv4/incep"
  top: "conv5/incep3/pool"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 1
    pad: 1
  }
}

layer {
  name: "conv5/incep3/conv"
  type: "Convolution"
  bottom: "conv5/incep3/pool"
  top: "conv5/incep3/conv"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 32
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv5/incep3/bn"
  type: "BatchNorm"
  bottom: "conv5/incep3/conv"
  top: "conv5/incep3/conv"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv5/incep3/bn_scale"
  type: "Scale"
  bottom: "conv5/incep3/conv"
  top: "conv5/incep3/conv"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv5/incep3/relu"
  type: "ReLU"
  bottom: "conv5/incep3/conv"
  top: "conv5/incep3/conv"
}

layer {
  name: "conv5/incep"
  type: "Concat"
  bottom: "conv5/incep0/conv"
  bottom: "conv5/incep1/conv2"
  bottom: "conv5/incep2/conv3"
  bottom: "conv5/incep3/conv"
  top: "conv5/incep"
}

layer {
  name: "Inception3/conv/loc1"
  type: "Convolution"
  bottom: "conv5/incep"
  top: "Inception3/conv/loc1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "Inception3/conv/loc1/perm"
  type: "Permute"
  bottom: "Inception3/conv/loc1"
  top: "Inception3/conv/loc1/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "Inception3/conv/loc1/flat"
  type: "Flatten"
  bottom: "Inception3/conv/loc1/perm"
  top: "Inception3/conv/loc1/flat"
  flatten_param {
    axis: 1
  }
} 

layer {
  name: "Inception3/conv/conf1"
  type: "Convolution"
  bottom: "conv5/incep"
  top: "Inception3/conv/conf1"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 2
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "Inception3/conv/conf1/perm"
  type: "Permute"
  bottom: "Inception3/conv/conf1"
  top: "Inception3/conv/conf1/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "Inception3/conv/conf1/flat"
  type: "Flatten"
  bottom: "Inception3/conv/conf1/perm"
  top: "Inception3/conv/conf1/flat"
  flatten_param {
    axis: 1
  }
}

layer {
  name: "Inception3/conv/priorbox1"
  type: "PriorBox"
  bottom: "conv5/incep"
  bottom: "data"
  top: "Inception3/conv/priorbox1"
  prior_box_param {
    min_size: 32
    aspect_ratio: 1
    flip: true
    clip: true
    variance: 0.1
    variance: 0.1
    variance: 0.2
    variance: 0.2
  }
}

layer {
  name: "Inception3/conv/loc2"
  type: "Convolution"
  bottom: "conv5/incep"
  top: "Inception3/conv/loc2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "Inception3/conv/loc2/perm"
  type: "Permute"
  bottom: "Inception3/conv/loc2"
  top: "Inception3/conv/loc2/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "Inception3/conv/loc2/flat"
  type: "Flatten"
  bottom: "Inception3/conv/loc2/perm"
  top: "Inception3/conv/loc2/flat"
  flatten_param {
    axis: 1
  }
} 

layer {
  name: "Inception3/conv/conf2"
  type: "Convolution"
  bottom: "conv5/incep"
  top: "Inception3/conv/conf2"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 2
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "Inception3/conv/conf2/perm"
  type: "Permute"
  bottom: "Inception3/conv/conf2"
  top: "Inception3/conv/conf2/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "Inception3/conv/conf2/flat"
  type: "Flatten"
  bottom: "Inception3/conv/conf2/perm"
  top: "Inception3/conv/conf2/flat"
  flatten_param {
    axis: 1
  }
}

layer {
  name: "Inception3/conv/priorbox2"
  type: "PriorBox"
  bottom: "conv5/incep"
  bottom: "data"
  top: "Inception3/conv/priorbox2"
  prior_box_param {
    min_size: 64
    aspect_ratio: 1
    flip: true
    clip: true
    variance: 0.1
    variance: 0.1
    variance: 0.2
    variance: 0.2
  }
}

layer {
  name: "Inception3/conv/loc3"
  type: "Convolution"
  bottom: "conv5/incep"
  top: "Inception3/conv/loc3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "Inception3/conv/loc3/perm"
  type: "Permute"
  bottom: "Inception3/conv/loc3"
  top: "Inception3/conv/loc3/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "Inception3/conv/loc3/flat"
  type: "Flatten"
  bottom: "Inception3/conv/loc3/perm"
  top: "Inception3/conv/loc3/flat"
  flatten_param {
    axis: 1
  }
} 

layer {
  name: "Inception3/conv/conf3"
  type: "Convolution"
  bottom: "conv5/incep"
  top: "Inception3/conv/conf3"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 2
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "Inception3/conv/conf3/perm"
  type: "Permute"
  bottom: "Inception3/conv/conf3"
  top: "Inception3/conv/conf3/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "Inception3/conv/conf3/flat"
  type: "Flatten"
  bottom: "Inception3/conv/conf3/perm"
  top: "Inception3/conv/conf3/flat"
  flatten_param {
    axis: 1
  }
}

layer {
  name: "Inception3/conv/priorbox3"
  type: "PriorBox"
  bottom: "conv5/incep"
  bottom: "data"
  top: "Inception3/conv/priorbox3"
  prior_box_param {
    min_size: 128
    aspect_ratio: 1
    flip: true
    clip: true
    variance: 0.1
    variance: 0.1
    variance: 0.2
    variance: 0.2
  }
}

layer {
  name: "conv6_1"
  type: "Convolution"
  bottom: "conv5/incep"
  top: "conv6_1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 128
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv6/bn1"
  type: "BatchNorm"
  bottom: "conv6_1"
  top: "conv6_1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv6/bn_scale1"
  type: "Scale"
  bottom: "conv6_1"
  top: "conv6_1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv6/relu1"
  type: "ReLU"
  bottom: "conv6_1"
  top: "conv6_1"
}

layer {
  name: "conv6_2"
  type: "Convolution"
  bottom: "conv6_1"
  top: "conv6_2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 256
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 2
  }
}

layer {
  name: "conv6/bn2"
  type: "BatchNorm"
  bottom: "conv6_2"
  top: "conv6_2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv6/bn_scale2"
  type: "Scale"
  bottom: "conv6_2"
  top: "conv6_2"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv6/relu2"
  type: "ReLU"
  bottom: "conv6_2"
  top: "conv6_2"
}

layer {
  name: "conv6/loc"
  type: "Convolution"
  bottom: "conv6_2"
  top: "conv6/loc"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "conv6/loc/perm"
  type: "Permute"
  bottom: "conv6/loc"
  top: "conv6/loc/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "conv6/loc/perm/flat"
  type: "Flatten"
  bottom: "conv6/loc/perm"
  top: "conv6/loc/perm/flat"
  flatten_param {
    axis: 1
  }
}

layer {
  name: "conv6/conf"
  type: "Convolution"
  bottom: "conv6_2"
  top: "conv6/conf"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 2
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "conv6/conf/perm"
  type: "Permute"
  bottom: "conv6/conf"
  top: "conv6/conf/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "conv6/conf/perm/flat"
  type: "Flatten"
  bottom: "conv6/conf/perm"
  top: "conv6/conf/perm/flat"
  flatten_param {
    axis: 1
  }
}

layer {
  name: "conv6/priorbox"
  type: "PriorBox"
  bottom: "conv6_2"
  bottom: "data"
  top: "conv6/priorbox"
  prior_box_param {
    min_size: 256
    aspect_ratio: 1
    flip: true
    clip: true
    variance: 0.1
    variance: 0.1
    variance: 0.2
    variance: 0.2
  }
}

layer {
  name: "conv7_1"
  type: "Convolution"
  bottom: "conv6_2"
  top: "conv7_1"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 128
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 0
    kernel_size: 1
    stride: 1
  }
}

layer {
  name: "conv7/bn1"
  type: "BatchNorm"
  bottom: "conv7_1"
  top: "conv7_1"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv7/bn_scale1"
  type: "Scale"
  bottom: "conv7_1"
  top: "conv7_1"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv7/relu1"
  type: "ReLU"
  bottom: "conv7_1"
  top: "conv7_1"
}

layer {
  name: "conv7_2"
  type: "Convolution"
  bottom: "conv7_1"
  top: "conv7_2"
  param {
    lr_mult: 1.0
    decay_mult: 1.0
  }
  convolution_param {
    num_output: 256
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
    pad: 1
    kernel_size: 3
    stride: 2
  }
}

layer {
  name: "conv7/bn2"
  type: "BatchNorm"
  bottom: "conv7_2"
  top: "conv7_2"
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  param {
    lr_mult: 0
    decay_mult: 0
  }
  batch_norm_param {
    use_global_stats: false
  }
}

layer {
  name: "conv7/bn_scale2"
  type: "Scale"
  bottom: "conv7_2"
  top: "conv7_2"
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  param {
    lr_mult: 1.0
    decay_mult: 0
  }
  scale_param {
    bias_term: true
  }
}

layer {
  name: "conv7/relu2"
  type: "ReLU"
  bottom: "conv7_2"
  top: "conv7_2"
}

layer {
  name: "conv7/loc"
  type: "Convolution"
  bottom: "conv7_2"
  top: "conv7/loc"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 4
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "conv7/loc/perm"
  type: "Permute"
  bottom: "conv7/loc"
  top: "conv7/loc/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "conv7/loc/perm/flat"
  type: "Flatten"
  bottom: "conv7/loc/perm"
  top: "conv7/loc/perm/flat"
  flatten_param {
    axis: 1
  }
}

layer {
  name: "conv7/conf"
  type: "Convolution"
  bottom: "conv7_2"
  top: "conv7/conf"
  param {
    lr_mult: 1
    decay_mult: 1
  }
  param {
    lr_mult: 2
    decay_mult: 0
  }
  convolution_param {
    num_output: 2
    pad: 1
    kernel_size: 3
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
      value: 0
    }
  }
}

layer {
  name: "conv7/conf/perm"
  type: "Permute"
  bottom: "conv7/conf"
  top: "conv7/conf/perm"
  permute_param {
    order: 0
    order: 2
    order: 3
    order: 1
  }
}

layer {
  name: "conv7/conf/perm/flat"
  type: "Flatten"
  bottom: "conv7/conf/perm"
  top: "conv7/conf/perm/flat"
  flatten_param {
    axis: 1
  }
}

layer {
  name: "conv7/priorbox"
  type: "PriorBox"
  bottom: "conv7_2"
  bottom: "data"
  top: "conv7/priorbox"
  prior_box_param {
    min_size: 512
    aspect_ratio: 1
    flip: true
    clip: true
    variance: 0.1
    variance: 0.1
    variance: 0.2
    variance: 0.2
  }
}

layer {
  name: "mbox_loc"
  type: "Concat"
  bottom: "Inception3/conv/loc1/flat"
  bottom: "Inception3/conv/loc2/flat"
  bottom: "Inception3/conv/loc3/flat"
  bottom: "conv6/loc/perm/flat"
  bottom: "conv7/loc/perm/flat"
  top: "mbox_loc"
  concat_param {
    axis: 1
  }
}

layer {
  name: "mbox_conf"
  type: "Concat"
  bottom: "Inception3/conv/conf1/flat"
  bottom: "Inception3/conv/conf2/flat"
  bottom: "Inception3/conv/conf3/flat"
  bottom: "conv6/conf/perm/flat"
  bottom: "conv7/conf/perm/flat"
  top: "mbox_conf"
  concat_param {
    axis: 1
  }
}

layer {
  name: "mbox_priorbox"
  type: "Concat"
  bottom: "Inception3/conv/priorbox1"
  bottom: "Inception3/conv/priorbox2"
  bottom: "Inception3/conv/priorbox3"
  bottom: "conv6/priorbox"
  bottom: "conv7/priorbox"
  top: "mbox_priorbox"
  concat_param {
    axis: 2
  }
}

layer {
  name: "detection_out"
  type: "DetectionOutput"
  bottom: "mbox_loc"
  bottom: "mbox_conf"
  bottom: "mbox_priorbox"
  top: "detection_out"
  include {
    phase: TEST
  }
  detection_output_param {
    num_classes: 2
    share_location: true
    background_label_id: 0
    nms_param {
      nms_threshold: 0.45
      top_k: 100
    }
    code_type: CENTER_SIZE
    keep_top_k: 100
    confidence_threshold: 0.5
  }
}
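As a sanity check on the prototxt above: each PriorBox layer here uses a single `min_size`, no `max_size`, and `aspect_ratio: 1` (which Caffe deduplicates, since ratio 1 is always present), so every layer emits exactly one prior per feature-map cell. A minimal Python sketch of Caffe's prior-count logic — the helper names and the `fm_h`/`fm_w` parameters are hypothetical, and this only mirrors the shape math, not the box coordinates:

```python
def num_priors(min_sizes, max_sizes=(), aspect_ratios=(), flip=True):
    # Mirror Caffe PriorBoxLayer setup: ratio 1.0 is always included,
    # near-duplicate ratios are skipped, and flip adds each reciprocal.
    ars = [1.0]
    for ar in aspect_ratios:
        if any(abs(ar - a) < 1e-6 for a in ars):
            continue
        ars.append(ar)
        if flip:
            ars.append(1.0 / ar)
    return len(ars) * len(min_sizes) + len(max_sizes)

def priorbox_output_len(fm_h, fm_w, n_priors):
    # PriorBox top blob shape is [1, 2, fm_h * fm_w * n_priors * 4]:
    # channel 0 holds box coordinates, channel 1 holds the variances.
    return fm_h * fm_w * n_priors * 4

# For this model: min_size only, aspect_ratio 1 -> one prior per cell.
print(num_priors([32], aspect_ratios=[1.0], flip=True))   # 1
# e.g. a 10x10 feature map with one prior -> 400 values per channel
print(priorbox_output_len(10, 10, 1))                     # 400
```

Checking these expected sizes against what `createSSDPriorBoxPlugin` reports in `getOutputDimensions` can help narrow down whether the segfault comes from mismatched `PriorBoxParameters`.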

TensorRT doesn’t support the Flatten layer explicitly, but you can use a Reshape layer instead (like below).
TensorRT 5.0 also published a Caffe-based SSD sample, which you can use as a complete reference.

layer {
  name: "mbox_conf_flatten"
  type: "Reshape"
  bottom: "mbox_conf_softmax"
  top: "mbox_conf_flatten"
  reshape_param {
    shape {
      dim: 0
      dim: -1
      dim: 1
      dim: 1
    }
  }
}
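In the Reshape above, `dim: 0` keeps the batch axis unchanged, `dim: -1` infers the flattened size, and the trailing `dim: 1` entries pad the rank back to 4. A quick NumPy illustration of why this is equivalent to a Flatten with `axis: 1` (the blob shape here is an arbitrary example, not taken from the model):

```python
import numpy as np

# A dummy [N, C, H, W] blob standing in for mbox_conf_softmax.
x = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)

# Caffe Reshape with shape { dim: 0  dim: -1  dim: 1  dim: 1 }:
# keep batch, infer C*H*W, pad with two singleton axes.
flat = x.reshape(x.shape[0], -1, 1, 1)
print(flat.shape)  # (2, 60, 1, 1)

# Element order matches a Flatten with axis: 1 (ignoring the 1x1 tail).
assert np.array_equal(flat.reshape(2, -1), x.reshape(2, -1))
```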

Hello,
A new problem arose while building the TRT engine; I have posted it here: [url]https://devtalk.nvidia.com/default/topic/1045372/cuda-error-in-nchwtonchw/[/url]. Please have a look, thank you!