TensorRT segmentation fault when parsing model

I want to implement TensorRT-MobileNet-SSD. In this function, I get a segmentation fault when parsing the model:
"Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)"
I do not know where the problem is.

void TensorNet::caffeToTRTModel(const std::string& deployFile, const std::string& modelFile, const std::vector<std::string>& outputs,
                                unsigned int maxBatchSize)
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();

    ICaffeParser* parser = createCaffeParser();

    bool useFp16 = builder->platformHasFastFp16();
    useFp16 = false;  // force FP32 for now

    DataType modelDataType = useFp16 ? DataType::kHALF : DataType::kFLOAT;

    std::cout << deployFile.c_str() <<std::endl;
    std::cout << modelFile.c_str() <<std::endl;
    //std::cout << (*network) <<std::endl;
    std::cout << "Here : 1"<<std::endl;
    const IBlobNameToTensor* blobNameToTensor = parser->parse(deployFile.c_str(),
                                                              modelFile.c_str(),
                                                              *network,
                                                              modelDataType);
    std::cout << "Here : 2" <<std::endl;
    assert(blobNameToTensor != nullptr);
    std::cout << "Here : 3" <<std::endl;

The segmentation fault happens inside parser->parse, but I do not know how to fix it.
Can anyone help me? Thanks.


Could you run it with cuda-memcheck to get more log information?

For example:
$ cuda-memcheck ./my_program


========= Error: process didn’t terminate successfully
========= The application may have hit an error when dereferencing Unified Memory from the host. Please rerun the application under cuda-gdb or Nsight Eclipse Edition to catch host side errors.
========= Internal error (7)
========= No CUDA-MEMCHECK results found

This is the information from cuda-memcheck ./inference

I found the reason: there is an error in the prototxt file. One network layer had no input because the input (bottom) name it referenced was incorrect.
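For anyone else hitting this, the mismatch looks something like the following. This is a hypothetical prototxt excerpt (not the actual MobileNet-SSD file): a layer's "bottom" names a blob that no earlier layer produces, so the parser ends up with a missing input tensor.

```prototxt
# Hypothetical excerpt illustrating the bug: the second layer's "bottom"
# does not match any blob produced earlier, so that layer has no input.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"        # OK: "data" is the network input
  top: "conv1"
  # (convolution_param omitted)
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv_1"      # BUG: no blob named "conv_1" exists; should be "conv1"
  top: "relu1"
}
```

Fixing the bottom name so it exactly matches the producing layer's top resolved the crash.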