Problem in Deserializing Engine (Segmentation Fault)

Hello Everyone

I have been trying to parse a network that has a slice layer in TensorRT for image classification. TensorRT has no built-in support for the slice layer, so I am using the following git repo as a reference for implementing the slice layer as a plugin:
https://github.com/Goingqs/TensorRT-Prelu

I have tried to integrate that code with the code from the following git repo: https://github.com/dusty-nv/jetson-inference. The following code snippet was inserted into tensorNet.cpp:

nvinfer1::IPlugin* PluginFactory::createPlugin(const char* layerName, const nvinfer1::Weights* weights, int nbWeights)
{
    assert(isPlugin(layerName));
    std::string strName{layerName};
    std::transform(strName.begin(), strName.end(), strName.begin(), ::tolower);
    if (strName.find("slice") != std::string::npos)
    {
        // x and y are numbers based on the slice parameters
        _nvPlugins[layerName] = (IPlugin*)(new SliceLayer<2>({x, y}));
        return _nvPlugins.at(layerName);
    }
    return nullptr; // no other plugin layers are expected
}

The plugin factory is registered with the Caffe parser in the following code snippet in tensorNet.cpp:

PluginFactory pluginFactory;
parser->setPluginFactory(&pluginFactory);  // plugin factory
const nvcaffeparser1::IBlobNameToTensor* blobNameToTensor =
    parser->parse(deployFile.c_str(),  // caffe deploy file
                  modelFile.c_str(),   // caffe model file
                  *network,            // network definition that the parser will populate
                  modelDataType);

When I run the code, the engine parses and serializes successfully, but I get a segmentation fault when I try to deserialize it. The section of code where the error occurs is shown below, with some custom debug output:

printf("trying to seek stream from gieModel\n");
gieModelStream.seekg(0, std::ios::end);
const int modelSize = gieModelStream.tellg();
std::cout << "model size is " << modelSize << std::endl;
gieModelStream.seekg(0, std::ios::beg);
printf("gieModel is read\n");
void* modelMem = malloc(modelSize);
std::cout << "modelMem is built" << std::endl;
if( !modelMem )
{
    printf(LOG_GIE "failed to allocate %i bytes to deserialize model\n", modelSize);
    return 0;
}
std::cout << "going to read the gieModelStream to modelMem" << std::endl;
gieModelStream.read((char*)modelMem, modelSize);
std::cout << "Going to deserialize engine" << std::endl;
nvinfer1::ICudaEngine* engine = infer->deserializeCudaEngine(modelMem, modelSize, NULL);
std::cout << "Engine has been built" << std::endl;

The output snippet printed is shown below

runtime logger created
trying to seek stream from gieModel
model size is 11340184
gieModel is read
modelMem is built
going to read the gieModelStream to modelMem
Going to deserialize engine
Segmentation fault (core dumped)

The problem occurs while the engine is being deserialized, and I am not sure why. Any input or help on this issue would be greatly appreciated.

Thanks

Hello,
I get the same error. How did you solve it in the end? Thanks!