Do I need to write multiple plugins if the same kind of custom Caffe layer is used several times?

Suppose there is a custom Caffe layer type “A” that takes 2 input blobs, produces 2 output blobs, and is used 3 times, like this:

layer {
  bottom: "fpn1_ai" #input shape = [56,56,4]
  bottom: "fpn1_bi" #input shape = [56,56,1]
  top: "fpn1_ao"    # output shape = [?, 4]
  top: "fpn1_bo"    # output shape = [?, 1]
  name: "Custom_A"
  type: "Custom_Type"
  custom_param {
    alpha: 0.5
    beta: 0.7
  }
}

layer {
  bottom: "fpn2_ai" #input shape = [28,28,4]
  bottom: "fpn2_bi" #input shape = [28,28,1] 
  top: "fpn2_ao"    # output shape = [?, 4]
  top: "fpn2_bo"    # output shape = [?, 1]
  name: "Custom_B"
  type: "Custom_Type"
  custom_param {
    alpha: 0.25
    beta: 0.4
  }
}

layer {
  bottom: "fpn3_ai" #input shape = [14,14,4]
  bottom: "fpn3_bi" #input shape = [14,14,1] 
  top: "fpn3_ao"    # output shape = [?, 4]
  top: "fpn3_bo"    # output shape = [?, 1]
  name: "Custom_C"
  type: "Custom_Type"
  custom_param {
    alpha: 0.125
    beta: 0.1
  }
}
  1. Do I need to write three separate plugin source files, one per layer instance, as below? Or can a single plugin class cover all three? (See the sketch after these three snippets.)
// plugin source file #1
int ALayerPlugin1::initialize() {
  alpha = 0.5;
  beta  = 0.7;
  return 0;
}

Dims ALayerPlugin1::getOutputDimensions(int index, const Dims* inputs, int nbInputDims) {
  input1_C = inputs[0].d[0]; // expected to be 4, right?
  input1_H = inputs[0].d[1]; // expected to be 56, right?
  input1_W = inputs[0].d[2]; // expected to be 56, right?

  input2_C = inputs[1].d[0]; // expected to be 1, right?
  input2_H = inputs[1].d[1]; // expected to be 56, right?
  input2_W = inputs[1].d[2]; // expected to be 56, right?

  if (index == 0)
    return DimsCHW(???, 4); // I don't know the exact shape before "enqueue" has run!
  else
    return DimsCHW(???, 1); // I don't know the exact shape before "enqueue" has run!
}

// another plugin source file (#2)
int ALayerPlugin2::initialize() {
  alpha = 0.25;
  beta  = 0.4;
  return 0;
}

Dims ALayerPlugin2::getOutputDimensions(int index, const Dims* inputs, int nbInputDims) {
  input1_C = inputs[0].d[0]; // expected to be 4, right?
  input1_H = inputs[0].d[1]; // expected to be 28, right?
  input1_W = inputs[0].d[2]; // expected to be 28, right?

  input2_C = inputs[1].d[0]; // expected to be 1, right?
  input2_H = inputs[1].d[1]; // expected to be 28, right?
  input2_W = inputs[1].d[2]; // expected to be 28, right?

  if (index == 0)
    return DimsCHW(???, 4); // I don't know the exact shape before "enqueue" has run!
  else
    return DimsCHW(???, 1); // I don't know the exact shape before "enqueue" has run!
}

// another plugin source file (#3)
int ALayerPlugin3::initialize() {
  alpha = 0.125;
  beta  = 0.1;
  return 0;
}

Dims ALayerPlugin3::getOutputDimensions(int index, const Dims* inputs, int nbInputDims) {
  input1_C = inputs[0].d[0]; // expected to be 4, right?
  input1_H = inputs[0].d[1]; // expected to be 14, right?
  input1_W = inputs[0].d[2]; // expected to be 14, right?

  input2_C = inputs[1].d[0]; // expected to be 1, right?
  input2_H = inputs[1].d[1]; // expected to be 14, right?
  input2_W = inputs[1].d[2]; // expected to be 14, right?

  if (index == 0)
    return DimsCHW(???, 4); // I don't know the exact shape before "enqueue" has run!
  else
    return DimsCHW(???, 1); // I don't know the exact shape before "enqueue" has run!
}
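
What I would rather do is write a single plugin class that covers all three layers, with alpha / beta supplied per instance from the plugin factory, roughly like the sketch below. This is only a sketch of what I have in mind: CustomTypePlugin, MyPluginFactory and kMaxRows are placeholder names of mine, the name-based dispatch is just one way to pass alpha / beta in (as far as I can tell the caffe parser does not hand custom_param to createPlugin), and kMaxRows only stands in for the output row count I do not actually know (see question 2 below).

// Sketch only: one plugin class for every "Custom_Type" layer.
#include <NvInfer.h>
#include <NvCaffeParser.h>
#include <cstring>

using namespace nvinfer1;

class CustomTypePlugin : public IPlugin
{
public:
    // alpha / beta are passed in per layer instance instead of being hard-coded.
    CustomTypePlugin(float alpha, float beta) : mAlpha(alpha), mBeta(beta) {}

    int getNbOutputs() const override { return 2; }

    Dims getOutputDimensions(int index, const Dims* inputs, int nbInputDims) override
    {
        // inputs[0] = fpnN_ai, inputs[1] = fpnN_bi; H and W differ per layer
        // instance, but the same code handles all three instances.
        // kMaxRows is only a placeholder upper bound, because the real row
        // count is not known until enqueue (see question 2).
        return index == 0 ? DimsCHW(kMaxRows, 4, 1) : DimsCHW(kMaxRows, 1, 1);
    }

    void configure(const Dims*, int, const Dims*, int, int) override {}
    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // the CUDA kernel that actually uses mAlpha / mBeta would be launched here
        return 0;
    }

    size_t getSerializationSize() override { return 2 * sizeof(float); }
    void serialize(void* buffer) override
    {
        std::memcpy(buffer, &mAlpha, sizeof(float));
        std::memcpy(static_cast<char*>(buffer) + sizeof(float), &mBeta, sizeof(float));
    }

private:
    static constexpr int kMaxRows = 100; // placeholder; not the real value
    float mAlpha, mBeta;
};

// The factory creates the same class three times with different parameters.
// As far as I can tell, custom_param is not passed to createPlugin, so
// alpha / beta are looked up from the layer name instead.
class MyPluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* name) override
    {
        return !std::strcmp(name, "Custom_A") || !std::strcmp(name, "Custom_B")
            || !std::strcmp(name, "Custom_C");
    }

    IPlugin* createPlugin(const char* name, const Weights* weights, int nbWeights) override
    {
        if (!std::strcmp(name, "Custom_A")) return new CustomTypePlugin(0.5f, 0.7f);
        if (!std::strcmp(name, "Custom_B")) return new CustomTypePlugin(0.25f, 0.4f);
        return new CustomTypePlugin(0.125f, 0.1f); // Custom_C
    }
};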
  2. What if I cannot know the output shape in advance, because it is only determined after “enqueue” is called? In Caffe I can handle this with the blob “Reshape” / “Resize” API, but it seems TensorRT must know the output dimensions before the actual layer computation runs.
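
For what it is worth, this is roughly how the Caffe side copes with that today: the top blobs are simply reshaped inside Forward once the row count is known. The snippet below is a simplified sketch, not the real layer; CustomTypeLayer and CountRows are illustrative names, and CountRows just stands for whatever data-dependent logic decides the row count.

#include <vector>
#include "caffe/blob.hpp"
#include "caffe/layer.hpp"

namespace caffe {

// Simplified sketch of the Caffe layer: the output row count is only known
// inside Forward, so the top blobs are reshaped there.
template <typename Dtype>
class CustomTypeLayer : public Layer<Dtype> {
 public:
  explicit CustomTypeLayer(const LayerParameter& param) : Layer<Dtype>(param) {}
  virtual inline const char* type() const { return "Custom_Type"; }

  virtual void Reshape(const std::vector<Blob<Dtype>*>& bottom,
                       const std::vector<Blob<Dtype>*>& top) {
    // Provisional shapes only; Forward_cpu fixes them up for each batch.
    top[0]->Reshape(std::vector<int>{1, 4});
    top[1]->Reshape(std::vector<int>{1, 1});
  }

 protected:
  virtual void Forward_cpu(const std::vector<Blob<Dtype>*>& bottom,
                           const std::vector<Blob<Dtype>*>& top) {
    const int rows = CountRows(bottom);              // data-dependent count
    top[0]->Reshape(std::vector<int>{rows, 4});      // fpnN_ao: [?, 4]
    top[1]->Reshape(std::vector<int>{rows, 1});      // fpnN_bo: [?, 1]
    // ... compute and fill top[0] / top[1] here ...
  }
  virtual void Backward_cpu(const std::vector<Blob<Dtype>*>& top,
                            const std::vector<bool>& propagate_down,
                            const std::vector<Blob<Dtype>*>& bottom) {}

  // Placeholder for whatever logic actually decides the row count.
  int CountRows(const std::vector<Blob<Dtype>*>& bottom) const {
    return bottom[0]->shape(0);
  }
};

}  // namespace caffe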