Problem running facenet with TensorRT Inference Server

Hi, the Google facenet pre-trained model has two input tensors. One is the input image with dims [160, 160, 3]; the other is a flag called phase_train that indicates whether the graph is in the training or the inference phase. Its type is bool, and it's a scalar.
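For context, this is roughly how the model is normally fed in plain TensorFlow; note that phase_train is fed as a Python bool, i.e. a scalar. This is only a sketch (TF 1.x), assuming the usual facenet tensor names ("input:0", "phase_train:0", "embeddings:0") and a placeholder SavedModel path:

import numpy as np
import tensorflow as tf

# Sketch: load the SavedModel and run one inference.
# Tensor names are the usual facenet ones; the path is a placeholder.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], "path/to/savedmodel")
    images = np.zeros((1, 160, 160, 3), dtype=np.float32)
    embeddings = sess.run(
        "embeddings:0",
        feed_dict={"input:0": images,
                   "phase_train:0": False})  # scalar bool, not shape [1]
    print(embeddings.shape)  # (1, 512)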

The TensorRT Inference Server seems to require every input to be a tensor rather than a scalar, but it also has a reshape option that can change the dims of an input tensor before the tensor is passed to the backend.

However, when I configure the config.pbtxt as below:

name: "facenet_savedmodel"
platform: "tensorflow_savedmodel"
max_batch_size: 128
input [
  {
    name: "input"
    data_type: TYPE_FP32
    format: FORMAT_NHWC
    dims: [ 160, 160, 3 ]
  },
  {
    name: "phase_train"
    data_type: TYPE_BOOL
    dims: [ 1 ]
    reshape: { shape: [ ] }
  }
]

output [
  {
    name: "embeddings"
    data_type: TYPE_FP32
    dims: [ 512 ]
  }
]
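For reference, I submit requests roughly like this with the Python client. This is only a sketch: the server URL, the dummy image, and the shape-[1] phase_train array are my own placeholders:

import numpy as np
from tensorrtserver.api import InferContext, ProtocolType

# Sketch of the request that triggers the error below.
# URL and input data are placeholders.
ctx = InferContext("localhost:8000", ProtocolType.HTTP, "facenet_savedmodel")
image = np.zeros((160, 160, 3), dtype=np.float32)  # one preprocessed face
flag = np.asarray([False])                         # phase_train, shape [1]
result = ctx.run(
    {"input": [image], "phase_train": [flag]},
    {"embeddings": InferContext.ResultFormat.RAW},
    batch_size=1)
# result["embeddings"] holds one 512-dim embedding per batch element.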

I got the error below:
tensorrtserver.api.InferenceServerException: [inference:0 271] The second input must be a scalar, but it has shape [1]

How can I solve this problem? Thanks.