Object Detection in Triton Inference Server

I am trying to set up an object detection pipeline using Triton Inference Server.

I was able to get the basic image_client.py (image classification) working, but I am getting some errors for object detection.

This is the auto-generated config file:
{
  "name": "tf_savedmodel",
  "platform": "tensorflow_savedmodel",
  "backend": "tensorflow",
  "version_policy": { "latest": { "num_versions": 1 } },
  "max_batch_size": 0,
  "input": [
    { "name": "input_tensor", "data_type": "TYPE_UINT8", "format": "FORMAT_NONE", "dims": [1, -1, -1, 3], "is_shape_tensor": false, "allow_ragged_batch": false }
  ],
  "output": [
    { "name": "detection_scores", "data_type": "TYPE_FP32", "dims": [1, 100], "label_filename": "", "is_shape_tensor": false },
    { "name": "raw_detection_boxes", "data_type": "TYPE_FP32", "dims": [1, 441936, 4], "label_filename": "", "is_shape_tensor": false },
    { "name": "detection_boxes", "data_type": "TYPE_FP32", "dims": [1, 100, 4], "label_filename": "", "is_shape_tensor": false },
    { "name": "num_detections", "data_type": "TYPE_FP32", "dims": [1], "label_filename": "", "is_shape_tensor": false },
    { "name": "detection_classes", "data_type": "TYPE_FP32", "dims": [1, 100], "label_filename": "", "is_shape_tensor": false },
    { "name": "detection_multiclass_scores", "data_type": "TYPE_FP32", "dims": [1, 100, 90], "label_filename": "", "is_shape_tensor": false },
    { "name": "detection_anchor_indices", "data_type": "TYPE_FP32", "dims": [1, 100], "label_filename": "", "is_shape_tensor": false },
    { "name": "raw_detection_scores", "data_type": "TYPE_FP32", "dims": [1, 441936, 90], "label_filename": "", "is_shape_tensor": false }
  ],
  "batch_input": [],
  "batch_output": [],
  "optimization": {
    "priority": "PRIORITY_DEFAULT",
    "input_pinned_memory": { "enable": true },
    "output_pinned_memory": { "enable": true },
    "gather_kernel_buffer_threshold": 0,
    "eager_batching": false
  },
  "instance_group": [
    { "name": "tf_savedmodel", "kind": "KIND_GPU", "count": 1, "gpus": [0], "profile": [] }
  ],
  "default_model_filename": "model.savedmodel",
  "cc_model_filenames": {},
  "metric_tags": {},
  "parameters": {},
  "model_warmup": []
}

But running with --strict-model-config=false (which generated the above) gave me some errors regarding the image size when I executed image_client.py. So I used the below as my config file:

name: "tf_savedmodel"
platform: "tensorflow_savedmodel"
max_batch_size: 0
input [
  {
    name: "input_tensor"
    data_type: TYPE_UINT8
    format: FORMAT_NONE
    dims: [ 1, 720, 1280, 3 ]
  }
]
output [
  {
    name: "detection_scores"
    data_type: TYPE_FP32
    dims: [ 1, 100 ]
    label_filename: "coco_labels.txt"
  },
  {
    name: "detection_boxes"
    data_type: TYPE_FP32
    dims: [ 1, 100, 4 ]
    label_filename: "coco_labels.txt"
  }
]

Here is what the response looks like:
response= {'name': 'detection_boxes', 'datatype': 'BYTES', 'shape': [100], 'parameters': {'binary_data_size': 1700}}
response= {'name': 'detection_scores', 'datatype': 'BYTES', 'shape': [100], 'parameters': {'binary_data_size': 2111}}

for detection scores and boxes.

But then I get an error at the postprocessing stage (which worked for image classification in image_client.py) while deserializing the binary data to a tensor (I guess), because response.as_numpy(name) returns None.
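For reference, the postprocessing I am aiming for would look roughly like this, assuming the outputs come back as plain FP32 tensors shaped per the config (the stand-in arrays, the 0.5 threshold, and the [ymin, xmin, ymax, xmax] normalized box layout are my assumptions, not anything confirmed by the server):

```python
import numpy as np

# Stand-ins for response.as_numpy("detection_scores") / as_numpy("detection_boxes"),
# assuming FP32 tensors shaped [1, 100] and [1, 100, 4] as in the config.
scores = np.zeros((1, 100), dtype=np.float32)
scores[0, :3] = [0.9, 0.75, 0.3]
boxes = np.zeros((1, 100, 4), dtype=np.float32)
boxes[0, 0] = [0.1, 0.2, 0.5, 0.6]  # assumed normalized [ymin, xmin, ymax, xmax]

threshold = 0.5
keep = scores[0] >= threshold        # boolean mask over the 100 candidates
kept_boxes = boxes[0][keep]

# Scale normalized coordinates back to the 720x1280 input frame
h, w = 720, 1280
pixel_boxes = kept_boxes * np.array([h, w, h, w], dtype=np.float32)
print(kept_boxes.shape[0])  # 2 detections above threshold
```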

Any ideas?