TensorFlow Segmentation Model Deployment on Triton Inference Server

Hello everyone,

I have a Jetson Nano with JetPack 4.6, and I built Triton Inference Server from source using the latest release (Release 2.12.0). I was able to successfully serve classification models before.

Now I am trying to serve a segmentation model I trained using TensorFlow DeepLab.

There is only one model file, with a .pb extension. Below you can see the folder structure I created.

```
<model_repository>/
  segmentation/
    config.pbtxt
    1/
      model.savedmodel/
        SavedModel.pb
```

I am sharing the content of my config file below.

```
name: "segmentation"
platform: "tensorflow_savedmodel"
max_batch_size: 0
input [
  {
    name: "ImageTensor:0"
    data_type: TYPE_FP32
    # format: FORMAT_NCHW
    dims: [ 1300, 1000, 3 ]
  }
]
output [
  {
    name: "SemanticPredictions:0"
    data_type: TYPE_FP32
    dims: [ 1300, 1000, 3 ]
  }
]
```

When I try to serve the model, I am getting an error like this:

```
failed to load 'segmentation' version 1: Internal: Could not find SavedModel .pb or .pbtxt at supplied export directory path: ./model_repository/segmentation/1/model.savedmodel
```

I guess Triton needs files other than just the .pb file, but DeepLab only generates a trained model with a .pb extension.
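From the error text, it seems TensorFlow looks for a file literally named saved_model.pb (or saved_model.pbtxt) inside model.savedmodel/, so my SavedModel.pb would not be found. Since DeepLab's export produces a frozen inference graph rather than a complete SavedModel, I am thinking of wrapping it with something like the sketch below. This is untested; the frozen-graph file name and the tensor names are assumptions based on my setup and would need to match the actual export.

```python
# Sketch: wrap a DeepLab frozen graph into a SavedModel for Triton's
# tensorflow_savedmodel backend. Assumptions: the frozen-graph file name,
# the export path, and the tensor names all match the actual export.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # needed when running under TF 2.x

FROZEN_GRAPH = "frozen_inference_graph.pb"  # assumed name of the DeepLab export
EXPORT_DIR = "model_repository/segmentation/1/model.savedmodel"  # must not exist yet

graph = tf.Graph()
with graph.as_default():
    # Load the frozen GraphDef and import it into a fresh graph.
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(FROZEN_GRAPH, "rb") as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name="")

with tf.Session(graph=graph) as sess:
    # simple_save writes a file literally named saved_model.pb (the name the
    # loader looks for) plus a variables/ directory into EXPORT_DIR.
    tf.saved_model.simple_save(
        sess,
        EXPORT_DIR,
        inputs={"ImageTensor": graph.get_tensor_by_name("ImageTensor:0")},
        outputs={"SemanticPredictions": graph.get_tensor_by_name("SemanticPredictions:0")},
    )
```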

What should I do to fix this problem so I can serve the model properly?

My other question is about my segmentation model's config file.

I would like to get the segmented form of the input image as output. Is my config file correct for that?
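To double-check the names, data types, and shapes the exported model actually exposes, I suppose something like the sketch below would work, assuming TF 2.x-style loading; `saved_model_cli show --dir <model.savedmodel> --all`, which ships with TensorFlow, should show the same information from the command line. For DeepLab, the output is usually a per-pixel class-label map (an integer type, without a channel dimension), so my output dims and data_type may need to change accordingly.

```python
# Sketch: print the serving signature of the exported SavedModel so the
# names, dtypes, and shapes in config.pbtxt can be checked against it.
import tensorflow as tf

loaded = tf.saved_model.load("model_repository/segmentation/1/model.savedmodel")
fn = loaded.signatures["serving_default"]
print(fn.structured_input_signature)  # input tensor names/dtypes/shapes
print(fn.structured_outputs)          # output tensor names/dtypes/shapes
```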

And if it is correct, how should I edit the client inference script for that purpose? I couldn't find a similar project or resource, which is why I am not sure how to run inference against a served segmentation model.
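The closest I could put together is the sketch below, using the tritonclient Python package. It assumes Triton is reachable at localhost:8000, an input image named input.jpg, and the tensor names and dtypes from my config above; if the real output is an integer label map, the output handling would need to change.

```python
# Sketch: run inference against the served segmentation model over HTTP.
# Assumptions: server at localhost:8000, model name "segmentation", and the
# tensor names/dtypes/shapes from the config.pbtxt above.
import numpy as np
from PIL import Image
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# PIL's resize takes (width, height); the config expects [1300, 1000, 3].
image = Image.open("input.jpg").convert("RGB").resize((1000, 1300))
data = np.asarray(image, dtype=np.float32)

infer_input = httpclient.InferInput("ImageTensor:0", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)

response = client.infer(
    model_name="segmentation",
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("SemanticPredictions:0")],
)

# For a typical segmentation model this is a per-pixel class map; mapping
# class indices to colors gives the "segmented form" of the input image.
mask = response.as_numpy("SemanticPredictions:0")
print(mask.shape, mask.dtype)
```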

Thanks


If you would still like a response, please consider re-posting your question on Triton Inference Server · GitHub; the NVIDIA team and others will be able to help you there.
Sorry for the inconvenience, and thanks for your patience.