Hello everyone,
I have a Jetson Nano with JetPack 4.6, and I set up Triton Inference Server from the latest release build (Release 2.12.0). I was previously able to serve classification models successfully.
Now I am trying to serve a segmentation model I trained using TensorFlow DeepLab.
There is only one model file, with a .pb extension. Below is the folder structure I created:
<model_repository>/
  segmentation/
    config.pbtxt
    1/
      model.savedmodel/
        SavedModel.pb
I am sharing the contents of my config file below:
name: "segmentation"
platform: "tensorflow_savedmodel"
max_batch_size: 0
input [
  {
    name: "ImageTensor:0"
    data_type: TYPE_FP32
    # format: FORMAT_NCHW
    dims: [ 1300, 1000, 3 ]
  }
]
output [
  {
    name: "SemanticPredictions:0"
    data_type: TYPE_FP32
    dims: [ 1300, 1000, 3 ]
  }
]
When I try to serve the model, I get the following error:
failed to load 'segmentation' version 1: Internal: Could not find SavedModel .pb or .pbtxt at supplied export directory path: ./model_repository/segmentation/1/model.savedmodel
I guess Triton needs something other than this .pb file, but DeepLab only generates a single .pb file for the trained model.
What should I do to fix this problem and serve the model properly?
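My current guess is that the frozen graph has to be wrapped in a proper SavedModel directory, so that a saved_model.pb file exists where Triton looks for it. This is the conversion sketch I have in mind; the file name frozen_inference_graph.pb and the tensor names are my assumptions based on DeepLab's export script, so please correct me if this is the wrong approach:

# Hypothetical sketch: wrap DeepLab's frozen graph in a SavedModel so Triton
# can load it. The file and tensor names below are my assumptions.
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

# Read the frozen GraphDef produced by DeepLab's export script.
graph_def = tf.GraphDef()
with tf.gfile.GFile("frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
    inp = graph.get_tensor_by_name("ImageTensor:0")
    out = graph.get_tensor_by_name("SemanticPredictions:0")
    with tf.Session(graph=graph) as sess:
        # Writes saved_model.pb under the Triton version directory.
        # Note: the export directory must not already exist.
        tf.saved_model.simple_save(
            sess,
            "model_repository/segmentation/1/model.savedmodel",
            inputs={"ImageTensor": inp},
            outputs={"SemanticPredictions": out},
        )

Is a conversion like this what the tensorflow_savedmodel platform expects?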
My other question is about the segmentation model's config file.
I would like to get the segmented form of the input image as output. Is my config file correct for that?
And if it is correct, how should I edit the client inference script for that purpose? I couldn't find a similar project or resource, which is why I am not sure how to run inference against a served segmentation model.
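For reference, this is roughly the client I have so far, adapted from my classification script. It is only a sketch assuming my config above is correct; in particular the tensor names, the FP32 data type, and the fixed 1300x1000 input shape are taken from my config and may themselves be wrong:

import numpy as np
import tritonclient.http as httpclient
from PIL import Image

client = httpclient.InferenceServerClient(url="localhost:8000")

# Resize to the shape declared in config.pbtxt: 1300 rows x 1000 cols x 3.
# PIL's resize takes (width, height).
image = Image.open("input.jpg").resize((1000, 1300))
data = np.asarray(image, dtype=np.float32)  # shape (1300, 1000, 3)

# Build the request using the tensor names from my config.
infer_input = httpclient.InferInput("ImageTensor:0", list(data.shape), "FP32")
infer_input.set_data_from_numpy(data)
requested = httpclient.InferRequestedOutput("SemanticPredictions:0")

result = client.infer("segmentation", [infer_input], outputs=[requested])
mask = result.as_numpy("SemanticPredictions:0")
print(mask.shape, mask.dtype)

Is this the right way to request the segmentation output, and do I then just map the returned array back onto the input image?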
Thanks