Using custom segmentation models with DeepStream

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.2
• Issue Type: questions

We are trying to deploy a custom segmentation model, developed in TensorFlow, that takes images of shape (900, 672, 3) and produces an output of shape (900, 672, 4). The model was converted to ONNX and then to TensorRT successfully.

However, we are unable to integrate it into DeepStream because of the following problems:

  1. We tried replacing the model file in the deepstream-segmentation config file provided with the Python samples, but the output shape we receive at inference time is (900,), which makes no sense in our context.

  2. Even if the segmentation works, we need to send the cropped-out ROI to a secondary image classifier, and we are unsure where this code should go (see the probe sketch after this list).
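For reference, here is where we currently think this per-frame code should live, following the deepstream-segmentation Python sample: a buffer probe on a pad downstream of nvinfer walks the frame metadata and reads the segmentation masks. This is only a sketch; the class ID and ROI logic are placeholders on our side:

import numpy as np
import pyds
from gi.repository import Gst

def seg_src_pad_buffer_probe(pad, info, u_data):
    # Runs once per buffer; all per-frame processing goes here.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    batch_meta = pyds.gst_buffer_get_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NVDSINFER_SEGMENTATION_META:
                seg_meta = pyds.NvDsInferSegmentationMeta.cast(user_meta.user_meta_data)
                # (height, width) map of per-pixel class IDs produced by the
                # built-in segmentation parser.
                masks = np.array(pyds.get_segmentation_masks(seg_meta), copy=True, order='C')
                # ROI extraction for the secondary classifier would go here;
                # class ID 1 is a placeholder for our class of interest.
                ys, xs = np.where(masks == 1)
                if ys.size:
                    roi = (xs.min(), ys.min(), xs.max(), ys.max())
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK

The probe would be attached with pad.add_probe(Gst.PadProbeType.BUFFER, seg_src_pad_buffer_probe, 0) on a pad downstream of the nvinfer element, as in the sample.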

We would like to know whether a custom parser has to be written for this purpose and, if so, whether it can be written in Python.
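If it helps, the direction we were considering for a Python-side parser is the one used by the deepstream-ssd-parser Python sample: set network-type=100 and output-tensor-meta=1 in the nvinfer config so the raw output tensor is attached to the frame metadata, then decode it in a probe. A minimal sketch, assuming our (900, 672, 4) output is the only output layer (the shape and argmax decoding are assumptions for our model):

import ctypes
import numpy as np
import pyds

def parse_raw_seg_output(frame_meta):
    # Requires output-tensor-meta=1 in the nvinfer config so the raw
    # tensor is attached as NVDSINFER_TENSOR_OUTPUT_META user metadata.
    l_user = frame_meta.frame_user_meta_list
    while l_user is not None:
        user_meta = pyds.NvDsUserMeta.cast(l_user.data)
        if user_meta.base_meta.meta_type == pyds.NVDSINFER_TENSOR_OUTPUT_META:
            tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
            for i in range(tensor_meta.num_output_layers):
                layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                ptr = ctypes.cast(pyds.get_ptr(layer.buffer),
                                  ctypes.POINTER(ctypes.c_float))
                # Our model's output shape; adjust to the engine's layout.
                probs = np.ctypeslib.as_array(ptr, shape=(900, 672, 4))
                return probs.argmax(axis=-1)  # per-pixel class map
        l_user = l_user.next
    return None

Is this a reasonable approach for a segmentation model, or is a C/C++ custom parser required?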

Hi @krishnarajr319 ,

Can you share the TensorFlow model and the dstest_segmentation_config file with us?

Hi @krishnarajr319 ,

Regarding your question about the input/output shape, convert the model and build the engine as follows:

$python -m tf2onnx.convert --saved_model saved_model --output model.onnx

$trtexec --onnx=model.onnx --explicitBatch --minShapes='input_1:0':1x224x224x3 --optShapes='input_1:0':1x224x224x3 --maxShapes='input_1:0':1x224x224x3 --fp16 --inputIOFormats=fp16:chw --outputIOFormats=fp32:chw --saveEngine=model_fp16.engine

Modify the flags according to your input tensor names and shapes (see the example below).
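For example, assuming your ONNX input tensor is named 'input_1:0' (check the actual name with a viewer such as Netron) and the model takes a single 900x672x3 NHWC image, the command would become:

$trtexec --onnx=model.onnx --explicitBatch --minShapes='input_1:0':1x900x672x3 --optShapes='input_1:0':1x900x672x3 --maxShapes='input_1:0':1x900x672x3 --fp16 --inputIOFormats=fp16:chw --outputIOFormats=fp32:chw --saveEngine=model_fp16.engine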
