• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only): 4.5.1
• TensorRT Version: 7.2
• Issue Type: questions
We are trying to deploy a custom segmentation model developed in TensorFlow. It takes input images of shape (900, 672, 3) and produces an output of shape (900, 672, 4). The model was converted to ONNX and then to TensorRT successfully.
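For context, the conversion was done roughly along the following lines; the paths, opset, and precision flag below are illustrative rather than our exact commands:

```
# TensorFlow SavedModel -> ONNX (tf2onnx)
python -m tf2onnx.convert --saved-model ./saved_model --output model.onnx --opset 11

# ONNX -> TensorRT engine (trtexec on the Jetson)
/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine --fp16
```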
However, we are unable to integrate it into DeepStream due to the following problems:
- We tried replacing the model file in the deepstream-segmentation config file provided with the Python samples (our changes are sketched after this list), but the output shape we receive when running inference is (900,), which makes no sense in our context.
- Even once segmentation is working, we need to send the cropped-out ROI to a secondary image classifier, and we are unsure where the code for this should go.
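For reference, this is roughly how we modified the nvinfer configuration for the segmentation model. The engine path, output blob name, and dimension ordering are placeholders/assumptions on our side:

```
[property]
gpu-id=0
net-scale-factor=0.003921568627451
# TensorRT engine built from the ONNX model (path is a placeholder)
model-engine-file=custom_seg.engine
# assuming CHW ordering; must match the actual ONNX input (900x672x3)
infer-dims=3;900;672
batch-size=1
## 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1
## 2 = segmentation network
network-type=2
# name of the (900,672,4) output tensor in the ONNX model (placeholder)
output-blob-names=output_0
segmentation-threshold=0.0
```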
We would like to know whether a custom parser has to be written for this purpose, and if so, whether it can be written in Python.
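To make the second point concrete, this is a sketch of what we imagine doing in Python, assuming the pyds bindings expose the segmentation metadata the way the deepstream-segmentation Python sample does. The probe placement and the classifier hand-off are assumptions on our part, not working code:

```python
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import numpy as np
import pyds

def seg_src_pad_buffer_probe(pad, info, u_data):
    # Intended to be attached to the src pad of the segmentation nvinfer element.
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == pyds.NVDSINFER_SEGMENTATION_META:
                seg_meta = pyds.NvDsInferSegmentationMeta.cast(user_meta.user_meta_data)
                # Per-pixel class-index mask, shape (height, width)
                mask = np.array(pyds.get_segmentation_masks(seg_meta), copy=True, order='C')
                # TODO: derive ROIs from `mask` and hand them to the secondary
                # classifier -- this is the part we are unsure about.
            l_user = l_user.next
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

We would attach it with something like `seg.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, seg_src_pad_buffer_probe, 0)`, where `seg` is the segmentation nvinfer element. Whether this is the right place for the ROI cropping and secondary inference is exactly what we are unsure about.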