How to run a custom TensorFlow model in a DeepStream pipeline?

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 5.0 GA
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1

I trained a face recognition model with TensorFlow and converted it to TensorFlow-TensorRT (TF-TRT). I want to add this model to DeepStream.
1- Is it possible to load the converted model with the Triton Server plugin?
2- If so, the last output of the model is a 128-d tensor. Is it possible to pass this output tensor to the next element of the GStreamer pipeline?
3- Is there another way to use a custom TensorFlow model in a DeepStream pipeline?

Hi

1. Yes. Please check the sample below:

2. Yes, that should be possible.

3. You can also convert the model into ONNX or UFF format, which DeepStream supports directly.
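As a sketch of option 3, a TensorFlow SavedModel can be converted to ONNX with the tf2onnx tool. The paths and opset below are illustrative assumptions, not values from this thread:

```shell
# Convert a TensorFlow SavedModel to ONNX (paths are placeholders)
python -m tf2onnx.convert \
    --saved-model ./facenet_saved_model \
    --opset 11 \
    --output facenet.onnx
```

The resulting .onnx file can then be referenced from an nvinfer configuration via the onnx-file property; DeepStream builds a TensorRT engine from it on first run.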

Thanks.

Thanks, @AastaLLL.
OK, I can convert the model into ONNX/UFF format, but how can I use this model in DeepStream? Using the nvinfer plugin? If so, is it possible to use nvinfer for face feature extraction? The only difference between feature extraction and classification is that a classification model has an extra softmax layer at the end.

For classification models, nvinfer also applies softmax to get class scores, but I don’t want that applied to my model’s output; I want just the raw output tensor to use in the next element.

Does a classification model have only one fixed input size? If I train a TensorFlow model and then convert it to ONNX/UFF format, is it possible to use a classification model with varying input sizes in nvinfer?

The classification model uses a PPM file for mean subtraction. I don’t want to use mean subtraction, and I see in the nvinfer documentation that preprocessing is
y = net_scale * (x - mean). How can I skip mean subtraction?

Hi,

Do you want to use Triton Server or native TensorRT?

For Triton Server, you can pass the TensorFlow model to it directly.
For TensorRT, both ONNX and UFF formats are supported.
If the model is trained with TF 2.x, please use ONNX as the intermediate format for better support.
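For the Triton route, a model repository entry needs a config.pbtxt describing the model. A minimal sketch for a TensorFlow SavedModel is below; the model name, tensor names, and dims are hypothetical placeholders that must match your actual graph:

```
# config.pbtxt (illustrative; names and dims are placeholders)
name: "facenet_tf"
platform: "tensorflow_savedmodel"
max_batch_size: 1
input [
  {
    name: "input_image"
    data_type: TYPE_FP32
    dims: [ 160, 160, 3 ]
  }
]
output [
  {
    name: "embeddings"
    data_type: TYPE_FP32
    dims: [ 128 ]
  }
]
```

The nvinferserver plugin then points at this model via its own config file, and the 128-d output tensor can be attached as tensor metadata for downstream elements.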

And you can specify a layer as the TensorRT output to get the face feature.
For an ONNX model, you can find the corresponding layer name on this website:
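To get the raw embedding tensor from nvinfer without any classification post-processing, the tensor metadata output can be enabled in the config. A hedged sketch, assuming an ONNX model and a placeholder output layer name:

```
[property]
onnx-file=facenet.onnx
batch-size=1
network-mode=0
# expose the raw output tensor as metadata instead of parsing it
output-tensor-meta=1
# placeholder: must match the actual output layer name in your model
output-blob-names=embeddings
# run as secondary inference on detected faces
process-mode=2
gie-unique-id=2
```

With output-tensor-meta=1, the 128-d tensor is attached to the frame/object metadata, and a downstream probe or element can read it from NvDsInferTensorMeta rather than relying on nvinfer's built-in classifier parsing.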

For dynamic input shapes, please check this document for more information:
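For illustration, if the ONNX model was exported with a dynamic batch dimension, nvinfer in DeepStream 5.0 can fix the remaining input dimensions when it builds the engine. A sketch, with placeholder values:

```
[property]
onnx-file=facenet_dynamic.onnx
# for a dynamic ONNX model, fix the non-batch input dims (C;H;W)
infer-dims=3;160;160
# batch size used when building the TensorRT engine
batch-size=4
```

The spatial dimensions must still match what the network was trained for; dynamic shapes mainly help with the batch dimension here.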

To skip mean subtraction, you can simply set the mean value to zero.
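In the nvinfer config, this corresponds to leaving out the mean image file and setting the per-channel offsets to zero, so y = net_scale * (x - 0). A minimal sketch:

```
[property]
# no mean image file; per-channel mean offsets set to zero
offsets=0;0;0
# net-scale-factor=1.0 leaves pixel values unscaled as well
net-scale-factor=1.0
```

With offsets=0;0;0 the preprocessing formula reduces to y = net_scale * x, and with net-scale-factor=1.0 the input passes through unchanged.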

Thanks.