How to deploy TensorFlow models in DeepStream?

Hi, can I deploy TensorFlow models to DeepStream 2.0, or can only Caffe models be deployed?
I would like to know how they can be deployed and whether that requires a plugin (like the YOLO plugin). For which models are such plugins available?

Thank you!

A TensorFlow .pb model needs to be converted to UFF format with the TensorRT tools (refer to the TensorRT documentation).
TensorRT has a UFF parser to generate a TensorRT engine stream, as well as a Caffe parser and an ONNX parser.
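As a sketch of the conversion step, the `convert-to-uff` tool that ships with TensorRT's Python UFF package can be driven from a script. The file names and output node name below are illustrative, not from the original post:

```python
# Sketch: invoke TensorRT's convert-to-uff tool to turn a frozen TensorFlow
# .pb graph into a .uff file. Paths and the output node name are illustrative.
import subprocess

def build_uff_conversion_cmd(pb_path, output_node, uff_path):
    # -o names the output .uff file, -O names the graph's output node.
    return ["convert-to-uff", pb_path, "-o", uff_path, "-O", output_node]

cmd = build_uff_conversion_cmd("frozen_model.pb", "softmax/Softmax", "model.uff")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # run this where TensorRT's UFF tools are installed
```

The resulting .uff file is what TensorRT's UFF parser (and DeepStream 3.0) consumes.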

  1. DeepStream 2.0 only has the Caffe parser. DeepStream 3.0 has all parsers: UFF/Caffe/ONNX.
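For illustration, in DeepStream 3.0 the nvinfer config can point directly at a UFF model. This is a sketch only; the field names follow the gst-nvinfer config format, and the paths, dimensions, and blob names are placeholders to check against your DeepStream version's documentation:

```ini
[property]
# UFF model exported with convert-to-uff (path illustrative)
uff-file=model.uff
# channels;height;width;input-order (values illustrative)
uff-input-dims=3;224;224;0
uff-input-blob-name=input
output-blob-names=softmax/Softmax
```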

TensorRT has an IPlugin mechanism (IPluginFactory and IPluginCreator) for users to implement their own network layers.

  1. DeepStream 2.0 does not support TensorRT IPlugin. DeepStream 3.0 supports TensorRT IPlugin (IPluginCreator).
    You can also implement your own TensorRT GStreamer plugin.

  2. DeepStream 2.0 can accept a TensorRT engine stream directly via the "model-cache" config field. You can write TensorRT code offline to convert your model into a TensorRT engine stream. However, this method cannot be used if your network needs an IPlugin for TensorRT.
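A minimal sketch of that DeepStream 2.0 flow, assuming the engine was already serialized offline with TensorRT. Only the "model-cache" field name comes from the post; the path and surrounding layout are illustrative, so check them against your DeepStream 2.0 config reference:

```ini
# DeepStream 2.0 config sketch: point nvinfer at a pre-built TensorRT engine
# stream that was serialized offline (path illustrative).
model-cache=/path/to/model.engine
```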

DeepStream 3.0 will be released soon, possibly in less than a month.