Hi, can I deploy TensorFlow models to DeepStream 2, or can only Caffe models be deployed?
I would like to know how they can be deployed, whether that requires a plugin (like the YOLO plugin), and for which models such plugins are available.
Thank you!
A TensorFlow .pb model needs to be converted to the UFF format with the TensorRT conversion tool (refer to the TensorRT documentation).
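For example, a minimal conversion sketch using the uff Python package that ships with TensorRT; the file names and the output node name here are placeholders, so substitute the values for your own network:

import uff

# Convert a frozen TensorFlow graph (.pb) to a UFF file.
# "frozen_model.pb", "output", and "model.uff" are placeholder
# names for illustration only.
uff.from_tensorflow_frozen_model(
    frozen_file="frozen_model.pb",
    output_nodes=["output"],
    output_filename="model.uff",
)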
TensorRT provides a UFF parser (as well as a Caffe parser and an ONNX parser) to generate a TensorRT engine stream.
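As a rough sketch of that flow with the TensorRT Python API (exact calls depend on your TensorRT version; the input/output names and the input shape below are assumptions for illustration):

import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Parse the UFF file produced above and build a TensorRT engine.
# The "input"/"output" tensor names and the (3, 224, 224) shape
# are placeholders for your own model.
with trt.Builder(TRT_LOGGER) as builder, \
     builder.create_network() as network, \
     trt.UffParser() as parser:
    parser.register_input("input", (3, 224, 224))
    parser.register_output("output")
    parser.parse("model.uff", network)
    builder.max_batch_size = 1
    builder.max_workspace_size = 1 << 30  # 1 GB of build workspace
    engine = builder.build_cuda_engine(network)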
TensorRT has an IPlugin mechanism (IPluginFactory and IPluginCreator) that lets customers implement their own network layers.
DeepStream 2.0 does not support TensorRT IPlugin; DeepStream 3.0 can support TensorRT IPlugin (IPluginCreator).
You can also refer to https://github.com/vat-nvidia/deepstream-plugins to implement your own TensorRT GStreamer plugin.
DeepStream 2.0 can take a TensorRT engine stream directly via the “model-cache” field in the config file. You can write TensorRT code offline to transform your model into a TensorRT engine stream. However, this method cannot be used if your network requires a TensorRT IPlugin.
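Continuing the sketch above, the engine built offline can be serialized to a file, and the “model-cache” field pointed at that file (the file name is a placeholder):

# Serialize the engine built above into the engine stream file
# that the DeepStream "model-cache" config field can point to.
with open("model.engine", "wb") as f:
    f.write(engine.serialize())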
DeepStream 3.0 will be released soon, possibly in less than a month.