Use TensorFlow SSD Model in DeepStream & Custom Layer Issues (Solved)

I have a Jetson TX2 with JetPack 3.2 and DeepStream 2.0.

I would like to use a TensorFlow SSD model in DeepStream.

However, from the documentation I found that only Caffe models can be used in DeepStream. The model file is defined in the config as:
"
model-file | The path to the caffemodel file. Not required if the model cache file is identified. | A string identifying the model file path. | model-file=/home/ubuntu/gie.caffemodel
"

Can you give me an idea of how to run inference with a TF SSD model in DeepStream? Will DeepStream support more models in the future?

Hi Guodebby,

  1. Our TensorRT/nvinfer plugin doesn't support SSD directly. You need to use appsink to get the YUV data and then use TRT + IPlugin to run inference (a rough sketch follows point 2 below).

  2. We will provide an open-source dsexample plugin, which lets you implement IPlugin to support SSD.
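To illustrate approach 1), here is a rough, untested sketch (not an official sample): it pulls decoded frames out of a GStreamer appsink so they can be fed to your own TensorRT code. The file path, pipeline, and the doInference() name are hypothetical; on JetPack 3.2, omxh264dec is the hardware H.264 decoder.

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    // Decode with the Jetson HW decoder (omxh264dec on JetPack 3.2) and hand
    // CPU-accessible I420 frames to appsink. The file path is hypothetical.
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=/path/to/video.mp4 ! qtdemux ! h264parse ! "
        "omxh264dec ! nvvidconv ! video/x-raw,format=I420 ! "
        "appsink name=sink max-buffers=4 drop=true", &err);
    if (!pipeline) { g_printerr("pipeline error: %s\n", err->message); return 1; }

    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "sink");
    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    while (TRUE) {
        GstSample *sample = gst_app_sink_pull_sample(GST_APP_SINK(sink));
        if (!sample) break;                       // NULL means end of stream
        GstBuffer *buf = gst_sample_get_buffer(sample);
        GstMapInfo map;
        if (gst_buffer_map(buf, &map, GST_MAP_READ)) {
            // map.data now holds one I420 (YUV) frame. Preprocess it to the
            // network input size and run your TensorRT execution context here;
            // doInference(map.data, map.size) is only a placeholder name.
            gst_buffer_unmap(buf, &map);
        }
        gst_sample_unref(sample);
    }

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}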

Thanks
wayne zhu

Is DeepStream 2.0 already available for the Jetson platform? When I go to downloads, all I can see are:

DeepStream_SDK_on_Jetson_1.0_pre-release.tbz2
DeepStream_SDK_on_Jetson_1.5_pre-release.tbz2

Where can version 2.0 for Jetson be downloaded?

-albertr

I used 1.5 for Jetson

Hi Wayne,

Can you also give me an idea of whether a custom layer in a TF model can be inferred through TRT and IPlugin?

Debby

Hi guodebby,

DeepStream 2.0 for Jetson is still planned and will be released in the coming weeks.
dsexample with IPlugin support will be included in the next release.

Currently you can only go with method 1): use appsink to get the frames, then use TRT + IPlugin.

Thanks
wayne zhu

Hi Wayne, thank you for the reply. After looking over more documentation, I have a few more questions.

Since DeepStream for Jetson is at version 1.5, I looked over DeepStream 2.0 for Tesla, but I still cannot find an example of implementing TRT and IPlugin. I guess there's no TRT example right now. For now, I use GStreamer and OpenCV to run inference on my TF SSD model, but it's really slow.

(1)
Will encoding/decoding perform faster using DeepStream instead of a plain GStreamer pipeline? (I guess they are the same, because DeepStream wraps GStreamer?)

(2)
Will DS 2.0 for Jetson have an example of a TRT engine implementation?

(3)
For now, can you provide any information/links that would give me an idea of how to create an IPlugin for the unsupported layers of my TF model? (I want to switch to TRT to run inference on my model.)

(4)
My basic idea is:
a) GStreamer → get frame, or DeepStream → get frame
b) My frozen TF model → UFF model
c) Create IPlugin for unsupported layers
d) b + c → TRT engine
e) Run inference with the TRT engine (maybe in DeepStream)
Is this correct?

Thank you very much!

(1)
Will encoding/decoding perform faster using DeepStream instead of a plain GStreamer pipeline? (I guess they are the same, because DeepStream wraps GStreamer?)

If your GStreamer pipeline uses HW decode/encode, the performance will be the same.
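As an illustration of that point (my own rough harness, not an official benchmark), the sketch below runs a decode-only pipeline to EOS and reports the wall-clock time; swapping omxh264dec (the Jetson HW decoder on JetPack 3.2) for avdec_h264 (the software decoder) shows that the cost comes from the decoder element, not from whether DeepStream or a hand-written pipeline drives it. The file path is hypothetical.

#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    // Decode-only pipeline; replace omxh264dec with avdec_h264 to time the SW path.
    GstElement *pipe = gst_parse_launch(
        "filesrc location=/path/to/video.mp4 ! qtdemux ! h264parse ! "
        "omxh264dec ! fakesink sync=false", NULL);

    GTimer *timer = g_timer_new();
    gst_element_set_state(pipe, GST_STATE_PLAYING);

    // Block until the stream finishes (or errors out), then report elapsed time.
    GstBus *bus = gst_element_get_bus(pipe);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        static_cast<GstMessageType>(GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
    g_print("decode finished in %.2f s\n", g_timer_elapsed(timer, NULL));

    if (msg) gst_message_unref(msg);
    gst_element_set_state(pipe, GST_STATE_NULL);
    gst_object_unref(bus);
    gst_object_unref(pipe);
    g_timer_destroy(timer);
    return 0;
}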
(2)
Will DS 2.0 for Jetson have an example of a TRT engine implementation?
Yes, but currently you can use appsink + TRT engine directly.

(3)
For now, can you provide any information/links that would give me an idea of how to create an IPlugin for the unsupported layers of my TF model? (I want to switch to TRT to run inference on my model.)
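For reference only (this is not an official sample), a custom layer in the TensorRT versions shipped around JetPack 3.2 is written by deriving from the legacy nvinfer1::IPlugin interface; the skeleton below shows the methods that have to be filled in for a hypothetical unsupported op. Exact signatures can differ slightly between TensorRT releases.

#include <cuda_runtime.h>
#include "NvInfer.h"

// Skeleton of a custom layer using the legacy IPlugin interface (TensorRT 3.x/4.x era).
// All bodies here are placeholders; the real work happens in enqueue().
class MyUnsupportedOpPlugin : public nvinfer1::IPlugin
{
public:
    // Number of output tensors this layer produces.
    int getNbOutputs() const override { return 1; }

    // Output shape derived from the input shape (identity here as a placeholder).
    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs,
                                       int nbInputDims) override
    {
        return inputs[0];
    }

    // Called once the surrounding network shapes are known.
    void configure(const nvinfer1::Dims* inputDims, int nbInputs,
                   const nvinfer1::Dims* outputDims, int nbOutputs,
                   int maxBatchSize) override {}

    int initialize() override { return 0; }   // allocate per-layer resources
    void terminate() override {}              // release them

    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    // Runs the layer: launch your CUDA kernel(s) on the given stream.
    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // launchMyKernel(...) would go here (hypothetical).
        return 0;
    }

    // Serialization hooks so the layer can be stored inside an engine file.
    size_t getSerializationSize() override { return 0; }
    void serialize(void* buffer) override {}
};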

(4)
My basic idea is:
a) GStreamer → get frame, or DeepStream → get frame
b) My frozen TF model → UFF model
c) Create IPlugin for unsupported layers
d) b + c → TRT engine
e) Run inference with the TRT engine (maybe in DeepStream)
Is this correct?

Yes, currently you can go in this direction.
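To make steps b) to d) concrete, here is a rough outline of my own (not an official sample), based on the UFF parser API from the TensorRT 4.x era: it parses a UFF file and builds an engine. The input/output tensor names, dimensions, and file name are hypothetical and must match your converted SSD graph, and unsupported layers have to be supplied through the parser's plugin-factory hook (step c).

#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

// Minimal logger required by the TensorRT builder.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO) std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    // b) The UFF file is produced beforehand from the frozen TF graph
    //    (e.g. with the convert-to-uff tool that ships with TensorRT).
    const char* uffFile = "ssd.uff";                 // hypothetical file name

    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    nvuffparser::IUffParser* parser = nvuffparser::createUffParser();
    // Tensor names and dimensions are hypothetical; use the ones in your graph.
    parser->registerInput("Input", nvinfer1::DimsCHW(3, 300, 300),
                          nvuffparser::UffInputOrder::kNCHW);
    parser->registerOutput("MarkOutput_0");
    // c) Unsupported layers are supplied via the parser's plugin-factory hook,
    //    which returns instances of your IPlugin implementation, e.g.:
    // parser->setPluginFactory(&myPluginFactory);

    if (!parser->parse(uffFile, *network, nvinfer1::DataType::kFLOAT))
    {
        std::cerr << "failed to parse " << uffFile << std::endl;
        return 1;
    }

    // d) Build the engine; e) its execution context is then used for inference
    //    on the frames pulled from appsink.
    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 26);           // 64 MB scratch space
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine) { std::cerr << "engine build failed" << std::endl; return 1; }

    // ... run inference with engine->createExecutionContext(), then clean up.
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}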


Hi, is there any walkthrough on using TensorFlow MobileNet-trained models within the DeepStream infrastructure?

Reference:
https://devtalk.nvidia.com/default/topic/1055954/deepstream-for-tesla/deepstream-bounding-box-parser-function-for-samplessd-in-tensorrt-example/