How to replace unsupported layers with plugins in DeepStream?

I was trying to open a few UFF files with IDeviceWorker::addInferenceTask_uff and got “UFFParser: Validator error: Unsupported operation”.

I asked about this in the “DeepStream for Tesla” forum and the answer was “try to refine your model”.
But I need a solution that lets me use a UFF or Caffe prototxt model without modifying the model file.

According to this topic https://devtalk.nvidia.com/default/topic/1028471/deepstream-for-tesla/does-deepstream-support-plugin-layer-/#
DeepStream seems to allow creating plugins to process unsupported layers (if I understood correctly).

I have read the DeepStream User Guide (especially 3.2 PLUG-IN MECHANISM) and reviewed the samples in DeepStream, but I can’t find a way to do this yet.
As you know, TensorRT provides plugin samples (such as samplePlugin and sampleFasterRCNN) that handle unsupported layers by implementing classes derived from IPlugin and IPluginFactory.
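For reference, the pattern in those samples looks roughly like the sketch below (TensorRT 3.x style; the class name MyCustomLayerPlugin and the layer name “MyCustomLayer” are placeholders of mine, not taken from any sample):

#include "NvInfer.h"
#include "NvCaffeParser.h"
#include <cuda_runtime.h>
#include <cstring>

// Hypothetical plugin implementing one unsupported layer.
class MyCustomLayerPlugin : public nvinfer1::IPlugin
{
public:
    int getNbOutputs() const override { return 1; }

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs, int nbInputDims) override
    {
        return inputs[0]; // assume the layer keeps the input shape
    }

    void configure(const nvinfer1::Dims* inputDims, int nbInputs,
                   const nvinfer1::Dims* outputDims, int nbOutputs, int maxBatchSize) override {}

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int maxBatchSize) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Launch the custom CUDA kernel (or cuDNN/cuBLAS call) for this layer here.
        return 0;
    }

    size_t getSerializationSize() override { return 0; }
    void serialize(void* buffer) override {}
};

// Factory that tells the Caffe parser which layers should be built as plugins.
class PluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* layerName) override
    {
        return strcmp(layerName, "MyCustomLayer") == 0;
    }

    nvinfer1::IPlugin* createPlugin(const char* layerName,
                                    const nvinfer1::Weights* weights, int nbWeights) override
    {
        return new MyCustomLayerPlugin();
    }
};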

Is there a way to handle unsupported layers by implementing plugins in DeepStream, similar to the TensorRT samples?

Hi,

Sorry if our previous reply could not meet your requirement.

The TensorRT API does have a plugin mechanism for non-supported layers, but DeepStream doesn’t wrap it yet.

It looks like you already have a plugin implementation. (Please correct me if that is not the case.)
A possible workaround is to use DeepStream for decoding/converting only and use the TensorRT API for inferencing.

Workflow should be like this:
Video → DeepStream decode → Raw → DeepStream converter → RGB → TensorRT inference → Label

The key point is to link the output of the DeepStream converter to the input of TensorRT.

Thanks.

Thank you for the prompt response.

So you mean there is no official way to process a non-supported layer with a plugin class implementation in the DeepStream framework, even though the TensorRT framework has a plugin mechanism.

I don’t have any plugin implementation yet; I am just looking for a way to apply a UFF or Caffe model file that contains non-supported layers to my application based on the DeepStream SDK, without modifying the model file.
I have converted several TensorFlow models (SSD frozen models from GitHub, the TensorFlow Official ResNet with TensorRT, MobileNet, and models from the TensorFlow model zoo) into UFF and opened them with IDeviceWorker::addInferenceTask_uff; the non-supported layer parser error occurred in every case.

I have reviewed the nvDecInfer_detection sample code in the DeepStream SDK.
IDeviceWorker::addInferenceTask and IDeviceWorker::addCustomerTask (for the parser module) are called in main.cpp.

To avoid the non-supported layer parser error, suppose I do not call IDeviceWorker::addInferenceTask and call only IDeviceWorker::addCustomerTask.
In this case, my problems are:

  1. The 5th and 6th arguments of IDeviceWorker::addInferenceTask are the input layer name and the output layer names; without that call, how can I link the output of the DeepStream converter to the input of TensorRT?
  2. I would have to implement the whole network with the TensorRT API, because I can't use the DeepStream inference mechanism that parses the model file.

If I have misunderstood anything above, would you please explain in detail how to link the output of the DeepStream converter to the input of TensorRT, or show sample code based on DeepStream?

Hi,

We don’t have a DeepStream element with TensorRT plugin support yet.

Currently, it is required to use the TensorRT API directly for the plugin implementation.
It is not achieved by calling addCustomerTask but with the low-level TensorRT API:

parser->setPluginFactory(&pluginFactory);

You don’t need to implement the whole model, only the non-supported layers.
Here is an example of processing a plugin with TensorRT, for your reference:

Just try to use the same data pointer for DeepStream and TensorRT so they share the buffer.
DeepStream: getCpuData() of IStreamTensor
TensorRT: https://github.com/AastaNV/Face-Recognition/blob/master/tensorNet.cpp#L92
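Putting those two pieces together, a rough sketch of the TensorRT side could look like the following. Please note this is only an outline based on the standard TensorRT 3.x API; the file names, blob names, buffer sizes and binding order are placeholders, and the input pointer is assumed to come from the DeepStream converter (e.g. via getCpuData() of IStreamTensor):

#include "NvInfer.h"
#include "NvCaffeParser.h"
#include <cuda_runtime.h>

// Build an engine from a Caffe model; non-supported layers are routed to your plugins.
nvinfer1::ICudaEngine* buildEngine(nvinfer1::ILogger& logger,
                                   nvcaffeparser1::IPluginFactory& pluginFactory)
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(logger);
    nvinfer1::INetworkDefinition* network = builder->createNetwork();

    nvcaffeparser1::ICaffeParser* parser = nvcaffeparser1::createCaffeParser();
    parser->setPluginFactory(&pluginFactory);   // the call mentioned above

    const nvcaffeparser1::IBlobNameToTensor* blobs =
        parser->parse("deploy.prototxt", "model.caffemodel", *network, nvinfer1::DataType::kFLOAT);
    network->markOutput(*blobs->find("prob")); // "prob" is a placeholder output blob name

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20);
    nvinfer1::ICudaEngine* engine = builder->buildCudaEngine(*network);

    parser->destroy();
    network->destroy();
    builder->destroy();
    return engine;
}

// Run inference on one frame whose pixels come from the DeepStream converter.
void infer(nvinfer1::IExecutionContext* context, const float* hostInput,
           size_t inputBytes, float* hostOutput, size_t outputBytes)
{
    void* bindings[2];   // assumes binding 0 = input, 1 = output; check with getBindingIndex()
    cudaMalloc(&bindings[0], inputBytes);
    cudaMalloc(&bindings[1], outputBytes);

    cudaMemcpy(bindings[0], hostInput, inputBytes, cudaMemcpyHostToDevice);
    context->execute(1, bindings);
    cudaMemcpy(hostOutput, bindings[1], outputBytes, cudaMemcpyDeviceToHost);

    cudaFree(bindings[0]);
    cudaFree(bindings[1]);
}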

Thanks.

Hello, and thank you for your response again.

Here is my understanding of your answers above; please correct me if I am wrong.

  1. DeepStream doesn’t support the TensorRT plugin mechanism for non-supported layers, and the DeepStream plugin description in the User Guide (3.2 PLUG-IN MECHANISM) is not intended for implementing plugins for non-supported layers.

  2. So it is not possible to integrate the DeepStream inference code that uses IDeviceWorker::addInferenceTask (or IDeviceWorker::addInferenceTask_uff) with a TensorRT plugin implementation for non-supported layers via ICaffeParser::setPluginFactory.

  3. But it is possible to use DeepStream for decoding/converting only and pass the RGB output of the DeepStream converter to the TensorRT inference framework together with plugin implementations for the non-supported layers.
    In this case,

  • Hard-coding the whole network isn't necessary, because ICaffeParser::parse loads and processes the Caffe deploy and model files, and all that remains is to implement classes for the non-supported layers.
  • This approach is not available for UFF models because IUffParser has no setPluginFactory function - https://devtalk.nvidia.com/default/topic/1027385/deepstream-for-tesla/why-iuffparser-has-no-setpluginfactory-function/ - so the only way to apply UFF models containing TensorRT non-supported layers to an app based on the DeepStream SDK is to modify the models and remove all non-supported layers from the UFF files.

If I am correct,
(1) Please let me know in detail how to link the RGB output of the DeepStream converter to the input of TensorRT in the nvDecInfer_detection sample code of the DeepStream SDK, assuming the inference engine is created by calling ICaffeParser::parse instead of IDeviceWorker::addInferenceTask.

(2) Does inference by calling ICaffeParser::parse have a higher cost than by calling IDeviceWorker::addInferenceTask if the RGB data from the DeepStream converter goes through a conversion from GPU to CPU?

Hi,

Sorry for my unclear explanation.

Actually, you can write a TensorRT plugin with addInferenceTask(), but it requires you to wrap the whole TensorRT engine.
So, instead of implementing a plugin-enabled TensorRT engine inside DeepStream, we recommend calling the TensorRT API directly for simplicity.

(1)
You can get the data pointer from DeepStream with getGpuData() and pass it into TensorRT.
We don’t have a dedicated sample demonstrating this.
It is worth giving it a try.

(2)
Sorry for my typo above.
getGpuData() will return the GPU buffer to you directly, so the data does not need to go through a GPU-to-CPU conversion.
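To illustrate the idea, here is a rough sketch of such a customer task. It assumes the IModule/IStreamTensor interfaces of the current DeepStream 1.5 SDK as they appear in nvDecInfer_detection; the TensorRT execution context, the binding layout and the class name are placeholders, and the other methods required by IModule are omitted, so please treat it as a starting point rather than tested code:

#include "NvInfer.h"
#include <vector>
// Plus the DeepStream SDK header that declares IModule, IStreamTensor and ModuleContext.

class TrtInferenceModule : public IModule
{
public:
    TrtInferenceModule(nvinfer1::IExecutionContext* context, void* deviceOutput)
        : mContext(context), mDeviceOutput(deviceOutput) {}

    void execute(const ModuleContext& context,
                 const std::vector<IStreamTensor*>& vpInputTensors,
                 const std::vector<IStreamTensor*>& vpOutputTensors)
    {
        // Take the GPU pointer produced by the DeepStream converter directly,
        // so the frame never has to go back to the CPU.
        const void* deviceInput = vpInputTensors[0]->getConstGpuData();

        // Bind the DeepStream buffer as the TensorRT input and run inference.
        // Assumes binding 0 = input and binding 1 = output for this engine.
        void* bindings[2] = { const_cast<void*>(deviceInput), mDeviceOutput };
        mContext->execute(1, bindings);

        // ... write the results into vpOutputTensors for the next module ...
    }

    // (The remaining IModule methods are omitted for brevity; see the module
    //  classes in the nvDecInfer_detection sample for the full interface.)

private:
    nvinfer1::IExecutionContext* mContext;
    void* mDeviceOutput;
};

// The module would then be attached with addCustomerTask(), in the same way
// the parser module is added in nvDecInfer_detection.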

Thanks.

Hello, and thank you for your reply.

I am confused by your somewhat contradictory advice above, and I am sorry for the many questions below.

  1. Your comment: “you can write TensorRT plugin with addInferenceTask() but it requires you to wrap the whole TensorRT engine.” → Calling addInferenceTask() (or addInferenceTask_uff()) may cause a parser validator error if the model file contains non-supported layers.
    1. How can I link a TensorRT plugin and addInferenceTask() (or addInferenceTask_uff())?
    2. Your comment: “So, instead of implementing a plugin-enabled TensorRT, we recommended calling TensorRT API directly” → What does this mean? Should I implement the whole network with the TensorRT API for simplicity, instead of implementing a TensorRT plugin for the non-supported layer? (I would guess that implementing TensorRT plugins for non-supported layers is much simpler than implementing the whole network with the TensorRT API.)
  2. I agree that linking the RGB output of the DeepStream converter to the input of TensorRT is worth a try.
    1. Which way should I pick: use DeepStream for decoding/converting only and the TensorRT API for inferencing, or use DeepStream for both decoding/converting and inferencing with addInferenceTask() (or addInferenceTask_uff())?
    2. Can I get the data pointer by calling vpInputTensors[0]->getConstGpuData() inside the overridden execute(const ModuleContext& context, const std::vector<IStreamTensor*>& vpInputTensors, const std::vector<IStreamTensor*>& vpOutputTensors) method of a class derived from IModule, after registering that class with addCustomerTask()?

The DeepStream SDK contains a good nvDecInfer_detection sample; would you please explain this using that sample?

Hi,

We will feed back your requirement to the internal team and may write a sample to demonstrate how to link DeepStream and TensorRT.

Our concern is that there is a NEW DeepStream package coming which goes through a GStreamer pipeline.
This new SDK is totally different from the current API you are using.
Do you mind waiting for our next release?

Thanks.

Hello.

I am developing a multi-channel video analytics app based on the DeepStream SDK for Tesla.
I have already confirmed that RTSP video stream decoding and inference from multiple IP surveillance cameras work well in my app, which is based on the DeepStream SDK 1.5 package’s nvDecInfer_detection sample.

I can’t wait to see the sample code demonstrating how to link the DeepStream and TensorRT mechanisms for non-supported layers, if an internal NVIDIA team can provide it.
I think this is the most flexible way to apply many open-source NN models, if code based on DeepStream can be integrated with the TensorRT samples for non-supported layers, and it would be a very helpful sample for developers who build multi-channel video analytics apps using the DeepStream SDK.

I am sure the new feature of the new DeepStream SDK package, the GStreamer pipeline support, will be great, but it isn't high on my list of priorities.

Thank you.

Hi,

We will discuss this internally and update you with more information later.

Hi,

After internal discussion, we recommend that you wait for our NEW package.
(The current legacy SDK won’t be updated anymore.)

The new package will be released within a week.
There is also an open-source DeepStream plugin for implementing a TensorRT engine, to be released soon.

In the NEW package, here are two suggestions for your use case:
1. DeepStream → TensorRT
2. DeepStream with customized DeepStream plugin for TensorRT

For now, you can start by developing your non-supported layer as a TensorRT plugin.
Once our new package is released, you can integrate it into DeepStream directly.

Thanks.

Hello.

I hope that the NEW DeepStream package provides interfaces that make it possible to implement operations TensorRT doesn’t support, and sample code to demonstrate this feature.
The upcoming NEW DeepStream SDK is great news for developers who build video analytics apps.

I really appreciate you taking the time to answer my questions.