Working with TensorRT engines in DeepStream

When implementing dynamic reshape in TensorRT, there are two engines: a PreprocessorEngine and a PredictionEngine.

How can I implement these two engines in DeepStream for a dynamic-shape application?

DeepStream only supports dynamic shape for ONNX models.
To use a dynamic-shape ONNX model on DeepStream, you first need an ONNX model that was exported with dynamic axes. Then give this model to DeepStream in the configuration file as you would any other model, and set "batch-size" as well.
That is, you don't need to implement the PreprocessorEngine and PredictionEngine separately in DeepStream.
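As a minimal sketch, a Gst-nvinfer configuration for a dynamic-shape ONNX model could look like the following. The file names and batch size are placeholders, not values from this thread:

```
[property]
# ONNX model exported with dynamic axes (placeholder name)
onnx-file=model_dynamic.onnx
# Maximum batch size used for the dynamic batch dimension (example value)
batch-size=4
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# 1=primary, 2=secondary inference
process-mode=2
```

On the first run, DeepStream builds and serializes a TensorRT engine from the ONNX file; subsequent runs can point `model-engine-file` at the generated engine to skip the rebuild.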

This ONNX model will run as secondary inference in my application. That doesn't matter, right?

Yes, it doesn't matter.

My ONNX model needs a plugin. I have created the plugin and tested it in TensorRT with my ONNX model. It works.

So if I use the ONNX model with DeepStream, how can I make the plugin available to it? Is there any sample for this approach?

I looked at Gst-nvinfer and it supports IPlugin. What about IPluginV2DynamicExt? IPluginV2DynamicExt is the plugin type used by one layer in my ONNX model.

I think I need to build a .so file for the plugin and give its path in custom-lib-path. The IPluginV2DynamicExt plugin type is supported, right?

Yes, you are right!

DeepStream will dlopen the library pointed to by custom-lib-path and call the TensorRT plugin and the DeepStream post-processing functions in it.
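As a sketch, the plugin library is referenced from the same Gst-nvinfer configuration file; the library path and model name below are placeholders:

```
[property]
onnx-file=model_dynamic.onnx
batch-size=4
# Shared library containing the IPluginV2DynamicExt implementation;
# DeepStream dlopens this library at startup.
custom-lib-path=/opt/my_plugins/libmyplugin.so
```

For the ONNX parser to resolve the custom layer while building the engine, the plugin must also register itself with TensorRT's plugin registry (for example via the REGISTER_TENSORRT_PLUGIN macro on its plugin creator) inside that shared library.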