How to integrate a TensorRT graph into a DeepStream pipeline?

I want to integrate a custom TensorFlow model that I converted to TensorRT into a DeepStream pipeline.
I have already tested the model on a Jetson Nano with a standalone Python script. Now, how can I integrate it into the pipeline?

Should I use nvinfer, nvinferserver, or write a custom plugin for it?

Hardware Platform: Jetson Nano
DeepStream Version: 5.0
JetPack Version: 4.4 DP
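
For context, my understanding is that the nvinfer route mostly comes down to pointing a Gst-nvinfer config file at the serialized engine. A rough sketch of what I have in mind (paths, class count, and the custom-parser entries are placeholders, not a working setup):

    [property]
    gpu-id=0
    # pre-built TensorRT engine (placeholder path)
    model-engine-file=model_b1_fp16.engine
    batch-size=1
    # 0=FP32, 1=INT8, 2=FP16
    network-mode=2
    # 0=detector, 1=classifier, 2=segmentation, 100=other
    network-type=0
    num-detected-classes=4
    # only needed if the model's outputs need custom parsing (placeholder names)
    parse-bbox-func-name=NvDsInferParseCustomModel
    custom-lib-path=libnvdsinfer_custom_impl.so

Is that on the right track?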

Hi,

To build a TensorRT plugin, you can add your implementation to the TensorRT open source software (OSS) directly:

  1. Git clone TensorRT OSS:
    $ git clone https://github.com/NVIDIA/TensorRT.git

  2. Add your implementation under ${TensorRT}/plugin/

  3. Follow these steps to build and replace the TensorRT plugin library (a rough sketch of the commands follows below):
    https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps/tree/master/TRT-OSS/Jetson
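
On a Nano the build-and-replace flow from that guide looks roughly like this (GPU_ARCHS=53 for the Nano's Maxwell GPU; the exact cmake flags and the plugin library version depend on your JetPack/TensorRT release, so treat them as placeholders):

    $ cd TensorRT
    $ git submodule update --init --recursive
    $ mkdir -p build && cd build
    $ cmake .. -DGPU_ARCHS=53 -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu -DTRT_BIN_DIR=`pwd`/out
    $ make nvinfer_plugin -j$(nproc)
    # back up the stock plugin library, then replace it with the rebuilt one
    $ sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.x.y ~/libnvinfer_plugin.so.7.x.y.bak
    $ sudo cp out/libnvinfer_plugin.so.7.x.y /usr/lib/aarch64-linux-gnu/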

Thanks.

What about using the IPlugin interface to add support for custom layers, instead of rebuilding the TensorRT plugin library from source?
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream%20Plugins%20Development%20Guide/deepstream_plugin_iplugin.html

Can you also reference a TensorRT example that adds support for unsupported layers?

Hi,

Please follow our documentation to add a custom model in DeepStream:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/index.html#page/DeepStream_Development_Guide/deepstream_custom_model.html#
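
In practice, for a detector the main custom piece is a bounding-box parser compiled into a small shared library that nvinfer loads via custom-lib-path. A minimal C++ sketch of the entry point (the function name is a placeholder, and the decoding loop depends entirely on your model's output layout):

    #include <vector>
    #include "nvdsinfer_custom_impl.h"

    // Hypothetical parser: adapt the decoding to your model's output tensors.
    extern "C" bool NvDsInferParseCustomModel(
        std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
        NvDsInferNetworkInfo const &networkInfo,
        NvDsInferParseDetectionParams const &detectionParams,
        std::vector<NvDsInferObjectDetectionInfo> &objectList)
    {
        // Raw buffer of the first (here, the only) output layer
        const float *out = reinterpret_cast<const float *>(outputLayersInfo[0].buffer);
        // ... decode boxes/scores from `out`, fill NvDsInferObjectDetectionInfo
        // entries (left, top, width, height, classId, detectionConfidence)
        // and push them into objectList ...
        return true;
    }

    // Lets nvinfer validate the function prototype when it dlopens the library
    CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomModel);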

Thanks.

YES.
Please check the following samples for information; a build/run sketch for the SSD one follows the list:

/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_FasterRCNN
/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo
/opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD
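
For example, the SSD sample builds its custom parser library like this (after preparing the model per the sample's README; CUDA_VER matches the CUDA release shipped with your JetPack, 10.2 on JetPack 4.4):

    $ cd /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD
    $ export CUDA_VER=10.2
    $ make -C nvdsinfer_custom_impl_ssd
    $ deepstream-app -c deepstream_app_config_ssd.txt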

Thanks.
