How to integrate a TensorRT graph into a DeepStream pipeline?

I want to integrate a custom TensorFlow model, which I have converted to TensorRT, into a DeepStream pipeline.
I've already tested the model on a Jetson Nano using a Python script. Now, how can I integrate it into the pipeline?

Should I use nvinfer, nvinferserver, or write a custom plugin for it?

Hardware Platform: Jetson Nano
DeepStream Version: 5.0
JetPack Version: 4.4 DP
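Since the model is already serialized as a TensorRT engine, Gst-nvinfer can load it directly via the `model-engine-file` property, skipping the engine-build step. A minimal config sketch follows — every path, the class count, and the label file are hypothetical placeholders to be replaced with your own values:

```ini
[property]
gpu-id=0
# Pre-built TensorRT engine (placeholder path)
model-engine-file=/home/user/models/model.engine
labelfile-path=/home/user/models/labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
# placeholder class count
num-detected-classes=4
# 0=detector, 1=classifier, 2=segmentation
network-type=0
gie-unique-id=1
```

This file is then referenced from the `config-file-path` property of the nvinfer element in your pipeline or app config.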


To build a TensorRT plugin, you can add your implementation directly into the TensorRT open source software (OSS).

  1. Clone the TensorRT OSS repository:
    $ git clone

  2. Add your implementation under ${TensorRT}/plugin/

  3. Follow these steps to build and replace the TensorRT plugin library

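The steps above can be sketched as shell commands. This follows the general TensorRT OSS build flow on Jetson; the branch, library version (7.1.x matches JetPack 4.4 DP), and paths are assumptions — check the OSS README for your exact release:

```shell
# Sketch, assuming the TensorRT OSS branch matching JetPack 4.4 DP (TensorRT 7.1)
cd TensorRT
git submodule update --init --recursive
mkdir -p build && cd build
cmake .. -DTRT_LIB_DIR=/usr/lib/aarch64-linux-gnu -DTRT_OUT_DIR=$(pwd)/out
make -j$(nproc) nvinfer_plugin

# Back up the stock plugin library, then replace it with the rebuilt one
sudo cp /usr/lib/aarch64-linux-gnu/libnvinfer_plugin.so.7.1.0 \
        ~/libnvinfer_plugin.so.7.1.0.bak
sudo cp out/libnvinfer_plugin.so.7.1.0 /usr/lib/aarch64-linux-gnu/
sudo ldconfig
```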

What about using the IPlugin interface to add support for custom layers, instead of building the TensorRT plugin library from source?

Can you also reference some TensorRT examples that add support for unsupported layers?
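For reference, a custom layer via the plugin interface looks roughly like the sketch below. It is a skeleton of a TensorRT 7.x `IPluginV2` subclass (the API shipped with JetPack 4.4); the layer itself is a stub — the actual CUDA kernel launch in `enqueue` and any serialization of weights are left out. To use it with nvinfer you would also implement a matching `IPluginCreator` and register it with `REGISTER_TENSORRT_PLUGIN`:

```cpp
// Sketch of a TensorRT 7.x IPluginV2 custom layer. Requires the TensorRT
// headers (NvInfer.h); the layer logic is intentionally left as a stub.
#include <NvInfer.h>
#include <string>

class MyCustomLayer : public nvinfer1::IPluginV2
{
public:
    int getNbOutputs() const override { return 1; }

    nvinfer1::Dims getOutputDimensions(int index, const nvinfer1::Dims* inputs,
                                       int nbInputDims) override
    {
        return inputs[0];  // example: output shape mirrors the input
    }

    bool supportsFormat(nvinfer1::DataType type,
                        nvinfer1::PluginFormat format) const override
    {
        return type == nvinfer1::DataType::kFLOAT &&
               format == nvinfer1::PluginFormat::kLINEAR;
    }

    void configureWithFormat(const nvinfer1::Dims*, int, const nvinfer1::Dims*,
                             int, nvinfer1::DataType, nvinfer1::PluginFormat,
                             int) override {}

    int initialize() override { return 0; }
    void terminate() override {}
    size_t getWorkspaceSize(int) const override { return 0; }

    int enqueue(int batchSize, const void* const* inputs, void** outputs,
                void* workspace, cudaStream_t stream) override
    {
        // Launch the CUDA kernel implementing the layer here.
        return 0;
    }

    size_t getSerializationSize() const override { return 0; }
    void serialize(void*) const override {}
    const char* getPluginType() const override { return "MyCustomLayer"; }
    const char* getPluginVersion() const override { return "1"; }
    void destroy() override { delete this; }
    nvinfer1::IPluginV2* clone() const override { return new MyCustomLayer(); }
    void setPluginNamespace(const char* ns) override { mNamespace = ns; }
    const char* getPluginNamespace() const override { return mNamespace.c_str(); }

private:
    std::string mNamespace;
};
```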


Please follow our documentation to add a customized model with DeepStream:
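As a quick smoke test before wiring the model into a full app, the engine can be exercised with a gst-launch pipeline. The element names below follow DeepStream 5.0 on Jetson; the input file and config path are placeholders:

```shell
gst-launch-1.0 filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_custom.txt ! \
  nvvideoconvert ! nvdsosd ! nvoverlaysink
```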


Please check the following samples for information:


